---
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
language:
- en
size_categories:
- 1K<n<10K
configs:
- config_name: testmini
  data_files:
  - split: testmini
    path: "testmini.parquet"
- config_name: testmini_version_split
  data_files:
  - split: testmini_text_only
    path: "testmini_text_only.parquet"
  - split: testmini_text_lite
    path: "testmini_text_lite.parquet"
  - split: testmini_text_dominant
    path: "testmini_text_dominant.parquet"
  - split: testmini_vision_dominant
    path: "testmini_vision_dominant.parquet"
  - split: testmini_vision_intensive
    path: "testmini_vision_intensive.parquet"
  - split: testmini_vision_only
    path: "testmini_vision_only.parquet"
dataset_info:
- config_name: testmini
  features:
  - name: sample_index
    dtype: string
  - name: problem_index
    dtype: string
  - name: problem_version
    dtype: string
  - name: question
    dtype: string
  - name: image
    dtype: image
  - name: answer
    dtype: string
  - name: question_type
    dtype: string
  - name: metadata
    struct:
    - name: split
      dtype: string
    - name: source
      dtype: string
    - name: subject
      dtype: string
    - name: subfield
      dtype: string
  - name: query_wo
    dtype: string
  - name: query_cot
    dtype: string
  splits:
  - name: testmini
    num_bytes: 166789963
    num_examples: 3940
- config_name: testmini_version_split
  features:
  - name: sample_index
    dtype: string
  - name: problem_index
    dtype: string
  - name: problem_version
    dtype: string
  - name: question
    dtype: string
  - name: image
    dtype: image
  - name: answer
    dtype: string
  - name: question_type
    dtype: string
  - name: metadata
    struct:
    - name: split
      dtype: string
    - name: source
      dtype: string
    - name: subject
      dtype: string
    - name: subfield
      dtype: string
  - name: query_wo
    dtype: string
  - name: query_cot
    dtype: string
  splits:
  - name: testmini_text_only
    num_bytes: 250959
    num_examples: 788
  - name: testmini_text_lite
    num_examples: 788
  - name: testmini_text_dominant
    num_examples: 788
  - name: testmini_vision_dominant
    num_examples: 788
  - name: testmini_vision_intensive
    num_examples: 788
  - name: testmini_vision_only
    num_examples: 788
---
# Dataset Card for MathVerse

- [Dataset Description](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#dataset-description)
- [Paper Information](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#paper-information)
- [Dataset Examples](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#dataset-examples)
- [Leaderboard](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#leaderboard)
- [Citation](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#citation)

## Dataset Description
The capabilities of **Multi-modal Large Language Models (MLLMs)** in **visual math problem-solving** remain insufficiently evaluated and understood. We find that current benchmarks incorporate excessive visual content within their textual questions, which may assist MLLMs in deducing answers without truly interpreting the input diagrams.

<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/fig1.png" width="90%"> <br>
</p>

To this end, we introduce **MathVerse**, an all-around visual math benchmark designed for an equitable and in-depth evaluation of MLLMs. We meticulously collect 2,612 high-quality, multi-subject math problems with diagrams from publicly available sources. Each problem is then transformed by human annotators into **six distinct versions**, each offering varying degrees of information content in multi-modality, contributing to **15K** test samples in total. This approach allows MathVerse to comprehensively assess ***whether and how much MLLMs can truly understand the visual diagrams for mathematical reasoning.***
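
The six version names above map directly onto the splits of the `testmini_version_split` config listed in this card's YAML. A minimal sketch of that mapping (the `load_dataset` usage in the trailing comment is illustrative and assumes the `datasets` library plus network access):

```python
# Map a MathVerse problem-version label to its split name in the
# `testmini_version_split` config (names taken from this card's YAML).

VERSIONS = [
    "Text Only", "Text Lite", "Text Dominant",
    "Vision Dominant", "Vision Intensive", "Vision Only",
]

def version_to_split(problem_version: str) -> str:
    """E.g. 'Vision Only' -> 'testmini_vision_only'."""
    return "testmini_" + problem_version.strip().lower().replace(" ", "_")

# Illustrative usage (not run here):
#   from datasets import load_dataset
#   ds = load_dataset("AI4Math/MathVerse", "testmini_version_split",
#                     split=version_to_split("Vision Only"))
```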

<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/fig2.png" width="90%"> <br>
Six different versions of each problem in <b>MathVerse</b> transformed by expert annotators.
</p>

In addition, we propose a **Chain-of-Thought (CoT) Evaluation strategy** for fine-grained assessment of the output answers. Rather than naively judging True or False, we employ GPT-4(V) to adaptively extract crucial reasoning steps and then score each step with detailed error analysis, revealing the quality of MLLMs' intermediate CoT reasoning.
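
To make the second phase concrete, here is a hypothetical sketch of how per-step scores could be aggregated into one reasoning-quality number. The field names, the plain average, and the equal weighting are assumptions for illustration, not the paper's exact scoring scheme:

```python
# Hypothetical aggregation for CoT evaluation phase two: each extracted
# reasoning step has already been scored (0.0-1.0); blend the average
# step quality with final-answer correctness. The 50/50 weighting is an
# assumption, not the official MathVerse formula.

def aggregate_cot_score(step_scores: list[float], final_correct: bool,
                        step_weight: float = 0.5) -> float:
    """Blend intermediate-step quality with final-answer correctness."""
    if not step_scores:
        # No extractable steps: fall back to answer correctness alone.
        return float(final_correct)
    step_avg = sum(step_scores) / len(step_scores)
    return step_weight * step_avg + (1 - step_weight) * float(final_correct)
```

A breakdown like this is what lets the benchmark distinguish a model that guesses the right answer from one that reasons correctly step by step.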

<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/fig3.png" width="90%"> <br>
The two phases of the CoT evaluation strategy.
</p>

## Paper Information
- Code: https://github.com/ZrrSkywalker/MathVerse
- Project: https://mathverse-cuhk.github.io/
- Visualization: https://mathverse-cuhk.github.io/#visualization
- Leaderboard: https://mathverse-cuhk.github.io/#leaderboard
- Paper: https://arxiv.org/abs/2403.14624

## Dataset Examples
🖱 Click to expand the examples for six problem versions within three subjects

<details>
<summary>🔍 Plane Geometry</summary>

<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/ver1.png" width="50%"> <br>
</p>
</details>

<details>
<summary>🔍 Solid Geometry</summary>

<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/ver2.png" width="50%"> <br>
</p>
</details>

<details>
<summary>🔍 Functions</summary>

<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/ver3.png" width="50%"> <br>
</p>
</details>

## Leaderboard
### Contributing to the Leaderboard

🚨 The [Leaderboard](https://mathverse-cuhk.github.io/#leaderboard) is continuously updated.

The evaluation instructions and tools will be released soon. For now, please send your results on the ***testmini*** set to this email: [email protected]. Please refer to the following template to prepare your result JSON file:

- [output_testmini_template.json]()
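
Since the template file is not yet linked, here is a hypothetical sketch of producing such a result file. The field names (`sample_index`, `prediction`) are assumptions borrowed from the dataset's own features; check the released template before submitting:

```python
# Hypothetical result-file builder for the testmini leaderboard.
# Field names are assumed (mirroring the dataset's `sample_index`
# feature), NOT taken from the official template.
import json

def build_results(predictions: dict[str, str]) -> str:
    """Serialize {sample_index: model_answer} into a JSON string."""
    records = [
        {"sample_index": idx, "prediction": ans}
        for idx, ans in sorted(predictions.items())
    ]
    return json.dumps(records, indent=2)
```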

## Citation

If you find **MathVerse** useful for your research and applications, please kindly cite using this BibTeX:

```latex
@inproceedings{zhang2024mathverse,
  title={MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?},
  author={Renrui Zhang and Dongzhi Jiang and Yichi Zhang and Haokun Lin and Ziyu Guo and Pengshuo Qiu and Aojun Zhou and Pan Lu and Kai-Wei Chang and Peng Gao and Hongsheng Li},
  booktitle={arXiv},
  year={2024}
}
```