winvswon78 committed
Commit d9993d3 · verified · 1 parent: ad12255

Upload folder using huggingface_hub
Chemistry/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c4e65177accf53044d17e59f34864e7de83c9f7cfd2db563cb286b1fbd4397e5
+ size 38090732
Coding/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0199c08ae1ea1d67d24d75bc9ea1b614ebc6edbf219d98685d7c4d8593ade191
+ size 156921633
Math/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95f40e267c808e1182916cb490d706f362901c09ea4f343f176f1c68261410c4
+ size 49594723
Physics/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7fcdd78d6f8f335224e0c9f54dc59844cebbb82c894cba9eb8448a5be962a375
+ size 13597019
README.md ADDED
@@ -0,0 +1,266 @@
+ ---
+ language:
+ - en
+ size_categories:
+ - 1K<n<10K
+ task_categories:
+ - question-answering
+ - visual-question-answering
+ - multiple-choice
+ dataset_info:
+ - config_name: Chemistry
+   features:
+   - name: pid
+     dtype: string
+   - name: question
+     dtype: string
+   - name: options
+     sequence: string
+   - name: answer
+     dtype: string
+   - name: image_1
+     dtype: image
+   - name: image_2
+     dtype: image
+   - name: image_3
+     dtype: image
+   - name: image_4
+     dtype: image
+   - name: image_5
+     dtype: image
+   - name: solution
+     dtype: string
+   - name: subject
+     dtype: string
+   - name: task
+     dtype: string
+   - name: category
+     dtype: string
+   - name: source
+     dtype: string
+   - name: type
+     dtype: string
+   - name: context
+     dtype: string
+   splits:
+   - name: test
+     num_bytes: 49337131.36
+     num_examples: 1176
+   download_size: 38090732
+   dataset_size: 49337131.36
+ - config_name: Coding
+   features:
+   - name: pid
+     dtype: string
+   - name: question
+     dtype: string
+   - name: options
+     sequence: string
+   - name: answer
+     dtype: string
+   - name: image_1
+     dtype: image
+   - name: image_2
+     dtype: image
+   - name: image_3
+     dtype: image
+   - name: image_4
+     dtype: image
+   - name: image_5
+     dtype: image
+   - name: solution
+     dtype: string
+   - name: subject
+     dtype: string
+   - name: task
+     dtype: string
+   - name: category
+     dtype: string
+   - name: source
+     dtype: string
+   - name: type
+     dtype: string
+   - name: context
+     dtype: string
+   splits:
+   - name: test
+     num_bytes: 201047028.0
+     num_examples: 564
+   download_size: 156921633
+   dataset_size: 201047028.0
+ - config_name: Math
+   features:
+   - name: pid
+     dtype: string
+   - name: question
+     dtype: string
+   - name: options
+     sequence: string
+   - name: answer
+     dtype: string
+   - name: image_1
+     dtype: image
+   - name: image_2
+     dtype: image
+   - name: image_3
+     dtype: image
+   - name: image_4
+     dtype: image
+   - name: image_5
+     dtype: image
+   - name: solution
+     dtype: string
+   - name: subject
+     dtype: string
+   - name: task
+     dtype: string
+   - name: category
+     dtype: string
+   - name: source
+     dtype: string
+   - name: type
+     dtype: string
+   - name: context
+     dtype: string
+   splits:
+   - name: test
+     num_bytes: 55727097.0
+     num_examples: 892
+   download_size: 49594723
+   dataset_size: 55727097.0
+ - config_name: Physics
+   features:
+   - name: pid
+     dtype: string
+   - name: question
+     dtype: string
+   - name: options
+     sequence: string
+   - name: answer
+     dtype: string
+   - name: image_1
+     dtype: image
+   - name: image_2
+     dtype: image
+   - name: image_3
+     dtype: image
+   - name: image_4
+     dtype: image
+   - name: image_5
+     dtype: image
+   - name: solution
+     dtype: string
+   - name: subject
+     dtype: string
+   - name: task
+     dtype: string
+   - name: category
+     dtype: string
+   - name: source
+     dtype: string
+   - name: type
+     dtype: string
+   - name: context
+     dtype: string
+   splits:
+   - name: test
+     num_bytes: 20512520.0
+     num_examples: 156
+   download_size: 13597019
+   dataset_size: 20512520.0
+ configs:
+ - config_name: Chemistry
+   data_files:
+   - split: test
+     path: Chemistry/test-*
+ - config_name: Coding
+   data_files:
+   - split: test
+     path: Coding/test-*
+ - config_name: Math
+   data_files:
+   - split: test
+     path: Math/test-*
+ - config_name: Physics
+   data_files:
+   - split: test
+     path: Physics/test-*
+ tags:
+ - chemistry
+ - physics
+ - math
+ - coding
+ ---
+ 
+ ## Dataset Description
+ 
+ We introduce **EMMA (Enhanced MultiModal reAsoning)**, a benchmark targeting organic multimodal reasoning across mathematics, physics, chemistry, and coding.
+ EMMA tasks demand advanced cross-modal reasoning that cannot be solved by reasoning in each modality separately, offering an enhanced test suite for MLLMs' reasoning capabilities.
+ 
+ EMMA is composed of 2,788 problems across the four domains, of which 1,796 are newly constructed. Within each subject, we further provide fine-grained labels for each question based on the specific skills it measures.
+ 
+ <p align="center">
+   <img src="https://huggingface.co/datasets/luckychao/EMMA/resolve/main/emma_composition.jpg" width="30%"> <br>
+ </p>
+ 
+ ## Paper Information
+ 
+ - Paper: https://www.arxiv.org/abs/2501.05444
+ - Code: https://github.com/hychaochao/EMMA
+ - Project: https://emma-benchmark.github.io/
+ 
+ ## Dataset Usage
+ 
+ ### Data Downloading
+ 
+ You can download the dataset with the following command (taking the Math subset as an example):
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ dataset = load_dataset("luckychao/EMMA", "Math", split="test")
+ ```
+ 
+ ### Data Format
+ 
+ The dataset is provided in jsonl format and contains the following attributes:
+ 
+ ```
+ {
+     "pid": [string] Problem ID, e.g., "math_1",
+     "question": [string] The question text,
+     "options": [list] Choice options for multiple-choice problems; 'none' for free-form problems,
+     "answer": [string] The correct answer to the problem,
+     "image_1": [image],
+     "image_2": [image],
+     "image_3": [image],
+     "image_4": [image],
+     "image_5": [image],
+     "solution": [string] The detailed reasoning steps required to solve the problem,
+     "subject": [string] The subject of the problem, e.g., "Math", "Physics", ...,
+     "task": [string] The task of the problem, e.g., "Code Choose Vis",
+     "category": [string] The category of the problem, e.g., "2D Transformation",
+     "source": [string] The source dataset of the problem, e.g., "math-vista"; "Newly annotated" for handmade data,
+     "type": [string] The type of question, e.g., "Multiple Choice", "Open-ended",
+     "context": [string] Background knowledge required for the question; 'none' for problems without context,
+ }
+ ```
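Once loaded with `load_dataset`, each record carries the fields above. A minimal sketch of assembling a record into a text prompt, assuming the schema as documented; the `build_prompt` helper and the sample record are illustrative, not part of the dataset's official tooling:

```python
# Illustrative only: `build_prompt` and `sample` are not part of EMMA's tooling.

def build_prompt(record):
    """Format an EMMA-style record as a question prompt; `context`/`options` may be absent."""
    parts = []
    if record.get("context"):
        parts.append(f"Context: {record['context']}")
    parts.append(record["question"])
    options = record.get("options")
    if options:  # multiple-choice: label the options (A), (B), ...
        parts.extend(f"({label}) {text}" for label, text in zip("ABCDE", options))
    return "\n".join(parts)

sample = {
    "pid": "math_1",
    "question": "Which figure completes the pattern?",
    "options": ["circle", "square", "triangle"],
    "context": None,
}
print(build_prompt(sample))
# Which figure completes the pattern?
# (A) circle
# (B) square
# (C) triangle
```

For open-ended problems, `options` is empty and the prompt reduces to the context (if any) plus the question text.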
+ 
+ ### Automatic Evaluation
+ 
+ To automatically evaluate a model on the dataset, please refer to our GitHub repository [here](https://github.com/hychaochao/EMMA).
+ 
+ ## Citation
+ 
+ ```
+ @misc{hao2025mllmsreasonmultimodalityemma,
+       title={Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark},
+       author={Yunzhuo Hao and Jiawei Gu and Huichen Will Wang and Linjie Li and Zhengyuan Yang and Lijuan Wang and Yu Cheng},
+       year={2025},
+       eprint={2501.05444},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV},
+       url={https://arxiv.org/abs/2501.05444},
+ }
+ ```
emma_composition.jpg ADDED

Git LFS Details

  • SHA256: 85b5f2833a59897494016a82e5201cca292441af0f87c035ca3cf872c900c857
  • Pointer size: 132 Bytes
  • Size of remote file: 3.45 MB