---
license: apache-2.0
task_categories:
- question-answering
language:
- en
- zh
tags:
- chemistry
- biology
- astronomy
- earth
- material
- benchmark
pretty_name: Scientists' First Exam
size_categories:
- 1K<n<10K
---

# Scientists' First Exam

> Scientific discoveries are driven by complex multimodal reasoning over information-intensive scientific data and domain-specific expertise. With supervision from expert-level scientific benchmarks, scientific multimodal Large Language Models (MLLMs) could significantly enhance this discovery process in realistic workflows. However, current scientific benchmarks inadequately assess the perception, understanding, and reasoning skills MLLMs need for scientific breakthroughs across multiple disciplines. To address this gap, we present the Scientists’ First Exam (SFE) benchmark, designed to comprehensively evaluate the scientific cognitive capacities of MLLMs through three interconnected levels: *scientific signal perception*, *scientific attribute understanding*, and *scientific comparative reasoning*. Specifically, SFE comprises 839 expert-verified MQA/VQA pairs spanning 66 multimodal tasks across five high-value disciplines. Extensive experimental results reveal that the current *state-of-the-art* GPT-4.1 and InternVL-2.5 achieve only 30.8% and 24.43% on SFE, highlighting significant room for MLLMs to improve in scientific realms. We hope the insights obtained in SFE can facilitate further developments in AI-enhanced scientific discovery.
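
For reference, the sketch below shows one way to load the benchmark with the Hugging Face `datasets` library. The repository ID, split name, and record fields are illustrative placeholders (this card does not specify them), so adjust them to the actual layout of the dataset on the Hub.

```python
from datasets import load_dataset

# Placeholder repository ID and split -- substitute the actual Hub path
# and split names used by this dataset.
sfe = load_dataset("your-org/scientists-first-exam", split="test")

# Each record is assumed to carry a multimodal question (image plus text),
# answer options, and an expert-verified ground-truth answer.
example = sfe[0]
print(example.keys())
```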