---
configs:
- config_name: fig2cap
- config_name: cap2fig
task_categories:
- question-answering
tags:
- science
pretty_name: Scientific Figure Interpretation Benchmark
size_categories:
- n<1K
language:
- en
---
# Dataset Card for SciFIBench

## Dataset Description

- **Homepage:** [https://github.com/jonathan-roberts1/SciFIBench](https://github.com/jonathan-roberts1/SciFIBench)
- **Paper:** [SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation](https://github.com/jonathan-roberts1/SciFIBench/blob/main/SciFIBench.pdf)

### Dataset Summary

SciFIBench (Scientific Figure Interpretation Benchmark) contains 1000 multiple-choice scientific figure interpretation questions covering two tasks. Task 1 (Figure -> Caption) involves selecting the most appropriate caption for a given figure; Task 2 (Caption -> Figure) involves the opposite: selecting the most appropriate figure for a given caption. The benchmark was curated from the SciCap dataset, using adversarial filtering to obtain hard negatives. Each question has been human-verified to ensure it is high-quality and answerable.
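Both tasks share the same multiple-choice format, so model outputs on either config can be scored with simple accuracy. The snippet below is a minimal sketch of such scoring, assuming answers are represented as option letters; the field layout is illustrative and not the dataset's actual schema.

```python
def accuracy(predictions, gold):
    """Fraction of multiple-choice questions answered with the gold option."""
    if len(predictions) != len(gold):
        raise ValueError("predictions and gold must align one-to-one")
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# Toy example: a model answers 3 of 4 questions correctly.
preds = ["A", "C", "B", "D"]
golds = ["A", "C", "B", "A"]
print(f"accuracy: {accuracy(preds, golds):.2f}")  # accuracy: 0.75
```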

### Source Data

More information regarding the source data can be found in the SciCap paper: https://arxiv.org/abs/2110.11624

### Dataset Curators

This dataset was curated by Jonathan Roberts, Kai Han, Neil Houlsby, and Samuel Albanie.

### Citation Information

Coming soon!