---
task_categories:
- question-answering
tags:
- science
pretty_name: Scientific Figure Interpretation Benchmark
size_categories:
- 1K<n<10K
language:
- en
---
# Dataset Card for SciFIBench
## Dataset Description
- **Homepage:** [https://github.com/jonathan-roberts1/SciFIBench](https://github.com/jonathan-roberts1/SciFIBench)
- **Paper:** [SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation](https://arxiv.org/pdf/2405.08807)
### Dataset Summary
SciFIBench (Scientific Figure Interpretation Benchmark) contains 1,000 multiple-choice scientific figure interpretation questions covering two tasks. Task 1 (Figure -> Caption) involves selecting the most appropriate caption for a given figure; Task 2 (Caption -> Figure) is the reverse: selecting the most appropriate figure for a given caption. The benchmark was curated from the SciCap dataset, using adversarial filtering to obtain hard negatives, and each question was human-verified to ensure it is high quality and answerable.
### Example Usage
```python
from datasets import load_dataset
# load dataset
dataset = load_dataset("jonathan-roberts1/SciFIBench") # optional: set cache_dir="PATH/TO/MY/CACHE/DIR"
# figure2caption_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="Figure2Caption")
# caption2figure_dataset = load_dataset("jonathan-roberts1/SciFIBench", split="Caption2Figure")
"""
DatasetDict({
Caption2Figure: Dataset({
features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
num_rows: 500
})
Figure2Caption: Dataset({
features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
num_rows: 500
})
})
"""
# select task
figure2caption_dataset = dataset['Figure2Caption']
"""
Dataset({
features: ['ID', 'Question', 'Options', 'Answer', 'Category', 'Images'],
num_rows: 500
})
"""
# query items
figure2caption_dataset[40] # e.g., the 41st element
"""
{'ID': 40,
'Question': 'Which caption best matches the image?',
'Options': ['A) ber vs snr for fft size=2048 using ls , lmmse , lr-lmmse .',
'B) ber vs snr for fft size=1024 using ls , lmmse , lr-lmmse algorithms .',
'C) ber vs snr for fft size=512 using ls , lmmse , lr-lmmse algorithms .',
'D) ber vs snr for fft size=256 using ls , lmmse , lr-lmmse algorithms with a 16 qam modulation .',
'E) ber vs snr for a bpsk modulation .'],
'Answer': 'D',
'Category': 'other cs',
'Images': [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=501x431>]}
"""
```
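Since each item pairs a multiple-choice question with one or more images and a single-letter `Answer`, evaluating a model reduces to comparing a predicted option letter against that field. Below is a minimal evaluation sketch; `query_model` is a hypothetical placeholder (here a trivial always-"A" baseline) standing in for whatever multimodal model is being benchmarked.
```python
from datasets import load_dataset

figure2caption_dataset = load_dataset(
    "jonathan-roberts1/SciFIBench", split="Figure2Caption"
)

def query_model(images, prompt):
    # Hypothetical placeholder: always answers "A". Swap in a call
    # to the multimodal model under evaluation; it should return a
    # single option letter, e.g., "D".
    return "A"

correct = 0
for item in figure2caption_dataset:
    # Present the question and the lettered options as one prompt.
    # Options already carry their letter prefixes, e.g., "A) ...".
    prompt = item["Question"] + "\n" + "\n".join(item["Options"])
    prediction = query_model(item["Images"], prompt)
    if prediction.strip().upper().startswith(item["Answer"]):
        correct += 1

accuracy = correct / len(figure2caption_dataset)
print(f"Figure -> Caption accuracy: {accuracy:.3f}")
```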
### Source Data
More information regarding the source data can be found at: https://github.com/tingyaohsu/SciCap
### Dataset Curators
This dataset was curated by Jonathan Roberts, Kai Han, Neil Houlsby, and Samuel Albanie.
### Citation Information
```
@article{roberts2024scifibench,
title={SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation},
author={Jonathan Roberts and Kai Han and Neil Houlsby and Samuel Albanie},
year={2024},
journal={arXiv preprint arXiv:2405.08807},
}
```