---
dataset_info:
- config_name: ScienceQA-FULL
  features:
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer
    dtype: int8
  - name: hint
    dtype: string
  - name: task
    dtype: string
  - name: grade
    dtype: string
  - name: subject
    dtype: string
  - name: topic
    dtype: string
  - name: category
    dtype: string
  - name: skill
    dtype: string
  - name: lecture
    dtype: string
  - name: solution
    dtype: string
  splits:
  - name: train
    num_bytes: 422199906.182
    num_examples: 12726
  - name: validation
    num_bytes: 140142913.699
    num_examples: 4241
  - name: test
    num_bytes: 138277282.051
    num_examples: 4241
  download_size: 679275875
  dataset_size: 700620101.932
- config_name: ScienceQA-IMG
  features:
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer
    dtype: int8
  - name: hint
    dtype: string
  - name: task
    dtype: string
  - name: grade
    dtype: string
  - name: subject
    dtype: string
  - name: topic
    dtype: string
  - name: category
    dtype: string
  - name: skill
    dtype: string
  - name: lecture
    dtype: string
  - name: solution
    dtype: string
  splits:
  - name: train
    num_bytes: 413310651.0
    num_examples: 6218
  - name: validation
    num_bytes: 137253441.0
    num_examples: 2097
  - name: test
    num_bytes: 135188432.0
    num_examples: 2017
  download_size: 663306124
  dataset_size: 685752524.0
configs:
- config_name: ScienceQA-FULL
  data_files:
  - split: train
    path: ScienceQA-FULL/train-*
  - split: validation
    path: ScienceQA-FULL/validation-*
  - split: test
    path: ScienceQA-FULL/test-*
- config_name: ScienceQA-IMG
  data_files:
  - split: train
    path: ScienceQA-IMG/train-*
  - split: validation
    path: ScienceQA-IMG/validation-*
  - split: test
    path: ScienceQA-IMG/test-*
---

<p align="center" width="100%">
  <img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>

# Large-scale Multi-modality Models Evaluation Suite

> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`

🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
# This Dataset

This is a formatted version of [derek-thomas/ScienceQA](https://huggingface.co/datasets/derek-thomas/ScienceQA). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
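
The metadata above maps directly onto the Hugging Face `datasets` API, so the data can also be inspected outside of `lmms-eval`. Below is a minimal sketch; the repo id `lmms-lab/ScienceQA` is an assumption (use the id shown in this page's URL if it differs), while the config, split, and feature names all come from the YAML header above.

```python
from datasets import load_dataset

# Repo id is an assumption -- substitute the actual id of this repository.
# Config names ("ScienceQA-FULL", "ScienceQA-IMG") and split names come
# from the YAML metadata above.
ds = load_dataset("lmms-lab/ScienceQA", "ScienceQA-IMG", split="test")

sample = ds[0]
print(sample["question"])   # question text (string)
print(sample["choices"])    # candidate answers (sequence of strings)
print(sample["answer"])     # index of the correct choice (int8)
print(sample["grade"], sample["subject"], sample["topic"])

# The `image` feature decodes to a PIL image. In ScienceQA-IMG every example
# has one; in ScienceQA-FULL, text-only questions may have None here.
img = sample["image"]
if img is not None:
    img.save("scienceqa_example.png")
```

Judging by the split sizes in the header, `ScienceQA-FULL` keeps every question (including text-only ones), while `ScienceQA-IMG` is restricted to questions that come with an image context; pick the config that matches the evaluation you want to reproduce.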
```bibtex
@inproceedings{lu2022learn,
  title={Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering},
  author={Lu, Pan and Mishra, Swaroop and Xia, Tony and Qiu, Liang and Chang, Kai-Wei and Zhu, Song-Chun and Tafjord, Oyvind and Clark, Peter and Kalyan, Ashwin},
  booktitle={The 36th Conference on Neural Information Processing Systems (NeurIPS)},
  year={2022}
}
```