---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license: gpl-3.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
task_ids:
- multiple-choice-qa
- visual-question-answering
- multi-class-classification
tags:
- multi-modal-qa
- figure-qa
- vqa
- scientific-figure
- geometry-diagram
- chart
- chemistry
dataset_info:
  features:
  - name: image_path
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: prompt_reasoning
    dtype: string
  - name: prompt_no_reasoning
    dtype: string
  - name: image_category
    dtype: string
  - name: task_category
    dtype: string
  - name: question_type
    dtype: string
  - name: response_options
    sequence: string
  - name: source
    dtype: string
  - name: id
    dtype: string
  - name: decoded_image
    dtype: image
  splits:
  - name: geometry__triangle
    num_bytes: 242889.0
    num_examples: 50
  - name: geometry__quadrilateral
    num_bytes: 210787.0
    num_examples: 50
  - name: geometry__length
    num_bytes: 271748.0
    num_examples: 50
  - name: geometry__angle
    num_bytes: 255692.0
    num_examples: 50
  - name: geometry__area
    num_bytes: 255062.0
    num_examples: 50
  - name: geometry__diameter_radius
    num_bytes: 269208.0
    num_examples: 50
  - name: chemistry__shape_single
    num_bytes: 1198593.0
    num_examples: 50
  - name: chemistry__shape_multi
    num_bytes: 1855862.0
    num_examples: 50
  - name: charts__extraction
    num_bytes: 3735234.0
    num_examples: 50
  - name: charts__intersection
    num_bytes: 2896121.0
    num_examples: 50
  download_size: 8276769
  dataset_size: 11191196.0
configs:
- config_name: default
  data_files:
  - split: geometry__triangle
    path: data/geometry__triangle-*
  - split: geometry__quadrilateral
    path: data/geometry__quadrilateral-*
  - split: geometry__length
    path: data/geometry__length-*
  - split: geometry__angle
    path: data/geometry__angle-*
  - split: geometry__area
    path: data/geometry__area-*
  - split: geometry__diameter_radius
    path: data/geometry__diameter_radius-*
  - split: chemistry__shape_single
    path: data/chemistry__shape_single-*
  - split: chemistry__shape_multi
    path: data/chemistry__shape_multi-*
  - split: charts__extraction
    path: data/charts__extraction-*
  - split: charts__intersection
    path: data/charts__intersection-*
---

# VisOnlyQA

This repository contains the code and data for the paper "[VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information](https://arxiv.org/abs/2412.00947)".

VisOnlyQA is designed to evaluate the visual perception capability of large vision language models (LVLMs) on geometric information in scientific figures. The evaluation set includes 1,200 multiple-choice questions covering 12 visual perception tasks on 4 categories of scientific figures. We also provide a training dataset consisting of 70k instances.

* Datasets:
  * VisOnlyQA is available at [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) 🔥🔥🔥
    * VisOnlyQA in VLMEvalKit is different from the original one. Refer to [this section](#vlmevalkit) for details.
  * Hugging Face
    * Eval-Real: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real)
    * Eval-Synthetic: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic)
    * Train: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train)
* Code: [https://github.com/psunlpgroup/VisOnlyQA](https://github.com/psunlpgroup/VisOnlyQA)

```bibtex
@misc{kamoi2024visonlyqa,
    title={VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information},
    author={Ryo Kamoi and Yusen Zhang and Sarkar Snigdha Sarathi Das and Ranran Haoran Zhang and Rui Zhang},
    year={2024},
    journal={arXiv preprint arXiv:2412.00947}
}
```

## Dataset

VisOnlyQA is provided in two formats: VLMEvalKit and Hugging Face Dataset. You can use either of them to evaluate your models and report the results in your papers. However, when you report the results, please explicitly mention which version of the dataset you used because the two versions are different.

### Examples

### VLMEvalKit

[VLMEvalKit](https://github.com/open-compass/VLMEvalKit) provides one-command evaluation. However, VLMEvalKit is not designed to reproduce the results in the paper. We welcome using it to report results on VisOnlyQA in your papers, but please explicitly mention that you used VLMEvalKit. The major differences are:

* VisOnlyQA on VLMEvalKit does not include the `chemistry__shape_multi` split
* VLMEvalKit uses different prompts and postprocessing

Refer to [this document](https://github.com/open-compass/VLMEvalKit/blob/main/docs/en/Quickstart.md) for the installation and setup of VLMEvalKit. After setting up the environment, you can evaluate any supported models on VisOnlyQA with the following command (this example is for InternVL2-26B).

```bash
python run.py --data VisOnlyQA-VLMEvalKit --model InternVL2-26B
```

### Hugging Face Dataset

The original VisOnlyQA dataset is provided as a Hugging Face Dataset. If you want to reproduce the results in our paper, please use this version and the code in the GitHub repository.

* Eval-Real: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real)
  * 500 instances for questions on figures in existing datasets (e.g., MathVista, MMMU, and CharXiv)
* Eval-Synthetic: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic)
  * 700 instances for questions on synthetic figures
* Train: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train)
  * 70,000 instances for training (synthetic figures)

The [dataset](https://github.com/psunlpgroup/VisOnlyQA/tree/main/dataset) folder of the GitHub repository includes the same datasets, except for the training data.

```python
from datasets import load_dataset

real_eval = load_dataset("ryokamoi/VisOnlyQA_Eval_Real")
real_synthetic = load_dataset("ryokamoi/VisOnlyQA_Eval_Synthetic")

# Splits
print(real_eval.keys())
# dict_keys(['geometry__triangle', 'geometry__quadrilateral', 'geometry__length', 'geometry__angle', 'geometry__area', 'geometry__diameter_radius', 'chemistry__shape_single', 'chemistry__shape_multi', 'charts__extraction', 'charts__intersection'])

print(real_synthetic.keys())
# dict_keys(['syntheticgeometry__triangle', 'syntheticgeometry__quadrilateral', 'syntheticgeometry__length', 'syntheticgeometry__angle', 'syntheticgeometry__area', '3d__size', '3d__angle'])

# Prompt
print(real_eval['geometry__triangle'][0]['prompt_no_reasoning'])
# There is no triangle ADP in the figure. True or False?
# A triangle is a polygon with three edges and three vertices, which are explicitly connected in the figure.
# Your response should only include the final answer (True, False). Do not include any reasoning or explanation in your response.
# Image
print(real_eval['geometry__triangle'][0]['decoded_image'])
#

# Answer
print(real_eval['geometry__triangle'][0]['answer'])
# False
```

### Data Format

Each instance of the VisOnlyQA dataset has the following attributes:

#### Features

* `decoded_image`: [PIL.Image] Input image
* `question`: [string] Question (without instruction)
* `prompt_reasoning`: [string] Prompt with instruction to use chain-of-thought
* `prompt_no_reasoning`: [string] Prompt with instruction **not** to use chain-of-thought
* `answer`: [string] Correct answer (e.g., `True`, `a`)

#### Metadata

* `image_path`: [string] Path to the image file
* `image_category`: [string] Category of the image (e.g., `geometry`, `chemistry`)
* `question_type`: [string] `single_answer` or `multiple answers`
* `task_category`: [string] Category of the task (e.g., `triangle`)
* `response_options`: [List[string]] Multiple choice options (e.g., `['True', 'False']`, `['a', 'b', 'c', 'd', 'e']`)
* `source`: [string] Source dataset
* `id`: [string] Unique ID
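As a rough illustration of how these fields fit together, the minimal sketch below scores a model on one split using `prompt_no_reasoning`, `decoded_image`, `response_options`, and `answer`. The `my_model_predict` function is a hypothetical placeholder for your own LVLM inference code; this is not the evaluation script from the paper (use the GitHub repository to reproduce the paper's results).

```python
from datasets import load_dataset


def my_model_predict(image, prompt):
    # Hypothetical placeholder: replace with your own LVLM inference code
    # that takes a PIL image and a text prompt and returns a string answer.
    raise NotImplementedError


# Load a single evaluation split.
split = load_dataset("ryokamoi/VisOnlyQA_Eval_Real", split="geometry__triangle")

num_correct = 0
for example in split:
    prediction = my_model_predict(
        example["decoded_image"],        # PIL.Image input figure
        example["prompt_no_reasoning"],  # prompt that asks for the final answer only
    ).strip()

    # Count a prediction as correct only if it is one of the listed options
    # (e.g., ['True', 'False'] or ['a', 'b', 'c', 'd', 'e']) and matches the gold answer.
    if prediction in example["response_options"] and prediction == example["answer"]:
        num_correct += 1

print(f"Accuracy on geometry__triangle: {num_correct / len(split):.3f}")
```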

### Statistics

## License

Please refer to [LICENSE.md](./LICENSE.md).

## Contact

If you have any questions, feel free to open an issue or reach out directly to [Ryo Kamoi](https://ryokamoi.github.io/) (ryokamoi@psu.edu).