---
license: apache-2.0
task_categories:
- visual-question-answering
language:
- en
tags:
- Video
- Text
size_categories:
- 1K<n<10K
---
# Visual Spatial Intelligence Benchmark (VSI-Bench)
This repository contains the Visual Spatial Intelligence Benchmark (VSI-Bench), introduced in [Thinking in Space: How Multimodal Large Language Models See, Remember and Recall Spaces](https://arxiv.org/abs/2412.14171).
## Files
The `test-00000-of-00001.parquet` file contains the complete dataset annotations and pre-loaded images, ready for processing with HF Datasets. It can be loaded using the following code:
```python
from datasets import load_dataset

vsi_bench = load_dataset("nyu-visionx/VSI-Bench")
```
Additionally, we provide the videos in `*.zip` files.
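If you also want the raw videos locally, the sketch below fetches and extracts one archive via `huggingface_hub`. The archive filename here is hypothetical; check the repository's file listing for the actual names:

```python
import zipfile
from huggingface_hub import hf_hub_download

# Download one video archive from the dataset repo.
# NOTE: "arkitscenes.zip" is a hypothetical filename for illustration;
# substitute an actual *.zip listed in the repository.
zip_path = hf_hub_download(
    repo_id="nyu-visionx/VSI-Bench",
    filename="arkitscenes.zip",
    repo_type="dataset",
)

# Extract the videos into a local directory.
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall("videos")
```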
## Dataset Description
VSI-Bench quantitatively evaluates the visual-spatial intelligence of MLLMs from egocentric video. VSI-Bench comprises over 5,000 question-answer pairs derived from 288 real videos. These videos are sourced from the validation sets of the public indoor 3D scene reconstruction datasets ScanNet, ScanNet++, and ARKitScenes, and represent diverse environments -- including residential spaces, professional settings (e.g., offices, labs), and industrial spaces (e.g., factories) -- and multiple geographic regions. By repurposing these existing 3D reconstruction and understanding datasets, VSI-Bench benefits from accurate object-level annotations, which are used in question generation and could support future studies exploring the connection between MLLMs and 3D reconstruction.
The dataset contains the following fields:
| Field Name      | Description                                               |
|-----------------|-----------------------------------------------------------|
| `idx`           | Global index of the entry in the dataset                  |
| `dataset`       | Video source: `scannet`, `arkitscenes`, or `scannetpp`    |
| `scene_name`    | Scene (video) name for each question-answer pair          |
| `question_type` | The type of task for the question                         |
| `question`      | Question asked about the video                            |
| `options`       | Choices for the question (multiple-choice questions only) |
| `ground_truth`  | Ground truth answer for the question                      |
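As an illustration of these fields, the sketch below loads the benchmark, inspects one entry, and filters to multiple-choice questions. It assumes the split is named `test` (as the parquet filename suggests) and that `options` is empty or `None` for numerical-answer questions:

```python
from datasets import load_dataset

vsi_bench = load_dataset("nyu-visionx/VSI-Bench")

# Inspect the first entry; the split name "test" matches the parquet shard.
example = vsi_bench["test"][0]
print(example["question_type"], example["question"], example["ground_truth"])

# Keep only multiple-choice questions; assumes numerical-answer entries
# carry no options (an assumption about how the field is populated).
mc_only = vsi_bench["test"].filter(lambda ex: ex["options"])
print(len(mc_only), "multiple-choice questions")
```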
## Evaluation
VSI-Bench evaluates performance using two metrics: for multiple-choice questions, we use `Accuracy`, calculated based on exact matches. For numerical-answer questions, we introduce a new metric, `MRA` (Mean Relative Accuracy), to assess how closely model predictions align with ground truth values.
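For intuition, here is a minimal sketch of both metrics. The exact MRA formulation (its threshold set and relative-error definition) is specified in the paper; the `thresholds` grid below is an assumption for illustration only:

```python
import numpy as np

def exact_match_accuracy(preds, gts):
    """Fraction of multiple-choice predictions that exactly match ground truth."""
    return sum(p == g for p, g in zip(preds, gts)) / len(gts)

def mean_relative_accuracy(pred, gt, thresholds=np.arange(0.5, 1.0, 0.05)):
    """Sketch of MRA for one numerical answer: average, over a set of
    confidence thresholds theta, of whether the relative error
    |pred - gt| / |gt| stays below 1 - theta. The threshold grid is an
    assumption; see the paper for the exact definition. Assumes gt != 0."""
    rel_err = abs(pred - gt) / abs(gt)
    return float(np.mean(rel_err < (1.0 - thresholds)))
```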
We provide an out-of-the-box evaluation of VSI-Bench in our GitHub repository, including the metric implementations used in our framework. For further details, users can refer to our paper and GitHub repository.
## Citation
```bibtex
@article{yang2024think,
  title={{Thinking in Space: How Multimodal Large Language Models See, Remember and Recall Spaces}},
  author={Yang, Jihan and Yang, Shusheng and Gupta, Anjali and Han, Rilyn and Fei-Fei, Li and Xie, Saining},
  year={2024},
  journal={arXiv preprint arXiv:2412.14171},
}
```