tags:
- Text
size_categories:
- 1K<n<10K
---

<a href="" target="_blank">
  <img alt="arXiv" src="https://img.shields.io/badge/arXiv-thinking--in--space-red?logo=arxiv" height="20" />
</a>
<a href="" target="_blank">
  <img alt="Website" src="https://img.shields.io/badge/🌎_Website-thinking--in--space-blue.svg" height="20" />
</a>
<a href="https://github.com/vision-x-nyu/thinking-in-space" target="_blank" style="display: inline-block; margin-right: 10px;">
  <img alt="GitHub Code" src="https://img.shields.io/badge/Code-thinking--in--space-white?&logo=github&logoColor=white" />
</a>


# Visual Spatial Intelligence Benchmark (VSI-Bench)

This repository contains the Visual Spatial Intelligence Benchmark (VSI-Bench), introduced in [Thinking in Space: How Multimodal Large Language Models See, Remember and Recall Spaces](https://arxiv.org/pdf/).


## Files

The `test-00000-of-00001.parquet` file contains the full dataset annotations and images, pre-loaded for processing with HF Datasets. It can be loaded as follows:

<!-- @shusheng -->
```python
# Load the VSI-Bench annotations from the Hugging Face Hub.
from datasets import load_dataset

vsi_bench = load_dataset("nyu-visionx/VSI-Bench")
```
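
For a quick look at the data, you can inspect a single entry. This is a minimal sketch, not part of the official tooling: the `test` split name is an assumption based on the parquet file name, and the field names follow the table in the Dataset Description section below.

```python
from datasets import load_dataset

# Assumption (not stated in the card): the single parquet file is exposed as a
# "test" split when loaded through HF Datasets.
vsi_bench = load_dataset("nyu-visionx/VSI-Bench", split="test")

print(vsi_bench)     # number of rows and column names
print(vsi_bench[0])  # first question-answer entry as a dict
```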

Additionally, we provide the compressed raw videos as `*.zip` archives.

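To work with the raw videos locally, a minimal extraction sketch using the Python standard library is shown below; the archive directory and output directory names are placeholders for wherever you downloaded the archives and want the videos to live.

```python
import zipfile
from pathlib import Path

# Placeholder paths: the directory holding the downloaded *.zip archives and a
# target directory for the extracted videos.
archive_dir = Path(".")
output_dir = Path("videos")
output_dir.mkdir(exist_ok=True)

# Extract every provided archive into the target directory.
for archive in sorted(archive_dir.glob("*.zip")):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(output_dir)
    print(f"Extracted {archive.name} -> {output_dir}/")
```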

## Dataset Description

VSI-Bench quantitatively evaluates the visual-spatial intelligence of MLLMs from egocentric video. It comprises over 5,000 question-answer pairs derived from 288 real videos. These videos are sourced from the validation sets of the public indoor 3D scene reconstruction datasets `ScanNet`, `ScanNet++`, and `ARKitScenes`, and they cover diverse environments, including residential spaces, professional settings (e.g., offices, labs), and industrial spaces (e.g., factories), across multiple geographic regions. Repurposing these existing 3D reconstruction and understanding datasets provides accurate object-level annotations, which we use for question generation and which could enable future study of the connection between MLLMs and 3D reconstruction.

The dataset contains the following fields:

| Field Name | Description |
| :--------- | :---------- |
| `idx` | Global index of the entry in the dataset |
| `dataset` | Video source: `scannet`, `arkitscenes`, or `scannetpp` |
| `question_type` | The type of task posed by the question |
| `question` | Question asked about the video |
| `options` | Answer choices for the question (multiple-choice questions only) |
| `ground_truth` | Correct answer to the question |
| `video_suffix` | Suffix of the corresponding video file |
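
As an illustration of how these fields fit together (a sketch, not official tooling: the `test` split name and the `videos/<dataset>/<video_suffix>` layout for the extracted archives are assumptions), you can group questions by task type and resolve each entry's video path:

```python
from collections import Counter
from datasets import load_dataset

vsi_bench = load_dataset("nyu-visionx/VSI-Bench", split="test")  # split name assumed

# Number of questions per task type.
print(Counter(vsi_bench["question_type"]))

# Hypothetical video path: assumes the *.zip archives were extracted into a
# local "videos/" directory with one sub-directory per source dataset.
row = vsi_bench[0]
video_path = f"videos/{row['dataset']}/{row['video_suffix']}"
print(row["question"], "->", video_path)
```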

<br>

### Example Code

<!-- @shusheng -->
The snippet below is a lightweight scoring sketch rather than the official evaluation pipeline (see the GitHub repository linked above for that). It assumes you have already evaluated a model on the benchmark and saved per-question results to a CSV with a `question_type` column and a `result` column (1 for correct, 0 for incorrect); the file name `vsi_bench_results.csv` is a placeholder.

```python
import pandas as pd

# Load per-question evaluation results into a DataFrame.
# (Placeholder file name; produce this CSV from your own evaluation run.)
df = pd.read_csv('vsi_bench_results.csv')

# Define a function to calculate accuracy for a given question type.
def calculate_accuracy(df, question_type):
    subset = df[df['question_type'] == question_type]
    return subset['result'].mean()  # 'result' is 1 for correct and 0 for incorrect

# Calculate accuracy for each question type present in the results.
type_accuracies = {
    question_type: calculate_accuracy(df, question_type)
    for question_type in sorted(df['question_type'].unique())
}

# Compute the overall accuracy as the unweighted mean over question types.
overall_accuracy = sum(type_accuracies.values()) / len(type_accuracies)

# Print the results.
print(f"VSI-Bench Accuracy (mean over types): {overall_accuracy:.4f}")
print()
print("Type Accuracies:")
for question_type, accuracy in type_accuracies.items():
    print(f"{question_type} Accuracy: {accuracy:.4f}")
```

## Citation

```bibtex
@article{yang2024think,
  title={{Thinking in Space: How Multimodal Large Language Models See, Remember and Recall Spaces}},
  author={Yang, Jihan and Yang, Shusheng and Gupta, Anjali and Han, Rilyn and Fei-Fei, Li and Xie, Saining},
  year={2024},
  journal={arXiv preprint},
}
```