RyanWW committed · verified
Commit 132bdab · 1 Parent(s): 62f89d9

Update README.md

Files changed (1):
  1. README.md +86 -43

README.md CHANGED
@@ -1,26 +1,20 @@
  ---
  license: apache-2.0
  task_categories:
- - visual-question-answering
  language:
- - en
  tags:
- - spatial-reasoning
- - multimodal
  pretty_name: Spatial457
  size_categories:
- - 10K<n<100K
  ---
- <!-- # Spatial457: A Diagnostic Benchmark for 6D Spatial Reasoning of Large Multimodal Models -->

- <p align="center">
- <table>
- <tr>
- <td><img src="https://xingruiwang.github.io/projects/Spatial457/static/images/icon.png" width="48" alt="icon"/></td>
- <td><h1 style="margin: 0;">&nbsp;Spatial457</h1></td>
- </tr>
- </table>
- </p>

  <h1 align="center">
  <a href="https://arxiv.org/abs/2502.08636">
@@ -28,63 +22,111 @@ size_categories:
  </a>
  </h1>

-
  <p align="center">
- <a href=".">Xingrui Wang</a><sup>1</sup>,
- <a href=".">Wufei Ma</a><sup>1</sup>,
- <a href=".">Tiezheng Zhang</a><sup>1</sup>,
- <a href=".">Celso M de Melo</a><sup>2</sup>,
- <a href=".">Jieneng Chen</a><sup>1</sup>,
- <a href=".">Alan Yuille</a><sup>1</sup>
  </p>

  <p align="center">
- <sup>1</sup> Johns Hopkins University &nbsp;&nbsp;
  <sup>2</sup> DEVCOM Army Research Laboratory
  </p>

-
  <p align="center">
- <a href="https://xingruiwang.github.io/projects/Spatial457/">Project Page</a> /
- <a href="https://arxiv.org/abs/2502.08636">Paper</a> /
- <a href="https://huggingface.co/datasets/RyanWW/Spatial457">🤗 Huggingface</a> /
- <a href="https://github.com/XingruiWang/Spatial457">Code</a>
  </p>

  <p align="center">
- <img src="https://xingruiwang.github.io/projects/Spatial457/static/images/teaser.png" alt="Spatial457 Teaser" width="80%">
  </p>

- <!-- <p align="center"><i>
- Official implementation of the CVPR 2025 (Highlight) paper:
- <strong>Spatial457: A Diagnostic Benchmark for 6D Spatial Reasoning of Large Multimodal Models</strong>
- </i></p> -->
-
  ---

  ## 🧠 Introduction

- Spatial457 is a diagnostic benchmark designed to evaluate the 6D spatial reasoning capabilities of large multimodal models (LMMs). It systematically introduces four key capabilities—multi-object understanding, 2D and 3D localization, and 3D orientation—across five difficulty levels and seven question types, progressing from basic recognition to complex physical interaction.

  ---

- ## 📦 Download

- You can access the full dataset and evaluation toolkit:

- - **Dataset**: [Hugging Face](https://huggingface.co/datasets/RyanWW/Spatial457)
- - **Code**: [GitHub Repository](https://github.com/XingruiWang/Spatial457)
- - **Paper**: [arXiv 2502.08636](https://arxiv.org/abs/2502.08636)

  ---

- ## 📊 Benchmark

- We benchmarked a wide range of state-of-the-art models—including GPT-4o, Gemini, Claude, and several open-source LMMs—on all subsets. Performance consistently drops as task difficulty increases. PO3D-VQA and humans remain most robust across all levels.

- The table below summarizes model performance across 7 subsets:

- <!-- Include table image or markdown table here if needed -->

  ---

@@ -98,5 +140,6 @@ The table below summarizes model performance across 7 subsets:
  year = {2025},
  url = {https://arxiv.org/abs/2502.08636}
  }

  ---
  license: apache-2.0
  task_categories:
+ - visual-question-answering
  language:
+ - en
  tags:
+ - spatial-reasoning
+ - multimodal
  pretty_name: Spatial457
  size_categories:
+ - 10K<n<100K
  ---
 
+ <div align="center">
+ <img src="https://xingruiwang.github.io/projects/Spatial457/static/images/icon_name.png" alt="Spatial457 Logo" width="240"/>
+ </div>

  <h1 align="center">
  <a href="https://arxiv.org/abs/2502.08636">
  </a>
  </h1>

  <p align="center">
+ <a href="https://xingruiwang.github.io/">Xingrui Wang</a><sup>1</sup>,
+ <a href="#">Wufei Ma</a><sup>1</sup>,
+ <a href="#">Tiezheng Zhang</a><sup>1</sup>,
+ <a href="#">Celso M. de Melo</a><sup>2</sup>,
+ <a href="#">Jieneng Chen</a><sup>1</sup>,
+ <a href="#">Alan Yuille</a><sup>1</sup>
  </p>

  <p align="center">
+ <sup>1</sup> Johns Hopkins University &nbsp;&nbsp;&nbsp;&nbsp;
  <sup>2</sup> DEVCOM Army Research Laboratory
  </p>

  <p align="center">
+ <a href="https://xingruiwang.github.io/projects/Spatial457/">🌐 Project Page</a>
+ <a href="https://arxiv.org/abs/2502.08636">📄 Paper</a>
+ <a href="https://huggingface.co/datasets/RyanWW/Spatial457">🤗 Dataset</a>
+ <a href="https://github.com/XingruiWang/Spatial457">💻 Code</a>
  </p>

  <p align="center">
+ <img src="https://xingruiwang.github.io/projects/Spatial457/static/images/teaser.png" alt="Spatial457 Teaser" width="80%"/>
  </p>

  ---

  ## 🧠 Introduction

+ **Spatial457** is a diagnostic benchmark designed to evaluate **6D spatial reasoning** in large multimodal models (LMMs). It systematically introduces four core spatial capabilities:
+
+ - 🧱 Multi-object understanding
+ - 🧭 2D spatial localization
+ - 📦 3D spatial localization
+ - 🔄 3D orientation estimation
+
+ These are assessed across **five difficulty levels** and **seven diverse question types**, ranging from simple object queries to complex reasoning about physical interactions.

  ---

+ ## 📂 Dataset Structure
+
+ The dataset is organized as follows:
+
+ ```
+ Spatial457/
+ ├── images/               # RGB images used in VQA tasks
+ ├── questions/            # JSONs for each subtask
+ │   ├── L1_single.json
+ │   ├── L2_objects.json
+ │   ├── L3_2d_spatial.json
+ │   ├── L4_occ.json
+ │   └── ...
+ ├── Spatial457.py         # Hugging Face dataset loader script
+ └── README.md             # Documentation
+ ```

+ Each JSON file contains a list of VQA examples, where each item includes:
+
+ - "image_filename": image file name used in the question
+ - "question": natural language question
+ - "answer": boolean, string, or number
+ - "program": symbolic program (optional)
+ - "question_index": unique identifier
+
+ This modular structure supports scalable multi-task evaluation across levels and reasoning types.

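+ If you prefer to work with the raw question files instead of the loader script, each JSON can be inspected directly with Python's built-in `json` module. The snippet below is a minimal sketch: it assumes the repository has been downloaded locally, the path is illustrative, and it handles the examples being stored either as a bare list (as described above) or under a top-level "questions" key.
+
+ ```python
+ import json
+ from pathlib import Path
+
+ # Illustrative local path after downloading the dataset repository.
+ path = Path("Spatial457/questions/L1_single.json")
+
+ with path.open() as f:
+     data = json.load(f)
+
+ # The card describes each file as a list of VQA examples; fall back to a
+ # top-level "questions" key in case the list is nested inside a dict.
+ examples = data if isinstance(data, list) else data.get("questions", [])
+
+ for ex in examples[:3]:
+     print(ex["image_filename"], "|", ex["question"], "->", ex["answer"])
+ ```
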
  ---

+ ## 🛠️ Dataset Usage
+
+ You can load the dataset directly using the Hugging Face 🤗 `datasets` library:
+
+ ### 🔹 Load a specific subtask (e.g., L1_single)
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("RyanWW/Spatial457", name="L1_single", split="train")
+ ```
+
+ Each example is a dictionary like:
+
+ ```python
+ {
+     'image': <PIL.Image.Image>,
+     'image_filename': 'superCLEVR_new_000001.png',
+     'question': 'Is the large red object in front of the yellow car?',
+     'answer': 'True',
+     'program': [...],
+     'question_index': 100001
+ }
+ ```
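+
+ Note that the boolean answer above is stored as the string `'True'`; other subsets may use numbers or free-form strings. The following sketch (the `normalize_answer` helper and the placeholder prediction are illustrative, not part of the dataset) shows one way to compare a model output against the ground-truth `answer` field:
+
+ ```python
+ def normalize_answer(ans):
+     # Map bools, numbers, and strings to a single comparable form.
+     return str(ans).strip().lower()
+
+ example = dataset[0]      # `dataset` comes from the loading snippet above
+ prediction = "true"       # placeholder output from a hypothetical model
+ is_correct = normalize_answer(prediction) == normalize_answer(example["answer"])
+ print(example["question"], "->", example["answer"], "| correct:", is_correct)
+ ```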
+
+ ### 🔹 Other available configurations
+
+ ```python
+ [
+     "L1_single", "L2_objects", "L3_2d_spatial",
+     "L4_occ", "L4_pose", "L5_6d_spatial", "L5_collision"
+ ]
+ ```
+
+ You can swap `name="..."` in `load_dataset(...)` to evaluate different spatial reasoning capabilities.
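+
+ To benchmark across every subset, you can loop over the configuration names above. This is a sketch only, assuming each configuration exposes a `train` split as in the earlier example; `evaluate` stands in for whatever inference and scoring code you use.
+
+ ```python
+ from datasets import load_dataset
+
+ CONFIG_NAMES = [
+     "L1_single", "L2_objects", "L3_2d_spatial",
+     "L4_occ", "L4_pose", "L5_6d_spatial", "L5_collision",
+ ]
+
+ for name in CONFIG_NAMES:
+     subset = load_dataset("RyanWW/Spatial457", name=name, split="train")
+     print(f"{name}: {len(subset)} examples")
+     # evaluate(subset)  # hypothetical: run your model over `subset` and score its answers
+ ```
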
  ---

  year = {2025},
  url = {https://arxiv.org/abs/2502.08636}
  }
+ ```