---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- image-text-to-text
pretty_name: Spatial457
tags:
- spatial-reasoning
- multimodal
---

<div align="center">
  <img src="https://xingruiwang.github.io/projects/Spatial457/static/images/icon_name.png" alt="Spatial457 Logo" width="240"/>
</div>

<h1 align="center">
  <a href="https://arxiv.org/abs/2502.08636">
    Spatial457: A Diagnostic Benchmark for 6D Spatial Reasoning of Large Multimodal Models
  </a>
</h1>

<p align="center">
  <a href="https://xingruiwang.github.io/">Xingrui Wang</a><sup>1</sup>,
  <a href="#">Wufei Ma</a><sup>1</sup>,
  <a href="#">Tiezheng Zhang</a><sup>1</sup>,
  <a href="#">Celso M. de Melo</a><sup>2</sup>,
  <a href="#">Jieneng Chen</a><sup>1</sup>,
  <a href="#">Alan Yuille</a><sup>1</sup>
</p>

<p align="center">
  <sup>1</sup> Johns Hopkins University &nbsp;&nbsp;&nbsp;&nbsp;
  <sup>2</sup> DEVCOM Army Research Laboratory
</p>

<p align="center">
  <a href="https://xingruiwang.github.io/projects/Spatial457/">🌐 Project Page</a> &nbsp;|&nbsp;
  <a href="https://arxiv.org/abs/2502.08636">📄 Paper</a> &nbsp;|&nbsp;
  <a href="https://huggingface.co/datasets/RyanWW/Spatial457">🤗 Dataset</a> &nbsp;|&nbsp;
  <a href="https://github.com/XingruiWang/Spatial457">💻 Code</a>
</p>

<p align="center">
  <img src="https://xingruiwang.github.io/projects/Spatial457/static/images/teaser.png" alt="Spatial457 Teaser" width="80%"/>
</p>

---

## 🧠 Introduction

**Spatial457** is a diagnostic benchmark designed to evaluate **6D spatial reasoning** in large multimodal models (LMMs). It systematically evaluates four core spatial capabilities:

- 🧱 Multi-object understanding  
- 🧭 2D spatial localization  
- 📦 3D spatial localization  
- 🔄 3D orientation estimation  

These are assessed across **five difficulty levels** and **seven diverse question types**, ranging from simple object queries to complex reasoning about physical interactions.

---

## 📂 Dataset Structure

The dataset is organized as follows:

```
Spatial457/
├── images/                     # RGB images used in VQA tasks
├── questions/                  # JSONs for each subtask
│   ├── L1_single.json
│   ├── L2_objects.json
│   ├── L3_2d_spatial.json
│   ├── L4_occ.json
│   └── ...
├── Spatial457.py               # Hugging Face dataset loader script
└── README.md                   # Documentation
```

Each JSON file contains a list of VQA examples, where each item includes:

- `image_filename`: name of the image the question refers to
- `question`: natural-language question
- `answer`: boolean, string, or number
- `program`: symbolic program (optional)
- `question_index`: unique identifier

This modular structure supports scalable multi-task evaluation across levels and reasoning types.
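
For quick inspection without the `datasets` library, you can also read a subtask JSON directly. A minimal sketch, assuming the repository has been downloaded locally with the layout shown in the tree above (whether the examples sit at the top level or under a `questions` key is an assumption the code guards against):

```python
import json

# Assumed local path, following the directory tree above.
with open("Spatial457/questions/L1_single.json") as f:
    data = json.load(f)

# The card states each file contains a list of VQA examples; some CLEVR-style
# files nest that list under a "questions" key, so handle both layouts.
examples = data["questions"] if isinstance(data, dict) else data

# Print the fields documented above for the first few examples.
for ex in examples[:3]:
    print(ex["question_index"], ex["question"], "->", ex["answer"])
```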

---

## 🛠️ Dataset Usage

You can load the dataset directly using the Hugging Face 🤗 `datasets` library:

### 🔹 Load a specific subtask (e.g., L5_6d_spatial)

```python
from datasets import load_dataset

# Load the validation split of the L5_6d_spatial subtask.
dataset = load_dataset("RyanWW/Spatial457", name="L5_6d_spatial", split="validation", data_dir=".")
```

Each example is a dictionary like:

```python
{
  'image': <PIL.Image.Image>,
  'image_filename': 'superCLEVR_new_000001.png',
  'question': 'Is the large red object in front of the yellow car?',
  'answer': 'True',
  'program': [...],
  'question_index': 100001
}
```

### 🔹 Other available configurations

```python
[
  "L1_single", "L2_objects", "L3_2d_spatial",
  "L4_occ", "L4_pose", "L5_6d_spatial", "L5_collision"
]
```

You can swap `name="..."` in `load_dataset(...)` to evaluate different spatial reasoning capabilities.
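
For example, a short sketch that loads every configuration in turn (the `validation` split and `data_dir="."` follow the usage shown above):

```python
from datasets import load_dataset

SUBTASKS = [
    "L1_single", "L2_objects", "L3_2d_spatial",
    "L4_occ", "L4_pose", "L5_6d_spatial", "L5_collision",
]

# Load each configuration and report its size.
for name in SUBTASKS:
    ds = load_dataset("RyanWW/Spatial457", name=name, split="validation", data_dir=".")
    print(f"{name}: {len(ds)} examples")
```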

## 📊 Benchmark

We benchmarked a range of state-of-the-art models, including GPT-4o, Gemini, Claude, and several open-source LMMs, across all subsets. The results below were updated after rerunning the evaluation; they differ slightly from the numbers in the published paper, but the conclusions are unchanged.

Inference is supported through [VLMEvalKit](https://github.com/open-compass/VLMEvalKit): set the dataset name to `Spatial457`. The detailed inference scripts are available [here](https://github.com/XingruiWang/VLMEvalKit).
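
If you prefer a custom loop over VLMEvalKit, scoring a subtask reduces to exact-match against the `answer` field. A minimal sketch, where `model_predict` is a hypothetical stand-in for your model's inference call (not part of this dataset or any library):

```python
from datasets import load_dataset

def model_predict(image, question: str) -> str:
    """Hypothetical placeholder: call your LMM here and return a string answer."""
    raise NotImplementedError

ds = load_dataset("RyanWW/Spatial457", name="L5_6d_spatial", split="validation", data_dir=".")

correct = 0
for ex in ds:
    pred = model_predict(ex["image"], ex["question"])
    # Answers are stored as strings (e.g., 'True'); compare case-insensitively.
    correct += pred.strip().lower() == str(ex["answer"]).strip().lower()

print(f"Accuracy: {correct / len(ds):.2%}")
```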


### Spatial457 Evaluation Results 

| Model                  | L1_single | L2_objects | L3_2d_spatial | L4_occ | L4_pose | L5_6d_spatial | L5_collision |
|------------------------|-----------|------------|---------------|--------|---------|----------------|---------------|
| **GPT-4o**             | 72.39     | 64.54      | 58.04         | 48.87  | 43.62   | 43.06          | 44.54         |
| **GeminiPro-1.5**      | 69.40     | 66.73      | 55.12         | 51.41  | 44.50   | 43.11          | 44.73         |
| **Claude 3.5 Sonnet**  | 61.04     | 59.20      | 55.20         | 40.49  | 41.38   | 38.81          | 46.27         |
| **Qwen2-VL-7B-Instruct** | 62.84   | 58.90      | 53.73         | 26.85  | 26.83   | 36.20          | 34.84         |

---

## 📚 Citation

```bibtex
@inproceedings{wang2025spatial457,
  title     = {Spatial457: A Diagnostic Benchmark for 6D Spatial Reasoning of Large Multimodal Models},
  author    = {Wang, Xingrui and Ma, Wufei and Zhang, Tiezheng and de Melo, Celso M and Chen, Jieneng and Yuille, Alan},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2025},
  url       = {https://arxiv.org/abs/2502.08636}
}
```