---
license: mit
task_categories:
- visual-question-answering
language:
- en
tags:
- spatial
- multimodal
size_categories:
- 1K<n<10K
---
# Dataset Card for TOPVIEWRS

<!-- Provide a quick summary of the dataset. -->
The TOPVIEWRS (Top-View Reasoning in Space) benchmark is a multimodal benchmark intended to evaluate the spatial reasoning ability of current Vision-Language Models.
It consists of 11,384 multiple-choice questions with either a realistic or a semantic top-view map as the visual input, across 4 perception and reasoning tasks with different levels of complexity.
For details, please refer to the [project page](https://topviewrs.github.io/) and the [paper](https://arxiv.org/pdf/2406.02537).

## Dataset Description

- **Homepage/Repository:** [https://topviewrs.github.io/](https://topviewrs.github.io/)
- **Paper:** [TOPVIEWRS: Vision-Language Models as Top-View Spatial Reasoners](https://arxiv.org/pdf/2406.02537)
- **Point of Contact:** [[email protected]](mailto:[email protected])

## Dataset Details

### Dataset Features

<!-- Provide a longer summary of what this dataset is. -->
- **Multi-Scale Top-View Maps**: Multi-scale top-view maps of single rooms and full houses introduce variation in the granularity of the entities (objects or rooms) involved in spatial reasoning.
- **Realistic Environmental Scenarios with Rich Object Sets**: Real-world environments from indoor scenes, with 80 objects per scene on average.
- **Structured Question Framework**: Four tasks comprising 9 sub-tasks in total, allowing for fine-grained evaluation and analysis of models’ capabilities from various perspectives and levels of granularity.

### Dataset Statistics

The TOPVIEWRS evaluation dataset comprises a total of 11,384 multiple-choice questions after human verification, with 5,539 questions associated with realistic top-view maps and 5,845 with semantic top-view maps.
The answers are uniformly distributed over choices A (25.5%), B (24.6%), C (24.5%), and D (25.4%).

The maps are collected from the Matterport3D dataset, which includes 90 building-scale scenes with instance-level semantic and room-level region annotations in 3D meshes.
We filter these to exclude multi-floor and low-quality scenes, selecting 7 scenes with an average of 80 objects and 12 rooms each.

**Note**: *We only release part of the benchmark (2 different scenarios covering all the tasks of the benchmark) in this dataset card to avoid data contamination.
For full access to the benchmark, please get in touch with [Chengzu Li](https://chengzu-li.github.io) via email: [[email protected]](mailto:[email protected])*

### Uses

```
from datasets import load_dataset

# MAP_TYPE, TASK_SPLIT, and IMAGE_SAVE_DIR are described below.
data = load_dataset(
    "cl917/topviewrs",
    trust_remote_code=True,
    map_type=MAP_TYPE,
    task_split=TASK_SPLIT,
    image_save_dir=IMAGE_SAVE_DIR
)
```

To use the dataset, you have to specify several arguments when calling `load_dataset` (see the example after this list):
- `map_type`: should be one of `['realistic', 'semantic']`
- `task_split`: should be one of `['top_view_recognition', 'top_view_localization', 'static_spatial_reasoning', 'dynamic_spatial_reasoning']`
- `image_save_dir`: the directory where you would like the images to be saved

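As a concrete sketch, the call below uses illustrative argument values (semantic maps, the `top_view_recognition` split, and an arbitrary local directory); any valid combination from the lists above works the same way.

```
from datasets import load_dataset

# Illustrative values, not required defaults.
data = load_dataset(
    "cl917/topviewrs",
    trust_remote_code=True,
    map_type="semantic",
    task_split="top_view_recognition",
    image_save_dir="./topviewrs_images",
)
print(data)  # shows the resulting splits and features
```
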
### Data Instances

For example, an instance from the `top_view_recognition` task is:

```
{
    'index': 0,
    'scene_id': '17DRP5sb8fy',
    'question': 'Which of the following objects are in the room?',
    'choices': ['shelving', 'bed', 'toilet', 'seating'],
    'labels': ['bed'],
    'choice_type': '<OBJECT>',
    'map_path': '<IMAGE_SAVE_DIR>/data/mp3d/17DRP5sb8fy/semantic/17DRP5sb8fy_0_0.png',
    'question_ability': 'object_recognition'
}
```

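Note that the map is stored as a file path (`map_path`) rather than an embedded image, so it can be opened with Pillow before being passed to a model. A minimal sketch, where the `train` split name is an assumption:

```
from PIL import Image

example = data["train"][0]  # split name assumed; adjust to the actual split
image = Image.open(example["map_path"]).convert("RGB")
print(image.size, example["question"])
```
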
### Data Fields

Every example has the following fields:
- `index`: an `int` feature
- `scene_id`: a `string` feature, the unique id of the scene from Matterport3D
- `question`: a `string` feature
- `choices`: a sequence of `string` features, the choices for the multiple-choice question
- `labels`: a sequence of `string` features, the answers to the multiple-choice question. The label's position in `choices` determines whether it is A, B, C, or D (see the sketch after this list).
- `choice_type`: a `string` feature
- `map_path`: a `string` feature, the path of the input image
- `question_ability`: a `string` feature, the sub-task for fine-grained evaluation and analysis

For the `dynamic_spatial_reasoning` task, there is one additional data field:
- `reference_path`: a sequence of `list[int]` features, the coordinate sequence of the navigation path on the top-view map.

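Since `labels` stores answer strings rather than option letters, a small helper (hypothetical, not part of the released loader) can recover the letter form from the position in `choices`:

```
# Hypothetical helper: map gold label strings to option letters (A/B/C/D)
# based on their position in `choices`.
def labels_to_letters(example):
    letters = "ABCD"
    return [letters[example["choices"].index(label)] for label in example["labels"]]

example = {
    "choices": ["shelving", "bed", "toilet", "seating"],
    "labels": ["bed"],
}
print(labels_to_letters(example))  # ['B']
```
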
## Citation

```
@misc{li2024topviewrs,
    title={TopViewRS: Vision-Language Models as Top-View Spatial Reasoners},
    author={Chengzu Li and Caiqi Zhang and Han Zhou and Nigel Collier and Anna Korhonen and Ivan Vulić},
    year={2024},
    eprint={2406.02537},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->