# Spatial Visualization Benchmark

This repository contains the Spatial Visualization Benchmark (SpatialViz-Bench). The evaluation code is available at [wangst0181/Spatial-Visualization-Benchmark](https://github.com/wangst0181/Spatial-Visualization-Benchmark).

## Dataset Description

SpatialViz-Bench evaluates the spatial visualization capabilities of multimodal large language models (MLLMs); spatial visualization is a key component of spatial ability. Targeting its four sub-abilities (mental rotation, mental folding, visual penetration, and mental animation), we designed three tasks for each, forming a comprehensive evaluation system of 12 tasks. Each task has two or three difficulty levels, and each level contains 40 or 50 test cases, for a total of 1,180 question-answer pairs.

**Spatial Visualization**

- Mental Rotation
  - 2D Rotation: Two difficulty levels, based on paper size and pattern complexity.
  - 3D Rotation: Two difficulty levels, based on the size of the cube stack.
  - Three-view Projection: Two categories, orthographic views of cube stacks and orthographic views of part models.
- Mental Folding
  - Paper Folding: Three difficulty levels, based on paper size, number of operations, and number of holes.
  - Cube Unfolding: Three difficulty levels, based on pattern complexity (whether the pattern is centrally symmetric).
  - Cube Reconstruction: Three difficulty levels, based on pattern complexity.
- Visual Penetration
  - Cross-Section: Three difficulty levels, based on the number of combined objects and the cross-section direction.
  - Cube Count Inference: Three difficulty levels, based on the number of reference views and the size of the cube stack.
  - Sliding Blocks: Two difficulty levels, based on the size of the cube stack and the number of disassembled blocks.
- Mental Animation
  - Arrow Movement: Two difficulty levels, based on the number of arrows and the number of operations.
  - Block Movement: Two difficulty levels, based on the size of the cube stack and the number of movements.
  - Mechanical System: Two difficulty levels, based on the complexity of the system structure.

<img src="./Tasks.png" alt="Overview of the 12 SpatialViz-Bench tasks" style="zoom:60%;" />

## Dataset Usage

### Data Downloading

The `test-00000-of-00001.parquet` file contains the complete dataset annotations and is ready for processing with HF Datasets. It can be loaded using the following code:

```python
from datasets import load_dataset

SpatialViz_bench = load_dataset("Anonymous285714/SpatialViz-Bench")
```
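
The annotations can also be read directly with pandas. A minimal sketch, assuming the parquet file has been downloaded locally (reading parquet requires `pyarrow` or `fastparquet`):

```python
import pandas as pd

# Assumes the annotation file has been downloaded from this repository.
df = pd.read_parquet("test-00000-of-00001.parquet")
print(len(df))              # 1180 question-answer pairs
print(df.columns.tolist())  # Category, Task, Level, Image_id, Question, Choices, Answer, Explanation
```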

Additionally, we provide the images in `*.zip`. The hierarchical structure of the folder is as follows:

```
./SpatialViz_Bench_images
├── MentalAnimation
│   ├── ArrowMoving
│   │   ├── Level0
│   │   └── Level1
│   ├── BlockMoving
│   │   ├── Level0
│   │   └── Level1
│   └── MechanicalSystem
│       ├── Level0
│       └── Level1
├── MentalFolding
│   ├── PaperFolding
│   │   ├── Level0
│   │   ├── Level1
│   │   └── Level2
│   ├── CubeReconstruction
│   │   ├── Level0
│   │   ├── Level1
│   │   └── Level2
│   └── CubeUnfolding
│       ├── Level0
│       ├── Level1
│       └── Level2
├── MentalRotation
│   ├── 2DRotation
│   │   ├── Level0
│   │   └── Level1
│   ├── 3DRotation
│   │   ├── Level0
│   │   └── Level1
│   └── 3ViewProjection
│       ├── Level0-Cubes3View
│       └── Level1-CAD3View
└── VisualPenetration
    ├── CrossSection
    │   ├── Level0
    │   ├── Level1
    │   └── Level2
    ├── CubeCounting
    │   ├── Level0
    │   ├── Level1
    │   └── Level2
    └── CubeAssembly
        ├── Level0
        └── Level1
```
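
A minimal sketch for extracting the image archive with Python's standard library (the archive name below is an assumption; substitute the actual `*.zip` file you downloaded):

```python
import zipfile
from pathlib import Path

# Hypothetical archive name; replace with the *.zip file provided in this repository.
zipfile.ZipFile("SpatialViz_Bench_images.zip").extractall(".")

# Sanity check: the four category folders should be present.
for p in sorted(Path("SpatialViz_Bench_images").iterdir()):
    print(p.name)  # MentalAnimation, MentalFolding, MentalRotation, VisualPenetration
```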

### Data Format

The `image_path` can be obtained as follows:

```python
sample = SpatialViz_bench["test"][0]
print(sample)  # Print the first sample

category = sample["Category"]
task = sample["Task"]
level = sample["Level"]
image_id = sample["Image_id"]
question = sample["Question"]
choices = sample["Choices"]
answer = sample["Answer"]
explanation = sample["Explanation"]

# Adjust the "./images" prefix to wherever the image folder was extracted
# (e.g. "./SpatialViz_Bench_images").
image_path = f"./images/{category}/{task}/{level}/{image_id}.png"
```
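
The image referenced by `image_path` can then be opened with Pillow. A minimal sketch, assuming the image archive has been extracted so that the path resolves locally:

```python
from PIL import Image

# Assumes the images have been extracted so that image_path points to an existing file.
image = Image.open(image_path)
print(image.size)
print(question)
print(choices)
```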

The dataset is provided in Parquet format and contains the following attributes:

```json
{
    "Category": "MentalAnimation",
    "Task": "ArrowMoving",
    "Level": "Level0",
    "Image_id": "0-3-3-2",
    "Question": "In the diagram, the red arrow is the initial arrow, and the green arrow is the final arrow. The arrow can move in four directions (forward, backward, left, right), where 'forward' always refers to the current direction the arrow is pointing. After each movement, the arrow's direction is updated to the direction of movement. Which of the following paths can make the arrow move from the starting position to the ending position? Please answer from options A, B, C, or D.",
    "Choices": [
        "(Left, 2 units)--(Left, 1 unit)",
        "(Forward, 1 unit)--(Backward, 1 unit)",
        "(Forward, 1 unit)--(Backward, 2 units)",
        "(Forward, 1 unit)--(Left, 1 unit)"
    ],
    "Answer": "D",
    "Explanation": {
        "D": "Option D is correct because the initial arrow can be transformed into the final arrow.",
        "CAB": "Option CAB is incorrect because the initial arrow cannot be transformed into the final arrow."
    }
}
```
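
Note that `Choices` is an unlabeled list while `Answer` is an option letter, so a prompt needs to attach the letters A-D to the choices. A minimal sketch of one such prompt template (an illustration, not the official evaluation prompt):

```python
def build_prompt(question: str, choices: list[str]) -> str:
    # Label the unlabeled choice strings with the letters A-D.
    labeled = [f"{letter}. {choice}" for letter, choice in zip("ABCD", choices)]
    return question + "\n" + "\n".join(labeled)

print(build_prompt(question, choices))
```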

### Evaluation Metric

Since most answer options are presented as reference images, all tasks are constructed as multiple-choice questions with exactly one correct answer per question. Model performance is measured by response accuracy.
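
A minimal sketch of the accuracy computation, assuming model responses have already been parsed into single option letters (an illustration, not the released evaluation script):

```python
def accuracy(predictions: list[str], references: list[str]) -> float:
    # Each entry is a single option letter such as "A".
    correct = sum(p.strip().upper() == r.strip().upper()
                  for p, r in zip(predictions, references))
    return correct / len(references)

print(accuracy(["D", "B", "A"], ["D", "C", "A"]))  # 0.666...
```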

## Citation

If you use SpatialViz-Bench in your research, please cite our paper:

```bibtex
@misc{wang2025spatialviz,
  author = {Siting Wang and Luoyang Sun and Cheng Deng and Kun Shao and Minnan Pei and Zheng Tian and Haifeng Zhang and Jun Wang},
  title  = {SpatialViz-Bench: Automatically Generated Spatial Visualization Reasoning Tasks for MLLMs},
  year   = {2025},
  url    = {https://github.com/wangst0181/Spatial-Visualization-Benchmark-SpatialViz-Bench/blob/main/SpatialViz-Bench_Automatically_Generated_Spatial_Visualization_Reasoning_Tasks_for_MLLMs.pdf},
  note   = {Available on GitHub}
}
```