pretty_name: Helvipad
size_categories:
- 10K<n<100K
---

# <span style="font-variant: small-caps;">Helvipad</span>: A Real-World Dataset for Omnidirectional Stereo Depth Estimation

## Abstract

Despite considerable progress in stereo depth estimation, omnidirectional imaging remains underexplored, mainly due to the lack of appropriate data.
We introduce <span style="font-variant: small-caps;">Helvipad</span>, a real-world dataset for omnidirectional stereo depth estimation, consisting of 40K frames from video sequences captured in diverse environments, including crowded indoor and outdoor scenes under varied lighting conditions.
Collected using two 360° cameras in a top-bottom setup and a LiDAR sensor, the dataset includes accurate depth and disparity labels obtained by projecting 3D point clouds onto equirectangular images.
Additionally, we provide an augmented training set with significantly increased label density obtained through depth completion.
We benchmark leading stereo depth estimation models for both standard and omnidirectional images.
The results show that while recent stereo methods perform decently, a significant challenge persists in accurately estimating depth in omnidirectional imaging.
To address this, we introduce necessary adaptations to stereo models, achieving improved performance.
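To illustrate the labeling principle described above, the following is a minimal sketch of projecting LiDAR points onto an equirectangular image. The axis convention, resolution handling, and calibration here are assumptions for illustration and may differ from the actual Helvipad pipeline.

```python
import numpy as np

def project_equirectangular(points: np.ndarray, width: int, height: int):
    """Project 3D points (N, 3), given in a camera frame, onto an equirectangular image.

    Assumed convention: x right, y down, z forward; the actual Helvipad calibration may differ.
    Returns pixel coordinates (u, v) and the radial distance used as the depth label.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)            # radial distance to each point
    lon = np.arctan2(x, z)                            # azimuth in [-pi, pi]
    lat = np.arcsin(np.clip(y / depth, -1.0, 1.0))    # elevation in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width             # horizontal pixel coordinate
    v = (lat / np.pi + 0.5) * height                  # vertical pixel coordinate
    return u, v, depth
```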
## Dataset Structure

The dataset is organized into training and testing subsets with the following structure:

```
helvipad/
├── train/
│   ├── depth_maps                # Depth maps generated from LiDAR data
│   ├── depth_maps_augmented      # Augmented depth maps using depth completion
│   ├── disparity_maps            # Disparity maps computed from depth maps
│   ├── disparity_maps_augmented  # Augmented disparity maps using depth completion
│   ├── images_top                # Top-camera RGB images
│   ├── images_bottom             # Bottom-camera RGB images
│   ├── LiDAR_pcd                 # Original LiDAR point cloud data
├── test/
│   ├── depth_maps                # Depth maps generated from LiDAR data
│   ├── disparity_maps            # Disparity maps computed from depth maps
│   ├── images_top                # Top-camera RGB images
│   ├── images_bottom             # Bottom-camera RGB images
│   ├── LiDAR_pcd                 # Original LiDAR point cloud data
```
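As a quick sanity check of how these files might be consumed, here is a minimal loading sketch. The file names, image format, and depth encoding (16-bit PNG storing millimetres) are assumptions for illustration; consult the project page for the authoritative data format.

```python
import numpy as np
from PIL import Image

# Hypothetical file names; the actual naming scheme may differ.
top = np.asarray(Image.open("helvipad/train/images_top/000000.png"))
bottom = np.asarray(Image.open("helvipad/train/images_bottom/000000.png"))

# Assumption: depth maps are 16-bit PNGs storing depth in millimetres.
depth_raw = np.asarray(Image.open("helvipad/train/depth_maps/000000.png"), dtype=np.float64)
depth_m = depth_raw / 1000.0

# LiDAR-based labels are sparse: pixels without a projected return are assumed to be 0.
valid = depth_raw > 0
print(f"valid pixels: {valid.mean():.1%}, median depth: {np.median(depth_m[valid]):.2f} m")
```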
## Benchmark

We evaluate multiple state-of-the-art and popular stereo matching methods, for both standard and 360° images. All models are trained on a single NVIDIA A100 GPU with the largest possible batch size to ensure comparable use of computational resources.

| Method | Type | Disp-MAE (°) | Disp-RMSE (°) | Disp-MARE | Depth-MAE (m) | Depth-RMSE (m) | Depth-MARE |
|--------------------|----------------|--------------|---------------|-----------|---------------|----------------|------------|
| [PSMNet](https://arxiv.org/abs/1803.08669) | Stereo | 0.33 | 0.54 | 0.20 | 2.79 | 6.17 | 0.29 |
| [360SD-Net](https://arxiv.org/abs/1911.04460) | 360° Stereo | 0.21 | 0.42 | 0.18 | 2.14 | 5.12 | 0.15 |
| [IGEV-Stereo](https://arxiv.org/abs/2303.06615) | Stereo | 0.22 | 0.41 | 0.17 | 1.85 | 4.44 | 0.15 |
| 360-IGEV-Stereo | 360° Stereo | **0.18** | **0.39** | **0.15** | **1.77** | **4.36** | **0.14** |
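The depth metrics above follow the usual definitions of mean absolute error, root-mean-square error, and mean absolute relative error over labeled pixels. A minimal sketch (not the official evaluation script, whose masking and averaging details may differ) is shown below; the disparity metrics would be computed analogously on the angular disparity maps.

```python
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """MAE, RMSE and mean absolute relative error over valid ground-truth pixels."""
    mask = gt > 0                          # assume unlabeled pixels are stored as 0
    err = np.abs(pred[mask] - gt[mask])
    return {
        "MAE": float(err.mean()),
        "RMSE": float(np.sqrt((err ** 2).mean())),
        "MARE": float((err / gt[mask]).mean()),
    }
```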
## Project Page

For more information, visualizations, and updates, visit the **[project page](https://vita-epfl.github.io/Helvipad/)**.
## Citation

If you use the Helvipad dataset in your research, please cite our paper:

```bibtex
@misc{zayene2024helvipad,
  author        = {Zayene, Mehdi and Endres, Jannik and Havolli, Albias and Corbière, Charles and Cherkaoui, Salim and Ben Ahmed Kontouli, Alexandre and Alahi, Alexandre},
  title         = {Helvipad: A Real-World Dataset for Omnidirectional Stereo Depth Estimation},
  year          = {2024},
  eprint        = {2403.16999},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}
```
## License

This dataset is licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).
## Acknowledgments

This work was supported by the [EPFL Center for Imaging](https://imaging.epfl.ch/) through a Collaborative Imaging Grant.
We thank the VITA lab members for their valuable feedback, which helped to enhance the quality of this manuscript.
We also express our gratitude to Dr. Simone Schaub-Meyer and Oliver Hahn for their insightful advice during the project's final stages.