![arch](./ArchitectureHybridDepth.png)

## Usage

### Preparation

1. **Clone the repository and install the dependencies:**

   ```bash
   git clone https://github.com/cake-lab/HybridDepth.git
   cd HybridDepth
   conda env create -f environment.yml
   conda activate hybriddepth
   ```

2. **Download Necessary Files:**

   * Download the necessary file [here](https://github.com/cake-lab/HybridDepth/releases/download/v1.0/DFF-DFV.tar) and place it in the `checkpoints` directory (see the sketch below).
   * Download the checkpoints listed [here](#pre-trained-models) and put them under the `checkpoints` directory.
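
If you prefer the command line, a minimal sketch for fetching and unpacking the DFF-DFV weights (the release URL is the one linked above; the archive layout is an assumption, so verify the extracted files land where the repo expects them):

```bash
mkdir -p checkpoints
# Release asset linked above, placed under checkpoints/ as instructed
wget -P checkpoints https://github.com/cake-lab/HybridDepth/releases/download/v1.0/DFF-DFV.tar
tar -xf checkpoints/DFF-DFV.tar -C checkpoints
```
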
#### Dataset Preparation

1. **NYU:** Download the dataset as per the instructions given [here](https://github.com/cleinc/bts/tree/master/pytorch#nyu-depvh-v2).
2. **DDFF12:** Download the dataset as per the instructions given [here](https://github.com/fuy34/DFV).
3. **ARKitScenes:** Download the dataset as per the instructions given [here](https://github.com/cake-lab/Mobile-AR-Depth-Estimation).

### Using the HybridDepth model for prediction

For inference, you can run the provided notebook `test.ipynb` or use the following code:

```python
import torch

# DepthNetModule is defined in this repository (see `test.ipynb` for the exact import)
# Load the model checkpoint
model_path = './checkpoints/checkpoint.ckpt'
model = DepthNetModule()
# Load the weights
model.load_state_dict(torch.load(model_path))

model.eval()
model = model.to('cuda')
```
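
If `load_state_dict` complains about unexpected keys, note that `.ckpt` files are typically PyTorch Lightning checkpoints that nest the weights under a `state_dict` key; a hedged fallback (an assumption about the checkpoint format, not confirmed for this repo):

```python
# Fall back to the nested 'state_dict' if this is a Lightning checkpoint
ckpt = torch.load(model_path, map_location='cpu')
model.load_state_dict(ckpt.get('state_dict', ckpt))
```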

After loading the model, you can use the following code to process the input images and get the depth map:

```python
from utils.io import prepare_input_image

data_dir = 'focal stack images directory'

# Load the focal stack images
focal_stack, rgb_img, focus_dist = prepare_input_image(data_dir)

# Inference
with torch.no_grad():
    out = model(rgb_img, focal_stack, focus_dist)

metric_depth = out[0].squeeze().cpu().numpy()  # The metric depth
```
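
To sanity-check the output, a minimal sketch for saving and visualizing the predicted depth (assumes `numpy` and `matplotlib` are installed; the normalization is for display only):

```python
import numpy as np
import matplotlib.pyplot as plt

np.save('depth_map.npy', metric_depth)  # keep the raw metric depth

# Normalize to [0, 1] purely for visualization
vis = (metric_depth - metric_depth.min()) / (metric_depth.max() - metric_depth.min() + 1e-8)
plt.imsave('depth_map.png', vis, cmap='inferno')
```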

### Evaluation

First, set up the configuration file `config.yaml` in the `configs` directory. We already provide configuration files for the three datasets there. In the configuration file, you can specify the path to the dataloader, the path to the model, and other hyperparameters. Here is an example:

```yaml
data:
  class_path: dataloader.dataset.NYUDataModule # Path to your dataloader module in dataset.py
  init_args:
    nyuv2_data_root: 'path to the NYUv2 dataset or other datasets' # path to the specific dataset
    img_size: [480, 640] # Adjust if your DataModule expects a tuple for img_size
    remove_white_border: True
    num_workers: 0 # If you are using synthetic data, you don't need multiple workers
    use_labels: True

model:
  invert_depth: True # If the model outputs inverted depth

ckpt_path: checkpoints/checkpoint.ckpt
```

Then specify the configuration file in the `test.sh` script:

```bash
python cli_run.py test --config configs/config_file_name.yaml
```

Finally, run the following command:

```bash
cd scripts
sh evaluate.sh
```
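
For reference, these are the standard monocular-depth metrics such an evaluation typically reports (a minimal NumPy sketch of the usual definitions; the repo's evaluation code may differ in details such as masking):

```python
import numpy as np

def depth_metrics(pred, gt):
    """AbsRel, RMSE, and delta-1 accuracy over valid (gt > 0) pixels."""
    mask = gt > 0
    pred, gt = pred[mask], gt[mask]
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    delta1 = np.mean(np.maximum(pred / gt, gt / pred) < 1.25)
    return abs_rel, rmse, delta1
```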

### Training

First, set up the configuration file `config.yaml` in the `configs` directory. You only need to specify the path to the dataset and the batch size; the rest of the hyperparameters are already set. For example, you can use the following configuration file for training on the NYUv2 dataset:

```yaml
...
model:
  invert_depth: True
  # learning rate
  lr: 3e-4 # you can adjust this value
  # weight decay
  wd: 0.001 # you can adjust this value

data:
  class_path: dataloader.dataset.NYUDataModule # Path to your dataloader module in dataset.py
  init_args:
    nyuv2_data_root: 'path to the NYUv2 dataset or other datasets' # path to the specific dataset
    img_size: [480, 640] # Adjust if your NYUDataModule expects a tuple for img_size
    remove_white_border: True
    batch_size: 24 # Adjust the batch size
    num_workers: 0 # If you are using synthetic data, you don't need multiple workers
    use_labels: True

ckpt_path: null
```

Then specify the configuration file in the `train.sh` script:

```bash
python cli_run.py train --config configs/config_file_name.yaml
```

Finally, run the following command:

```bash
cd scripts
sh train.sh
```
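
To resume an interrupted run, pointing `ckpt_path` at a saved checkpoint should work if `cli_run.py` follows standard PyTorch Lightning CLI behavior (an assumption; the file name below is hypothetical):

```yaml
ckpt_path: checkpoints/last.ckpt # hypothetical checkpoint path; resumes training from here
```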

## Citation

If our work assists you in your research, please cite it as follows:

```bibtex
@misc{ganj2024hybriddepthrobustdepthfusion,
  title={HybridDepth: Robust Depth Fusion for Mobile AR by Leveraging Depth from Focus and Single-Image Priors},
  author={Ashkan Ganj and Hang Su and Tian Guo},
  year={2024},
  eprint={2407.18443},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2407.18443},
}
```