:bar_chart: Benchmark

We provide in this release (benchmark.zip) the following 17 entries as a benchmark for evaluating NVS models. We hope this will help standardize the evaluation of NVS models and facilitate fair comparison between different methods.

| Dataset | Split | Path | Content | Image Preprocessing | Image Postprocessing |
| --- | --- | --- | --- | --- | --- |
| OmniObject3D | S (SV3D), O (Ours) | `omniobject3d` | `train_test_split_*.json` | center crop to 576 | \ |
| GSO | S (SV3D), O (Ours) | `gso` | `train_test_split_*.json` | center crop to 576 | \ |
| RealEstate10K | D (4DiM) | `re10k-4dim` | `train_test_split_*.json` | center crop to 576 | resize to 256 |
| | R (ReconFusion) | `re10k` | `train_test_split_*.json` | center crop to 576 | \ |
| | P (pixelSplat) | `re10k-pixelsplat` | `train_test_split_*.json` | center crop to 576 | resize to 256 |
| | V (ViewCrafter) | `re10k-viewcrafter` | `images/*.png`, `transforms.json`, `train_test_split_*.json` | resize the shortest side to 576 (`--L_short 576`) | center crop |
| LLFF | R (ReconFusion) | `llff` | `train_test_split_*.json` | center crop to 576 | \ |
| DTU | R (ReconFusion) | `dtu` | `train_test_split_*.json` | center crop to 576 | \ |
| CO3D | R (ReconFusion) | `co3d` | `train_test_split_*.json` | center crop to 576 | \ |
| | V (ViewCrafter) | `co3d-viewcrafter` | `images/*.png`, `transforms.json`, `train_test_split_*.json` | resize the shortest side to 576 (`--L_short 576`) | center crop |
| WildRGB-D | Oₑ (Ours, easy) | `wildgbd/easy` | `train_test_split_*.json` | center crop to 576 | \ |
| | Oₕ (Ours, hard) | `wildgbd/hard` | `train_test_split_*.json` | center crop to 576 | \ |
| Mip-NeRF360 | R (ReconFusion) | `mipnerf360` | `train_test_split_*.json` | center crop to 576 | \ |
| DL3DV-140 | O (Ours) | `dl3dv10` | `train_test_split_*.json` | center crop to 576 | \ |
| | L (Long-LRM) | `dl3dv140` | `train_test_split_*.json` | center crop to 576 | \ |
| Tanks and Temples | V (ViewCrafter) | `tnt-viewcrafter` | `images/*.png`, `transforms.json`, `train_test_split_*.json` | resize the shortest side to 576 (`--L_short 576`) | center crop |
| | L (Long-LRM) | `tnt-longlrm` | `train_test_split_*.json` | center crop to 576 | \ |
- For entries without `images/*.png` and `transforms.json`, we use the images from the original dataset after converting them into the reconfusion format, which is parsable by `ReconfusionParser` (`seva/data_io.py`). Please note that during this conversion, you should sort the images via `sorted(image_paths)`, so that they are directly indexable by our train/test ids (see the first sketch after this list). We provide an example script, `benchmark/export_reconfusion_example.py`, that converts an existing academic dataset into the scene folders.
- For evaluation and benchmarking, we first apply the operations in the Image Preprocessing column to the model input, and then the operations in the Image Postprocessing column to the model output (see the second sketch after this list). The final processed samples are used for metric computation.
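
To make the indexing concrete, here is a minimal sketch of reading one `train_test_split_*.json` file and selecting the corresponding images from the sorted image list. The key names `train_ids` and `test_ids` are assumptions for illustration; see `benchmark/export_reconfusion_example.py` and `ReconfusionParser` in `seva/data_io.py` for the exact schema.

```python
import glob
import json
import os


def load_split(scene_dir: str, split_json: str):
    """Select train/test image paths for one scene in the reconfusion format.

    Assumes the split file stores integer index lists under "train_ids" and
    "test_ids" (hypothetical key names; check ReconfusionParser for the real ones).
    """
    with open(os.path.join(scene_dir, split_json)) as f:
        split = json.load(f)

    # Images must be sorted so that the ids in the split file index them correctly.
    image_paths = sorted(glob.glob(os.path.join(scene_dir, "images", "*.png")))

    train_paths = [image_paths[i] for i in split["train_ids"]]
    test_paths = [image_paths[i] for i in split["test_ids"]]
    return train_paths, test_paths
```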
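
The image operations named in the table are standard crops and resizes. Below is a minimal sketch with PIL, for illustration only (the benchmark code may implement these differently), using the `re10k-4dim` entry as an example; `run_model` is a hypothetical placeholder for the NVS model.

```python
from PIL import Image


def center_crop(img: Image.Image, size: int) -> Image.Image:
    """Crop a size x size square from the center of the image."""
    w, h = img.size
    left, top = (w - size) // 2, (h - size) // 2
    return img.crop((left, top, left + size, top + size))


def resize_shortest_side(img: Image.Image, size: int) -> Image.Image:
    """Resize so that the shorter side equals `size` (cf. --L_short 576)."""
    w, h = img.size
    scale = size / min(w, h)
    return img.resize((round(w * scale), round(h * scale)), Image.BICUBIC)


# Example for the re10k-4dim entry:
#   1. Image Preprocessing: center crop the input views to 576.
#   2. Run the NVS model on the preprocessed input.
#   3. Image Postprocessing: resize the model output to 256 before metric computation.
# model_input = center_crop(Image.open("input.png"), 576)
# model_output = run_model(model_input)                    # hypothetical model call
# sample = model_output.resize((256, 256), Image.BICUBIC)
```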

Acknowledgment

We would like to thank Wangbo Yu, Aleksander Hołyński, Saurabh Saxena, and Ziwen Chen for their kind clarifications on the experiment settings.