# LucidFusion: Generating 3D Gaussians with Arbitrary Unposed Images

[Hao He](https://heye0507.github.io/)$^{\color{red}{\*}}$, [Yixun Liang](https://yixunliang.github.io/)$^{\color{red}{\*}}$, [Luozhou Wang](https://wileewang.github.io/), [Yuanhao Cai](https://github.com/caiyuanhao1998), [Xinli Xu](https://scholar.google.com/citations?user=lrgPuBUAAAAJ&hl=en&inst=1381320739207392350), [Hao-Xiang Guo](), [Xiang Wen](), [Yingcong Chen](https://www.yingcong.me)$^{\**}$

$\color{red}{\*}$: Equal contribution. \**: Corresponding author.

[Paper PDF (Arxiv)](Coming Soon) | [Project Page](Coming Soon) | [Gradio Demo](Coming Soon)

---

Note: the demo videos are compressed for faster previewing.

Examples of cross-dataset content creation with our framework, *LucidFusion*, running at roughly **13 FPS** on an A800 GPU.
## 🎏 Abstract

We present *LucidFusion*, a flexible end-to-end feed-forward framework that generates high-resolution 3D Gaussians from an arbitrary number of unposed, sparse multi-view images.
<details>
<summary>CLICK for the full abstract</summary>

> Recent large reconstruction models have made notable progress in generating high-quality 3D objects from single images. However, these methods often struggle with controllability, as they lack information from multiple views, leading to incomplete or inconsistent 3D reconstructions. To address this limitation, we introduce LucidFusion, a flexible end-to-end feed-forward framework that leverages the Relative Coordinate Map (RCM). Unlike traditional methods that link images to the 3D world through pose, LucidFusion uses RCM to align geometric features coherently across different views, making it highly adaptable for 3D generation from arbitrary, unposed images. Furthermore, LucidFusion seamlessly integrates with the original single-image-to-3D pipeline, producing detailed 3D Gaussians at a resolution of $512 \times 512$, making it well-suited for a wide range of applications.

</details>
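To make the idea of the Relative Coordinate Map more concrete, the snippet below is a minimal, illustrative sketch of the representation it encodes: every pixel's 3D point re-expressed in a single reference view's camera frame, so that all views share one coordinate system. The tensor shapes, the choice of the first view as the reference, and the function name are our assumptions for illustration only; LucidFusion itself predicts this map directly from unposed images rather than computing it from known poses.

```python
import torch

def relative_coordinate_map(xyz_cam, c2w, ref_idx=0):
    """Illustrative sketch only, not the official LucidFusion implementation.

    xyz_cam: (V, H, W, 3) per-pixel 3D points, each in its own view's camera frame.
    c2w:     (V, 4, 4) camera-to-world matrices, used here only to build the example;
             the actual method predicts the map directly from unposed RGB inputs.
    Returns: (V, H, W, 3) map of every pixel's 3D point in the reference view's
             camera frame, giving all views a shared coordinate system.
    """
    w2c_ref = torch.inverse(c2w[ref_idx])           # world -> reference camera
    to_ref = w2c_ref @ c2w                          # camera_i -> reference camera, (V, 4, 4)
    ones = torch.ones_like(xyz_cam[..., :1])
    xyz_h = torch.cat([xyz_cam, ones], dim=-1)      # homogeneous coordinates, (V, H, W, 4)
    rcm = torch.einsum("vij,vhwj->vhwi", to_ref, xyz_h)[..., :3]
    return rcm
```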
## 🔧 Training Instructions

Our code is now released!

### Install

```
conda create -n LucidFusion python=3.9.19
conda activate LucidFusion

# We developed with torch 2.3.1 + CUDA 11.8 and also tested the latest torch (2.4.1), which works with the latest xformers (0.0.28). For example:
pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu118

# xformers is required! Please refer to https://github.com/facebookresearch/xformers for details.
# [Linux only] CUDA 11.8 version
pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu118

# For 3D Gaussian Splatting, we use LGM's modified version; for details, please refer to https://github.com/3DTopia/LGM
git clone --recursive https://github.com/ashawkey/diff-gaussian-rasterization
pip install ./diff-gaussian-rasterization

# Other dependencies
pip install -r requirements.txt
```

### Pretrained Weights

Our pre-trained weights will be released soon, please check back!

Our current model loads a pre-trained diffusion model for its config. We use stable-diffusion-2-1-base; to download it, simply run:

```
python pretrained/download.py
```

You can omit this step if you already have stable-diffusion-2-1-base; simply update "model_key" with your local SD-2-1 path in the scripts under the scripts/ folder.

## 🔥 Inference

A shell script is provided with example files. To run it, first set up the pre-trained weights as follows:

```
cd LucidFusion
mkdir -p output/demo

# Download the pretrained weights, name the file best.ckpt,
# and place it at LucidFusion/output/demo/best.ckpt
```

We have also provided some preprocessed examples.

For GSO files, the example objects are "alarm", "chicken", "hat", "lunch_bag", "mario", and "shoe1". To run the GSO demo:

```
# You can adjust the "DEMO" field inside gso_demo.sh to load other examples.
bash scripts/gso_demo.sh
```

For the images demo, masks are obtained using preprocess.py. The example objects are "nutella_new", "monkey_chair", and "dog_chair".

```
bash scripts/demo.sh
```

To run the diffusion demo in a single-image-to-multi-view setup, we use the pixel diffusion model trained in CRM, as described in the paper. You can also use other multi-view diffusion models to generate multi-view outputs from a single image. For dependency issues, please check https://github.com/thu-ml/CRM

We also provide LGM's image-to-multi-view diffusion; simply set --crm=false in diffusion_demo.sh. You can change --seed to use a different random seed.

```
bash scripts/diffusion_demo.sh
```

You can also try your own examples! To do that:

1. Obtain images and place them in the examples folder:

```
LucidFusion
├── examples/
|   ├── "your-obj-name"/
|   |   ├── "image_01.png"
|   |   ├── "image_02.png"
|   |   ├── ...
```

2. Run preprocess.py to extract the recentered images and their masks:

```
# Running the following will create two folders (images, masks) in the "your-obj-name" folder.
# You can check whether the extracted masks are correct.
python preprocess.py examples/your-obj-name --outdir examples/your-obj-name
```

3. Modify demo.sh to set DEMO="examples/your-obj-name", then run the script:

```
bash scripts/demo.sh
```

## 🤗 Gradio Demo

We are currently building an online demo of LucidFusion with Gradio. It is still under development and will be coming out soon!
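Until the official demo is available, the snippet below is a minimal sketch of how such a Gradio app could be wired up. The `reconstruct_gaussians` function is a hypothetical placeholder for the LucidFusion inference call and is not part of this repository.

```python
import gradio as gr

def reconstruct_gaussians(images, seed):
    # Hypothetical placeholder: call the LucidFusion pipeline here and
    # return a path to the exported 3D asset (e.g. a .ply of Gaussians).
    raise NotImplementedError("Wire this up to the LucidFusion inference script.")

demo = gr.Interface(
    fn=reconstruct_gaussians,
    inputs=[
        gr.File(file_count="multiple", label="Unposed input images"),
        gr.Number(value=0, label="Random seed"),
    ],
    outputs=gr.Model3D(label="Reconstructed 3D Gaussians"),
    title="LucidFusion (sketch)",
)

if __name__ == "__main__":
    demo.launch()
```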
## 🚧 Todo

- [x] Release the inference code
- [ ] Release our weights
- [ ] Release the Gradio demo
- [ ] Release the Stage 1 and 2 training code

## 📝 Citation

If you find our work useful, please consider citing our paper.

```
TODO
```

## 💼 Acknowledgement

This work is built on many amazing research works and open-source projects:

- [gaussian-splatting](https://github.com/graphdeco-inria/gaussian-splatting) and [diff-gaussian-rasterization](https://github.com/graphdeco-inria/diff-gaussian-rasterization)
- [ZeroShape](https://github.com/zxhuang1698/ZeroShape)
- [LGM](https://github.com/3DTopia/LGM)

Thanks for their excellent work and great contributions to the 3D generation area.