---
license: mit
---
# Unique3d-MVImage-Diffuser Model Card
[🌟GitHub](https://github.com/TingtingLiao/unique3d_diffuser) | [🦸 Project Page](https://wukailu.github.io/Unique3D/) | [🔋Normal Diffuser](https://huggingface.co/Luffuly/unique3d-normal-diffuser)
## Example
Note: the input image must have a **white background**.
![mv-boy](https://github.com/user-attachments/assets/65558519-dbfe-40de-ac13-5d78527541c5)
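If your input has a transparent (alpha) background, you can composite it onto white before running the pipeline. A minimal sketch using PIL; the file names are illustrative:

```python
from PIL import Image

# Composite an RGBA cutout onto a pure white canvas.
img = Image.open("data/boy.png").convert("RGBA")
canvas = Image.new("RGBA", img.size, (255, 255, 255, 255))
Image.alpha_composite(canvas, img).convert("RGB").save("data/boy_white.png")
```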
```python
import torch
import numpy as np
from PIL import Image
from pipeline import StableDiffusionImage2MVCustomPipeline
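
# Load the multi-view image pipeline; class_labels=range(4) corresponds to the four generated views.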
pipe = StableDiffusionImage2MVCustomPipeline.from_pretrained(
"Luffuly/unique3d-mvimage-diffuser",
torch_dtype=torch.float16,
trust_remote_code=True,
class_labels=torch.tensor(range(4)),
).to("cuda")
seed = -1
generator = torch.Generator(device='cuda').manual_seed(seed)
image = Image.open('data/boy.png')
forward_args = dict(
    width=256,
    height=256,
    num_images_per_prompt=4,
    num_inference_steps=50,
    width_cond=256,
    height_cond=256,
    generator=generator,
    guidance_scale=1.5,
)
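
# Run the pipeline and stitch the four generated views side by side.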
out = pipe(image, **forward_args).images
rgb_np = np.hstack([np.array(img) for img in out])
Image.fromarray(rgb_np).save(f"mv-boy.png")
```
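Each element of `out` is a separate PIL image, so you can also save the four views individually instead of as one stitched image (the file names here are illustrative):

```python
# Save each generated view as its own file.
for i, view in enumerate(out):
    view.save(f"mv-boy-view-{i}.png")
```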
## Citation
```bibtex
@misc{wu2024unique3d,
    title={Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image},
    author={Kailu Wu and Fangfu Liu and Zhihan Cai and Runjie Yan and Hanyang Wang and Yating Hu and Yueqi Duan and Kaisheng Ma},
    year={2024},
    eprint={2405.20343},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```