# One Diffusion to Generate Them All

<p align="left">
  <a href="https://lehduong.github.io/OneDiffusion-homepage/">
    <img alt="Build" src="https://img.shields.io/badge/Project%20Page-OneDiffusion-yellow">
  </a>
  <a href="https://arxiv.org/abs/2411.16318">
    <img alt="Build" src="https://img.shields.io/badge/arXiv%20paper-2411.16318-b31b1b.svg">
  </a>
  <a href="https://huggingface.co/spaces/lehduong/OneDiffusion">
    <img alt="License" src="https://img.shields.io/badge/HF%20Demo-🤗-lightblue">
  </a>
  <a href="https://huggingface.co/lehduong/OneDiffusion">
    <img alt="Build" src="https://img.shields.io/badge/HF%20Model-🤗-yellow">
  </a>
</p>

<h4 align="left">
  <p>
    <a href=#news>News</a> |
    <a href=#quick-start>Quick start</a> |
    <a href=https://github.com/lehduong/OneDiffusion/blob/main/PROMPT_GUIDE.md>Prompt guide & Supported tasks</a> |
    <a href=#qualitative-results>Qualitative results</a> |
    <a href="#license">License</a> |
    <a href="#citation">Citation</a>
  </p>
</h4>

<p align="center">
  <img src="assets/teaser.png" alt="Teaser Image" width="800">
</p>

This is the official repo of OneDiffusion, a versatile, large-scale diffusion model that seamlessly supports bidirectional image synthesis and understanding across diverse tasks.

## News
- 2024/12/10: Released model weights.
- 2024/12/06: Added instruction-based image editing.
- 2024/12/02: Added subject-driven generation.

## Installation
```bash
conda create -n onediffusion_env python=3.8 &&
conda activate onediffusion_env &&
pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu118 &&
pip install "git+https://github.com/facebookresearch/pytorch3d.git" &&
pip install -r requirements.txt
```
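
Optionally, you can run a quick sanity check after installation. This is a minimal sketch (not part of the official setup), assuming a CUDA 11.8-capable GPU to match the `cu118` wheels above:

```python
# Optional environment check -- not part of the official setup.
import torch
import torchvision
import pytorch3d  # installed from the facebookresearch/pytorch3d repo above

print(torch.__version__, torchvision.__version__)    # expect 2.3.1 and 0.18.1 (cu118 builds)
print("CUDA available:", torch.cuda.is_available())  # should be True on a supported GPU
```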

## Quick start

Check `inference.py` for more details. For text-to-image, you can use the code snippet below.

```python
import torch
from onediffusion.diffusion.pipelines.onediffusion import OneDiffusionPipeline

device = torch.device('cuda:0')

pipeline = OneDiffusionPipeline.from_pretrained("lehduong/OneDiffusion").to(device=device, dtype=torch.bfloat16)

NEGATIVE_PROMPT = "monochrome, greyscale, low-res, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation"

output = pipeline(
    prompt="[[text2image]] A bipedal black cat wearing a huge oversized witch hat, a wizards robe, casting a spell, in an enchanted forest. The scene is filled with fireflies and moss on surrounding rocks and trees",
    negative_prompt=NEGATIVE_PROMPT,
    num_inference_steps=50,
    guidance_scale=4,
    height=1024,
    width=1024,
)
output.images[0].save('text2image_output.jpg')
```
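
The loaded pipeline can simply be called repeatedly. The sketch below (not from the official examples) reuses the exact call signature shown above and only swaps the prompt text; other tasks follow the same pattern with different task tokens and inputs, as described in the [prompt guide](https://github.com/lehduong/OneDiffusion/blob/main/PROMPT_GUIDE.md) and `inference.py`.

```python
# Minimal sketch (assumes `pipeline` and NEGATIVE_PROMPT from the snippet above):
# reuse the same call signature for several text-to-image prompts.
prompts = [
    "[[text2image]] A watercolor painting of a lighthouse at dawn, soft pastel sky",
    "[[text2image]] A macro photograph of a dew-covered spider web in morning light",
]
for i, prompt in enumerate(prompts):
    out = pipeline(
        prompt=prompt,
        negative_prompt=NEGATIVE_PROMPT,
        num_inference_steps=50,
        guidance_scale=4,
        height=1024,
        width=1024,
    )
    out.images[0].save(f"text2image_output_{i}.jpg")
```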

You can run the gradio demo with:
```bash
python gradio_demo.py --captioner molmo # [molmo, llava, disable]
```
The demo provides guidance and helps format the prompt properly for each task.
- By default, it loads Molmo for captioning source images, which significantly increases memory usage; you generally need a GPU with at least 40 GB of memory to run the demo.
- Opting for LLaVA reduces this requirement to roughly 27 GB, though the resulting captions may be less accurate in some cases.
- You can also provide the caption for each input image manually and run in `disable` mode. In this mode, the returned caption is an empty string, but you should still press the `Generate Caption` button so that the code formats the input text properly. This mode requires roughly 12 GB of memory.

Note that these memory requirements can increase if you use higher resolutions or more input images; the commands below give a quick reference for the three modes.
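
All three invocations use the documented `--captioner` options; the memory figures are the approximate values quoted above and will vary with resolution and the number of input images.

```bash
# Approximate peak GPU memory, per the notes above; actual usage varies.
python gradio_demo.py --captioner molmo    # most accurate captions, ~40 GB
python gradio_demo.py --captioner llava    # ~27 GB, captions may be less accurate
python gradio_demo.py --captioner disable  # ~12 GB, provide captions manually
```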

## Qualitative Results

### 1. Text-to-Image
<p align="center">
  <img src="assets/text2image.jpg" alt="Text-to-Image results" width="800">
</p>

### 2. ID customization

<p align="center">
  <img src="assets/onediffusion_appendix_faceid.jpg" alt="ID customization" width="800">
</p>

<p align="center">
  <img src="assets/onediffusion_appendix_faceid_3.jpg" alt="ID customization non-human subject" width="800">
</p>

### 3. Multiview generation

Single image to multiview:

<p align="center">
  <img src="assets/onediffusion_appendix_multiview.jpg" alt="Image to multiview" width="800">
</p>

<p align="center">
  <img src="assets/onediffusion_appendix_multiview_2.jpg" alt="Image to multiview" width="800">
</p>

Text to multiview:

<p align="center">
  <img src="assets/text2multiview.jpg" alt="Text to multiview image" width="800">
</p>

### 4. Condition-to-Image and vice versa
<p align="center">
  <img src="assets/cond_and_image.jpg" alt="Condition and Image" width="800">
</p>

### 5. Subject-driven generation

We finetuned the model on the [Subject-200K](https://huggingface.co/datasets/Yuanshi/Subjects200K) dataset (along with all other tasks) for an additional 40K steps. The model is now capable of subject-driven generation.

<p align="center">
  <img src="assets/subject_driven.jpg" alt="Subject-driven generation" width="800">
</p>

### 6. Text-guided image editing

We finetuned the model on the [OmniEdit](https://huggingface.co/datasets/TIGER-Lab/OmniEdit-Filtered-1.2M) dataset for an additional 30K steps.

<p align="center">
  <img src="assets/onediffusion_editing.jpg" alt="Text-guided editing" width="800">
</p>

### 7. Zero-shot task combinations

We found that the model can handle multiple tasks in a zero-shot setting by combining condition images and task tokens without any fine-tuning, as shown in the examples below. However, its performance on these combined tasks might not be robust, and the model's behavior may change if the order of task tokens or captions is altered. For example, when using image inpainting and ID customization together, the target prompt and the caption of the masked image must be identical. If you plan to use such combinations, we recommend fine-tuning the model on these tasks for better performance and simpler usage.
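
To make the caption-matching constraint concrete, here is a purely illustrative sketch: the task-token names are placeholders, not the model's actual tokens (those are listed in the [prompt guide](https://github.com/lehduong/OneDiffusion/blob/main/PROMPT_GUIDE.md)), and the only point is that the target prompt and the masked image's caption are the identical string.

```python
# Hypothetical sketch: "[[inpainting_token]]" and "[[id_token]]" are placeholders,
# not OneDiffusion's real task tokens. It only illustrates that, when combining
# inpainting with ID customization, the target prompt must be exactly the
# caption supplied for the masked source image.
masked_image_caption = "a woman with short red hair smiling in a sunlit cafe"
target_prompt = masked_image_caption  # identical by construction

combined_prompt = f"[[inpainting_token]] [[id_token]] {target_prompt}"
print(combined_prompt)
```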

<p align="center">
  <img src="assets/onediffusion_zeroshot.jpg" alt="Zero-shot task combinations" width="800">
</p>

## License

The model is trained on several non-commercially licensed datasets (e.g., DL3DV, Unsplash); the **model weights** are therefore released under a CC BY-NC license, as described in [LICENSE](https://github.com/lehduong/onediffusion/blob/main/LICENSE).

## Citation

```bibtex
@misc{le2024diffusiongenerate,
      title={One Diffusion to Generate Them All},
      author={Duong H. Le and Tuan Pham and Sangho Lee and Christopher Clark and Aniruddha Kembhavi and Stephan Mandt and Ranjay Krishna and Jiasen Lu},
      year={2024},
      eprint={2411.16318},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.16318},
}
```