---
license: apache-2.0
---

# IterComp

Official repository for the paper *[IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation](https://arxiv.org/abs/2410.07171)*.

<img src="./itercomp.png" style="zoom:50%;" />

## News🔥🔥🔥

* Oct. 9, 2024: Our checkpoints are publicly available in the [Hugging Face repo](https://huggingface.co/comin/IterComp).

## Introduction

IterComp is a state-of-the-art compositional text-to-image generation method. In this repository, we release the model trained from [SDXL Base 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).

## Text-to-Image Usage

```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("comin/IterComp", torch_dtype=torch.float16, use_safetensors=True)
pipe.to("cuda")
# if using torch < 2.0
# pipe.enable_xformers_memory_efficient_attention()

prompt = "An astronaut riding a green horse"
image = pipe(prompt=prompt).images[0]
image.save("output.png")
```
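Since IterComp is an SDXL-based checkpoint, the usual `diffusers` generation knobs apply. The sketch below is illustrative only: the scheduler choice, step count, guidance scale, and prompt are our assumptions for demonstration, not settings recommended by the paper.

```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch

# Load IterComp as a standard SDXL pipeline in half precision
pipe = DiffusionPipeline.from_pretrained(
    "comin/IterComp", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Optionally swap in a multistep solver to reduce the number of sampling steps
# (hypothetical choice for illustration, not prescribed by the authors)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

prompt = "A red motorcycle parked next to a blue bicycle in front of a yellow wall"
image = pipe(
    prompt=prompt,
    num_inference_steps=25,  # fewer steps are usually sufficient with DPM-Solver++
    guidance_scale=7.0,      # typical SDXL CFG value; tune per prompt
).images[0]
image.save("output_dpm.png")
```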

IterComp can **serve as a powerful backbone for various compositional generation methods**, such as [RPG](https://github.com/YangLing0818/RPG-DiffusionMaster) and [Omost](https://github.com/lllyasviel/Omost). We recommend integrating IterComp into these approaches to achieve more advanced compositional generation results.
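As a minimal sketch of that integration, any `diffusers`-based compositional framework that takes an SDXL checkpoint identifier as its base model can simply be pointed at `comin/IterComp`. The snippet below is a hypothetical illustration (the `BASE_MODEL` variable and the compositional prompt are ours), not the actual RPG or Omost code.

```python
from diffusers import StableDiffusionXLPipeline
import torch

# Use IterComp wherever an SDXL base checkpoint is expected,
# e.g. instead of "stabilityai/stable-diffusion-xl-base-1.0"
BASE_MODEL = "comin/IterComp"

pipe = StableDiffusionXLPipeline.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# A prompt with multiple objects and spatial relations, the setting where
# composition-aware training is expected to help most
prompt = (
    "A wooden table with a vase of sunflowers on the left, "
    "a stack of three books on the right, and a cat sleeping under the table"
)
image = pipe(prompt=prompt, num_inference_steps=50, guidance_scale=7.0).images[0]
image.save("itercomp_compositional.png")
```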

## Citation

```bibtex
@article{zhang2024itercomp,
  title={IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation},
  author={Zhang, Xinchen and Yang, Ling and Li, Guohao and Cai, Yaqi and Xie, Jiake and Tang, Yong and Yang, Yujiu and Wang, Mengdi and Cui, Bin},
  journal={arXiv preprint arXiv:2410.07171},
  year={2024}
}
```