---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- cvpr
- text-to-image
- image-generation
- compositionality
---
# 🧩 TokenCompose SD14 Model Card
## 🎬 CVPR 2024
[TokenCompose_SD14_B](https://mlpc-ucsd.github.io/TokenCompose/) is a [latent text-to-image diffusion model](https://arxiv.org/abs/2112.10752) finetuned from the [**Stable-Diffusion-v1-4**](https://huggingface.co/CompVis/stable-diffusion-v1-4) checkpoint at a resolution of 512x512 on the [VSR](https://github.com/cambridgeltl/visual-spatial-reasoning) split of [COCO image-caption pairs](https://cocodataset.org/#download) for 24,000 steps with a learning rate of 5e-6. The training objective adds token-level grounding terms to the standard denoising loss to improve multi-category instance composition and photorealism. The "_A"/"_B" suffix distinguishes different finetuning runs that use the same configuration described above.
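For intuition, the snippet below is a minimal sketch of how a token-level grounding term could be combined with the usual denoising loss during finetuning. The function name, arguments, and weighting (`training_step`, `token_masks`, `lambda_token`) are illustrative assumptions, not the actual TokenCompose training code; see the paper and GitHub repository for the exact objective.

```python
# Illustrative sketch only: a denoising loss plus a token-level grounding term.
# Names and weights are hypothetical placeholders, not the paper's implementation.
import torch
import torch.nn.functional as F

def training_step(unet, noisy_latents, timesteps, text_embeds, noise,
                  cross_attention_maps, token_masks, lambda_token=1.0):
    # Standard latent-diffusion denoising objective: predict the added noise.
    noise_pred = unet(noisy_latents, timesteps,
                      encoder_hidden_states=text_embeds).sample
    denoise_loss = F.mse_loss(noise_pred, noise)

    # Hypothetical token-level grounding term: encourage each noun token's
    # cross-attention map to concentrate inside that object's segmentation mask.
    grounding_loss = 0.0
    for attn_map, mask in zip(cross_attention_maps, token_masks):
        inside = (attn_map * mask).sum()
        total = attn_map.sum() + 1e-8
        grounding_loss = grounding_loss + (1.0 - inside / total)

    return denoise_loss + lambda_token * grounding_loss
```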
# 📄 Paper
Please follow [this](https://arxiv.org/abs/2312.03626) link.
# 🧨 Example Usage
We strongly recommend using the [🤗 Diffusers](https://github.com/huggingface/diffusers) library to run our model.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the finetuned TokenCompose checkpoint and move it to the GPU.
model_id = "mlpc-lab/TokenCompose_SD14_B"
device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float32)
pipe = pipe.to(device)

# Generate an image from a multi-category prompt and save it.
prompt = "A cat and a wine glass"
image = pipe(prompt).images[0]
image.save("cat_and_wine_glass.png")
```
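If GPU memory is limited, the same pipeline can typically be loaded in half precision. The variant below is a sketch that assumes a CUDA-capable GPU; the output filename is arbitrary.

```python
import torch
from diffusers import StableDiffusionPipeline

# Half-precision variant (sketch): lower GPU memory use at a small cost in
# numerical precision; assumes a CUDA device is available.
pipe = StableDiffusionPipeline.from_pretrained(
    "mlpc-lab/TokenCompose_SD14_B", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("A cat and a wine glass").images[0]
image.save("cat_and_wine_glass_fp16.png")
```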
# ⬆️ Improvements over SD14
<table>
<tr>
<th rowspan="3" align="center">Method</th>
<th colspan="9" align="center">Multi-category Instance Composition</th>
<th colspan="2" align="center">Photorealism</th>
<th colspan="1" align="center">Efficiency</th>
</tr>
<tr>
<!-- <th align="center"> </th> -->
<th rowspan="2" align="center">Object Accuracy</th>
<th colspan="4" align="center">COCO</th>
<th colspan="4" align="center">ADE20K</th>
<th rowspan="2" align="center">FID (COCO)</th>
<th rowspan="2" align="center">FID (Flickr30K)</th>
<th rowspan="2" align="center">Latency</th>
</tr>
<tr>
<!-- <th align="center"> </th> -->
<th align="center">MG2</th>
<th align="center">MG3</th>
<th align="center">MG4</th>
<th align="center">MG5</th>
<th align="center">MG2</th>
<th align="center">MG3</th>
<th align="center">MG4</th>
<th align="center">MG5</th>
</tr>
<tr>
<td align="center"><a href="https://huggingface.co/CompVis/stable-diffusion-v1-4">SD 1.4</a></td>
<td align="center">29.86</td>
<td align="center">90.72<sub>1.33</sub></td>
<td align="center">50.74<sub>0.89</sub></td>
<td align="center">11.68<sub>0.45</sub></td>
<td align="center">0.88<sub>0.21</sub></td>
<td align="center">89.81<sub>0.40</sub></td>
<td align="center">53.96<sub>1.14</sub></td>
<td align="center">16.52<sub>1.13</sub></td>
<td align="center">1.89<sub>0.34</sub></td>
<td align="center"><u>20.88</u></td>
<td align="center"><u>71.46</u></td>
<td align="center"><b>7.54</b><sub>0.17</sub></td>
</tr>
<tr>
<td align="center"><a href="https://github.com/mlpc-ucsd/TokenCompose"><strong>TokenCompose (Ours)</strong></a></td>
<td align="center"><b>52.15</b></td>
<td align="center"><b>98.08</b><sub>0.40</sub></td>
<td align="center"><b>76.16</b><sub>1.04</sub></td>
<td align="center"><b>28.81</b><sub>0.95</sub></td>
<td align="center"><u>3.28</u><sub>0.48</sub></td>
<td align="center"><b>97.75</b><sub>0.34</sub></td>
<td align="center"><b>76.93</b><sub>1.09</sub></td>
<td align="center"><b>33.92</b><sub>1.47</sub></td>
<td align="center"><b>6.21</b><sub>0.62</sub></td>
<td align="center"><b>20.19</b></td>
<td align="center"><b>71.13</b></td>
<td align="center"><b>7.56</b><sub>0.14</sub></td>
</tr>
</table>
# 📰 Citation
```bibtex
@InProceedings{Wang2024TokenCompose,
    author    = {Wang, Zirui and Sha, Zhizhou and Ding, Zheng and Wang, Yilin and Tu, Zhuowen},
    title     = {TokenCompose: Text-to-Image Diffusion with Token-level Supervision},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {8553-8564}
}
```