Diffusers
Safetensors
dome272 committed · Commit 701141f · 1 Parent(s): 60e0d1f

Update README.md

Files changed (1):
  README.md +94 -18
README.md CHANGED
@@ -1,25 +1,101 @@
  ---
  license: mit
  ---
  
- ## How to run
  
- **Note**: This is only a single prior model checkpoint and has to be run with https://huggingface.co/warp-diffusion/wuerstchen
  
- ```python
  import torch
- from diffusers import AutoPipelineForText2Image
- from diffusers.pipelines.wuerstchen import WuerstchenPrior
-
- prior_model = WuerstchenPrior.from_pretrained("warp-diffusion/wuerstchen-prior-model-finetuned", torch_dtype=torch.float16)
- pipe = AutoPipelineForText2Image.from_pretrained("warp-diffusion/wuerstchen", prior_prior=prior_model, torch_dtype=torch.float16).to("cuda")
-
- prompt = [
-     "An old destroyed car standing on a cliff in norway, cinematic photography",
-     "Western movie, closeup cinematic photography",
-     "Pink nike shoe commercial, closeup cinematic photography",
-     "Croatia, closeup cinematic photography",
-     "South Tyrol mountains at sunset, closeup cinematic photography",
- ]
- images = pipe(prompt, guidance_scale=8.0, width=1024, height=1024).images
- ```
  ---
  license: mit
  ---
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/i-DYpDHw8Pwiy7QBKZVR5.jpeg" width=1500>
  
+ ## Würstchen - Overview
+ Würstchen is a diffusion model whose text-conditional component works in a highly compressed latent space of images. Why is this important? Compressing data can reduce
+ computational costs for both training and inference by orders of magnitude. Training on 1024x1024 images is far more expensive than training on 32x32 images. Usually, other works make
+ use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial
+ compression. This was previously unseen, because common methods fail to faithfully reconstruct detailed images even after 16x spatial compression. Würstchen employs a
+ two-stage compression, which we call Stage A and Stage B. Stage A is a VQGAN and Stage B is a Diffusion Autoencoder (more details can be found in the [paper](https://arxiv.org/abs/2306.00637)).
+ A third model, Stage C, is learned in that highly compressed latent space. This training requires a fraction of the compute used for current top-performing models, which
+ also makes inference cheaper and faster.
+
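+ To make this concrete, here is a rough back-of-the-envelope sketch (the 8x baseline and the ~24x24 Stage C grid are illustrative assumptions derived from the compression factors above, not values read out of the released checkpoints):
+
+ ```py
+ # Rough comparison of how many spatial positions a diffusion model must
+ # denoise at different spatial compression factors, for a 1024x1024 image.
+ def latent_positions(height, width, spatial_compression):
+     return (height // spatial_compression) * (width // spatial_compression)
+
+ pixels = 1024 * 1024                               # 1,048,576 positions in pixel space
+ baseline_8x = latent_positions(1024, 1024, 8)      # 16,384 positions (128x128 latent)
+ wuerstchen_42x = latent_positions(1024, 1024, 42)  # 576 positions (~24x24 latent)
+
+ print(baseline_8x / wuerstchen_42x)                # ~28x fewer positions than an 8x latent
+ ```
+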
+ ## Würstchen - Prior
+ The Prior is what we refer to as "Stage C". It is the text-conditional model, operating in the small latent space that Stage A and Stage B encode images into. During
+ inference, its job is to generate the image latents given text. These image latents are then sent to Stages A & B to be decoded into pixel space.
+
+ ### Prior - Model - Finetuned
+ This is the fully finetuned checkpoint. We recommend using the [interpolated model](https://huggingface.co/warp-ai/wuerstchen-prior-model-interpolated), as this checkpoint is overfit to a very
+ artistic style. However, if you are specifically looking for a very artistic checkpoint, go for this one. In the [documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/wuerstchen)
+ we also give a short overview of the different Prior (Stage C) checkpoints.
+
+ **Note:** This model can only generate 1024x1024 images and shows repetitive patterns when sampling at other resolutions, as the finetuning was done only on
+ 1024x1024 images. The [interpolated model](https://huggingface.co/warp-ai/wuerstchen-prior-model-interpolated) does not have this problem.
  
+ ### Image Sizes
+ Würstchen was trained on image resolutions between 1024x1024 & 1536x1536. We sometimes also observe good outputs at resolutions like 1024x2048. Feel free to try it out.
+ We also observed that the Prior (Stage C) adapts extremely fast to new resolutions, so finetuning it at 2048x2048 should be computationally cheap.
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/IfVsUDcP15OY-5wyLYKnQ.jpeg" width=1000>
  
+ ## How to run
+ This pipeline should be run together with https://huggingface.co/warp-ai/wuerstchen:
+
+ ```py
  import torch
+ from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline
+ from diffusers.pipelines.wuerstchen import WuerstchenPrior, DEFAULT_STAGE_C_TIMESTEPS
+
+ device = "cuda"
+ dtype = torch.float16
+ num_images_per_prompt = 2
+
+ # Load the finetuned Stage C checkpoint and plug it into the prior pipeline.
+ prior = WuerstchenPrior.from_pretrained("warp-ai/wuerstchen-prior-model-finetuned", torch_dtype=dtype).to(device)
+ prior_pipeline = WuerstchenPriorPipeline.from_pretrained(
+     "warp-ai/wuerstchen-prior", prior=prior, torch_dtype=dtype
+ ).to(device)
+ # Stages A & B decode the prior's latents back into pixel space.
+ decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained(
+     "warp-ai/wuerstchen", torch_dtype=dtype
+ ).to(device)
+
+ caption = "Anthropomorphic cat dressed as a fire fighter"
+ negative_prompt = ""
+
+ # Stage C: generate image latents from the text prompt.
+ prior_output = prior_pipeline(
+     prompt=caption,
+     height=1024,
+     width=1024,
+     timesteps=DEFAULT_STAGE_C_TIMESTEPS,
+     negative_prompt=negative_prompt,
+     guidance_scale=4.0,
+     num_images_per_prompt=num_images_per_prompt,
+ )
+ # Stages A & B: decode the latents into PIL images.
+ decoder_output = decoder_pipeline(
+     image_embeddings=prior_output.image_embeddings,
+     prompt=caption,
+     negative_prompt=negative_prompt,
+     num_images_per_prompt=num_images_per_prompt,
+     guidance_scale=0.0,
+     output_type="pil",
+ ).images
+ ```
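+
+ The decoder returns standard PIL images. As a minimal follow-up sketch (assuming the `prior_pipeline` and `decoder_pipeline` objects from the example above; the file names are arbitrary), you can save the outputs and try one of the non-square resolutions mentioned under "Image Sizes":
+
+ ```py
+ # Continuation of the example above; reuses prior_pipeline, decoder_pipeline,
+ # caption, negative_prompt and DEFAULT_STAGE_C_TIMESTEPS from that block.
+ for i, image in enumerate(decoder_output):
+     image.save(f"wuerstchen_{i}.png")
+
+ # Non-square sampling (see "Image Sizes"). Note that this finetuned prior is
+ # overfit to 1024x1024; the interpolated model is a better fit for this.
+ wide_embeddings = prior_pipeline(
+     prompt=caption,
+     height=1024,
+     width=2048,
+     timesteps=DEFAULT_STAGE_C_TIMESTEPS,
+     negative_prompt=negative_prompt,
+     guidance_scale=4.0,
+ ).image_embeddings
+ wide_images = decoder_pipeline(
+     image_embeddings=wide_embeddings,
+     prompt=caption,
+     negative_prompt=negative_prompt,
+     guidance_scale=0.0,
+     output_type="pil",
+ ).images
+ ```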
+
+ ## Model Details
+ - **Developed by:** Pablo Pernias, Dominic Rampas
+ - **Model type:** Diffusion-based text-to-image generation model
+ - **Language(s):** English
+ - **License:** MIT
+ - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a Diffusion model in the style of Stage C from the [Würstchen paper](https://arxiv.org/abs/2306.00637) that uses a fixed, pretrained text encoder ([CLIP ViT-bigG/14](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
+ - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2306.00637).
+ - **Cite as:**
+
+       @misc{pernias2023wuerstchen,
+             title={Wuerstchen: Efficient Pretraining of Text-to-Image Models},
+             author={Pablo Pernias and Dominic Rampas and Marc Aubreville},
+             year={2023},
+             eprint={2306.00637},
+             archivePrefix={arXiv},
+             primaryClass={cs.CV}
+       }
+
+ ## Environmental Impact
+
+ **Würstchen v2 - Estimated Emissions**
+ Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware type, runtime, cloud provider, and compute region were used to estimate the carbon impact.
+
+ - **Hardware Type:** A100 PCIe 40GB
+ - **Hours used:** 24602
+ - **Cloud Provider:** AWS
+ - **Compute Region:** US-east
+ - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 2275.68 kg CO2 eq.
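+
+ As a sanity check, the headline figure follows from the formula in the last bullet. A short sketch (the 0.25 kW board power for an A100 PCIe 40GB and the ~0.37 kg CO2 eq./kWh grid intensity for US-east are assumed calculator defaults, not values stated in this card):
+
+ ```py
+ # Power consumption x time x carbon intensity of the power grid.
+ power_kw = 0.25          # assumed A100 PCIe 40GB board power (250 W)
+ hours = 24602            # total runtime from the list above
+ grid_kg_per_kwh = 0.37   # assumed carbon intensity of the US-east grid
+
+ energy_kwh = power_kw * hours                # 6150.5 kWh
+ emissions_kg = energy_kwh * grid_kg_per_kwh
+ print(emissions_kg)                          # ~2275.68 kg CO2 eq., matching the card
+ ```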