stablediffusiontutorials committed
Commit 2389883 · verified · 1 Parent(s): 126f64e

Upload folder using huggingface_hub

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the complete set.
Files changed (50)
  1. .gitattributes +6 -0
  2. Hollie_Mengert.ckpt +3 -0
  3. README.md +207 -0
  4. Reference Papers/Denoising Diffusion Probabilistic Models paper.pdf +0 -0
  5. Reference Papers/High-Resolution Image Synthesis with Latent Diffusion Models paper.pdf +0 -0
  6. Reference Papers/Learning Transferable Visual Models From Natural Language Supervision paper.pdf +0 -0
  7. Reference Papers/Photorealistic Text-to-Image Diffusion Models paper.pdf +0 -0
  8. Reference Papers/Quantifying the Carbon Emissions paper.pdf +0 -0
  9. Reference Papers/Stable_Diffusion_Diagrams_V2.pdf +0 -0
  10. Reference Papers/classifier free diffusion guidance paper.pdf +0 -0
  11. SD/attention.py +122 -0
  12. SD/clip.py +96 -0
  13. SD/ddpm.py +123 -0
  14. SD/decoder.py +177 -0
  15. SD/diffusion.py +349 -0
  16. SD/encoder.py +103 -0
  17. SD/model_converter.py +0 -0
  18. SD/model_loader.py +28 -0
  19. SD/pipeline.py +170 -0
  20. SD/run.py +64 -0
  21. SD/sd_demo.ipynb +0 -0
  22. SD_Inkpunk_V1.ckpt +3 -0
  23. SD_Inkpunk_V2.ckpt +3 -0
  24. Sample Images/dog.jpg +0 -0
  25. feature_extractor/preprocessor_config.json +20 -0
  26. license.txt +21 -0
  27. model_index.json +32 -0
  28. requirements.txt +8 -0
  29. safety_checker/config.json +175 -0
  30. safety_checker/model.fp16.safetensors +0 -0
  31. safety_checker/model.safetensors +0 -0
  32. safety_checker/pytorch_model.bin +3 -0
  33. safety_checker/pytorch_model.fp16.bin +0 -0
  34. scheduler/scheduler_config.json +13 -0
  35. text_encoder/config.json +25 -0
  36. text_encoder/model.fp16.safetensors +0 -0
  37. text_encoder/model.safetensors +0 -0
  38. text_encoder/pytorch_model.bin +3 -0
  39. text_encoder/pytorch_model.fp16.bin +0 -0
  40. tokenizer/merges.txt +0 -0
  41. tokenizer/special_tokens_map.json +24 -0
  42. tokenizer/tokenizer_config.json +34 -0
  43. tokenizer/vocab.json +0 -0
  44. unet/config.json +36 -0
  45. unet/diffusion_pytorch_model.bin +3 -0
  46. unet/diffusion_pytorch_model.fp16.bin +3 -0
  47. unet/diffusion_pytorch_model.fp16.safetensors +0 -0
  48. unet/diffusion_pytorch_model.non_ema.bin +3 -0
  49. unet/diffusion_pytorch_model.non_ema.safetensors +3 -0
  50. unet/diffusion_pytorch_model.safetensors +3 -0
.gitattributes CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Reference[[:space:]]Papers/classifier[[:space:]]free[[:space:]]diffusion[[:space:]]guidance[[:space:]]paper.pdf filter=lfs diff=lfs merge=lfs -text
+ Reference[[:space:]]Papers/Denoising[[:space:]]Diffusion[[:space:]]Probabilistic[[:space:]]Models[[:space:]]paper.pdf filter=lfs diff=lfs merge=lfs -text
+ Reference[[:space:]]Papers/High-Resolution[[:space:]]Image[[:space:]]Synthesis[[:space:]]with[[:space:]]Latent[[:space:]]Diffusion[[:space:]]Models[[:space:]]paper.pdf filter=lfs diff=lfs merge=lfs -text
+ Reference[[:space:]]Papers/Learning[[:space:]]Transferable[[:space:]]Visual[[:space:]]Models[[:space:]]From[[:space:]]Natural[[:space:]]Language[[:space:]]Supervision[[:space:]]paper.pdf filter=lfs diff=lfs merge=lfs -text
+ Reference[[:space:]]Papers/Photorealistic[[:space:]]Text-to-Image[[:space:]]Diffusion[[:space:]]Models[[:space:]]paper.pdf filter=lfs diff=lfs merge=lfs -text
+ Stable_Diffusion_Diagrams_V2.pdf filter=lfs diff=lfs merge=lfs -text
Hollie_Mengert.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2c4c9a75f6045b861b3f9252f51442dc4880c70fb792b78446940abc232bdbb7
+ size 2132903713
README.md ADDED
@@ -0,0 +1,207 @@
+ ---
+ license: creativeml-openrail-m
+ tags:
+ - stable-diffusion
+ - stable-diffusion-diffusers
+ - text-to-image
+ inference: true
+ extra_gated_prompt: |-
+   This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
+   The CreativeML OpenRAIL License specifies:
+
+   1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
+   2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
+   3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
+   Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
+
+ extra_gated_heading: Please read the LICENSE to access this model
+ ---
+
+ # Stable Diffusion v1-5 Model Card
+
+ Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
+ For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
+
+ The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
+ checkpoint and subsequently fine-tuned for 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" with 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
+
+ You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
+
+ ### Diffusers
+ ```py
+ from diffusers import StableDiffusionPipeline
+ import torch
+
+ model_id = "runwayml/stable-diffusion-v1-5"
+ pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
+ pipe = pipe.to("cuda")
+
+ prompt = "a photo of an astronaut riding a horse on mars"
+ image = pipe(prompt).images[0]
+
+ image.save("astronaut_rides_horse.png")
+ ```
+ For more detailed instructions, use cases and examples in JAX, follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion).
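A common optional tweak with the Diffusers pipeline above is to swap in a faster sampler before generating; the sketch below (not part of the original card) uses `DPMSolverMultistepScheduler`, which typically needs only ~20-25 steps:

```py
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
import torch

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
# Replace the default scheduler with a multistep DPM-Solver configured from the existing scheduler config.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=25).images[0]
image.save("astronaut_rides_horse_dpm.png")
```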
+
+ ### Original GitHub Repository
+
+ 1. Download the weights
+    - [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, EMA-only weights; uses less VRAM, suitable for inference
+    - [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, EMA + non-EMA weights; uses more VRAM, suitable for fine-tuning
+
+ 2. Follow instructions [here](https://github.com/runwayml/stable-diffusion).
+
+ ## Model Details
+ - **Developed by:** Robin Rombach, Patrick Esser
+ - **Model type:** Diffusion-based text-to-image generation model
+ - **Language(s):** English
+ - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying out in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
+ - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
+ - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
+ - **Cite as:**
+
+       @InProceedings{Rombach_2022_CVPR,
+           author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
+           title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
+           booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
+           month     = {June},
+           year      = {2022},
+           pages     = {10684-10695}
+       }
+
+ # Uses
+
+ ## Direct Use
+ The model is intended for research purposes only. Possible research areas and
+ tasks include
+
+ - Safe deployment of models which have the potential to generate harmful content.
+ - Probing and understanding the limitations and biases of generative models.
+ - Generation of artworks and use in design and other artistic processes.
+ - Applications in educational or creative tools.
+ - Research on generative models.
+
+ Excluded uses are described below.
+
+ ### Misuse, Malicious Use, and Out-of-Scope Use
+ _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
+
+ The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive, or content that propagates historical or current stereotypes.
+
+ #### Out-of-Scope Use
+ The model was not trained to produce factual or true representations of people or events, and therefore using the model to generate such content is out of scope for the abilities of this model.
+
+ #### Misuse and Malicious Use
+ Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
+
+ - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
+ - Intentionally promoting or propagating discriminatory content or harmful stereotypes.
+ - Impersonating individuals without their consent.
+ - Sexual content without consent of the people who might see it.
+ - Mis- and disinformation.
+ - Representations of egregious violence and gore.
+ - Sharing of copyrighted or licensed material in violation of its terms of use.
+ - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
+
+ ## Limitations and Bias
+
+ ### Limitations
+
+ - The model does not achieve perfect photorealism.
+ - The model cannot render legible text.
+ - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”.
+ - Faces and people in general may not be generated properly.
+ - The model was trained mainly with English captions and will not work as well in other languages.
+ - The autoencoding part of the model is lossy.
+ - The model was trained on the large-scale dataset
+   [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult material
+   and is not fit for product use without additional safety mechanisms and
+   considerations.
+ - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
+   The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
+
+ ### Bias
+
+ While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
+ Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
+ which consists of images that are primarily limited to English descriptions.
+ Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
+ This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
+ ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
+
+ ### Safety Module
+
+ The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
+ This checker works by comparing model outputs against known hard-coded NSFW concepts.
+ The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
+ Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
+ The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept, as sketched below.
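A conceptual sketch of that comparison step, assuming precomputed concept embeddings and per-concept thresholds (the real values ship inside the safety_checker weights and are not reproduced here):

```py
import torch
import torch.nn.functional as F

def is_flagged(image_embedding: torch.Tensor,
               concept_embeddings: torch.Tensor,
               concept_thresholds: torch.Tensor) -> bool:
    # Cosine similarity between the CLIP embedding of the generated image (D,)
    # and each hidden NSFW concept embedding (N, D).
    sims = F.cosine_similarity(image_embedding[None, :], concept_embeddings, dim=-1)
    # The image is flagged if any similarity exceeds its hand-tuned per-concept threshold.
    return bool((sims > concept_thresholds).any())
```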
+
+ ## Training
+
+ **Training Data**
+ The model developers used the following dataset for training the model:
+
+ - LAION-2B (en) and subsets thereof (see next section)
+
+ **Training Procedure**
+ Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training:
+
+ - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4.
+ - Text prompts are encoded through a ViT-L/14 text encoder.
+ - The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
+ - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet, as sketched below.
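A minimal sketch of that objective (not the authors' training code), assuming a hypothetical `unet(noisy_latents, text_emb, t)` that predicts noise and a precomputed `alphas_cumprod` schedule like the one in SD/ddpm.py:

```py
import torch
import torch.nn.functional as F

def training_loss(unet, latents, text_emb, alphas_cumprod):
    # Sample a random timestep and Gaussian noise for each latent in the batch.
    t = torch.randint(0, alphas_cumprod.shape[0], (latents.shape[0],), device=latents.device)
    noise = torch.randn_like(latents)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    # Forward diffusion q(x_t | x_0): sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * noise.
    noisy_latents = a_bar.sqrt() * latents + (1.0 - a_bar).sqrt() * noise
    # The UNet is conditioned on the timestep and on the text embeddings (via cross-attention).
    pred_noise = unet(noisy_latents, text_emb, t)
    # Reconstruction objective between the added noise and the UNet's prediction.
    return F.mse_loss(pred_noise, noise)
```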
+
+ Currently six Stable Diffusion checkpoints are provided, which were trained as follows.
+ - [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
+   194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
+ - [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
+   515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
+   filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
+ - [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
+ - [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
+ - [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5): Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598) (see the guidance sketch after this list).
+ - [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting): Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% of cases mask everything.
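The "10% dropping of the text-conditioning" mentioned above is what enables classifier-free guidance at sampling time. An illustrative sketch (assuming the same hypothetical `unet(noisy_latents, emb, t)` as before, with `guidance_scale` being a user choice such as 7.5):

```py
import torch

def guided_noise(unet, noisy_latents, t, text_emb, uncond_emb, guidance_scale=7.5):
    eps_cond = unet(noisy_latents, text_emb, t)      # prediction conditioned on the prompt
    eps_uncond = unet(noisy_latents, uncond_emb, t)  # prediction with an empty prompt
    # Because the text-conditioning was dropped for ~10% of training steps, eps_uncond is
    # a meaningful unconditional prediction, and the two can be extrapolated:
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)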
+
+ - **Hardware:** 32 x 8 x A100 GPUs
+ - **Optimizer:** AdamW
+ - **Gradient Accumulations:** 2
+ - **Batch:** 32 x 8 x 2 x 4 = 2048
+ - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
+
+ ## Evaluation Results
+ Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
+ 5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
+ steps show the relative improvements of the checkpoints:
+
+ ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-1-to-v1-5.png)
+
+ Evaluated using 50 PLMS steps and 10,000 random prompts from the COCO2017 validation set at 512x512 resolution. Not optimized for FID scores.
+
+ ## Environmental Impact
+
+ **Stable Diffusion v1** **Estimated Emissions**
+ Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were used to estimate the carbon impact.
+
+ - **Hardware Type:** A100 PCIe 40GB
+ - **Hours used:** 150,000
+ - **Cloud Provider:** AWS
+ - **Compute Region:** US-east
+ - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11,250 kg CO2 eq. (a rough sanity check of this figure follows below)
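A back-of-the-envelope check of that figure, assuming roughly 250 W average draw per A100 PCIe 40GB and about 0.3 kg CO2eq/kWh for the compute region (both values are assumptions, not stated on the card):

```py
gpu_power_kw = 0.25       # assumed average draw of one A100 PCIe 40GB, in kW
hours = 150_000           # "Hours used" from the card
carbon_intensity = 0.3    # assumed kg CO2eq per kWh for the region's power grid
print(gpu_power_kw * hours * carbon_intensity)  # 11250.0 kg CO2eq, matching the card
```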
+
+
+ ## Citation
+
+ ```bibtex
+ @InProceedings{Rombach_2022_CVPR,
+     author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
+     title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
+     booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
+     month     = {June},
+     year      = {2022},
+     pages     = {10684-10695}
+ }
+ ```
+
+ *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
Reference Papers/Denoising Diffusion Probabilistic Models paper.pdf ADDED
File without changes
Reference Papers/High-Resolution Image Synthesis with Latent Diffusion Models paper.pdf ADDED
File without changes
Reference Papers/Learning Transferable Visual Models From Natural Language Supervision paper.pdf ADDED
File without changes
Reference Papers/Photorealistic Text-to-Image Diffusion Models paper.pdf ADDED
File without changes
Reference Papers/Quantifying the Carbon Emissions paper.pdf ADDED
Binary file (186 kB).
 
Reference Papers/Stable_Diffusion_Diagrams_V2.pdf ADDED
File without changes
Reference Papers/classifier free diffusion guidance paper.pdf ADDED
File without changes
SD/attention.py ADDED
@@ -0,0 +1,122 @@
1
+ import torch
2
+ import torch.nn as nn
3
+ import torch.nn.functional as F
4
+ import math
5
+
6
+ class SelfAttention(nn.Module):
7
+ def __init__(self, n_heads, d_embed, in_proj_bias=True, out_proj_bias=True):
8
+ super().__init__()
9
+ # This combines the Wq, Wk and Wv matrices into one matrix
10
+ self.in_proj = nn.Linear(d_embed, 3 * d_embed, bias=in_proj_bias)
11
+ # This one represents the Wo matrix
12
+ self.out_proj = nn.Linear(d_embed, d_embed, bias=out_proj_bias)
13
+ self.n_heads = n_heads
14
+ self.d_head = d_embed // n_heads
15
+
16
+ def forward(self, x, causal_mask=False):
17
+ # x: # (Batch_Size, Seq_Len, Dim)
18
+
19
+ # (Batch_Size, Seq_Len, Dim)
20
+ input_shape = x.shape
21
+
22
+ # (Batch_Size, Seq_Len, Dim)
23
+ batch_size, sequence_length, d_embed = input_shape
24
+
25
+ # (Batch_Size, Seq_Len, H, Dim / H)
26
+ interim_shape = (batch_size, sequence_length, self.n_heads, self.d_head)
27
+
28
+ # (Batch_Size, Seq_Len, Dim) -> (Batch_Size, Seq_Len, Dim * 3) -> 3 tensor of shape (Batch_Size, Seq_Len, Dim)
29
+ q, k, v = self.in_proj(x).chunk(3, dim=-1)
30
+
31
+ # (Batch_Size, Seq_Len, Dim) -> (Batch_Size, Seq_Len, H, Dim / H) -> (Batch_Size, H, Seq_Len, Dim / H)
32
+ q = q.view(interim_shape).transpose(1, 2)
33
+ k = k.view(interim_shape).transpose(1, 2)
34
+ v = v.view(interim_shape).transpose(1, 2)
35
+
36
+ # (Batch_Size, H, Seq_Len, Dim) @ (Batch_Size, H, Dim, Seq_Len) -> (Batch_Size, H, Seq_Len, Seq_Len)
37
+ weight = q @ k.transpose(-1, -2)
38
+
39
+ if causal_mask:
40
+ # Mask where the upper triangle (above the principal diagonal) is 1
41
+ mask = torch.ones_like(weight, dtype=torch.bool).triu(1)
42
+ # Fill the upper triangle with -inf
43
+ weight.masked_fill_(mask, -torch.inf)
44
+
45
+ # Divide by d_k (Dim / H).
46
+ # (Batch_Size, H, Seq_Len, Seq_Len) -> (Batch_Size, H, Seq_Len, Seq_Len)
47
+ weight /= math.sqrt(self.d_head)
48
+
49
+ # (Batch_Size, H, Seq_Len, Seq_Len) -> (Batch_Size, H, Seq_Len, Seq_Len)
50
+ weight = F.softmax(weight, dim=-1)
51
+
52
+ # (Batch_Size, H, Seq_Len, Seq_Len) @ (Batch_Size, H, Seq_Len, Dim / H) -> (Batch_Size, H, Seq_Len, Dim / H)
53
+ output = weight @ v
54
+
55
+ # (Batch_Size, H, Seq_Len, Dim / H) -> (Batch_Size, Seq_Len, H, Dim / H)
56
+ output = output.transpose(1, 2)
57
+
58
+ # (Batch_Size, Seq_Len, H, Dim / H) -> (Batch_Size, Seq_Len, Dim)
59
+ output = output.reshape(input_shape)
60
+
61
+ # (Batch_Size, Seq_Len, Dim) -> (Batch_Size, Seq_Len, Dim)
62
+ output = self.out_proj(output)
63
+
64
+ # (Batch_Size, Seq_Len, Dim)
65
+ return output
66
+
67
+ class CrossAttention(nn.Module):
68
+ def __init__(self, n_heads, d_embed, d_cross, in_proj_bias=True, out_proj_bias=True):
69
+ super().__init__()
70
+ self.q_proj = nn.Linear(d_embed, d_embed, bias=in_proj_bias)
71
+ self.k_proj = nn.Linear(d_cross, d_embed, bias=in_proj_bias)
72
+ self.v_proj = nn.Linear(d_cross, d_embed, bias=in_proj_bias)
73
+ self.out_proj = nn.Linear(d_embed, d_embed, bias=out_proj_bias)
74
+ self.n_heads = n_heads
75
+ self.d_head = d_embed // n_heads
76
+
77
+ def forward(self, x, y):
78
+ # x (latent): # (Batch_Size, Seq_Len_Q, Dim_Q)
79
+ # y (context): # (Batch_Size, Seq_Len_KV, Dim_KV) = (Batch_Size, 77, 768)
80
+
81
+ input_shape = x.shape
82
+ batch_size, sequence_length, d_embed = input_shape
83
+ # Divide each embedding of Q into multiple heads such that d_heads * n_heads = Dim_Q
84
+ interim_shape = (batch_size, -1, self.n_heads, self.d_head)
85
+
86
+ # (Batch_Size, Seq_Len_Q, Dim_Q) -> (Batch_Size, Seq_Len_Q, Dim_Q)
87
+ q = self.q_proj(x)
88
+ # (Batch_Size, Seq_Len_KV, Dim_KV) -> (Batch_Size, Seq_Len_KV, Dim_Q)
89
+ k = self.k_proj(y)
90
+ # (Batch_Size, Seq_Len_KV, Dim_KV) -> (Batch_Size, Seq_Len_KV, Dim_Q)
91
+ v = self.v_proj(y)
92
+
93
+ # (Batch_Size, Seq_Len_Q, Dim_Q) -> (Batch_Size, Seq_Len_Q, H, Dim_Q / H) -> (Batch_Size, H, Seq_Len_Q, Dim_Q / H)
94
+ q = q.view(interim_shape).transpose(1, 2)
95
+ # (Batch_Size, Seq_Len_KV, Dim_Q) -> (Batch_Size, Seq_Len_KV, H, Dim_Q / H) -> (Batch_Size, H, Seq_Len_KV, Dim_Q / H)
96
+ k = k.view(interim_shape).transpose(1, 2)
97
+ # (Batch_Size, Seq_Len_KV, Dim_Q) -> (Batch_Size, Seq_Len_KV, H, Dim_Q / H) -> (Batch_Size, H, Seq_Len_KV, Dim_Q / H)
98
+ v = v.view(interim_shape).transpose(1, 2)
99
+
100
+ # (Batch_Size, H, Seq_Len_Q, Dim_Q / H) @ (Batch_Size, H, Dim_Q / H, Seq_Len_KV) -> (Batch_Size, H, Seq_Len_Q, Seq_Len_KV)
101
+ weight = q @ k.transpose(-1, -2)
102
+
103
+ # (Batch_Size, H, Seq_Len_Q, Seq_Len_KV)
104
+ weight /= math.sqrt(self.d_head)
105
+
106
+ # (Batch_Size, H, Seq_Len_Q, Seq_Len_KV)
107
+ weight = F.softmax(weight, dim=-1)
108
+
109
+ # (Batch_Size, H, Seq_Len_Q, Seq_Len_KV) @ (Batch_Size, H, Seq_Len_KV, Dim_Q / H) -> (Batch_Size, H, Seq_Len_Q, Dim_Q / H)
110
+ output = weight @ v
111
+
112
+ # (Batch_Size, H, Seq_Len_Q, Dim_Q / H) -> (Batch_Size, Seq_Len_Q, H, Dim_Q / H)
113
+ output = output.transpose(1, 2).contiguous()
114
+
115
+ # (Batch_Size, Seq_Len_Q, H, Dim_Q / H) -> (Batch_Size, Seq_Len_Q, Dim_Q)
116
+ output = output.view(input_shape)
117
+
118
+ # (Batch_Size, Seq_Len_Q, Dim_Q) -> (Batch_Size, Seq_Len_Q, Dim_Q)
119
+ output = self.out_proj(output)
120
+
121
+ # (Batch_Size, Seq_Len_Q, Dim_Q)
122
+ return output
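Both attention modules above are shape-preserving, as the inline shape comments indicate. A quick smoke test with random tensors makes those shapes explicit (illustrative only; it assumes you run it from inside the SD/ folder so the flat `from attention import ...` import resolves):

```py
import torch
from attention import SelfAttention, CrossAttention

self_attn = SelfAttention(n_heads=8, d_embed=320)
x = torch.randn(1, 4096, 320)                 # e.g. a 64x64 latent flattened into 4096 tokens
print(self_attn(x, causal_mask=False).shape)  # torch.Size([1, 4096, 320])

cross_attn = CrossAttention(n_heads=8, d_embed=320, d_cross=768)
context = torch.randn(1, 77, 768)             # CLIP text embeddings: 77 tokens of dimension 768
print(cross_attn(x, context).shape)           # torch.Size([1, 4096, 320])
```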
SD/clip.py ADDED
@@ -0,0 +1,96 @@
1
+ import torch
2
+ import torch.nn as nn
3
+ import torch.nn.functional as F
4
+ from attention import SelfAttention
5
+
6
+ class CLIPEmbedding(nn.Module):
7
+ def __init__(self, n_vocab: int, n_embd: int, n_token: int):
8
+ super().__init__()
9
+
10
+ self.token_embedding = nn.Embedding(n_vocab, n_embd)
11
+ # A learnable weight matrix encodes the position information for each token
12
+ self.position_embedding = nn.Parameter(torch.zeros((n_token, n_embd)))
13
+
14
+ def forward(self, tokens):
15
+ # (Batch_Size, Seq_Len) -> (Batch_Size, Seq_Len, Dim)
16
+ x = self.token_embedding(tokens)
17
+ # (Batch_Size, Seq_Len) -> (Batch_Size, Seq_Len, Dim)
18
+ x += self.position_embedding
19
+
20
+ return x
21
+
22
+ class CLIPLayer(nn.Module):
23
+ def __init__(self, n_head: int, n_embd: int):
24
+ super().__init__()
25
+
26
+ # Pre-attention norm
27
+ self.layernorm_1 = nn.LayerNorm(n_embd)
28
+ # Self attention
29
+ self.attention = SelfAttention(n_head, n_embd)
30
+ # Pre-FNN norm
31
+ self.layernorm_2 = nn.LayerNorm(n_embd)
32
+ # Feedforward layer
33
+ self.linear_1 = nn.Linear(n_embd, 4 * n_embd)
34
+ self.linear_2 = nn.Linear(4 * n_embd, n_embd)
35
+
36
+ def forward(self, x):
37
+ # (Batch_Size, Seq_Len, Dim)
38
+ residue = x
39
+
40
+ ### SELF ATTENTION ###
41
+
42
+ # (Batch_Size, Seq_Len, Dim) -> (Batch_Size, Seq_Len, Dim)
43
+ x = self.layernorm_1(x)
44
+
45
+ # (Batch_Size, Seq_Len, Dim) -> (Batch_Size, Seq_Len, Dim)
46
+ x = self.attention(x, causal_mask=True)
47
+
48
+ # (Batch_Size, Seq_Len, Dim) + (Batch_Size, Seq_Len, Dim) -> (Batch_Size, Seq_Len, Dim)
49
+ x += residue
50
+
51
+ ### FEEDFORWARD LAYER ###
52
+ # Apply a feedforward layer where the hidden dimension is 4 times the embedding dimension.
53
+
54
+ residue = x
55
+ # (Batch_Size, Seq_Len, Dim) -> (Batch_Size, Seq_Len, Dim)
56
+ x = self.layernorm_2(x)
57
+
58
+ # (Batch_Size, Seq_Len, Dim) -> (Batch_Size, Seq_Len, 4 * Dim)
59
+ x = self.linear_1(x)
60
+
61
+ # (Batch_Size, Seq_Len, 4 * Dim) -> (Batch_Size, Seq_Len, 4 * Dim)
62
+ x = x * torch.sigmoid(1.702 * x) # QuickGELU activation function
63
+
64
+ # (Batch_Size, Seq_Len, 4 * Dim) -> (Batch_Size, Seq_Len, Dim)
65
+ x = self.linear_2(x)
66
+
67
+ # (Batch_Size, Seq_Len, Dim) + (Batch_Size, Seq_Len, Dim) -> (Batch_Size, Seq_Len, Dim)
68
+ x += residue
69
+
70
+ return x
71
+
72
+ class CLIP(nn.Module):
73
+ def __init__(self):
74
+ super().__init__()
75
+ self.embedding = CLIPEmbedding(49408, 768, 77)
76
+
77
+ self.layers = nn.ModuleList([
78
+ CLIPLayer(12, 768) for i in range(12)
79
+ ])
80
+
81
+ self.layernorm = nn.LayerNorm(768)
82
+
83
+ def forward(self, tokens: torch.LongTensor) -> torch.FloatTensor:
84
+ tokens = tokens.type(torch.long)
85
+
86
+ # (Batch_Size, Seq_Len) -> (Batch_Size, Seq_Len, Dim)
87
+ state = self.embedding(tokens)
88
+
89
+ # Apply encoder layers similar to the Transformer's encoder.
90
+ for layer in self.layers:
91
+ # (Batch_Size, Seq_Len, Dim) -> (Batch_Size, Seq_Len, Dim)
92
+ state = layer(state)
93
+ # (Batch_Size, Seq_Len, Dim) -> (Batch_Size, Seq_Len, Dim)
94
+ output = self.layernorm(state)
95
+
96
+ return output
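The CLIP module above maps a padded batch of token ids to per-token embeddings of dimension 768 (77 tokens, the context length used throughout the pipeline). A minimal usage sketch, assuming it is run from inside SD/ (the token ids here are placeholders; the real ids come from the tokenizer files under tokenizer/):

```py
import torch
from clip import CLIP

clip = CLIP()
# One prompt, already tokenized and padded to 77 tokens (all zeros just to exercise the shapes).
tokens = torch.zeros(1, 77, dtype=torch.long)
with torch.no_grad():
    context = clip(tokens)
print(context.shape)  # torch.Size([1, 77, 768])
```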
SD/ddpm.py ADDED
@@ -0,0 +1,123 @@
1
+ import torch
2
+ import numpy as np
3
+
4
+ class DDPMSampler:
5
+
6
+ def __init__(self, generator: torch.Generator, num_training_steps=1000, beta_start: float = 0.00085, beta_end: float = 0.0120):
7
+ # Params "beta_start" and "beta_end" taken from: https://github.com/CompVis/stable-diffusion/blob/21f890f9da3cfbeaba8e2ac3c425ee9e998d5229/configs/stable-diffusion/v1-inference.yaml#L5C8-L5C8
8
+ # For the naming conventions, refer to the DDPM paper (https://arxiv.org/pdf/2006.11239.pdf)
9
+ self.betas = torch.linspace(beta_start ** 0.5, beta_end ** 0.5, num_training_steps, dtype=torch.float32) ** 2
10
+ self.alphas = 1.0 - self.betas
11
+ self.alphas_cumprod = torch.cumprod(self.alphas, dim=0)
12
+ self.one = torch.tensor(1.0)
13
+
14
+ self.generator = generator
15
+
16
+ self.num_train_timesteps = num_training_steps
17
+ self.timesteps = torch.from_numpy(np.arange(0, num_training_steps)[::-1].copy())
18
+
19
+ def set_inference_timesteps(self, num_inference_steps=50):
20
+ self.num_inference_steps = num_inference_steps
21
+ step_ratio = self.num_train_timesteps // self.num_inference_steps
22
+ timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(np.int64)
23
+ self.timesteps = torch.from_numpy(timesteps)
24
+
25
+ def _get_previous_timestep(self, timestep: int) -> int:
26
+ prev_t = timestep - self.num_train_timesteps // self.num_inference_steps
27
+ return prev_t
28
+
29
+ def _get_variance(self, timestep: int) -> torch.Tensor:
30
+ prev_t = self._get_previous_timestep(timestep)
31
+
32
+ alpha_prod_t = self.alphas_cumprod[timestep]
33
+ alpha_prod_t_prev = self.alphas_cumprod[prev_t] if prev_t >= 0 else self.one
34
+ current_beta_t = 1 - alpha_prod_t / alpha_prod_t_prev
35
+
36
+ # For t > 0, compute predicted variance βt (see formula (6) and (7) from https://arxiv.org/pdf/2006.11239.pdf)
37
+ # and sample from it to get previous sample
38
+ # x_{t-1} ~ N(pred_prev_sample, variance) == add variance to pred_sample
39
+ variance = (1 - alpha_prod_t_prev) / (1 - alpha_prod_t) * current_beta_t
40
+
41
+ # we always take the log of variance, so clamp it to ensure it's not 0
42
+ variance = torch.clamp(variance, min=1e-20)
43
+
44
+ return variance
45
+
46
+ def set_strength(self, strength=1):
47
+ """
48
+ Set how much noise to add to the input image.
49
+ More noise (strength ~ 1) means that the output will be further from the input image.
50
+ Less noise (strength ~ 0) means that the output will be closer to the input image.
51
+ """
52
+ # start_step is the number of noise levels to skip
53
+ start_step = self.num_inference_steps - int(self.num_inference_steps * strength)
54
+ self.timesteps = self.timesteps[start_step:]
55
+ self.start_step = start_step
56
+
57
+ def step(self, timestep: int, latents: torch.Tensor, model_output: torch.Tensor):
58
+ t = timestep
59
+ prev_t = self._get_previous_timestep(t)
60
+
61
+ # 1. compute alphas, betas
62
+ alpha_prod_t = self.alphas_cumprod[t]
63
+ alpha_prod_t_prev = self.alphas_cumprod[prev_t] if prev_t >= 0 else self.one
64
+ beta_prod_t = 1 - alpha_prod_t
65
+ beta_prod_t_prev = 1 - alpha_prod_t_prev
66
+ current_alpha_t = alpha_prod_t / alpha_prod_t_prev
67
+ current_beta_t = 1 - current_alpha_t
68
+
69
+ # 2. compute predicted original sample from predicted noise also called
70
+ # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf
71
+ pred_original_sample = (latents - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
72
+
73
+ # 4. Compute coefficients for pred_original_sample x_0 and current sample x_t
74
+ # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
75
+ pred_original_sample_coeff = (alpha_prod_t_prev ** (0.5) * current_beta_t) / beta_prod_t
76
+ current_sample_coeff = current_alpha_t ** (0.5) * beta_prod_t_prev / beta_prod_t
77
+
78
+ # 5. Compute predicted previous sample µ_t
79
+ # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
80
+ pred_prev_sample = pred_original_sample_coeff * pred_original_sample + current_sample_coeff * latents
81
+
82
+ # 6. Add noise
83
+ variance = 0
84
+ if t > 0:
85
+ device = model_output.device
86
+ noise = torch.randn(model_output.shape, generator=self.generator, device=device, dtype=model_output.dtype)
87
+ # Compute the variance as per formula (7) from https://arxiv.org/pdf/2006.11239.pdf
88
+ variance = (self._get_variance(t) ** 0.5) * noise
89
+
90
+ # sample from N(mu, sigma) = X can be obtained by X = mu + sigma * N(0, 1)
91
+ # the variable "variance" is already multiplied by the noise N(0, 1)
92
+ pred_prev_sample = pred_prev_sample + variance
93
+
94
+ return pred_prev_sample
95
+
96
+ def add_noise(
97
+ self,
98
+ original_samples: torch.FloatTensor,
99
+ timesteps: torch.IntTensor,
100
+ ) -> torch.FloatTensor:
101
+ alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype)
102
+ timesteps = timesteps.to(original_samples.device)
103
+
104
+ sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
105
+ sqrt_alpha_prod = sqrt_alpha_prod.flatten()
106
+ while len(sqrt_alpha_prod.shape) < len(original_samples.shape):
107
+ sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
108
+
109
+ sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5
110
+ sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten()
111
+ while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape):
112
+ sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
113
+
114
+ # Sample from q(x_t | x_0) as in equation (4) of https://arxiv.org/pdf/2006.11239.pdf
115
+ # Because N(mu, sigma) = X can be obtained by X = mu + sigma * N(0, 1)
116
+ # here mu = sqrt_alpha_prod * original_samples and sigma = sqrt_one_minus_alpha_prod
117
+ noise = torch.randn(original_samples.shape, generator=self.generator, device=original_samples.device, dtype=original_samples.dtype)
118
+ noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise
119
+ return noisy_samples
120
+
121
+
122
+
123
+
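A short sketch of how the DDPMSampler above is driven (run from inside SD/ so `from ddpm import DDPMSampler` resolves; the UNet's noise prediction is replaced by a random tensor here just to exercise the sampler):

```py
import torch
from ddpm import DDPMSampler

sampler = DDPMSampler(generator=torch.Generator().manual_seed(42))
sampler.set_inference_timesteps(50)     # 1000 training steps thinned to 50: 980, 960, ..., 0
print(sampler.timesteps[:3])            # tensor([980, 960, 940])

latents = torch.randn(1, 4, 64, 64)     # a 512x512 image corresponds to a 64x64 latent
model_output = torch.randn_like(latents)  # stand-in for the UNet's predicted noise
prev_latents = sampler.step(int(sampler.timesteps[0]), latents, model_output)
print(prev_latents.shape)               # torch.Size([1, 4, 64, 64])
```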
SD/decoder.py ADDED
@@ -0,0 +1,177 @@
1
+ import torch
2
+ import torch.nn as nn
3
+ import torch.nn.functional as F
4
+ from attention import SelfAttention
5
+
6
+ class VAE_AttentionBlock(nn.Module):
7
+ def __init__(self, channels):
8
+ super().__init__()
9
+ self.groupnorm = nn.GroupNorm(32, channels)
10
+ self.attention = SelfAttention(1, channels)
11
+
12
+ def forward(self, x):
13
+ # x: (Batch_Size, Features, Height, Width)
14
+
15
+ residue = x
16
+
17
+ # (Batch_Size, Features, Height, Width) -> (Batch_Size, Features, Height, Width)
18
+ x = self.groupnorm(x)
19
+
20
+ n, c, h, w = x.shape
21
+
22
+ # (Batch_Size, Features, Height, Width) -> (Batch_Size, Features, Height * Width)
23
+ x = x.view((n, c, h * w))
24
+
25
+ # (Batch_Size, Features, Height * Width) -> (Batch_Size, Height * Width, Features). Each pixel becomes a feature of size "Features", the sequence length is "Height * Width".
26
+ x = x.transpose(-1, -2)
27
+
28
+ # Perform self-attention WITHOUT mask
29
+ # (Batch_Size, Height * Width, Features) -> (Batch_Size, Height * Width, Features)
30
+ x = self.attention(x)
31
+
32
+ # (Batch_Size, Height * Width, Features) -> (Batch_Size, Features, Height * Width)
33
+ x = x.transpose(-1, -2)
34
+
35
+ # (Batch_Size, Features, Height * Width) -> (Batch_Size, Features, Height, Width)
36
+ x = x.view((n, c, h, w))
37
+
38
+ # (Batch_Size, Features, Height, Width) + (Batch_Size, Features, Height, Width) -> (Batch_Size, Features, Height, Width)
39
+ x += residue
40
+
41
+ # (Batch_Size, Features, Height, Width)
42
+ return x
43
+
44
+ class VAE_ResidualBlock(nn.Module):
45
+ def __init__(self, in_channels, out_channels):
46
+ super().__init__()
47
+ self.groupnorm_1 = nn.GroupNorm(32, in_channels)
48
+ self.conv_1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
49
+
50
+ self.groupnorm_2 = nn.GroupNorm(32, out_channels)
51
+ self.conv_2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
52
+
53
+ if in_channels == out_channels:
54
+ self.residual_layer = nn.Identity()
55
+ else:
56
+ self.residual_layer = nn.Conv2d(in_channels, out_channels, kernel_size=1, padding=0)
57
+
58
+ def forward(self, x):
59
+ # x: (Batch_Size, In_Channels, Height, Width)
60
+
61
+ residue = x
62
+
63
+ # (Batch_Size, In_Channels, Height, Width) -> (Batch_Size, In_Channels, Height, Width)
64
+ x = self.groupnorm_1(x)
65
+
66
+ # (Batch_Size, In_Channels, Height, Width) -> (Batch_Size, In_Channels, Height, Width)
67
+ x = F.silu(x)
68
+
69
+ # (Batch_Size, In_Channels, Height, Width) -> (Batch_Size, Out_Channels, Height, Width)
70
+ x = self.conv_1(x)
71
+
72
+ # (Batch_Size, Out_Channels, Height, Width) -> (Batch_Size, Out_Channels, Height, Width)
73
+ x = self.groupnorm_2(x)
74
+
75
+ # (Batch_Size, Out_Channels, Height, Width) -> (Batch_Size, Out_Channels, Height, Width)
76
+ x = F.silu(x)
77
+
78
+ # (Batch_Size, Out_Channels, Height, Width) -> (Batch_Size, Out_Channels, Height, Width)
79
+ x = self.conv_2(x)
80
+
81
+ # (Batch_Size, Out_Channels, Height, Width) -> (Batch_Size, Out_Channels, Height, Width)
82
+ return x + self.residual_layer(residue)
83
+
84
+ class VAE_Decoder(nn.Sequential):
85
+ def __init__(self):
86
+ super().__init__(
87
+ # (Batch_Size, 4, Height / 8, Width / 8) -> (Batch_Size, 4, Height / 8, Width / 8)
88
+ nn.Conv2d(4, 4, kernel_size=1, padding=0),
89
+
90
+ # (Batch_Size, 4, Height / 8, Width / 8) -> (Batch_Size, 512, Height / 8, Width / 8)
91
+ nn.Conv2d(4, 512, kernel_size=3, padding=1),
92
+
93
+ # (Batch_Size, 512, Height / 8, Width / 8) -> (Batch_Size, 512, Height / 8, Width / 8)
94
+ VAE_ResidualBlock(512, 512),
95
+
96
+ # (Batch_Size, 512, Height / 8, Width / 8) -> (Batch_Size, 512, Height / 8, Width / 8)
97
+ VAE_AttentionBlock(512),
98
+
99
+ # (Batch_Size, 512, Height / 8, Width / 8) -> (Batch_Size, 512, Height / 8, Width / 8)
100
+ VAE_ResidualBlock(512, 512),
101
+
102
+ # (Batch_Size, 512, Height / 8, Width / 8) -> (Batch_Size, 512, Height / 8, Width / 8)
103
+ VAE_ResidualBlock(512, 512),
104
+
105
+ # (Batch_Size, 512, Height / 8, Width / 8) -> (Batch_Size, 512, Height / 8, Width / 8)
106
+ VAE_ResidualBlock(512, 512),
107
+
108
+ # (Batch_Size, 512, Height / 8, Width / 8) -> (Batch_Size, 512, Height / 8, Width / 8)
109
+ VAE_ResidualBlock(512, 512),
110
+
111
+ # Repeats the rows and columns of the data by scale_factor (like when you resize an image by doubling its size).
112
+ # (Batch_Size, 512, Height / 8, Width / 8) -> (Batch_Size, 512, Height / 4, Width / 4)
113
+ nn.Upsample(scale_factor=2),
114
+
115
+ # (Batch_Size, 512, Height / 4, Width / 4) -> (Batch_Size, 512, Height / 4, Width / 4)
116
+ nn.Conv2d(512, 512, kernel_size=3, padding=1),
117
+
118
+ # (Batch_Size, 512, Height / 4, Width / 4) -> (Batch_Size, 512, Height / 4, Width / 4)
119
+ VAE_ResidualBlock(512, 512),
120
+
121
+ # (Batch_Size, 512, Height / 4, Width / 4) -> (Batch_Size, 512, Height / 4, Width / 4)
122
+ VAE_ResidualBlock(512, 512),
123
+
124
+ # (Batch_Size, 512, Height / 4, Width / 4) -> (Batch_Size, 512, Height / 4, Width / 4)
125
+ VAE_ResidualBlock(512, 512),
126
+
127
+ # (Batch_Size, 512, Height / 4, Width / 4) -> (Batch_Size, 512, Height / 2, Width / 2)
128
+ nn.Upsample(scale_factor=2),
129
+
130
+ # (Batch_Size, 512, Height / 2, Width / 2) -> (Batch_Size, 512, Height / 2, Width / 2)
131
+ nn.Conv2d(512, 512, kernel_size=3, padding=1),
132
+
133
+ # (Batch_Size, 512, Height / 2, Width / 2) -> (Batch_Size, 256, Height / 2, Width / 2)
134
+ VAE_ResidualBlock(512, 256),
135
+
136
+ # (Batch_Size, 256, Height / 2, Width / 2) -> (Batch_Size, 256, Height / 2, Width / 2)
137
+ VAE_ResidualBlock(256, 256),
138
+
139
+ # (Batch_Size, 256, Height / 2, Width / 2) -> (Batch_Size, 256, Height / 2, Width / 2)
140
+ VAE_ResidualBlock(256, 256),
141
+
142
+ # (Batch_Size, 256, Height / 2, Width / 2) -> (Batch_Size, 256, Height, Width)
143
+ nn.Upsample(scale_factor=2),
144
+
145
+ # (Batch_Size, 256, Height, Width) -> (Batch_Size, 256, Height, Width)
146
+ nn.Conv2d(256, 256, kernel_size=3, padding=1),
147
+
148
+ # (Batch_Size, 256, Height, Width) -> (Batch_Size, 128, Height, Width)
149
+ VAE_ResidualBlock(256, 128),
150
+
151
+ # (Batch_Size, 128, Height, Width) -> (Batch_Size, 128, Height, Width)
152
+ VAE_ResidualBlock(128, 128),
153
+
154
+ # (Batch_Size, 128, Height, Width) -> (Batch_Size, 128, Height, Width)
155
+ VAE_ResidualBlock(128, 128),
156
+
157
+ # (Batch_Size, 128, Height, Width) -> (Batch_Size, 128, Height, Width)
158
+ nn.GroupNorm(32, 128),
159
+
160
+ # (Batch_Size, 128, Height, Width) -> (Batch_Size, 128, Height, Width)
161
+ nn.SiLU(),
162
+
163
+ # (Batch_Size, 128, Height, Width) -> (Batch_Size, 3, Height, Width)
164
+ nn.Conv2d(128, 3, kernel_size=3, padding=1),
165
+ )
166
+
167
+ def forward(self, x):
168
+ # x: (Batch_Size, 4, Height / 8, Width / 8)
169
+
170
+ # Remove the scaling added by the Encoder.
171
+ x /= 0.18215
172
+
173
+ for module in self:
174
+ x = module(x)
175
+
176
+ # (Batch_Size, 3, Height, Width)
177
+ return x
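The decoder maps a 4-channel latent back to a 3-channel image eight times larger in each spatial dimension, undoing the encoder's 0.18215 scaling first. A quick shape check with a randomly initialized decoder (run from inside SD/):

```py
import torch
from decoder import VAE_Decoder

decoder = VAE_Decoder()
latent = torch.randn(1, 4, 64, 64)   # latent for a 512x512 image (downsampling factor 8)
with torch.no_grad():
    image = decoder(latent)
print(image.shape)                   # torch.Size([1, 3, 512, 512])
```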
SD/diffusion.py ADDED
@@ -0,0 +1,349 @@
1
+ import torch
2
+ import torch.nn as nn
3
+ import torch.nn.functional as F
4
+ from attention import SelfAttention, CrossAttention
5
+
6
+ class TimeEmbedding(nn.Module):
7
+ def __init__(self, n_embd):
8
+ super().__init__()
9
+ self.linear_1 = nn.Linear(n_embd, 4 * n_embd)
10
+ self.linear_2 = nn.Linear(4 * n_embd, 4 * n_embd)
11
+
12
+ def forward(self, x):
13
+ # x: (1, 320)
14
+
15
+ # (1, 320) -> (1, 1280)
16
+ x = self.linear_1(x)
17
+
18
+ # (1, 1280) -> (1, 1280)
19
+ x = F.silu(x)
20
+
21
+ # (1, 1280) -> (1, 1280)
22
+ x = self.linear_2(x)
23
+
24
+ return x
25
+
26
+ class UNET_ResidualBlock(nn.Module):
27
+ def __init__(self, in_channels, out_channels, n_time=1280):
28
+ super().__init__()
29
+ self.groupnorm_feature = nn.GroupNorm(32, in_channels)
30
+ self.conv_feature = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
31
+ self.linear_time = nn.Linear(n_time, out_channels)
32
+
33
+ self.groupnorm_merged = nn.GroupNorm(32, out_channels)
34
+ self.conv_merged = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
35
+
36
+ if in_channels == out_channels:
37
+ self.residual_layer = nn.Identity()
38
+ else:
39
+ self.residual_layer = nn.Conv2d(in_channels, out_channels, kernel_size=1, padding=0)
40
+
41
+ def forward(self, feature, time):
42
+ # feature: (Batch_Size, In_Channels, Height, Width)
43
+ # time: (1, 1280)
44
+
45
+ residue = feature
46
+
47
+ # (Batch_Size, In_Channels, Height, Width) -> (Batch_Size, In_Channels, Height, Width)
48
+ feature = self.groupnorm_feature(feature)
49
+
50
+ # (Batch_Size, In_Channels, Height, Width) -> (Batch_Size, In_Channels, Height, Width)
51
+ feature = F.silu(feature)
52
+
53
+ # (Batch_Size, In_Channels, Height, Width) -> (Batch_Size, Out_Channels, Height, Width)
54
+ feature = self.conv_feature(feature)
55
+
56
+ # (1, 1280) -> (1, 1280)
57
+ time = F.silu(time)
58
+
59
+ # (1, 1280) -> (1, Out_Channels)
60
+ time = self.linear_time(time)
61
+
62
+ # Add width and height dimension to time.
63
+ # (Batch_Size, Out_Channels, Height, Width) + (1, Out_Channels, 1, 1) -> (Batch_Size, Out_Channels, Height, Width)
64
+ merged = feature + time.unsqueeze(-1).unsqueeze(-1)
65
+
66
+ # (Batch_Size, Out_Channels, Height, Width) -> (Batch_Size, Out_Channels, Height, Width)
67
+ merged = self.groupnorm_merged(merged)
68
+
69
+ # (Batch_Size, Out_Channels, Height, Width) -> (Batch_Size, Out_Channels, Height, Width)
70
+ merged = F.silu(merged)
71
+
72
+ # (Batch_Size, Out_Channels, Height, Width) -> (Batch_Size, Out_Channels, Height, Width)
73
+ merged = self.conv_merged(merged)
74
+
75
+ # (Batch_Size, Out_Channels, Height, Width) + (Batch_Size, Out_Channels, Height, Width) -> (Batch_Size, Out_Channels, Height, Width)
76
+ return merged + self.residual_layer(residue)
77
+
78
+ class UNET_AttentionBlock(nn.Module):
79
+ def __init__(self, n_head: int, n_embd: int, d_context=768):
80
+ super().__init__()
81
+ channels = n_head * n_embd
82
+
83
+ self.groupnorm = nn.GroupNorm(32, channels, eps=1e-6)
84
+ self.conv_input = nn.Conv2d(channels, channels, kernel_size=1, padding=0)
85
+
86
+ self.layernorm_1 = nn.LayerNorm(channels)
87
+ self.attention_1 = SelfAttention(n_head, channels, in_proj_bias=False)
88
+ self.layernorm_2 = nn.LayerNorm(channels)
89
+ self.attention_2 = CrossAttention(n_head, channels, d_context, in_proj_bias=False)
90
+ self.layernorm_3 = nn.LayerNorm(channels)
91
+ self.linear_geglu_1 = nn.Linear(channels, 4 * channels * 2)
92
+ self.linear_geglu_2 = nn.Linear(4 * channels, channels)
93
+
94
+ self.conv_output = nn.Conv2d(channels, channels, kernel_size=1, padding=0)
95
+
96
+ def forward(self, x, context):
97
+ # x: (Batch_Size, Features, Height, Width)
98
+ # context: (Batch_Size, Seq_Len, Dim)
99
+
100
+ residue_long = x
101
+
102
+ # (Batch_Size, Features, Height, Width) -> (Batch_Size, Features, Height, Width)
103
+ x = self.groupnorm(x)
104
+
105
+ # (Batch_Size, Features, Height, Width) -> (Batch_Size, Features, Height, Width)
106
+ x = self.conv_input(x)
107
+
108
+ n, c, h, w = x.shape
109
+
110
+ # (Batch_Size, Features, Height, Width) -> (Batch_Size, Features, Height * Width)
111
+ x = x.view((n, c, h * w))
112
+
113
+ # (Batch_Size, Features, Height * Width) -> (Batch_Size, Height * Width, Features)
114
+ x = x.transpose(-1, -2)
115
+
116
+ # Normalization + Self-Attention with skip connection
117
+
118
+ # (Batch_Size, Height * Width, Features)
119
+ residue_short = x
120
+
121
+ # (Batch_Size, Height * Width, Features) -> (Batch_Size, Height * Width, Features)
122
+ x = self.layernorm_1(x)
123
+
124
+ # (Batch_Size, Height * Width, Features) -> (Batch_Size, Height * Width, Features)
125
+ x = self.attention_1(x)
126
+
127
+ # (Batch_Size, Height * Width, Features) + (Batch_Size, Height * Width, Features) -> (Batch_Size, Height * Width, Features)
128
+ x += residue_short
129
+
130
+ # (Batch_Size, Height * Width, Features)
131
+ residue_short = x
132
+
133
+ # Normalization + Cross-Attention with skip connection
134
+
135
+ # (Batch_Size, Height * Width, Features) -> (Batch_Size, Height * Width, Features)
136
+ x = self.layernorm_2(x)
137
+
138
+ # (Batch_Size, Height * Width, Features) -> (Batch_Size, Height * Width, Features)
139
+ x = self.attention_2(x, context)
140
+
141
+ # (Batch_Size, Height * Width, Features) + (Batch_Size, Height * Width, Features) -> (Batch_Size, Height * Width, Features)
142
+ x += residue_short
143
+
144
+ # (Batch_Size, Height * Width, Features)
145
+ residue_short = x
146
+
147
+ # Normalization + FFN with GeGLU and skip connection
148
+
149
+ # (Batch_Size, Height * Width, Features) -> (Batch_Size, Height * Width, Features)
150
+ x = self.layernorm_3(x)
151
+
152
+ # GeGLU as implemented in the original code: https://github.com/CompVis/stable-diffusion/blob/21f890f9da3cfbeaba8e2ac3c425ee9e998d5229/ldm/modules/attention.py#L37C10-L37C10
153
+ # (Batch_Size, Height * Width, Features) -> two tensors of shape (Batch_Size, Height * Width, Features * 4)
154
+ x, gate = self.linear_geglu_1(x).chunk(2, dim=-1)
155
+
156
+ # Element-wise product: (Batch_Size, Height * Width, Features * 4) * (Batch_Size, Height * Width, Features * 4) -> (Batch_Size, Height * Width, Features * 4)
157
+ x = x * F.gelu(gate)
158
+
159
+ # (Batch_Size, Height * Width, Features * 4) -> (Batch_Size, Height * Width, Features)
160
+ x = self.linear_geglu_2(x)
161
+
162
+ # (Batch_Size, Height * Width, Features) + (Batch_Size, Height * Width, Features) -> (Batch_Size, Height * Width, Features)
163
+ x += residue_short
164
+
165
+ # (Batch_Size, Height * Width, Features) -> (Batch_Size, Features, Height * Width)
166
+ x = x.transpose(-1, -2)
167
+
168
+ # (Batch_Size, Features, Height * Width) -> (Batch_Size, Features, Height, Width)
169
+ x = x.view((n, c, h, w))
170
+
171
+ # Final skip connection between initial input and output of the block
172
+ # (Batch_Size, Features, Height, Width) + (Batch_Size, Features, Height, Width) -> (Batch_Size, Features, Height, Width)
173
+ return self.conv_output(x) + residue_long
174
+
175
+ class Upsample(nn.Module):
176
+ def __init__(self, channels):
177
+ super().__init__()
178
+ self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
179
+
180
+ def forward(self, x):
181
+ # (Batch_Size, Features, Height, Width) -> (Batch_Size, Features, Height * 2, Width * 2)
182
+ x = F.interpolate(x, scale_factor=2, mode='nearest')
183
+ return self.conv(x)
184
+
185
+ class SwitchSequential(nn.Sequential):
186
+ def forward(self, x, context, time):
187
+ for layer in self:
188
+ if isinstance(layer, UNET_AttentionBlock):
189
+ x = layer(x, context)
190
+ elif isinstance(layer, UNET_ResidualBlock):
191
+ x = layer(x, time)
192
+ else:
193
+ x = layer(x)
194
+ return x
195
+
196
+ class UNET(nn.Module):
197
+ def __init__(self):
198
+ super().__init__()
199
+ self.encoders = nn.ModuleList([
200
+ # (Batch_Size, 4, Height / 8, Width / 8) -> (Batch_Size, 320, Height / 8, Width / 8)
201
+ SwitchSequential(nn.Conv2d(4, 320, kernel_size=3, padding=1)),
202
+
203
+ # (Batch_Size, 320, Height / 8, Width / 8) -> # (Batch_Size, 320, Height / 8, Width / 8) -> (Batch_Size, 320, Height / 8, Width / 8)
204
+ SwitchSequential(UNET_ResidualBlock(320, 320), UNET_AttentionBlock(8, 40)),
205
+
206
+ # (Batch_Size, 320, Height / 8, Width / 8) -> # (Batch_Size, 320, Height / 8, Width / 8) -> (Batch_Size, 320, Height / 8, Width / 8)
207
+ SwitchSequential(UNET_ResidualBlock(320, 320), UNET_AttentionBlock(8, 40)),
208
+
209
+ # (Batch_Size, 320, Height / 8, Width / 8) -> (Batch_Size, 320, Height / 16, Width / 16)
210
+ SwitchSequential(nn.Conv2d(320, 320, kernel_size=3, stride=2, padding=1)),
211
+
212
+ # (Batch_Size, 320, Height / 16, Width / 16) -> (Batch_Size, 640, Height / 16, Width / 16) -> (Batch_Size, 640, Height / 16, Width / 16)
213
+ SwitchSequential(UNET_ResidualBlock(320, 640), UNET_AttentionBlock(8, 80)),
214
+
215
+ # (Batch_Size, 640, Height / 16, Width / 16) -> (Batch_Size, 640, Height / 16, Width / 16) -> (Batch_Size, 640, Height / 16, Width / 16)
216
+ SwitchSequential(UNET_ResidualBlock(640, 640), UNET_AttentionBlock(8, 80)),
217
+
218
+ # (Batch_Size, 640, Height / 16, Width / 16) -> (Batch_Size, 640, Height / 32, Width / 32)
219
+ SwitchSequential(nn.Conv2d(640, 640, kernel_size=3, stride=2, padding=1)),
220
+
221
+ # (Batch_Size, 640, Height / 32, Width / 32) -> (Batch_Size, 1280, Height / 32, Width / 32) -> (Batch_Size, 1280, Height / 32, Width / 32)
222
+ SwitchSequential(UNET_ResidualBlock(640, 1280), UNET_AttentionBlock(8, 160)),
223
+
224
+ # (Batch_Size, 1280, Height / 32, Width / 32) -> (Batch_Size, 1280, Height / 32, Width / 32) -> (Batch_Size, 1280, Height / 32, Width / 32)
225
+ SwitchSequential(UNET_ResidualBlock(1280, 1280), UNET_AttentionBlock(8, 160)),
226
+
227
+ # (Batch_Size, 1280, Height / 32, Width / 32) -> (Batch_Size, 1280, Height / 64, Width / 64)
228
+ SwitchSequential(nn.Conv2d(1280, 1280, kernel_size=3, stride=2, padding=1)),
229
+
230
+ # (Batch_Size, 1280, Height / 64, Width / 64) -> (Batch_Size, 1280, Height / 64, Width / 64)
231
+ SwitchSequential(UNET_ResidualBlock(1280, 1280)),
232
+
233
+ # (Batch_Size, 1280, Height / 64, Width / 64) -> (Batch_Size, 1280, Height / 64, Width / 64)
234
+ SwitchSequential(UNET_ResidualBlock(1280, 1280)),
235
+ ])
236
+
237
+ self.bottleneck = SwitchSequential(
238
+ # (Batch_Size, 1280, Height / 64, Width / 64) -> (Batch_Size, 1280, Height / 64, Width / 64)
239
+ UNET_ResidualBlock(1280, 1280),
240
+
241
+ # (Batch_Size, 1280, Height / 64, Width / 64) -> (Batch_Size, 1280, Height / 64, Width / 64)
242
+ UNET_AttentionBlock(8, 160),
243
+
244
+ # (Batch_Size, 1280, Height / 64, Width / 64) -> (Batch_Size, 1280, Height / 64, Width / 64)
245
+ UNET_ResidualBlock(1280, 1280),
246
+ )
247
+
248
+ self.decoders = nn.ModuleList([
249
+ # (Batch_Size, 2560, Height / 64, Width / 64) -> (Batch_Size, 1280, Height / 64, Width / 64)
250
+ SwitchSequential(UNET_ResidualBlock(2560, 1280)),
251
+
252
+ # (Batch_Size, 2560, Height / 64, Width / 64) -> (Batch_Size, 1280, Height / 64, Width / 64)
253
+ SwitchSequential(UNET_ResidualBlock(2560, 1280)),
254
+
255
+ # (Batch_Size, 2560, Height / 64, Width / 64) -> (Batch_Size, 1280, Height / 64, Width / 64) -> (Batch_Size, 1280, Height / 32, Width / 32)
256
+ SwitchSequential(UNET_ResidualBlock(2560, 1280), Upsample(1280)),
257
+
258
+ # (Batch_Size, 2560, Height / 32, Width / 32) -> (Batch_Size, 1280, Height / 32, Width / 32) -> (Batch_Size, 1280, Height / 32, Width / 32)
259
+ SwitchSequential(UNET_ResidualBlock(2560, 1280), UNET_AttentionBlock(8, 160)),
260
+
261
+ # (Batch_Size, 2560, Height / 32, Width / 32) -> (Batch_Size, 1280, Height / 32, Width / 32) -> (Batch_Size, 1280, Height / 32, Width / 32)
262
+ SwitchSequential(UNET_ResidualBlock(2560, 1280), UNET_AttentionBlock(8, 160)),
263
+
264
+ # (Batch_Size, 1920, Height / 32, Width / 32) -> (Batch_Size, 1280, Height / 32, Width / 32) -> (Batch_Size, 1280, Height / 32, Width / 32) -> (Batch_Size, 1280, Height / 16, Width / 16)
265
+ SwitchSequential(UNET_ResidualBlock(1920, 1280), UNET_AttentionBlock(8, 160), Upsample(1280)),
266
+
267
+ # (Batch_Size, 1920, Height / 16, Width / 16) -> (Batch_Size, 640, Height / 16, Width / 16) -> (Batch_Size, 640, Height / 16, Width / 16)
268
+ SwitchSequential(UNET_ResidualBlock(1920, 640), UNET_AttentionBlock(8, 80)),
269
+
270
+ # (Batch_Size, 1280, Height / 16, Width / 16) -> (Batch_Size, 640, Height / 16, Width / 16) -> (Batch_Size, 640, Height / 16, Width / 16)
271
+ SwitchSequential(UNET_ResidualBlock(1280, 640), UNET_AttentionBlock(8, 80)),
272
+
273
+ # (Batch_Size, 960, Height / 16, Width / 16) -> (Batch_Size, 640, Height / 16, Width / 16) -> (Batch_Size, 640, Height / 16, Width / 16) -> (Batch_Size, 640, Height / 8, Width / 8)
274
+ SwitchSequential(UNET_ResidualBlock(960, 640), UNET_AttentionBlock(8, 80), Upsample(640)),
275
+
276
+ # (Batch_Size, 960, Height / 8, Width / 8) -> (Batch_Size, 320, Height / 8, Width / 8) -> (Batch_Size, 320, Height / 8, Width / 8)
277
+ SwitchSequential(UNET_ResidualBlock(960, 320), UNET_AttentionBlock(8, 40)),
278
+
279
+ # (Batch_Size, 640, Height / 8, Width / 8) -> (Batch_Size, 320, Height / 8, Width / 8) -> (Batch_Size, 320, Height / 8, Width / 8)
280
+ SwitchSequential(UNET_ResidualBlock(640, 320), UNET_AttentionBlock(8, 40)),
281
+
282
+ # (Batch_Size, 640, Height / 8, Width / 8) -> (Batch_Size, 320, Height / 8, Width / 8) -> (Batch_Size, 320, Height / 8, Width / 8)
283
+ SwitchSequential(UNET_ResidualBlock(640, 320), UNET_AttentionBlock(8, 40)),
284
+ ])
285
+
286
+ def forward(self, x, context, time):
287
+ # x: (Batch_Size, 4, Height / 8, Width / 8)
288
+ # context: (Batch_Size, Seq_Len, Dim)
289
+ # time: (1, 1280)
290
+
291
+ skip_connections = []
292
+ for layers in self.encoders:
293
+ x = layers(x, context, time)
294
+ skip_connections.append(x)
295
+
296
+ x = self.bottleneck(x, context, time)
297
+
298
+ for layers in self.decoders:
299
+ # Since we always concat with the skip connection of the encoder, the number of features increases before being sent to the decoder's layer
300
+ x = torch.cat((x, skip_connections.pop()), dim=1)
301
+ x = layers(x, context, time)
302
+
303
+ return x
304
+
305
+
306
+ class UNET_OutputLayer(nn.Module):
307
+ def __init__(self, in_channels, out_channels):
308
+ super().__init__()
309
+ self.groupnorm = nn.GroupNorm(32, in_channels)
310
+ self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
311
+
312
+ def forward(self, x):
313
+ # x: (Batch_Size, 320, Height / 8, Width / 8)
314
+
315
+ # (Batch_Size, 320, Height / 8, Width / 8) -> (Batch_Size, 320, Height / 8, Width / 8)
316
+ x = self.groupnorm(x)
317
+
318
+ # (Batch_Size, 320, Height / 8, Width / 8) -> (Batch_Size, 320, Height / 8, Width / 8)
319
+ x = F.silu(x)
320
+
321
+ # (Batch_Size, 320, Height / 8, Width / 8) -> (Batch_Size, 4, Height / 8, Width / 8)
322
+ x = self.conv(x)
323
+
324
+ # (Batch_Size, 4, Height / 8, Width / 8)
325
+ return x
326
+
327
+ class Diffusion(nn.Module):
328
+ def __init__(self):
329
+ super().__init__()
330
+ self.time_embedding = TimeEmbedding(320)
331
+ self.unet = UNET()
332
+ self.final = UNET_OutputLayer(320, 4)
333
+
334
+ def forward(self, latent, context, time):
335
+ # latent: (Batch_Size, 4, Height / 8, Width / 8)
336
+ # context: (Batch_Size, Seq_Len, Dim)
337
+ # time: (1, 320)
338
+
339
+ # (1, 320) -> (1, 1280)
340
+ time = self.time_embedding(time)
341
+
342
+ # (Batch, 4, Height / 8, Width / 8) -> (Batch, 320, Height / 8, Width / 8)
343
+ output = self.unet(latent, context, time)
344
+
345
+ # (Batch, 320, Height / 8, Width / 8) -> (Batch, 4, Height / 8, Width / 8)
346
+ output = self.final(output)
347
+
348
+ # (Batch, 4, Height / 8, Width / 8)
349
+ return output
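The Diffusion wrapper above ties the time embedding, the UNET and the output layer together. As a quick sanity check of the tensor shapes, here is a minimal sketch; it assumes diffusion.py is importable, runs randomly initialized weights, and instantiates the full UNet (so it is slow and memory-hungry on CPU):

import torch
from diffusion import Diffusion

model = Diffusion()
latent = torch.randn(1, 4, 64, 64)    # (Batch_Size, 4, Height / 8, Width / 8) for a 512x512 image
context = torch.randn(1, 77, 768)     # (Batch_Size, Seq_Len, Dim) as produced by the CLIP text encoder
time = torch.randn(1, 320)            # (1, 320) time embedding before TimeEmbedding expands it to 1280
with torch.no_grad():
    out = model(latent, context, time)
print(out.shape)                      # torch.Size([1, 4, 64, 64])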
SD/encoder.py ADDED
@@ -0,0 +1,103 @@
1
+ import torch
2
+ import torch.nn as nn
3
+ import torch.nn.functional as F
4
+ from decoder import VAE_AttentionBlock, VAE_ResidualBlock
5
+
6
+ class VAE_Encoder(nn.Sequential):
7
+ def __init__(self):
8
+ super().__init__(
9
+ # (Batch_Size, Channel, Height, Width) -> (Batch_Size, 128, Height, Width)
10
+ nn.Conv2d(3, 128, kernel_size=3, padding=1),
11
+
12
+ # (Batch_Size, 128, Height, Width) -> (Batch_Size, 128, Height, Width)
13
+ VAE_ResidualBlock(128, 128),
14
+
15
+ # (Batch_Size, 128, Height, Width) -> (Batch_Size, 128, Height, Width)
16
+ VAE_ResidualBlock(128, 128),
17
+
18
+ # (Batch_Size, 128, Height, Width) -> (Batch_Size, 128, Height / 2, Width / 2)
19
+ nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=0),
20
+
21
+ # (Batch_Size, 128, Height / 2, Width / 2) -> (Batch_Size, 256, Height / 2, Width / 2)
22
+ VAE_ResidualBlock(128, 256),
23
+
24
+ # (Batch_Size, 256, Height / 2, Width / 2) -> (Batch_Size, 256, Height / 2, Width / 2)
25
+ VAE_ResidualBlock(256, 256),
26
+
27
+ # (Batch_Size, 256, Height / 2, Width / 2) -> (Batch_Size, 256, Height / 4, Width / 4)
28
+ nn.Conv2d(256, 256, kernel_size=3, stride=2, padding=0),
29
+
30
+ # (Batch_Size, 256, Height / 4, Width / 4) -> (Batch_Size, 512, Height / 4, Width / 4)
31
+ VAE_ResidualBlock(256, 512),
32
+
33
+ # (Batch_Size, 512, Height / 4, Width / 4) -> (Batch_Size, 512, Height / 4, Width / 4)
34
+ VAE_ResidualBlock(512, 512),
35
+
36
+ # (Batch_Size, 512, Height / 4, Width / 4) -> (Batch_Size, 512, Height / 8, Width / 8)
37
+ nn.Conv2d(512, 512, kernel_size=3, stride=2, padding=0),
38
+
39
+ # (Batch_Size, 512, Height / 8, Width / 8) -> (Batch_Size, 512, Height / 8, Width / 8)
40
+ VAE_ResidualBlock(512, 512),
41
+
42
+ # (Batch_Size, 512, Height / 8, Width / 8) -> (Batch_Size, 512, Height / 8, Width / 8)
43
+ VAE_ResidualBlock(512, 512),
44
+
45
+ # (Batch_Size, 512, Height / 8, Width / 8) -> (Batch_Size, 512, Height / 8, Width / 8)
46
+ VAE_ResidualBlock(512, 512),
47
+
48
+ # (Batch_Size, 512, Height / 8, Width / 8) -> (Batch_Size, 512, Height / 8, Width / 8)
49
+ VAE_AttentionBlock(512),
50
+
51
+ # (Batch_Size, 512, Height / 8, Width / 8) -> (Batch_Size, 512, Height / 8, Width / 8)
52
+ VAE_ResidualBlock(512, 512),
53
+
54
+ # (Batch_Size, 512, Height / 8, Width / 8) -> (Batch_Size, 512, Height / 8, Width / 8)
55
+ nn.GroupNorm(32, 512),
56
+
57
+ # (Batch_Size, 512, Height / 8, Width / 8) -> (Batch_Size, 512, Height / 8, Width / 8)
58
+ nn.SiLU(),
59
+
60
+ # With padding=1, each side of the input is padded by one pixel:
61
+ # Out_Height = In_Height + Padding_Top + Padding_Bottom = In_Height + 2
62
+ # Out_Width = In_Width + Padding_Left + Padding_Right = In_Width + 2
63
+ # This extra border exactly compensates for the kernel size of 3,
64
+ # so the convolution leaves the spatial dimensions unchanged.
65
+ # (Batch_Size, 512, Height / 8, Width / 8) -> (Batch_Size, 8, Height / 8, Width / 8).
66
+ nn.Conv2d(512, 8, kernel_size=3, padding=1),
67
+
68
+ # (Batch_Size, 8, Height / 8, Width / 8) -> (Batch_Size, 8, Height / 8, Width / 8)
69
+ nn.Conv2d(8, 8, kernel_size=1, padding=0),
70
+ )
71
+
72
+ def forward(self, x, noise):
73
+ # x: (Batch_Size, Channel, Height, Width)
74
+ # noise: (Batch_Size, 4, Height / 8, Width / 8)
75
+
76
+ for module in self:
77
+
78
+ if getattr(module, 'stride', None) == (2, 2): # Padding at downsampling should be asymmetric (see #8)
79
+ # Pad: (Padding_Left, Padding_Right, Padding_Top, Padding_Bottom).
80
+ # Pad with zeros on the right and bottom.
81
+ # (Batch_Size, Channel, Height, Width) -> (Batch_Size, Channel, Height + Padding_Top + Padding_Bottom, Width + Padding_Left + Padding_Right) = (Batch_Size, Channel, Height + 1, Width + 1)
82
+ x = F.pad(x, (0, 1, 0, 1))
83
+
84
+ x = module(x)
85
+ # (Batch_Size, 8, Height / 8, Width / 8) -> two tensors of shape (Batch_Size, 4, Height / 8, Width / 8)
86
+ mean, log_variance = torch.chunk(x, 2, dim=1)
87
+ # Clamp the log variance between -30 and 20, so that the variance is between (circa) 1e-14 and 1e8.
88
+ # (Batch_Size, 4, Height / 8, Width / 8) -> (Batch_Size, 4, Height / 8, Width / 8)
89
+ log_variance = torch.clamp(log_variance, -30, 20)
90
+ # (Batch_Size, 4, Height / 8, Width / 8) -> (Batch_Size, 4, Height / 8, Width / 8)
91
+ variance = log_variance.exp()
92
+ # (Batch_Size, 4, Height / 8, Width / 8) -> (Batch_Size, 4, Height / 8, Width / 8)
93
+ stdev = variance.sqrt()
94
+
95
+ # Transform N(0, 1) -> N(mean, stdev)
96
+ # (Batch_Size, 4, Height / 8, Width / 8) -> (Batch_Size, 4, Height / 8, Width / 8)
97
+ x = mean + stdev * noise
98
+
99
+ # Scale by a constant
100
+ # Constant taken from: https://github.com/CompVis/stable-diffusion/blob/21f890f9da3cfbeaba8e2ac3c425ee9e998d5229/configs/stable-diffusion/v1-inference.yaml#L17C1-L17C1
101
+ x *= 0.18215
102
+
103
+ return x
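Note that VAE_Encoder.forward takes the sampling noise as an argument instead of drawing it internally; pipeline.py passes noise from a seeded torch.Generator so that encoding is reproducible. A minimal sketch of calling it directly (random weights, CPU, input already rescaled to [-1, 1] as pipeline.py does):

import torch
from encoder import VAE_Encoder

encoder = VAE_Encoder()
image = torch.randn(1, 3, 512, 512)   # (Batch_Size, Channel, Height, Width)
noise = torch.randn(1, 4, 64, 64)     # (Batch_Size, 4, Height / 8, Width / 8)
with torch.no_grad():
    latents = encoder(image, noise)   # mean + stdev * noise, scaled by 0.18215
print(latents.shape)                  # torch.Size([1, 4, 64, 64])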
SD/model_converter.py ADDED
The diff for this file is too large to render. See raw diff
 
SD/model_loader.py ADDED
@@ -0,0 +1,28 @@
1
+ from clip import CLIP
2
+ from encoder import VAE_Encoder
3
+ from decoder import VAE_Decoder
4
+ from diffusion import Diffusion
5
+
6
+ import model_converter
7
+
8
+ def preload_models_from_standard_weights(ckpt_path, device):
9
+ state_dict = model_converter.load_from_standard_weights(ckpt_path, device)
10
+
11
+ encoder = VAE_Encoder().to(device)
12
+ encoder.load_state_dict(state_dict['encoder'], strict=True)
13
+
14
+ decoder = VAE_Decoder().to(device)
15
+ decoder.load_state_dict(state_dict['decoder'], strict=True)
16
+
17
+ diffusion = Diffusion().to(device)
18
+ diffusion.load_state_dict(state_dict['diffusion'], strict=True)
19
+
20
+ clip = CLIP().to(device)
21
+ clip.load_state_dict(state_dict['clip'], strict=True)
22
+
23
+ return {
24
+ 'clip': clip,
25
+ 'encoder': encoder,
26
+ 'decoder': decoder,
27
+ 'diffusion': diffusion,
28
+ }
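A typical call to the loader above, mirroring the path used in SD/run.py; the checkpoint must be a standard Stable Diffusion v1 .ckpt (e.g. v1-5-pruned-emaonly.ckpt) so that model_converter can remap the weight names:

import model_loader

models = model_loader.preload_models_from_standard_weights("../data/v1-5-pruned-emaonly.ckpt", device="cpu")
clip, encoder, decoder, diffusion = (models[k] for k in ("clip", "encoder", "decoder", "diffusion"))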
SD/pipeline.py ADDED
@@ -0,0 +1,170 @@
1
+ import torch
2
+ import numpy as np
3
+ from tqdm import tqdm
4
+ from ddpm import DDPMSampler
5
+
6
+ WIDTH = 512
7
+ HEIGHT = 512
8
+ LATENTS_WIDTH = WIDTH // 8
9
+ LATENTS_HEIGHT = HEIGHT // 8
10
+
11
+ def generate(
12
+ prompt,
13
+ uncond_prompt=None,
14
+ input_image=None,
15
+ strength=0.8,
16
+ do_cfg=True,
17
+ cfg_scale=7.5,
18
+ sampler_name="ddpm",
19
+ n_inference_steps=50,
20
+ models={},
21
+ seed=None,
22
+ device=None,
23
+ idle_device=None,
24
+ tokenizer=None,
25
+ ):
26
+ with torch.no_grad():
27
+ if not 0 < strength <= 1:
28
+ raise ValueError("strength must be between 0 and 1")
29
+
30
+ if idle_device:
31
+ to_idle = lambda x: x.to(idle_device)
32
+ else:
33
+ to_idle = lambda x: x
34
+
35
+ # Initialize random number generator according to the seed specified
36
+ generator = torch.Generator(device=device)
37
+ if seed is None:
38
+ generator.seed()
39
+ else:
40
+ generator.manual_seed(seed)
41
+
42
+ clip = models["clip"]
43
+ clip.to(device)
44
+
45
+ if do_cfg:
46
+ # Convert into a list of length Seq_Len=77
47
+ cond_tokens = tokenizer.batch_encode_plus(
48
+ [prompt], padding="max_length", max_length=77
49
+ ).input_ids
50
+ # (Batch_Size, Seq_Len)
51
+ cond_tokens = torch.tensor(cond_tokens, dtype=torch.long, device=device)
52
+ # (Batch_Size, Seq_Len) -> (Batch_Size, Seq_Len, Dim)
53
+ cond_context = clip(cond_tokens)
54
+ # Convert into a list of length Seq_Len=77
55
+ uncond_tokens = tokenizer.batch_encode_plus(
56
+ [uncond_prompt], padding="max_length", max_length=77
57
+ ).input_ids
58
+ # (Batch_Size, Seq_Len)
59
+ uncond_tokens = torch.tensor(uncond_tokens, dtype=torch.long, device=device)
60
+ # (Batch_Size, Seq_Len) -> (Batch_Size, Seq_Len, Dim)
61
+ uncond_context = clip(uncond_tokens)
62
+ # (Batch_Size, Seq_Len, Dim) + (Batch_Size, Seq_Len, Dim) -> (2 * Batch_Size, Seq_Len, Dim)
63
+ context = torch.cat([cond_context, uncond_context])
64
+ else:
65
+ # Convert into a list of length Seq_Len=77
66
+ tokens = tokenizer.batch_encode_plus(
67
+ [prompt], padding="max_length", max_length=77
68
+ ).input_ids
69
+ # (Batch_Size, Seq_Len)
70
+ tokens = torch.tensor(tokens, dtype=torch.long, device=device)
71
+ # (Batch_Size, Seq_Len) -> (Batch_Size, Seq_Len, Dim)
72
+ context = clip(tokens)
73
+ to_idle(clip)
74
+
75
+ if sampler_name == "ddpm":
76
+ sampler = DDPMSampler(generator)
77
+ sampler.set_inference_timesteps(n_inference_steps)
78
+ else:
79
+ raise ValueError(f"Unknown sampler value: {sampler_name}")
80
+
81
+ latents_shape = (1, 4, LATENTS_HEIGHT, LATENTS_WIDTH)
82
+
83
+ if input_image:
84
+ encoder = models["encoder"]
85
+ encoder.to(device)
86
+
87
+ input_image_tensor = input_image.resize((WIDTH, HEIGHT))
88
+ # (Height, Width, Channel)
89
+ input_image_tensor = np.array(input_image_tensor)
90
+ # (Height, Width, Channel) -> (Height, Width, Channel)
91
+ input_image_tensor = torch.tensor(input_image_tensor, dtype=torch.float32, device=device)
92
+ # (Height, Width, Channel) -> (Height, Width, Channel)
93
+ input_image_tensor = rescale(input_image_tensor, (0, 255), (-1, 1))
94
+ # (Height, Width, Channel) -> (Batch_Size, Height, Width, Channel)
95
+ input_image_tensor = input_image_tensor.unsqueeze(0)
96
+ # (Batch_Size, Height, Width, Channel) -> (Batch_Size, Channel, Height, Width)
97
+ input_image_tensor = input_image_tensor.permute(0, 3, 1, 2)
98
+
99
+ # (Batch_Size, 4, Latents_Height, Latents_Width)
100
+ encoder_noise = torch.randn(latents_shape, generator=generator, device=device)
101
+ # (Batch_Size, 4, Latents_Height, Latents_Width)
102
+ latents = encoder(input_image_tensor, encoder_noise)
103
+
104
+ # Add noise to the latents (the encoded input image)
105
+ # (Batch_Size, 4, Latents_Height, Latents_Width)
106
+ sampler.set_strength(strength=strength)
107
+ latents = sampler.add_noise(latents, sampler.timesteps[0])
108
+
109
+ to_idle(encoder)
110
+ else:
111
+ # (Batch_Size, 4, Latents_Height, Latents_Width)
112
+ latents = torch.randn(latents_shape, generator=generator, device=device)
113
+
114
+ diffusion = models["diffusion"]
115
+ diffusion.to(device)
116
+
117
+ timesteps = tqdm(sampler.timesteps)
118
+ for i, timestep in enumerate(timesteps):
119
+ # (1, 320)
120
+ time_embedding = get_time_embedding(timestep).to(device)
121
+
122
+ # (Batch_Size, 4, Latents_Height, Latents_Width)
123
+ model_input = latents
124
+
125
+ if do_cfg:
126
+ # (Batch_Size, 4, Latents_Height, Latents_Width) -> (2 * Batch_Size, 4, Latents_Height, Latents_Width)
127
+ model_input = model_input.repeat(2, 1, 1, 1)
128
+
129
+ # model_output is the predicted noise
130
+ # (Batch_Size, 4, Latents_Height, Latents_Width) -> (Batch_Size, 4, Latents_Height, Latents_Width)
131
+ model_output = diffusion(model_input, context, time_embedding)
132
+
133
+ if do_cfg:
134
+ output_cond, output_uncond = model_output.chunk(2)
135
+ model_output = cfg_scale * (output_cond - output_uncond) + output_uncond
136
+
137
+ # (Batch_Size, 4, Latents_Height, Latents_Width) -> (Batch_Size, 4, Latents_Height, Latents_Width)
138
+ latents = sampler.step(timestep, latents, model_output)
139
+
140
+ to_idle(diffusion)
141
+
142
+ decoder = models["decoder"]
143
+ decoder.to(device)
144
+ # (Batch_Size, 4, Latents_Height, Latents_Width) -> (Batch_Size, 3, Height, Width)
145
+ images = decoder(latents)
146
+ to_idle(decoder)
147
+
148
+ images = rescale(images, (-1, 1), (0, 255), clamp=True)
149
+ # (Batch_Size, Channel, Height, Width) -> (Batch_Size, Height, Width, Channel)
150
+ images = images.permute(0, 2, 3, 1)
151
+ images = images.to("cpu", torch.uint8).numpy()
152
+ return images[0]
153
+
154
+ def rescale(x, old_range, new_range, clamp=False):
155
+ old_min, old_max = old_range
156
+ new_min, new_max = new_range
157
+ x -= old_min
158
+ x *= (new_max - new_min) / (old_max - old_min)
159
+ x += new_min
160
+ if clamp:
161
+ x = x.clamp(new_min, new_max)
162
+ return x
163
+
164
+ def get_time_embedding(timestep):
165
+ # Shape: (160,)
166
+ freqs = torch.pow(10000, -torch.arange(start=0, end=160, dtype=torch.float32) / 160)
167
+ # Shape: (1, 160)
168
+ x = torch.tensor([timestep], dtype=torch.float32)[:, None] * freqs[None]
169
+ # Shape: (1, 160 * 2)
170
+ return torch.cat([torch.cos(x), torch.sin(x)], dim=-1)
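The classifier-free guidance step inside the denoising loop above is easy to miss: the model is run on a batch that stacks the conditioned and unconditioned inputs, and the two predictions are recombined as uncond + scale * (cond - uncond). Isolated as a sketch (dummy tensors, helper name is illustrative only):

import torch

def apply_cfg(output_cond, output_uncond, cfg_scale=7.5):
    # cfg_scale > 1 pushes the denoising direction towards the prompt-conditioned prediction
    return cfg_scale * (output_cond - output_uncond) + output_uncond

output_cond, output_uncond = torch.randn(2, 1, 4, 64, 64)
print(apply_cfg(output_cond, output_uncond).shape)   # torch.Size([1, 4, 64, 64])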
SD/run.py ADDED
@@ -0,0 +1,64 @@
1
+ import model_loader
2
+ import pipeline
3
+ from PIL import Image
4
+ from pathlib import Path
5
+ from transformers import CLIPTokenizer
6
+ import torch
7
+
8
+
9
+ DEVICE = "cpu"
10
+
11
+ ALLOW_CUDA = True
12
+ ALLOW_MPS = False
13
+
14
+ if torch.cuda.is_available() and ALLOW_CUDA:
15
+ DEVICE = "cuda"
+ elif torch.backends.mps.is_available() and ALLOW_MPS:
+ DEVICE = "mps"
16
+
17
+ print(f"Using device: {DEVICE}")
18
+
19
+ tokenizer = CLIPTokenizer("../data/tokenizer_vocab.json", merges_file="../data/tokenizer_merges.txt")
20
+ model_file = "../data/v1-5-pruned-emaonly.ckpt"
21
+ models = model_loader.preload_models_from_standard_weights(model_file, device=DEVICE)
22
+
23
+ ## TEXT TO IMAGE
24
+
25
+ # prompt = "A dog with sunglasses, wearing comfy hat, looking at camera, highly detailed, ultra sharp, cinematic, 100mm lens, 8k resolution."
26
+ prompt = "A cat stretching on the floor, highly detailed, ultra sharp, cinematic, 100mm lens, 8k resolution."
27
+ uncond_prompt = ""  # Also known as negative prompt
28
+ do_cfg = True
29
+ cfg_scale = 8 # min: 1, max: 14
30
+
31
+ ## IMAGE TO IMAGE
32
+
33
+ input_image = None
34
+ # Uncomment the Image.open line below to enable image-to-image
35
+ image_path = "../images/dog.jpg"
36
+ # input_image = Image.open(image_path)
37
+ # Higher values mean more noise is added to the input image, so the result will be further from the input image.
38
+ # Lower values mean less noise is added to the input image, so the output will be closer to the input image.
39
+ strength = 0.9
40
+
41
+ ## SAMPLER
42
+
43
+ sampler = "ddpm"
44
+ num_inference_steps = 2
45
+ seed = 42
46
+
47
+ output_image = pipeline.generate(
48
+ prompt=prompt,
49
+ uncond_prompt=uncond_prompt,
50
+ input_image=input_image,
51
+ strength=strength,
52
+ do_cfg=do_cfg,
53
+ cfg_scale=cfg_scale,
54
+ sampler_name=sampler,
55
+ n_inference_steps=num_inference_steps,
56
+ seed=seed,
57
+ models=models,
58
+ device=DEVICE,
59
+ idle_device="cpu",
60
+ tokenizer=tokenizer,
61
+ )
62
+
63
+ # Convert the output array to a PIL image.
64
+ Image.fromarray(output_image)
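As written, run.py builds the PIL image but never saves or displays it; a minimal follow-up (the output filename is an assumption):

from PIL import Image

Image.fromarray(output_image).save("output.png")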
SD/sd_demo.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
SD_Inkpunk_V1.ckpt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:629ddef95988fd88760808067c8b92625061937e153ab8eff99c933c1516f5d8
3
+ size 2132856622
SD_Inkpunk_V2.ckpt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2182245415908822cbac065128a4c5144cc547d0701feb21241cb4e70bb5cf56
3
+ size 2132856622
Sample Images/dog.jpg ADDED
feature_extractor/preprocessor_config.json ADDED
@@ -0,0 +1,20 @@
1
+ {
2
+ "crop_size": 224,
3
+ "do_center_crop": true,
4
+ "do_convert_rgb": true,
5
+ "do_normalize": true,
6
+ "do_resize": true,
7
+ "feature_extractor_type": "CLIPFeatureExtractor",
8
+ "image_mean": [
9
+ 0.48145466,
10
+ 0.4578275,
11
+ 0.40821073
12
+ ],
13
+ "image_std": [
14
+ 0.26862954,
15
+ 0.26130258,
16
+ 0.27577711
17
+ ],
18
+ "resample": 3,
19
+ "size": 224
20
+ }
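The values above drive the safety checker's CLIP preprocessing. A rough sketch of the equivalent transform is below; the real CLIPFeatureExtractor in transformers resizes the shorter side before center-cropping and handles more edge cases, so this is illustrative only:

import numpy as np
from PIL import Image

IMAGE_MEAN = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
IMAGE_STD = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)

def preprocess(img: Image.Image) -> np.ndarray:
    img = img.convert("RGB").resize((224, 224), Image.BICUBIC)  # resample=3 corresponds to bicubic
    x = np.asarray(img, dtype=np.float32) / 255.0               # scale to [0, 1]
    x = (x - IMAGE_MEAN) / IMAGE_STD                            # normalize per channel
    return x.transpose(2, 0, 1)                                 # (Channel, Height, Width)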
license.txt ADDED
@@ -0,0 +1,21 @@
1
+ MIT License
2
+
3
+ Copyright (c) [year] [fullname]
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
model_index.json ADDED
@@ -0,0 +1,32 @@
1
+ {
2
+ "_class_name": "StableDiffusionPipeline",
3
+ "_diffusers_version": "0.6.0",
4
+ "feature_extractor": [
5
+ "transformers",
6
+ "CLIPImageProcessor"
7
+ ],
8
+ "safety_checker": [
9
+ "stable_diffusion",
10
+ "StableDiffusionSafetyChecker"
11
+ ],
12
+ "scheduler": [
13
+ "diffusers",
14
+ "PNDMScheduler"
15
+ ],
16
+ "text_encoder": [
17
+ "transformers",
18
+ "CLIPTextModel"
19
+ ],
20
+ "tokenizer": [
21
+ "transformers",
22
+ "CLIPTokenizer"
23
+ ],
24
+ "unet": [
25
+ "diffusers",
26
+ "UNet2DConditionModel"
27
+ ],
28
+ "vae": [
29
+ "diffusers",
30
+ "AutoencoderKL"
31
+ ]
32
+ }
requirements.txt ADDED
@@ -0,0 +1,8 @@
1
+ ## Python version: 3.11.3
2
+
3
+ torch==2.0.1
4
+ numpy==1.25.0
5
+ tqdm==4.65.0
6
+ transformers==4.33.2
7
+ lightning==2.0.9
8
+ pillow==9.5.0
safety_checker/config.json ADDED
@@ -0,0 +1,175 @@
1
+ {
2
+ "_commit_hash": "4bb648a606ef040e7685bde262611766a5fdd67b",
3
+ "_name_or_path": "CompVis/stable-diffusion-safety-checker",
4
+ "architectures": [
5
+ "StableDiffusionSafetyChecker"
6
+ ],
7
+ "initializer_factor": 1.0,
8
+ "logit_scale_init_value": 2.6592,
9
+ "model_type": "clip",
10
+ "projection_dim": 768,
11
+ "text_config": {
12
+ "_name_or_path": "",
13
+ "add_cross_attention": false,
14
+ "architectures": null,
15
+ "attention_dropout": 0.0,
16
+ "bad_words_ids": null,
17
+ "bos_token_id": 0,
18
+ "chunk_size_feed_forward": 0,
19
+ "cross_attention_hidden_size": null,
20
+ "decoder_start_token_id": null,
21
+ "diversity_penalty": 0.0,
22
+ "do_sample": false,
23
+ "dropout": 0.0,
24
+ "early_stopping": false,
25
+ "encoder_no_repeat_ngram_size": 0,
26
+ "eos_token_id": 2,
27
+ "exponential_decay_length_penalty": null,
28
+ "finetuning_task": null,
29
+ "forced_bos_token_id": null,
30
+ "forced_eos_token_id": null,
31
+ "hidden_act": "quick_gelu",
32
+ "hidden_size": 768,
33
+ "id2label": {
34
+ "0": "LABEL_0",
35
+ "1": "LABEL_1"
36
+ },
37
+ "initializer_factor": 1.0,
38
+ "initializer_range": 0.02,
39
+ "intermediate_size": 3072,
40
+ "is_decoder": false,
41
+ "is_encoder_decoder": false,
42
+ "label2id": {
43
+ "LABEL_0": 0,
44
+ "LABEL_1": 1
45
+ },
46
+ "layer_norm_eps": 1e-05,
47
+ "length_penalty": 1.0,
48
+ "max_length": 20,
49
+ "max_position_embeddings": 77,
50
+ "min_length": 0,
51
+ "model_type": "clip_text_model",
52
+ "no_repeat_ngram_size": 0,
53
+ "num_attention_heads": 12,
54
+ "num_beam_groups": 1,
55
+ "num_beams": 1,
56
+ "num_hidden_layers": 12,
57
+ "num_return_sequences": 1,
58
+ "output_attentions": false,
59
+ "output_hidden_states": false,
60
+ "output_scores": false,
61
+ "pad_token_id": 1,
62
+ "prefix": null,
63
+ "problem_type": null,
64
+ "pruned_heads": {},
65
+ "remove_invalid_values": false,
66
+ "repetition_penalty": 1.0,
67
+ "return_dict": true,
68
+ "return_dict_in_generate": false,
69
+ "sep_token_id": null,
70
+ "task_specific_params": null,
71
+ "temperature": 1.0,
72
+ "tf_legacy_loss": false,
73
+ "tie_encoder_decoder": false,
74
+ "tie_word_embeddings": true,
75
+ "tokenizer_class": null,
76
+ "top_k": 50,
77
+ "top_p": 1.0,
78
+ "torch_dtype": null,
79
+ "torchscript": false,
80
+ "transformers_version": "4.22.0.dev0",
81
+ "typical_p": 1.0,
82
+ "use_bfloat16": false,
83
+ "vocab_size": 49408
84
+ },
85
+ "text_config_dict": {
86
+ "hidden_size": 768,
87
+ "intermediate_size": 3072,
88
+ "num_attention_heads": 12,
89
+ "num_hidden_layers": 12
90
+ },
91
+ "torch_dtype": "float32",
92
+ "transformers_version": null,
93
+ "vision_config": {
94
+ "_name_or_path": "",
95
+ "add_cross_attention": false,
96
+ "architectures": null,
97
+ "attention_dropout": 0.0,
98
+ "bad_words_ids": null,
99
+ "bos_token_id": null,
100
+ "chunk_size_feed_forward": 0,
101
+ "cross_attention_hidden_size": null,
102
+ "decoder_start_token_id": null,
103
+ "diversity_penalty": 0.0,
104
+ "do_sample": false,
105
+ "dropout": 0.0,
106
+ "early_stopping": false,
107
+ "encoder_no_repeat_ngram_size": 0,
108
+ "eos_token_id": null,
109
+ "exponential_decay_length_penalty": null,
110
+ "finetuning_task": null,
111
+ "forced_bos_token_id": null,
112
+ "forced_eos_token_id": null,
113
+ "hidden_act": "quick_gelu",
114
+ "hidden_size": 1024,
115
+ "id2label": {
116
+ "0": "LABEL_0",
117
+ "1": "LABEL_1"
118
+ },
119
+ "image_size": 224,
120
+ "initializer_factor": 1.0,
121
+ "initializer_range": 0.02,
122
+ "intermediate_size": 4096,
123
+ "is_decoder": false,
124
+ "is_encoder_decoder": false,
125
+ "label2id": {
126
+ "LABEL_0": 0,
127
+ "LABEL_1": 1
128
+ },
129
+ "layer_norm_eps": 1e-05,
130
+ "length_penalty": 1.0,
131
+ "max_length": 20,
132
+ "min_length": 0,
133
+ "model_type": "clip_vision_model",
134
+ "no_repeat_ngram_size": 0,
135
+ "num_attention_heads": 16,
136
+ "num_beam_groups": 1,
137
+ "num_beams": 1,
138
+ "num_channels": 3,
139
+ "num_hidden_layers": 24,
140
+ "num_return_sequences": 1,
141
+ "output_attentions": false,
142
+ "output_hidden_states": false,
143
+ "output_scores": false,
144
+ "pad_token_id": null,
145
+ "patch_size": 14,
146
+ "prefix": null,
147
+ "problem_type": null,
148
+ "pruned_heads": {},
149
+ "remove_invalid_values": false,
150
+ "repetition_penalty": 1.0,
151
+ "return_dict": true,
152
+ "return_dict_in_generate": false,
153
+ "sep_token_id": null,
154
+ "task_specific_params": null,
155
+ "temperature": 1.0,
156
+ "tf_legacy_loss": false,
157
+ "tie_encoder_decoder": false,
158
+ "tie_word_embeddings": true,
159
+ "tokenizer_class": null,
160
+ "top_k": 50,
161
+ "top_p": 1.0,
162
+ "torch_dtype": null,
163
+ "torchscript": false,
164
+ "transformers_version": "4.22.0.dev0",
165
+ "typical_p": 1.0,
166
+ "use_bfloat16": false
167
+ },
168
+ "vision_config_dict": {
169
+ "hidden_size": 1024,
170
+ "intermediate_size": 4096,
171
+ "num_attention_heads": 16,
172
+ "num_hidden_layers": 24,
173
+ "patch_size": 14
174
+ }
175
+ }
safety_checker/model.fp16.safetensors ADDED
File without changes
safety_checker/model.safetensors ADDED
File without changes
safety_checker/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:193490b58ef62739077262e833bf091c66c29488058681ac25cf7df3d8190974
3
+ size 1216061799
safety_checker/pytorch_model.fp16.bin ADDED
File without changes
scheduler/scheduler_config.json ADDED
@@ -0,0 +1,13 @@
1
+ {
2
+ "_class_name": "PNDMScheduler",
3
+ "_diffusers_version": "0.6.0",
4
+ "beta_end": 0.012,
5
+ "beta_schedule": "scaled_linear",
6
+ "beta_start": 0.00085,
7
+ "num_train_timesteps": 1000,
8
+ "set_alpha_to_one": false,
9
+ "skip_prk_steps": true,
10
+ "steps_offset": 1,
11
+ "trained_betas": null,
12
+ "clip_sample": false
13
+ }
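The beta_start/beta_end/beta_schedule entries above define the noise schedule; "scaled_linear" means the betas are linear in square-root space, which is the construction diffusers uses for this config (the hand-written DDPMSampler in SD/ddpm.py uses the same values). A short sketch:

import torch

beta_start, beta_end, num_train_timesteps = 0.00085, 0.012, 1000
betas = torch.linspace(beta_start ** 0.5, beta_end ** 0.5, num_train_timesteps, dtype=torch.float32) ** 2
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)   # used to add noise at an arbitrary timestep
print(betas[0].item(), betas[-1].item())             # ~0.00085, ~0.012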
text_encoder/config.json ADDED
@@ -0,0 +1,25 @@
1
+ {
2
+ "_name_or_path": "openai/clip-vit-large-patch14",
3
+ "architectures": [
4
+ "CLIPTextModel"
5
+ ],
6
+ "attention_dropout": 0.0,
7
+ "bos_token_id": 0,
8
+ "dropout": 0.0,
9
+ "eos_token_id": 2,
10
+ "hidden_act": "quick_gelu",
11
+ "hidden_size": 768,
12
+ "initializer_factor": 1.0,
13
+ "initializer_range": 0.02,
14
+ "intermediate_size": 3072,
15
+ "layer_norm_eps": 1e-05,
16
+ "max_position_embeddings": 77,
17
+ "model_type": "clip_text_model",
18
+ "num_attention_heads": 12,
19
+ "num_hidden_layers": 12,
20
+ "pad_token_id": 1,
21
+ "projection_dim": 768,
22
+ "torch_dtype": "float32",
23
+ "transformers_version": "4.22.0.dev0",
24
+ "vocab_size": 49408
25
+ }
text_encoder/model.fp16.safetensors ADDED
File without changes
text_encoder/model.safetensors ADDED
File without changes
text_encoder/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1bf5fb25270cfa3642bea72f6874f06a38d2475fbae1d944bbbead81b3187e1b
3
+ size 336957440
text_encoder/pytorch_model.fp16.bin ADDED
File without changes
tokenizer/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<|startoftext|>",
4
+ "lstrip": false,
5
+ "normalized": true,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "<|endoftext|>",
11
+ "lstrip": false,
12
+ "normalized": true,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "pad_token": "<|endoftext|>",
17
+ "unk_token": {
18
+ "content": "<|endoftext|>",
19
+ "lstrip": false,
20
+ "normalized": true,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ }
24
+ }
tokenizer/tokenizer_config.json ADDED
@@ -0,0 +1,34 @@
1
+ {
2
+ "add_prefix_space": false,
3
+ "bos_token": {
4
+ "__type": "AddedToken",
5
+ "content": "<|startoftext|>",
6
+ "lstrip": false,
7
+ "normalized": true,
8
+ "rstrip": false,
9
+ "single_word": false
10
+ },
11
+ "do_lower_case": true,
12
+ "eos_token": {
13
+ "__type": "AddedToken",
14
+ "content": "<|endoftext|>",
15
+ "lstrip": false,
16
+ "normalized": true,
17
+ "rstrip": false,
18
+ "single_word": false
19
+ },
20
+ "errors": "replace",
21
+ "model_max_length": 77,
22
+ "name_or_path": "openai/clip-vit-large-patch14",
23
+ "pad_token": "<|endoftext|>",
24
+ "special_tokens_map_file": "./special_tokens_map.json",
25
+ "tokenizer_class": "CLIPTokenizer",
26
+ "unk_token": {
27
+ "__type": "AddedToken",
28
+ "content": "<|endoftext|>",
29
+ "lstrip": false,
30
+ "normalized": true,
31
+ "rstrip": false,
32
+ "single_word": false
33
+ }
34
+ }
tokenizer/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
unet/config.json ADDED
@@ -0,0 +1,36 @@
1
+ {
2
+ "_class_name": "UNet2DConditionModel",
3
+ "_diffusers_version": "0.6.0",
4
+ "act_fn": "silu",
5
+ "attention_head_dim": 8,
6
+ "block_out_channels": [
7
+ 320,
8
+ 640,
9
+ 1280,
10
+ 1280
11
+ ],
12
+ "center_input_sample": false,
13
+ "cross_attention_dim": 768,
14
+ "down_block_types": [
15
+ "CrossAttnDownBlock2D",
16
+ "CrossAttnDownBlock2D",
17
+ "CrossAttnDownBlock2D",
18
+ "DownBlock2D"
19
+ ],
20
+ "downsample_padding": 1,
21
+ "flip_sin_to_cos": true,
22
+ "freq_shift": 0,
23
+ "in_channels": 4,
24
+ "layers_per_block": 2,
25
+ "mid_block_scale_factor": 1,
26
+ "norm_eps": 1e-05,
27
+ "norm_num_groups": 32,
28
+ "out_channels": 4,
29
+ "sample_size": 64,
30
+ "up_block_types": [
31
+ "UpBlock2D",
32
+ "CrossAttnUpBlock2D",
33
+ "CrossAttnUpBlock2D",
34
+ "CrossAttnUpBlock2D"
35
+ ]
36
+ }
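This diffusers config describes the same network that SD/diffusion.py builds by hand: block_out_channels [320, 640, 1280, 1280] are the channel widths at each resolution, and for Stable Diffusion v1 checkpoints the cross-attention blocks use 8 heads, so the per-head size at each attention level is channels / 8. A quick cross-check against the UNET_AttentionBlock(n_head, d_embed) calls above (a sketch; the deepest level has no attention blocks, matching "DownBlock2D"/"UpBlock2D"):

block_out_channels = [320, 640, 1280, 1280]
n_head = 8
for channels in block_out_channels:
    print(f"{channels} channels -> UNET_AttentionBlock({n_head}, {channels // n_head})")
# 320 -> (8, 40), 640 -> (8, 80), 1280 -> (8, 160)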
unet/diffusion_pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c7da0e21ba7ea50637bee26e81c220844defdf01aafca02b2c42ecdadb813de4
3
+ size 3438354725
unet/diffusion_pytorch_model.fp16.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:30eb3dc47c90e4a55476332b284b2331774c530edbbb83b70cacdd9e7b91af92
3
+ size 1719327893
unet/diffusion_pytorch_model.fp16.safetensors ADDED
File without changes
unet/diffusion_pytorch_model.non_ema.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:42bc8b8f3af32866db3c7bb5bcf591ab04438296c2712246d7a640bde5a5ddc1
3
+ size 3438366373
unet/diffusion_pytorch_model.non_ema.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cd1b6db09a81cb1d39fbd245a89c1e3db9da9fe8eba5e8f9098ea6c4994221d3
3
+ size 3438167536
unet/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:19da7aaa4b880e59d56843f1fcb4dd9b599c28a1d9d9af7c1143057c8ffae9f1
3
+ size 3438167540