Text-to-Image
stable-diffusion
valhalla committed on
Commit d378f9d
1 Parent(s): fab7d4a

Update README.md

Files changed (1)
  1. README.md +11 -8
README.md CHANGED
@@ -2,19 +2,22 @@
license: other
---

+
+ ___
## Access to the model
- The Stable Diffusion weights are currently only available to universities, academics, research institutions and independent researchers. Please request access applying to [this](#) form
+ **The Stable Diffusion weights are currently only available to universities, academics, research institutions and independent researchers. Please request access by applying via [this](#) form.**
+ ___

# Stable Diffusion v1 Model Card
- This model card focuses on the model associated with the Stable Diffusion model, codebase available [here](https://github.com/CompVis/latent-diffusion).
+ This model card focuses on the model associated with the Stable Diffusion model, available [here](https://github.com/CompVis/stable-diffusion).

## Model Details
- - **Developed by:** Robin Rombach, Patrick Esser,
+ - **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- - **License:** ~~Creative Commons 4.0~~
+ - **License:** [Proprietary](LICENSE)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/latent-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
+ - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**

      @InProceedings{Rombach_2022_CVPR,
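
The model description above conditions the image generator on a fixed, pretrained CLIP ViT-L/14 text encoder. As a rough, illustrative sketch of that conditioning step (using the Hugging Face `transformers` CLIP classes and the public `openai/clip-vit-large-patch14` checkpoint as stand-ins; nothing here is taken from the CompVis codebase), a prompt is mapped to the per-token embeddings that the diffusion UNet cross-attends to:

```python
# Illustrative sketch: encode a prompt into the CLIP ViT-L/14 token embeddings
# that condition the latent diffusion UNet. Assumes the `transformers` package
# and the public openai/clip-vit-large-patch14 checkpoint as a stand-in.
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
text_encoder.requires_grad_(False)  # the text encoder stays frozen

tokens = tokenizer(
    ["a photograph of an astronaut riding a horse"],
    padding="max_length", max_length=77, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    cond = text_encoder(tokens.input_ids).last_hidden_state
print(cond.shape)  # torch.Size([1, 77, 768]) -- what the UNet cross-attends to
```
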
@@ -105,7 +108,7 @@ which were trained as follows,
194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`.
515k steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
- filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an improved aesthetics estimator, ~~that was trained on top of CLIP embeddings using the [Simulacra Aesthetic Captions](https://github.com/JD-P/simulacra-aesthetic-captions) dataset.~~).
+ filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-improved-aesthetics" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).


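
The `sd-v1-3.ckpt` entry above notes that the text conditioning is dropped 10% of the time during training, so the same network also learns an unconditional prediction; that is what later enables classifier-free guidance. A minimal sketch of such training-time dropout (function and tensor names are hypothetical, not taken from the training code):

```python
import torch

def drop_text_conditioning(cond_emb, empty_emb, p_drop=0.1):
    """Randomly swap each sample's caption embedding for the empty-prompt
    embedding, so one model learns both conditional and unconditional
    denoising (the prerequisite for classifier-free guidance).

    cond_emb:  (batch, 77, dim) CLIP embeddings of the real captions
    empty_emb: (1, 77, dim)     CLIP embedding of the empty prompt ""
    """
    drop = torch.rand(cond_emb.shape[0], device=cond_emb.device) < p_drop
    return torch.where(drop[:, None, None], empty_emb, cond_emb)

# Toy usage with random stand-ins for the CLIP features
cond = torch.randn(4, 77, 768)
empty = torch.randn(1, 77, 768)
mixed = drop_text_conditioning(cond, empty)  # ~10% of rows become unconditional
```
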
@@ -120,7 +123,7 @@ Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:

- ![pareto](assets/v1-variants-scores.jpg)
+ ![pareto](v1-variants-scores.jpg)

Evaluated at 512x512 resolution using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set. Not optimized for FID scores.
## Environmental Impact
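
The guidance scales swept above (1.5 through 8.0) enter sampling through the usual classifier-free guidance combination of conditional and unconditional noise estimates. A schematic version of that step, not the repository's actual sampler code, assuming a hypothetical `model(x, t, context)` that returns the predicted noise:

```python
def guided_noise_estimate(model, x_t, t, cond, uncond, scale=7.5):
    """Classifier-free guidance for one denoising step.

    `scale` is the guidance scale swept in the evaluation (1.5 ... 8.0);
    scale = 1.0 recovers the purely text-conditional estimate.
    """
    eps_uncond = model(x_t, t, uncond)  # prediction with the empty prompt
    eps_cond = model(x_t, t, cond)      # prediction with the real prompt
    # Push the estimate away from the unconditional prediction and toward
    # the text-conditioned one.
    return eps_uncond + scale * (eps_cond - eps_uncond)
```
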
@@ -148,4 +151,4 @@ Based on that information, we estimate the following CO2 emissions using the [Ma
}
```

- *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
+ *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
 