Text-to-Image
Diffusers
Safetensors
English
StableDiffusionPipeline
common-canvas
stable-diffusion
Skylion007 committed (verified)
Commit bfa3806 · 1 Parent(s): abebcb4

Improve formatting

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -13,10 +13,10 @@ language:
 ## Summary
 CommonCanvas is a family of latent diffusion models capable of generating images from a given text prompt. The architecture is based on Stable Diffusion 2. Different CommonCanvas models are trained exclusively on subsets of the CommonCatalog dataset (see the Data Card), a large dataset of Creative Commons-licensed images with synthetic captions produced using a pre-trained BLIP-2 captioning model.
 
-Input: CommonCatalog Text Captions
-Output: CommonCatalog Images
-Architecture: Stable Diffusion 2
-Version Number: 0.1
+**Input:** CommonCatalog Text Captions
+**Output:** CommonCatalog Images
+**Architecture:** Stable Diffusion 2
+**Version Number:** 0.1
 
 The goal of this project is to produce a model that is competitive with Stable Diffusion 2, but to do so using an easily accessible dataset of known provenance. Doing so makes replicating the model significantly easier and provides proper attribution to all the Creative Commons works used to train the model. The exact training recipe can be found in the paper: https://arxiv.org/abs/2310.16825
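
Since the card lists Diffusers and StableDiffusionPipeline among its tags and the Summary describes a Stable Diffusion 2-based text-to-image model, a minimal usage sketch may be helpful. The repository id and prompt below are illustrative placeholders, not values taken from this commit; substitute the model's actual Hub id.

```python
# Minimal sketch: loading a CommonCanvas checkpoint with the standard
# StableDiffusionPipeline, as suggested by the model card tags.
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical repo id for illustration only; replace with this model's actual Hub id.
repo_id = "common-canvas/CommonCanvas-S-C"

# Load the pipeline in half precision and move it to GPU if one is available.
pipe = StableDiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Generate an image from a text prompt (text-to-image, as described in the Summary).
image = pipe("a photo of a mountain lake at sunrise").images[0]
image.save("commoncanvas_sample.png")
```

Because the architecture follows Stable Diffusion 2, the stock pipeline should load the weights without custom code, assuming the checkpoint is published in the usual Diffusers layout.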