jrrjrr committed on
Commit 202b9a8 · 1 Parent(s): ad48813

Update README.md

Files changed (1): README.md +9 -9
README.md CHANGED
@@ -5,11 +5,11 @@ tags:
 - stable-diffusion
 - text-to-image
 ---
-# These are Stable Diffusion v1.5 type models and compatible ControlNet v1.1 models that have been converted to Apple's CoreML format
+# ControlNet v1.1 Models And Compatible Stable Diffusion v1.5 Type Models Converted To Apple CoreML Format
 
 ## For use with a Swift app or the SwiftCLI
 
-The SD models are all "original" (not split-einsum) and built for CPU and GPU. They are each for the output size noted. They are fp16, with the standard SD-1.5 VAE embedded.
+The SD models are all "Original" (not "Split-Einsum") and built for CPU and GPU. They are each for the output size noted. They are fp16, with the standard SD-1.5 VAE embedded.
 
 The Stable Diffusion v1.5 model and the other SD 1.5 type models contain both the standard Unet and the ControlledUnet used for a ControlNet pipeline. The correct one will be used automatically based on whether a ControlNet is enabled or not.
 
@@ -31,14 +31,14 @@ The sizes are always meant to be WIDTH x HEIGHT. A 512x768 is "portrait" orient
 
 **If you encounter any models that do not work fully with image2image and ControlNet, using the current apple/ml-stable-diffusion SwiftCLI pipeline or Mochi Diffusion 3.2 or the Mochi Diffusion CN test build, please leave a report in the Community area here. If you would like to add models that you have converted, leave a message as well, and I'll try to figure out how to grant you access to this repo.**
 
-## Base Models - A Variety Of SD-1.5-Type Models For Use With CN
+## Base Models - A Variety Of SD-1.5-Type Models For Use With ControlNet
 Each folder contains 4 zipped model files, output size as indicated: 512x512, 512x768, 768x512 or 768x768
-- DreamShaper v5.0, 1.5-type model, original, for ControlNet & Standard
-- GhostMix v1.1, 1.5-type anime model, original, for ControlNet & Standard
-- MeinaMix v9.0 1.5-type anime model, original, for ControlNet & Standard
-- MyMerge v1.0 1.5-type NSFW model, original, for ControlNet & Standard
-- Realistic Vision v2.0, 1.5-type model, original, for ControlNet & Standard
-- Stable Diffusion v1.5, original, for ControlNet & Standard
+- DreamShaper v5.0, 1.5-type model, "Original"
+- GhostMix v1.1, 1.5-type anime model, "Original"
+- MeinaMix v9.0 1.5-type anime model, "Original"
+- MyMerge v1.0 1.5-type NSFW model, "Original"
+- Realistic Vision v2.0, 1.5-type model, "Original"
+- Stable Diffusion v1.5, "Original"
 
 ## ControlNet Models - All Current SD-1.5-Type ControlNet Models
 Each zip file contains a set of 4 resolutions: 512x512, 512x768, 768x512, 768x768
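
The note in the diff above about each model bundling both a standard Unet and a ControlledUnet maps onto how the apple/ml-stable-diffusion Swift package is typically driven. Below is a minimal sketch of that flow, assuming the package's ControlNet-capable pipeline API; the resource path, the "Canny" ControlNet name, and the conditioning-image path are illustrative placeholders, not files shipped in this repo, and parameter labels may vary between package versions.

```swift
// Minimal, illustrative sketch (not from this repo) of driving one of these
// "Original" CPU+GPU models through the apple/ml-stable-diffusion Swift package.
// All paths and the ControlNet model name are placeholders.
import CoreML
import ImageIO
import StableDiffusion

do {
    // Placeholder: an unzipped model folder from this repo (contains Unet + ControlledUnet).
    let resources = URL(fileURLWithPath: "/path/to/ModelFolder_512x768")

    let mlConfig = MLModelConfiguration()
    mlConfig.computeUnits = .cpuAndGPU   // "Original" conversions target CPU and GPU

    // Naming a ControlNet here routes generation through the ControlledUnet;
    // passing an empty array uses the standard Unet instead.
    let pipeline = try StableDiffusionPipeline(
        resourcesAt: resources,
        controlNet: ["Canny"],           // placeholder ControlNet model name
        configuration: mlConfig
    )
    try pipeline.loadResources()

    // Placeholder conditioning image for the ControlNet input.
    let inputURL = URL(fileURLWithPath: "/path/to/canny-edges.png")
    let source = CGImageSourceCreateWithURL(inputURL as CFURL, nil)!
    let conditioning = CGImageSourceCreateImageAtIndex(source, 0, nil)!

    var config = StableDiffusionPipeline.Configuration(prompt: "a castle on a hill, highly detailed")
    config.controlNetInputs = [conditioning]
    config.seed = 93

    let images = try pipeline.generateImages(configuration: config)
    print("Generated \(images.compactMap { $0 }.count) image(s)")
} catch {
    print("Generation failed: \(error)")
}
```

With `controlNet:` left empty, the same pipeline should fall back to the standard Unet, which is the automatic selection described in the README.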