Update README.md
## For use with a Swift app like [**MOCHI DIFFUSION**](https://github.com/godly-devotion/MochiDiffusion) or the SwiftCLI

The SD models in this repo are all "Original" and built for CPU and GPU. Each is built for the output size noted. They are FP-16, with the standard SD-1.5 VAE embedded. "Split-Einsum" versions that support ControlNet are currently being added. Models that support both "Original" and "Split-Einsum" will be relocated to individual model repos listed on the [**CORE ML MODELS**](https://huggingface.co/coreml) main page. The relocated models will also be linked below on this page.

The Stable Diffusion v1.5 model and the other SD-1.5-type models contain both the standard Unet and the ControlledUnet used for a ControlNet pipeline. The correct one is used automatically, based on whether a ControlNet is enabled.
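
To make that selection concrete, here is a minimal Swift sketch of creating the pipeline from one of these unzipped model folders, first without and then with a ControlNet. It assumes the apple/ml-stable-diffusion 1.0.0 Swift API; the argument labels, the ControlNet model name, and the folder path are illustrative assumptions, not anything shipped in this repo.

```swift
import CoreML
import Foundation
import StableDiffusion  // Swift package from apple/ml-stable-diffusion (requires macOS 13.1+ / iOS 16.2+)

// Hypothetical path to one of this repo's unzipped "Original" model folders.
let resources = URL(fileURLWithPath: "/path/to/unzipped/model-folder")

let mlConfig = MLModelConfiguration()
mlConfig.computeUnits = .cpuAndGPU  // the "Original" builds here target CPU and GPU

do {
    // No ControlNet names passed: the standard Unet in the folder is used.
    let plainPipeline = try StableDiffusionPipeline(
        resourcesAt: resources,
        controlNet: [],
        configuration: mlConfig,
        reduceMemory: false
    )

    // One or more ControlNet names passed: the ControlledUnet is used instead.
    // The named ControlNet models must also be available to the pipeline
    // (see the "CN" folder notes below).
    let controlNetPipeline = try StableDiffusionPipeline(
        resourcesAt: resources,
        controlNet: ["SomeControlNetModel"],  // hypothetical ControlNet model name
        configuration: mlConfig,
        reduceMemory: false
    )
    _ = (plainPipeline, controlNetPipeline)
} catch {
    print("Pipeline setup failed: \(error)")
}
```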
All of the models in this repo work with Swift and the apple/ml-stable-diffusion pipeline (release 0.4.0 or 1.0.0). They were not built for, and will not work with, a Python Diffusers pipeline. They need [**ml-stable-diffusion**](https://github.com/apple/ml-stable-diffusion) for command-line use, or a Swift app that supports ControlNet, such as [**MOCHI DIFFUSION**](https://github.com/godly-devotion/MochiDiffusion) 4.0 (June 2023).
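
As a rough orientation for SwiftCLI-style or custom Swift use, the sketch below shows what a single ControlNet-conditioned generation looks like with the Swift package once a model folder from this repo has been unzipped. The property names (`controlNetInputs`, `stepCount`, and so on) reflect my reading of the 1.0.0 release, and the prompt, seed, and model name are placeholders.

```swift
import CoreGraphics
import CoreML
import Foundation
import StableDiffusion

/// Generates one image, conditioned by a ControlNet input (for example an edge map)
/// that has already been scaled to the model's output size. Names are illustrative.
func generate(resources: URL, conditioning: CGImage) throws -> CGImage? {
    let mlConfig = MLModelConfiguration()
    mlConfig.computeUnits = .cpuAndGPU

    let pipeline = try StableDiffusionPipeline(
        resourcesAt: resources,
        controlNet: ["SomeControlNetModel"],  // hypothetical ControlNet model name
        configuration: mlConfig,
        reduceMemory: false
    )
    try pipeline.loadResources()

    var config = StableDiffusionPipeline.Configuration(prompt: "a watercolor landscape")
    config.negativePrompt = "blurry, low quality"
    config.stepCount = 25
    config.seed = 42
    config.guidanceScale = 7.5
    config.controlNetInputs = [conditioning]  // conditioning image(s) consumed by the ControlNet

    let images = try pipeline.generateImages(configuration: config) { _ in true }
    return images.first ?? nil
}
```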
The full SD-type (base) models remaining in this repo are in its "SD" folder, in subfolders by model name, each zipped individually for a particular output resolution. They need to be unzipped after downloading. The models still stored here are built only for "Original".

The ControlNet model files are in the "CN" folder of this repo. They are zipped and need to be unzipped after downloading. The larger zips hold "Original" types at 512x512, 512x768, 768x512 and 768x768. The smaller zips marked "SE" each hold a single "Split-Einsum" model.

**If you encounter a model that does not work correctly with image2image and/or a ControlNet (using the current apple/ml-stable-diffusion SwiftCLI pipeline for i2i or CN, Mochi Diffusion 3.2 with i2i, or Mochi Diffusion 4.0 with i2i or CN), please leave a report in the Community Discussion area. If you would like to add models that you have converted, leave a message there as well, and we will grant you access to this repo.**
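
If you want to reproduce an image2image problem in code before filing a report, the same configuration object carries the i2i inputs. The minimal sketch below assumes the 1.0.0 property names (`startingImage`, `strength`) and a pipeline created and loaded as shown earlier.

```swift
import CoreGraphics
import StableDiffusion

/// Minimal image2image call against a pipeline that was already created and loaded.
/// `source` should be a CGImage scaled to the model's output size.
func imageToImage(pipeline: StableDiffusionPipeline, source: CGImage) throws -> CGImage? {
    var config = StableDiffusionPipeline.Configuration(prompt: "same scene, oil painting style")
    config.startingImage = source
    config.strength = 0.6  // lower values keep more of the source image
    config.stepCount = 25
    config.seed = 7

    let images = try pipeline.generateImages(configuration: config) { _ in true }
    return images.first ?? nil
}
```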
## Base Models - A Variety Of SD-1.5-Type Models For Use With ControlNet

Each folder remaining at this repo contains 4 zipped "Original" model files, one for each output size: 512x512, 512x768, 768x512 and 768x768.

- DreamShaper v5.0, 1.5-type model, "Original" & "Split-Einsum" -- Relocated to: https://huggingface.co/coreml/coreml-DreamShaper-v5.0_cn
- GhostMix v1.1, 1.5-type anime model, "Original" & "Split-Einsum" -- Relocated to: https://huggingface.co/coreml/coreml-ghostmix-v20-bakedVAE_cn
- Realistic Vision v2.0, 1.5-type model, "Original" & "Split-Einsum" -- Relocated to: https://huggingface.co/coreml/coreml-realisticVision-v20_cn
- MeinaMix v9.0, 1.5-type anime model, "Original"
- MyMerge v1.0, 1.5-type NSFW model, "Original"
- Stable Diffusion v1.5, "Original"
## ControlNet Models - All Current SD-1.5-Type ControlNet Models