Update README.md

README.md

---

# ControlNet v1.1 Models And Compatible Stable Diffusion v1.5 Type Models Converted To Apple CoreML Format

## For use with a Swift app like [MOCHI DIFFUSION](https://github.com/godly-devotion/MochiDiffusion) or the SwiftCLI

The SD models in this repo are all "Original" and built for CPU and GPU. They are each for the output size noted. They are fp16, with the standard SD-1.5 VAE embedded.

They have VAEEncoder.mlmodelc bundles that allow Image2Image to operate correctly.

All of the ControlNet models in this repo are "Original" ones, built for CPU and GPU compute units (cpuAndGPU) and for SD-1.5 type models. They will not work with SD-2.1 type models. The zip files each have a set of models at 4 resolutions. "Split-Einsum" versions for use with the Neural Engine (CPU and NE) are available at a different repo. A link to that repo is at the bottom of this page.

All of the models in this repo work with Swift and the apple/ml-stable-diffusion pipeline (release 0.4.0 or 1.0.0). They were not built for, and will not work with, a Python Diffusers pipeline. They need [**ml-stable-diffusion**](https://github.com/apple/ml-stable-diffusion) for command line use, or a Swift app that supports ControlNet, such as the new (June 2023) [**MOCHI DIFFUSION**](https://github.com/godly-devotion/MochiDiffusion) 4.0 version.

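For orientation only, here is a minimal sketch of how one of these "Original" models might be loaded and run with a ControlNet through the apple/ml-stable-diffusion Swift package. The initializer and configuration labels are assumed from the release 1.0.0 API and may differ between releases; the paths, the "Canny" model name, and the prompt are placeholders, not files from this repo.

```swift
import Foundation
import CoreML
import ImageIO
import StableDiffusion   // from apple/ml-stable-diffusion (release 1.0.0 API assumed)

// Placeholder path to an unzipped "Original" model folder downloaded from this repo.
let resourcesURL = URL(fileURLWithPath: "/path/to/SomeModel_512x768")

// These models are built for CPU and GPU, so request .cpuAndGPU compute units.
let mlConfig = MLModelConfiguration()
mlConfig.computeUnits = .cpuAndGPU

// ControlNet conditioning image (e.g. a Canny edge map) sized to match the model (512x768 here).
let edgeURL = URL(fileURLWithPath: "/path/to/canny_edges_512x768.png")
guard let edgeSource = CGImageSourceCreateWithURL(edgeURL as CFURL, nil),
      let edgeImage = CGImageSourceCreateImageAtIndex(edgeSource, 0, nil) else {
    fatalError("Could not load the ControlNet input image")
}

// Load the pipeline with one ControlNet model by name (parameter labels assumed from 1.0.0).
let pipeline = try StableDiffusionPipeline(
    resourcesAt: resourcesURL,
    controlNet: ["Canny"],
    configuration: mlConfig,
    reduceMemory: false
)
try pipeline.loadResources()

// Generation settings; all values are illustrative.
var config = StableDiffusionPipeline.Configuration(prompt: "a watercolor landscape, detailed")
config.negativePrompt = "blurry, low quality"
config.stepCount = 25
config.seed = 42
config.guidanceScale = 7.5
config.controlNetInputs = [edgeImage]
// For image2image, the embedded VAEEncoder is what makes a starting image usable:
// config.startingImage = someCGImage
// config.strength = 0.7

let images = try pipeline.generateImages(configuration: config) { _ in true }
```

If the SwiftCLI is preferred instead, the same repo's `StableDiffusionSample` command exposes equivalent options; check its `--help` output for the exact ControlNet flags in the release you are using.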
The full SD models are in the "SD" folder of this repo. They are in subfolders by model name and individually zipped for a particular resolution. They need to be unzipped for use after downloading.

Please note that when you unzip the ControlNet files (for example Canny.zip) from this repo, each one unzips to a folder holding that ControlNet model at the 4 resolutions.

The sizes noted for all model type inputs/outputs are WIDTH x HEIGHT. A 512x768 is "portrait" orientation and a 768x512 is "landscape" orientation.
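Since these sizes are fixed per model, an input image (an image2image starting image or a ControlNet input) generally has to be supplied at the model's exact pixel size if the app does not resize it for you. A small illustrative CoreGraphics helper (the function name and usage are placeholders, not part of this repo or any app) could look like this:

```swift
import CoreGraphics

// Illustrative helper: scale a CGImage to an exact WIDTH x HEIGHT, e.g. 512x768 ("portrait").
// Aspect ratio is not preserved; crop or pad beforehand if that matters for your input.
func resized(_ image: CGImage, width: Int, height: Int) -> CGImage? {
    guard let context = CGContext(
        data: nil,
        width: width,
        height: height,
        bitsPerComponent: 8,
        bytesPerRow: 0,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
    ) else { return nil }
    context.interpolationQuality = .high
    context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
    return context.makeImage()
}

// Example: prepare a portrait-orientation input for a 512x768 model.
// let portraitInput = resized(sourceImage, width: 512, height: 768)
```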

**If you encounter any models that do not work correctly with image2image and/or a ControlNet, using the current apple/ml-stable-diffusion SwiftCLI pipeline for i2i or CN, or Mochi Diffusion 3.2 using i2i, or Mochi Diffusion 4.0 using i2i or CN, please leave a report in the Community Discussion area. If you would like to add models that you have converted, leave a message there as well, and we will grant you access to this repo.**

## Base Models - A Variety Of SD-1.5-Type Models For Use With ControlNet
Each folder contains 4 zipped model files, output sizes as indicated: 512x512, 512x768, 768x512 or 768x768