Update README.md
## For use with a Swift app or the SwiftCLI
The SD models in this repo are all "Original" versions, built for CPU and GPU. Each is for the output size noted. They are fp16, with the standard SD-1.5 VAE embedded.

The Stable Diffusion v1.5 model and the other SD-1.5-type models contain both the standard Unet and the ControlledUnet used for a ControlNet pipeline. The correct one is selected automatically, depending on whether a ControlNet is enabled, as sketched below.
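
A minimal sketch of how that selection plays out, assuming the apple/ml-stable-diffusion 1.0.0 Swift API (this is not code from this repo; the resource path and the "Canny" model name are placeholders):

```swift
import Foundation
import CoreML
import StableDiffusion

// Sketch only, assuming ml-stable-diffusion 1.0.0 signatures.
// The path and model name below are placeholders.
let resourcesURL = URL(fileURLWithPath: "/path/to/Model/Resources")

let mlConfig = MLModelConfiguration()
mlConfig.computeUnits = .cpuAndGPU   // these "Original" models target CPU and GPU

// A non-empty `controlNet` list makes the pipeline load ControlledUnet.mlmodelc;
// pass [] to load the standard Unet.mlmodelc instead.
let pipeline = try StableDiffusionPipeline(
    resourcesAt: resourcesURL,
    controlNet: ["Canny"],           // placeholder ControlNet model name
    configuration: mlConfig,
    reduceMemory: false
)
try pipeline.loadResources()
```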
They have VAEEncoder.mlmodelc bundles that allow Image2Image to operate correctly at the noted resolutions when used with a current Swift CLI pipeline, or with a current GUI built on ml-stable-diffusion 0.4.0 or 1.0.0, such as Mochi Diffusion 3.2, 4.0, or later.
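
A matching Image2Image sketch, continuing with the `pipeline` from the snippet above and under the same 1.0.0 API assumptions; `startingCGImage` is a placeholder for a CGImage you supply:

```swift
// Image2Image runs the starting image through VAEEncoder.mlmodelc,
// so it must match the model's noted resolution (e.g. 512x512).
var i2iConfig = StableDiffusionPipeline.Configuration(prompt: "a watercolor landscape")
i2iConfig.startingImage = startingCGImage   // placeholder CGImage input
i2iConfig.strength = 0.6                    // lower values keep more of the input image
i2iConfig.seed = 42

let i2iImages = try pipeline.generateImages(configuration: i2iConfig) { _ in true }
```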
All of the ControlNet models in this repo are "Original" ones, built for CPU and GPU compute units (cpuAndGPU) and for SD-1.5-type models. They will not work with SD-2.1-type models. Each zip file contains a set of models at 4 resolutions. "Split-Einsum" versions, for use with the Neural Engine (CPU and NE), are available in a different repo; a link to it is at the bottom of this page.
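
When a ControlNet is loaded, the conditioning image is supplied per generation. A sketch under the same assumptions; `cannyEdgeMap` stands in for a CGImage conditioning input, such as an edge map:

```swift
// One conditioning image per loaded ControlNet, in load order,
// sized to the model's resolution (e.g. 512x512).
var cnConfig = StableDiffusionPipeline.Configuration(prompt: "a photo of a cottage")
cnConfig.controlNetInputs = [cannyEdgeMap]  // placeholder CGImage, e.g. a Canny edge map
cnConfig.seed = 7

let cnImages = try pipeline.generateImages(configuration: cnConfig) { _ in true }
```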
All of the models in this repo work with Swift and the apple/ml-stable-diffusion pipeline (release 0.4.0 or 1.0.0). They were not built for, and will not work with, a Python Diffusers pipeline. They require [**ml-stable-diffusion**](https://github.com/apple/ml-stable-diffusion) for command-line use, or a Swift app that supports ControlNet, such as the June 2023 release of [**Mochi Diffusion**](https://github.com/godly-devotion/MochiDiffusion) 4.0.
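
For command-line use, an invocation along the following lines should work with the ml-stable-diffusion 1.0.0 Swift CLI; the prompt, paths, and flag values are placeholders, and the ControlNet flags are assumptions from that release, so check `swift run StableDiffusionSample --help` against your checkout:

```bash
swift run StableDiffusionSample "a photo of a cottage" \
  --resource-path /path/to/Model/Resources \
  --compute-units cpuAndGPU \
  --controlnet Canny \
  --controlnet-inputs /path/to/edge-map.png \
  --seed 42 \
  --output-path /tmp/out
```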