jrrjrr committed
Commit 3705ace · 1 Parent(s): 6c1d42b

Update README.md

Files changed (1): README.md (+4 -4)
README.md CHANGED
@@ -13,11 +13,11 @@ The SD models are all "Original" (not "Split-Einsum") and built for CPU and GPU.
 
 The Stable Diffusion v1.5 model and the other SD 1.5 type models contain both the standard Unet and the ControlledUnet used for a ControlNet pipeline. The correct one will be used automatically based on whether a ControlNet is enabled or not.
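As a minimal sketch of how that selection surfaces when you drive the apple/ml-stable-diffusion 0.4.0 Swift package yourself (the resource path and ControlNet model name below are placeholders, and signatures may vary between releases): an empty `controlNet:` list runs the standard Unet, while naming a ControlNet loads the ControlledUnet.

```swift
// Hedged sketch, assuming the apple/ml-stable-diffusion 0.4.0 Swift package.
// The resource path and ControlNet name are placeholders, not real files.
import CoreML
import StableDiffusion

let mlConfig = MLModelConfiguration()
mlConfig.computeUnits = .cpuAndGPU  // these "Original" builds target CPU+GPU

let resources = URL(fileURLWithPath: "/path/to/unzipped/model/folder")

// controlNet: [] -> standard Unet; a non-empty list -> ControlledUnet.
let pipeline = try StableDiffusionPipeline(
    resourcesAt: resources,
    controlNet: ["Canny"],  // placeholder name; use [] for plain txt2img
    configuration: mlConfig,
    reduceMemory: false)
try pipeline.loadResources()
```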
 
- They have VAEEncoder.mlmodelc bundles that allow Image2Image to operate correctly at the noted resolutions, when used with a current Swift CLI pipeline or a current GUI built with ml-stable-diffusion 0.4.0, such as Mochi Diffusion 3.2 or later.
+ They have VAEEncoder.mlmodelc bundles that allow Image2Image to operate correctly at the noted resolutions, when used with a current Swift CLI pipeline or a current GUI built with ml-stable-diffusion 0.4.0, such as Mochi Diffusion 3.2, 4.0, or later.
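To make the VAEEncoder's role concrete, here is a hedged image2image sketch under the same 0.4.0 API assumptions; `pipeline` is the loaded pipeline from the sketch above, and `initImage` is a placeholder CGImage already sized to the model's build resolution:

```swift
// Hedged sketch: image2image, which is what VAEEncoder.mlmodelc enables.
// `pipeline` and `initImage` (a CGImage at the model's resolution) are assumed.
import StableDiffusion

var i2i = StableDiffusionPipeline.Configuration(prompt: "a watercolor landscape")
i2i.startingImage = initImage  // encoded to latents by VAEEncoder.mlmodelc
i2i.strength = 0.7             // lower values stay closer to the starting image
i2i.stepCount = 25
i2i.seed = 42
let outputs = try pipeline.generateImages(configuration: i2i)
```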
 
 All of the ControlNet models are "Original" ones, built for CPU and GPU compute units (cpuAndGPU) and for SD-1.5 type models. They will not work with SD-2.1 type models. The zip files each have a set of models at 4 resolutions. The 512x512 builds appear to also work with "Split-Einsum" models, using CPU and GPU (cpuAndGPU), but from my tests, they will not work with "Split-Einsum" models when using the Neural Engine (NE).
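If you are choosing compute units in your own Swift code, that choice is plain Core ML configuration; a small sketch reflecting the note above (the Neural Engine caveat is the author's test result, not something the API enforces):

```swift
import CoreML

// cpuAndGPU matches how these ControlNet models were built.
// Per the tests noted above, pairing these builds with "Split-Einsum"
// models on the Neural Engine (.cpuAndNeuralEngine) is not expected to work.
let cnConfig = MLModelConfiguration()
cnConfig.computeUnits = .cpuAndGPU
```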
 
- All of the models in this repo work with Swift and the current apple/ml-stable-diffusion pipeline release (0.4.0). They were not built for, and will not work with, a Python Diffusers pipeline. They need ml-stable-diffusion (https://github.com/apple/ml-stable-diffusion) for command line use, or a Swift app that supports ControlNet, such as the Mochi Diffusion (https://github.com/godly-devotion/MochiDiffusion) test version currently in a closed beta test. Join the Mochi Diffusion Discord server (https://discord.gg/x2kartzxGv) to request access to the beta test version.
+ All of the models in this repo work with Swift and the current apple/ml-stable-diffusion pipeline release (0.4.0). They were not built for, and will not work with, a Python Diffusers pipeline. They need ml-stable-diffusion (https://github.com/apple/ml-stable-diffusion) for command line use, or a Swift app that supports ControlNet, such as the new (June 2023) Mochi Diffusion 4.0 version (https://github.com/godly-devotion/MochiDiffusion).
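For completeness, a hedged sketch of a ControlNet generation call with the same Swift package; `pipeline` must have been created with a non-empty `controlNet:` list (see the first sketch), and `cannyImage` is a placeholder CGImage conditioning input at the model's resolution:

```swift
// Hedged sketch: generation with a ControlNet input, 0.4.0 Swift package.
// `pipeline` (loaded with a ControlNet) and `cannyImage` are assumed.
import StableDiffusion

var gen = StableDiffusionPipeline.Configuration(prompt: "a brick house, photorealistic")
gen.controlNetInputs = [cannyImage]  // one CGImage per loaded ControlNet
gen.stepCount = 25
gen.guidanceScale = 7.5
let results = try pipeline.generateImages(configuration: gen)
// Returns [CGImage?]; nil entries were removed by the safety checker.
```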
 
 The full SD models are in the "SD" folder of this repo. They are in subfolders by model name and individually zipped for a particular resolution. They need to be unzipped for use after downloading.
 
@@ -27,11 +27,11 @@ There is also a "MISC" folder that has text files with some notes and a screenca
 
 For command line use, the "MISC" notes cover setting up a miniconda3 environment. If you are using the command line, please read the notes concerning naming and placement of your ControlNet model folder.
 
- If you are using a GUI, that app will most likely guide you to the correct location/arrangement for your ControlNet model folder.
+ If you are using a GUI like Mochi Diffusion 4.0, the app will most likely guide you to the correct location/arrangement for your ControlNet model folder.
 
 The sizes noted for all model type inputs/outputs are WIDTH x HEIGHT. A 512x768 is "portrait" orientation and a 768x512 is "landscape" orientation.
 
- **If you encounter any models that do not work correctly with image2image and/or a ControlNet, using the current apple/ml-stable-diffusion Swift CLI pipeline for i2i or CN, or Mochi Diffusion 3.2 using i2i, or the Mochi Diffusion beta test build using i2i or CN, please leave a report in the Community Discussion area. If you would like to add models that you have converted, leave a message there as well, and I'll grant you access to this repo.**
+ **If you encounter any models that do not work correctly with image2image and/or a ControlNet, using the current apple/ml-stable-diffusion Swift CLI pipeline for i2i or CN, or Mochi Diffusion 3.2 using i2i, or Mochi Diffusion 4.0 using i2i or CN, please leave a report in the Community Discussion area. If you would like to add models that you have converted, leave a message there as well, and I'll grant you access to this repo.**
 
  ## Base Models - A Variety Of SD-1.5-Type Models For Use With ControlNet
  Each folder contains 4 zipped model files, output sizes as indicated: 512x512, 512x768, 768x512 or 768x768
 