jrrjrr committed
Commit 99ea321 · 1 Parent(s): 34a1dad

Upload README.md

Files changed (1):
  1. README.md +14 -12
README.md CHANGED

The Stable Diffusion v1.5 model and the other SD-1.5-type models in this repo have VAEEncoder.mlmodelc bundles that allow Image2Image to operate correctly at the noted resolutions, when used with a current Swift CLI pipeline or a current GUI built with ml-stable-diffusion 0.4.0, such as Mochi Diffusion 3.2 or later.
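
As a rough illustration of Image2Image from the command line, here is a minimal sketch. It assumes an unzipped "Original" model folder and the apple/ml-stable-diffusion sample CLI described in the next paragraph; the paths, prompt, and model folder name are placeholders, and the flag names are from the 0.4.0-era CLI and should be confirmed with `--help` for the release you check out.

```bash
# Hypothetical paths and names -- run from inside a checkout of
# apple/ml-stable-diffusion and confirm the flags with:
#   swift run StableDiffusionSample --help
swift run StableDiffusionSample "a watercolor landscape, soft morning light" \
  --resource-path ~/models/Stable-Diffusion-v1.5_512x768 \
  --image ~/inputs/rough-sketch.png \
  --strength 0.7 \
  --compute-units cpuAndGPU \
  --output-path ~/outputs
```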

All of the ControlNet models are "Original" ones, built for CPU and GPU compute units (cpuAndGPU) and for SD-1.5-type models. They will not work with SD-2.1-type models. The zip files each have a set of models at 4 resolutions. The 512x512 builds appear to also work with "Split-Einsum" models, using CPU and GPU (cpuAndGPU), but from my tests, they will not work with "Split-Einsum" models when using the Neural Engine (NE).

All of the models in this repo work with Swift and the current apple/ml-stable-diffusion pipeline release (0.4.0). They were not built for, and will not work with, a Python Diffusers pipeline. They need ml-stable-diffusion (https://github.com/apple/ml-stable-diffusion) for command line use, or a Swift app that supports ControlNet, such as the Mochi Diffusion (https://github.com/godly-devotion/MochiDiffusion) test version currently in a closed beta test. Join the Mochi Diffusion Discord server (https://discord.gg/x2kartzxGv) to request access to the beta test version.
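
For command line ControlNet use, the sketch below shows the general shape of an invocation. The `--controlnet` and `--controlnet-inputs` flag names are from memory of the 0.4.0-era sample CLI and may differ in your checkout, the model name and paths are placeholders, and the ControlNet model folder still has to be named and placed as described in the MISC notes.

```bash
# Sketch only -- run from inside a checkout of apple/ml-stable-diffusion and
# confirm the flags with: swift run StableDiffusionSample --help
swift run StableDiffusionSample "a stone cottage in a forest, detailed" \
  --resource-path ~/models/DreamShaper-v5.0_512x768 \
  --controlnet Canny \
  --controlnet-inputs ~/inputs/canny-edges.png \
  --compute-units cpuAndGPU \
  --output-path ~/outputs
```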

The full SD models are in the "SD" folder of this repo. They are in subfolders by model name and individually zipped for a particular resolution. They need to be unzipped for use after downloading.

The ControlNet model files are in the "CN" folder of this repo. They are zipped and need to be unzipped after downloading. Each zip holds a set of 4 resolutions for that ControlNet type, built for 512x512, 512x768, 768x512 and 768x768.
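
As a minimal example of getting one base model and one ControlNet set ready, with hypothetical file names standing in for whatever you actually download (the final naming and placement of the ControlNet folder should follow the MISC notes):

```bash
# Hypothetical zip names -- substitute the files you downloaded from this repo.
cd ~/Downloads
unzip DreamShaper-v5.0_512x768.zip -d ~/models/
unzip CN_Canny.zip -d ~/models/
ls ~/models/    # the unzipped folders hold the compiled .mlmodelc bundles
```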

There is also a MISC folder that has text files with some notes and a screencap of my directory structure. These are provided for those who want to convert models themselves and/or run the models with a Swift CLI. The notes are not perfect, and may be out of date if any of the Python or CoreML packages referenced have been updated recently. You can open a Discussion here if you need help with any of the MISC items.
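
For anyone trying the conversion route, the sketch below shows the general shape of a torch2coreml run for an "Original" 512x768 build, assuming a Python environment like the one sketched after the following paragraph. The flags are the ones documented in the apple/ml-stable-diffusion README (not necessarily the exact set in the MISC notes), the model ID is just an example, and the latent sizes follow from dividing the pixel sizes by 8 (512/8 = 64 wide, 768/8 = 96 high).

```bash
# Sketch of a conversion run -- treat the MISC notes as authoritative for the
# exact flags, package versions, and output naming.
python -m python_coreml_stable_diffusion.torch2coreml \
  --model-version runwayml/stable-diffusion-v1-5 \
  --convert-unet --convert-text-encoder \
  --convert-vae-decoder --convert-vae-encoder \
  --attention-implementation ORIGINAL \
  --latent-w 64 --latent-h 96 \
  --bundle-resources-for-swift-cli \
  -o ./SD15-Original-512x768
```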

For command line use, the MISC notes cover setting up a miniconda3 environment. If you are using the command line, please read the notes concerning naming and placement of your ControlNet model folder.
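
A minimal sketch of that kind of environment, with an assumed name and Python version (the MISC notes may pin different versions of the Python and CoreML packages):

```bash
# Assumed environment name and Python version -- check the MISC notes for the
# exact setup used for these conversions.
conda create -n coreml-sd python=3.10 -y
conda activate coreml-sd
git clone https://github.com/apple/ml-stable-diffusion.git
cd ml-stable-diffusion
pip install -e .    # installs python_coreml_stable_diffusion and its dependencies
```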

If you are using a GUI, that app will most likely guide you to the correct location/arrangement for your ControlNet model folder.

The sizes noted for all model inputs/outputs are WIDTH x HEIGHT. A 512x768 model is "portrait" orientation and a 768x512 model is "landscape" orientation.

**If you encounter any models that do not work correctly with Image2Image and/or ControlNet (using the current apple/ml-stable-diffusion Swift CLI pipeline, Mochi Diffusion 3.2 for Image2Image, or the Mochi Diffusion beta test build for Image2Image or ControlNet), please leave a report in the Community Discussion area. If you would like to add models that you have converted, leave a message there as well, and I'll grant you access to this repo.**

## Base Models - A Variety Of SD-1.5-Type Models For Use With ControlNet
Each folder contains 4 zipped model files, output sizes as indicated: 512x512, 512x768, 768x512 or 768x768
- DreamShaper v5.0, 1.5-type model, "Original"
- GhostMix v1.1, 1.5-type anime model, "Original"
- MeinaMix v9.0, 1.5-type anime model, "Original"
- Stable Diffusion v1.5, "Original"

## ControlNet Models - All Current SD-1.5-Type ControlNet Models
Each zip file contains a set of 4 resolutions: 512x512, 512x768, 768x512 and 768x768
- Canny -- Edge Detection, Outlines As Input
- Depth -- Reproduces Depth Relationships From An Image
- InPaint -- Use Masks To Define And Modify An Area (not sure how this works)
- InstrP2P -- Instruct Pixel2Pixel ("Change X to Y")
- LineAnime -- Find And Reuse Small Outlines, Optimized For Anime
- LineArt -- Find And Reuse Small Outlines
- MLSD -- Find And Reuse Straight Lines And Edges
- Segmentation -- Find And Reuse Distinct Areas
- Shuffle -- Find And Reorder Major Elements
- SoftEdge -- Find And Reuse Soft Edges
- Tile -- Subtle Variations Within Batch Runs