jrrjrr committed
Commit e263d0a · 1 Parent(s): 74f2bd3

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -23,7 +23,7 @@ The SD models (base models) linked at the bottom of this page were relocated fro

  The Stable Diffusion v1.5 model and the other SD 1.5 type models contain both the standard Unet and the ControlledUnet used for a ControlNet pipeline. The correct one will be used automatically based on whether a ControlNet is enabled or not.

- They have VAEEncoder.mlmodelc bundles that allow Image2Image to operate correctly at the noted resolutions, when used with a current Swift CLI pipeline or a current GUI built with ml-stable-diffusion 0.4.0 or ml-stable-diffusion 1.0.0, such as [**MOCHI DIFFUSION**](https://github.com/godly-devotion/MochiDiffusion) 3.2, 4.0, or later.
+ They have VAEEncoder.mlmodelc bundles that allow Image2Image to operate correctly at the noted resolutions, when used with a current Swift CLI pipeline or a current GUI built with ml-stable-diffusion 0.4.0 or ml-stable-diffusion 1.0.0, such as [**MOCHI DIFFUSION**](https://github.com/godly-devotion/MochiDiffusion) 4.0, or later.

  The sizes noted for all model type inputs/outputs are WIDTH x HEIGHT. A 512x768 is "portrait" orientation and a 768x512 is "landscape" orientation.
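For context, the README paragraph touched by this commit describes behavior of the ml-stable-diffusion Swift pipeline: Image2Image requires the VAEEncoder.mlmodelc bundle, and the ControlledUnet (rather than the standard Unet) is used when a ControlNet is enabled. Below is a minimal sketch of how those two options look from the Swift API side, assuming the StableDiffusionPipeline and Configuration types from Apple's ml-stable-diffusion package; the paths and prompt are placeholders, and exact parameter names may differ slightly between 0.4.0 and 1.0.0.

```swift
import CoreGraphics
import CoreML
import Foundation
import ImageIO
import StableDiffusion

// Placeholder paths -- point these at a converted .mlmodelc model folder and an init image.
let resourceURL = URL(fileURLWithPath: "/path/to/converted-model")
let initImageURL = URL(fileURLWithPath: "/path/to/init-image.png")

do {
    // Load the Core ML pipeline. Passing ControlNet model names here is what causes the
    // ControlledUnet (rather than the standard Unet) to be selected.
    let pipeline = try StableDiffusionPipeline(
        resourcesAt: resourceURL,
        controlNet: [],                        // e.g. ["Canny"] to enable a ControlNet
        configuration: MLModelConfiguration(),
        reduceMemory: false
    )
    try pipeline.loadResources()

    // Image2Image: a starting image plus a denoising strength. This path is what needs
    // the VAEEncoder.mlmodelc bundle mentioned in the README.
    guard let source = CGImageSourceCreateWithURL(initImageURL as CFURL, nil),
          let startingImage = CGImageSourceCreateImageAtIndex(source, 0, nil) else {
        fatalError("Could not load init image")
    }

    var config = StableDiffusionPipeline.Configuration(prompt: "a watercolor landscape")
    config.startingImage = startingImage
    config.strength = 0.7
    config.stepCount = 25
    config.seed = 42

    let images = try pipeline.generateImages(configuration: config) { _ in true }
    print("Generated \(images.compactMap { $0 }.count) image(s)")
} catch {
    print("Pipeline error: \(error)")
}
```

The same Image2Image and ControlNet options are surfaced by GUIs built on this package, such as Mochi Diffusion; the one-line change in this commit only narrows which Mochi Diffusion versions the README points to.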