Update README.md
README.md CHANGED
@@ -133,7 +133,7 @@ tags:
 This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
 Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
 `original` version is only compatible with `CPU & GPU` option
-`
+`split_einsum` version takes **about 5-10 minutes** to load the model for the first time and is available for both `CPU & Neural Engine` and `CPU & GPU` options. If your Mac has many GPU cores, the `CPU & GPU` option will speed up image generation.
 Resolution and bit size are as noted in the individual file names.
 This model requires macOS 14.0 or later to run properly.
 This model was converted with a `vae-encoder` for use with `image2image`.