Stable Diffusion Models by Olive for OnnxRuntime CUDA
Stable Diffusion ONNX models optimized by Olive for the OnnxRuntime CUDA execution provider.
This repository hosts optimized versions of DreamShaper XL Turbo to accelerate inference with the ONNX Runtime CUDA execution provider.
The models are generated by Olive with a command like the following:
python stable_diffusion_xl.py --provider cuda --optimize --model_id Lykon/dreamshaper-xl-turbo
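Since the collection contains several models produced the same way, the command above can be assembled per model id. A minimal sketch (the helper function name is ours; the script name and flags come from the Olive stable diffusion XL example shown above):

```python
def olive_command(model_id: str) -> list[str]:
    """Build the Olive optimization command for a given Hugging Face model id.

    Assumes Olive's stable_diffusion_xl.py example script is available in the
    current directory, as in the command shown above.
    """
    return [
        "python", "stable_diffusion_xl.py",
        "--provider", "cuda",       # target the CUDA execution provider
        "--optimize",               # run the optimization pass
        "--model_id", model_id,     # Hugging Face model to optimize
    ]

print(olive_command("Lykon/dreamshaper-xl-turbo"))
```

The resulting argument list can be passed to `subprocess.run` to optimize each model in the collection in turn.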
Base model
Lykon/dreamshaper-xl-turbo