Stable Diffusion Models by Olive for OnnxRuntime CUDA
This repository hosts optimized versions of Stable Diffusion XL Refiner 1.0 that accelerate inference with the ONNX Runtime CUDA execution provider.
The models were generated by Olive with a command like the following:
python stable_diffusion_xl.py --provider cuda --optimize --use_fp16_fixed_vae --model_id stabilityai/stable-diffusion-xl-refiner-1.0
The VAE decoder is converted from sdxl-vae-fp16-fix. There are slight discrepancies between its output and that of the original VAE, but the decoded images should be close enough for most purposes.
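As a rough usage illustration (not taken from this card), the sketch below assumes the exported files follow the standard diffusers/Optimum ONNX layout, so they can be loaded with Optimum's ORTStableDiffusionXLImg2ImgPipeline on the CUDA execution provider; the repository path and the input image file name are placeholders.

```python
from optimum.onnxruntime import ORTStableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

# Load the optimized ONNX models with the ONNX Runtime CUDA execution provider.
# Replace the path below with the local directory or Hub ID of this ONNX repository.
pipeline = ORTStableDiffusionXLImg2ImgPipeline.from_pretrained(
    "path/to/this-onnx-repository",
    provider="CUDAExecutionProvider",
)

# The refiner is an image-to-image model: it refines an image produced by the SDXL base model.
init_image = load_image("sdxl_base_output.png")  # placeholder file name
refined = pipeline(
    prompt="a photo of an astronaut riding a horse on mars",
    image=init_image,
).images[0]
refined.save("refined.png")
```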
Base model: stabilityai/stable-diffusion-xl-refiner-1.0