I use sd_turbo to test SD2 support in sd.cpp, so here are the GGUF conversions.

The "old" q8_0 is a direct conversion; converting to f16 first and then to q8_0 produced an equivalently performing model with a smaller file size.

Use `--cfg-scale 1 --steps 8`, and optionally `--schedule karras`.

The model only really produces acceptable output at 512x512.
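Putting the settings above together, a sketch of a full sd.cpp invocation might look like this. The model filename, prompt, and output path are illustrative assumptions; adjust them to the file you actually downloaded:

```shell
# Hypothetical sd.cpp CLI run with the recommended SD-Turbo settings.
# - cfg-scale 1 and 8 steps per the notes above
# - 512x512, since the model degrades at other resolutions
./sd -m sd-turbo-q8_0.gguf \
     -p "a photo of a cat" \
     --cfg-scale 1 --steps 8 \
     --schedule karras \
     -W 512 -H 512 \
     -o output.png
```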
