WAI-NSFW-illustrious-SDXL V11.0 GGUF CONVERTED AND QUANTIZED

Original Model. All credit goes to its creator.

VERY IMPORTANT

This model was converted for use with ComfyUI-GGUF. It is not guaranteed to work in other backends!

You also need the illustrious_clip_g_fp8_e4m3fn.safetensors and illustrious_clip_l_fp8_e4m3fn.safetensors text encoders and the illustrious_v110_vae_fp8_e4m3fn.safetensors VAE.

Other encoders and VAEs may not work!

UPD: Wai's VAE gives slightly better results: LINK. Use it instead of the Illustrious VAE.

INSTALL

Drop the GGUF model into the models/diffusion_models folder

clip_g and clip_l into the models/text_encoders folder

the VAE into the models/vae folder (see the path sketch after these steps)
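
For reference, here is a minimal Python sketch of where everything ends up, assuming a default ComfyUI checkout at ./ComfyUI; the GGUF filename is a placeholder for whichever quant you downloaded:

```python
# Copy the downloaded files into a default ComfyUI model tree.
# The .gguf filename below is a placeholder; use the quant you actually picked.
import shutil
from pathlib import Path

models = Path("ComfyUI/models")
shutil.copy("wai-nsfw-illustrious-sdxl-v110-Q8_0.gguf", models / "diffusion_models")
shutil.copy("illustrious_clip_g_fp8_e4m3fn.safetensors", models / "text_encoders")
shutil.copy("illustrious_clip_l_fp8_e4m3fn.safetensors", models / "text_encoders")
shutil.copy("illustrious_v110_vae_fp8_e4m3fn.safetensors", models / "vae")
```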

You can use the GGUF custom nodes from either calcuis or city96; there's no difference.

Load the GGUF model via the GGUF Loader (calcuis) or Unet Loader (city96) node

Load the text encoders via the GGUF DualCLIPLoader (calcuis) or DualCLIPLoader (city96) node

Load the VAE via the default Load VAE node (an API-format sketch of these loaders follows below)
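
If you drive ComfyUI through its HTTP API rather than the graph editor, the loader part of the workflow looks roughly like this. This is only a sketch: it uses city96's UnetLoaderGGUF class name plus the core DualCLIPLoader and VAELoader nodes, the GGUF filename is again a placeholder, and the rest of the graph (prompt encoding, KSampler, VAE decode) is omitted:

```python
# Sketch of the three loader nodes in ComfyUI's API workflow format.
# Node IDs ("1".."3") are arbitrary; the full graph must be attached
# before this can actually be queued.
import json
import urllib.request

loaders = {
    "1": {"class_type": "UnetLoaderGGUF",  # from city96's ComfyUI-GGUF pack
          "inputs": {"unet_name": "wai-nsfw-illustrious-sdxl-v110-Q8_0.gguf"}},
    "2": {"class_type": "DualCLIPLoader",  # core node; the encoders are fp8 safetensors
          "inputs": {"clip_name1": "illustrious_clip_g_fp8_e4m3fn.safetensors",
                     "clip_name2": "illustrious_clip_l_fp8_e4m3fn.safetensors",
                     "type": "sdxl"}},
    "3": {"class_type": "VAELoader",       # the default Load VAE node
          "inputs": {"vae_name": "illustrious_v110_vae_fp8_e4m3fn.safetensors"}},
}

req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                             data=json.dumps({"prompt": loaders}).encode(),
                             headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)  # uncomment once the graph is complete
```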

You can get the KSampler settings from the Original Model Card.

Additional info

Fewer bits means less VRAM usage and less storage, but slower inference and some loss in detail and quality.
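
A quick back-of-the-envelope size check using the 2.57B parameter count listed below (real GGUF files come out a bit larger because each quantized block also stores scale factors):

```python
# Rough on-disk/VRAM footprint of the 2.57B-parameter UNet per bit width.
params = 2.57e9
for name, bits in [("Q4_0", 4), ("Q5_K_S", 5), ("Q8_0", 8), ("FP16", 16)]:
    print(f"{name}: ~{params * bits / 8 / 1e9:.2f} GB")
# Q4_0: ~1.29 GB, Q5_K_S: ~1.61 GB, Q8_0: ~2.57 GB, FP16: ~5.14 GB
```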

Illustrious LoRA models work just fine (see the fragment below).
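
For example, a LoRA chains into the API-format sketch above through the core LoraLoader node; the LoRA filename here is hypothetical:

```python
# Extends the "loaders" graph above: "1" is UnetLoaderGGUF, "2" is DualCLIPLoader.
loaders["4"] = {
    "class_type": "LoraLoader",  # core node; works on the GGUF-loaded model
    "inputs": {"model": ["1", 0], "clip": ["2", 0],
               "lora_name": "my_illustrious_lora.safetensors",  # hypothetical file
               "strength_model": 1.0, "strength_clip": 1.0},
}
```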

Comparison

I am not sure the exact same result can be reproduced across SDXL models, but Q4_0 seems not as bad as I thought; it is absolutely usable.

You can use these images to set up a ComfyUI workflow.
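
ComfyUI embeds the workflow JSON in the PNG metadata of the images it saves, so you can pull the graph straight out of a downloaded comparison image; a minimal sketch (the filename is illustrative):

```python
# Extract the embedded ComfyUI workflow from a saved PNG.
import json
from PIL import Image

img = Image.open("comparison_fp16.png")   # illustrative filename
workflow = img.info.get("workflow")       # ComfyUI also stores a "prompt" key
if workflow:
    with open("workflow.json", "w") as f: # load this file back into ComfyUI
        f.write(workflow)
    print(sorted(json.loads(workflow).keys()))
```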

Comparison image grids: FP16 / Q8_0 / Q5_K_S / Q4_0
GGUF · Model size: 2.57B params · Architecture: sdxl

Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
