Hybrid-sd-xl-700m for Stable Diffusion XL
Hybrid-sd-xl-700m is a pruned and finetuned UNet based on SDXL 1.0 and Koala-700M.
Compared to the 2560M-parameter SDXL UNet, our UNet is 3.2x smaller and 2.4x faster, while generating more realistic photographic images with more detail and a richer variety of backgrounds. We finetuned the model with a few training tricks and a rich dataset, which yields higher image aesthetics and authenticity. Hybrid-sd-xl-700m is useful for real-time previewing of the SDXL generation process, and you are very welcome to try it!
Index table
| Model | Params (M) | UNet 1-step inference time (ms) | GPU Memory Usage (MiB) |
|---|---|---|---|
| SDXL | 2560 | 448.47 | 18431 |
| Hybrid-sd-xl-700m | 780 ↓ | 185.93 ↓ | 10651 ↓ |
T2I comparison on a single A100 GPU. Image order, left to right: SDXL 1.0 -> Koala-700M -> Hybrid-sd-xl-700m.
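The latency and memory figures in the table can be approximated with a simple one-step UNet benchmark. The following is a minimal sketch, not part of this repo: it assumes a CUDA GPU, fp16 weights, and SDXL's standard conditioning shapes for a 1024x1024 image, so the dummy inputs and timing loop are illustrative and exact numbers will differ from the table.

```python
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    'cqyan/hybrid-sd-xl-700m', torch_dtype=torch.float16
).to("cuda").eval()

# Dummy SDXL-style inputs: 128x128 latents for a 1024x1024 image, text encoder
# hidden states, pooled text embeds, and micro-conditioning time ids.
latents = torch.randn(1, 4, 128, 128, dtype=torch.float16, device="cuda")
timestep = torch.tensor([500], device="cuda")
prompt_embeds = torch.randn(1, 77, 2048, dtype=torch.float16, device="cuda")
added_cond = {
    "text_embeds": torch.randn(1, 1280, dtype=torch.float16, device="cuda"),
    "time_ids": torch.randn(1, 6, dtype=torch.float16, device="cuda"),
}

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
with torch.no_grad():
    for _ in range(3):  # warm-up iterations before timing
        unet(latents, timestep, encoder_hidden_states=prompt_embeds,
             added_cond_kwargs=added_cond)
    torch.cuda.synchronize()
    start.record()
    unet(latents, timestep, encoder_hidden_states=prompt_embeds,
         added_cond_kwargs=added_cond)
    end.record()
    torch.cuda.synchronize()

print(f"1-step UNet time: {start.elapsed_time(end):.2f} ms")
print(f"Peak GPU memory: {torch.cuda.max_memory_allocated() / 2**20:.0f} MiB")
```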
This repo contains .safetensors versions of the Hybrid-sd-xl-700m weights.
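As a quick sanity check on the ~780M parameter count quoted above, you can load the UNet and count its parameters. This snippet is illustrative and not part of the model card:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained('cqyan/hybrid-sd-xl-700m')
n_params = sum(p.numel() for p in unet.parameters())
print(f"UNet parameters: {n_params / 1e6:.0f}M")
```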
Using in 🧨 diffusers
```python
import torch
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel

# Load the pruned 700M UNet in fp16 to match the pipeline precision.
unet = UNet2DConditionModel.from_pretrained(
    'cqyan/hybrid-sd-xl-700m', torch_dtype=torch.float16)
# Plug it into the standard SDXL base pipeline.
pipe = StableDiffusionXLPipeline.from_pretrained(
    'stabilityai/stable-diffusion-xl-base-1.0',
    unet=unet,
    torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "full body, cat dressed as a Viking, with weapon in his paws, battle coloring, glow hyper-detail, hyper-realism, cinematic"
image = pipe(prompt, num_inference_steps=25, guidance_scale=7).images[0]
image.save("cat.png")
```
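Since the pruned UNet is a drop-in replacement for the SDXL UNet, one way to use it for the real-time previewing mentioned above is to draft with the small UNet at a low step count and then re-render the same seed with the original SDXL UNet. The sketch below is an assumption-laden illustration (the 8-step preview count and the two-pipeline workflow are not part of this repo):

```python
import torch
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel

base = 'stabilityai/stable-diffusion-xl-base-1.0'
prompt = "full body, cat dressed as a Viking, with weapon in his paws, battle coloring, glow hyper-detail, hyper-realism, cinematic"
seed = 0

# Fast draft with the 780M UNet at a reduced step count.
small_unet = UNet2DConditionModel.from_pretrained(
    'cqyan/hybrid-sd-xl-700m', torch_dtype=torch.float16)
preview_pipe = StableDiffusionXLPipeline.from_pretrained(
    base, unet=small_unet, torch_dtype=torch.float16).to("cuda")
preview = preview_pipe(
    prompt, num_inference_steps=8, guidance_scale=7,
    generator=torch.Generator("cuda").manual_seed(seed)).images[0]
preview.save("preview.png")

# Final render with the original SDXL UNet, reusing the same seed.
final_pipe = StableDiffusionXLPipeline.from_pretrained(
    base, torch_dtype=torch.float16).to("cuda")
final = final_pipe(
    prompt, num_inference_steps=25, guidance_scale=7,
    generator=torch.Generator("cuda").manual_seed(seed)).images[0]
final.save("final.png")
```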
Model tree for cqyan/hybrid-sd-xl-700m
- Base model: etri-vilab/koala-700m