Diffusers fp16 workaround

#10
by david565

The example code loads the transformer in fp32. As a workaround, you can preload the transformer in fp16 yourself and pass it to the pipeline:

from diffusers import AuraFlowPipeline, AuraFlowTransformer2DModel
import torch

# Load the transformer from its fp16 variant explicitly, instead of
# letting the pipeline pull in the fp32 weights.
transformer = AuraFlowTransformer2DModel.from_pretrained(
    "fal/AuraFlow-v0.3",
    subfolder="transformer",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Pass the preloaded transformer to the pipeline; the remaining
# components are loaded in fp16 as requested.
pipeline = AuraFlowPipeline.from_pretrained(
    "fal/AuraFlow-v0.3",
    transformer=transformer,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

image = pipeline(
    prompt="rempage of the iguana character riding F1, fast and furious, cinematic movie poster",
    width=1536,
    height=768,
    num_inference_steps=50,
    generator=torch.Generator().manual_seed(1),
    guidance_scale=3.5,
).images[0]

image.save("output.png")
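A quick sanity check (not part of the original snippet) is to inspect the dtype of the pipeline's transformer parameters after construction; if the workaround took effect, it should report torch.float16 rather than torch.float32:

# Sanity check: verify the pipeline is really using the fp16 transformer.
# Prints torch.float16 if the preloaded transformer was picked up.
print(next(pipeline.transformer.parameters()).dtype)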
