
Denoising Diffusion Implicit Models (DDIM)

Overview

Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.

The abstract of the paper is the following:

Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.

The original codebase for this paper can be found at ermongroup/ddim. For questions, feel free to contact the author via tsong.me.
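
The practical speed-up comes from sampling with far fewer steps than the Markov chain used during DDPM training. As a minimal sketch (assuming the google/ddpm-cifar10-32 checkpoint, which is not mentioned in this document), the DDIMPipeline in diffusers exposes this trade-off through num_inference_steps, and eta=0.0 gives the fully deterministic DDIM sampler:

from diffusers import DDIMPipeline

# Assumed checkpoint for illustration (a DDPM-trained model hosted on the Hub);
# any DDPM/DDIM-compatible checkpoint works the same way.
pipe = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
pipe.to("cuda")

# DDIM can sample with far fewer steps than the ~1000 used during DDPM training;
# fewer steps trades sample quality for speed.
image = pipe(num_inference_steps=50, eta=0.0).images[0]
image.save("ddim_sample.png")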

Experimental: "Common Diffusion Noise Schedules and Sample Steps are Flawed":

The paper Common Diffusion Noise Schedules and Sample Steps are Flawed claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion.

The abstract reads as follows:

We discover that common diffusion noise schedules do not enforce the last timestep to have zero signal-to-noise ratio (SNR), and some implementations of diffusion samplers do not start from the last timestep. Such designs are flawed and do not reflect the fact that the model is given pure Gaussian noise at inference, creating a discrepancy between training and inference. We show that the flawed design causes real problems in existing implementations. In Stable Diffusion, it severely limits the model to only generate images with medium brightness and prevents it from generating very bright and dark samples. We propose a few simple fixes:

  • (1) rescale the noise schedule to enforce zero terminal SNR;
  • (2) train the model with v prediction;
  • (3) change the sampler to always start from the last timestep;
  • (4) rescale classifier-free guidance to prevent over-exposure.

These simple changes ensure the diffusion process is congruent between training and inference and allow the model to generate samples more faithful to the original data distribution.

You can apply all of these changes in diffusers when using [DDIMScheduler]:

  • (1) rescale the noise schedule to enforce zero terminal SNR (a sketch of this rescaling is shown after the list):
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True)
  • (2) train the model with v prediction; this requires a checkpoint fine-tuned with v_prediction, such as the one used in the example below.
  • (3) change the sampler to always start from the last timestep:
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
  • (4) rescale classifier-free guidance to prevent over-exposure:
pipe(..., guidance_rescale=0.7)
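
For reference, fix (1) works by rescaling the cumulative noise schedule so that the very last timestep has exactly zero SNR, i.e. the final latent is pure noise. The function below is only a sketch of that rescaling (following the paper's Algorithm 1, operating on a tensor of betas); when rescale_betas_zero_snr=True is set, diffusers applies an equivalent transformation internally, so you do not need to call anything like this yourself:

import torch

def rescale_to_zero_terminal_snr(betas: torch.Tensor) -> torch.Tensor:
    # Sketch of the zero-terminal-SNR rescaling described in the paper.
    alphas = 1.0 - betas
    alphas_cumprod = torch.cumprod(alphas, dim=0)
    alphas_bar_sqrt = alphas_cumprod.sqrt()

    # Keep the first value fixed and shift/scale so the last value becomes exactly 0.
    first = alphas_bar_sqrt[0].clone()
    last = alphas_bar_sqrt[-1].clone()
    alphas_bar_sqrt = (alphas_bar_sqrt - last) * first / (first - last)

    # Convert the rescaled sqrt(alpha_bar) back into betas.
    alphas_bar = alphas_bar_sqrt**2
    alphas = torch.cat([alphas_bar[:1], alphas_bar[1:] / alphas_bar[:-1]])
    return 1.0 - alphas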

For example, you can use the ptx0/pseudo-journey-v2 checkpoint, which has been fine-tuned using v_prediction.

The checkpoint can then be run in inference as follows:

import torch
from diffusers import DiffusionPipeline, DDIMScheduler

# Load a checkpoint fine-tuned with v_prediction
pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16)

# Enforce zero terminal SNR and start sampling from the last timestep
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing"
)
pipe.to("cuda")

prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k"

# Rescale classifier-free guidance to prevent over-exposure
image = pipe(prompt, guidance_rescale=0.7).images[0]
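
The guidance_rescale=0.7 argument corresponds to fix (4) above. If you fine-tune your own checkpoint with v prediction instead of using the one above, the scheduler's prediction type has to match the model; as a sketch (assuming the pipeline is otherwise set up as shown), it can be set explicitly when rebuilding the scheduler:

# Make the scheduler expect v-prediction outputs, matching how the model was fine-tuned.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
    timestep_spacing="trailing",
)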

DDIMScheduler

[[autodoc]] DDIMScheduler