AutoencoderKLAllegro
The 3D variational autoencoder (VAE) model with KL loss used in Allegro was introduced in Allegro: Open the Black Box of Commercial-Level Video Generation Model by RhymesAI.
The model can be loaded with the following code snippet.

```python
import torch
from diffusers import AutoencoderKLAllegro

vae = AutoencoderKLAllegro.from_pretrained("rhymes-ai/Allegro", subfolder="vae", torch_dtype=torch.float32).to("cuda")
```
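Once loaded, the VAE can encode a video into latents and decode them back. A minimal sketch, continuing from the `vae` loaded above; the 5D input layout `(batch, channels, num_frames, height, width)` and the dummy sizes are illustrative assumptions, not values required by the model:

```python
import torch

# Dummy video clip with layout (batch, channels, num_frames, height, width).
# The sizes are illustrative; real inputs would come from a video pipeline.
video = torch.randn(1, 3, 8, 256, 256, dtype=torch.float32, device="cuda")

with torch.no_grad():
    latents = vae.encode(video).latent_dist.sample()  # sample latents from the posterior
    reconstruction = vae.decode(latents).sample       # decoded video tensor
```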
AutoencoderKLAllegro
class diffusers.AutoencoderKLAllegro
< source >( in_channels: int = 3 out_channels: int = 3 down_block_types: typing.Tuple[str, ...] = ('AllegroDownBlock3D', 'AllegroDownBlock3D', 'AllegroDownBlock3D', 'AllegroDownBlock3D') up_block_types: typing.Tuple[str, ...] = ('AllegroUpBlock3D', 'AllegroUpBlock3D', 'AllegroUpBlock3D', 'AllegroUpBlock3D') block_out_channels: typing.Tuple[int, ...] = (128, 256, 512, 512) temporal_downsample_blocks: typing.Tuple[bool, ...] = (True, True, False, False) temporal_upsample_blocks: typing.Tuple[bool, ...] = (False, True, True, False) latent_channels: int = 4 layers_per_block: int = 2 act_fn: str = 'silu' norm_num_groups: int = 32 temporal_compression_ratio: float = 4 sample_size: int = 320 scaling_factor: float = 0.13 force_upcast: bool = True )
Parameters
- in_channels (`int`, defaults to `3`) — Number of channels in the input image.
- out_channels (`int`, defaults to `3`) — Number of channels in the output.
- down_block_types (`Tuple[str, ...]`, defaults to `("AllegroDownBlock3D", "AllegroDownBlock3D", "AllegroDownBlock3D", "AllegroDownBlock3D")`) — Tuple of strings denoting which types of down blocks to use.
- up_block_types (`Tuple[str, ...]`, defaults to `("AllegroUpBlock3D", "AllegroUpBlock3D", "AllegroUpBlock3D", "AllegroUpBlock3D")`) — Tuple of strings denoting which types of up blocks to use.
- block_out_channels (`Tuple[int, ...]`, defaults to `(128, 256, 512, 512)`) — Tuple of integers denoting the number of output channels in each block.
- temporal_downsample_blocks (`Tuple[bool, ...]`, defaults to `(True, True, False, False)`) — Tuple of booleans denoting which blocks to enable temporal downsampling in.
- temporal_upsample_blocks (`Tuple[bool, ...]`, defaults to `(False, True, True, False)`) — Tuple of booleans denoting which blocks to enable temporal upsampling in.
- latent_channels (`int`, defaults to `4`) — Number of channels in latents.
- layers_per_block (`int`, defaults to `2`) — Number of resnet, attention, or temporal convolution layers per down/up block.
- act_fn (`str`, defaults to `"silu"`) — The activation function to use.
- norm_num_groups (`int`, defaults to `32`) — Number of groups to use in normalization layers.
- temporal_compression_ratio (`int`, defaults to `4`) — Ratio by which the temporal dimension of samples is compressed.
- sample_size (`int`, defaults to `320`) — Default latent size.
- scaling_factor (`float`, defaults to `0.13`) — The component-wise standard deviation of the trained latent space, computed using the first batch of the training set. This is used to scale the latent space to have unit variance when training the diffusion model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the diffusion model, and scaled back to the original scale with `z = 1 / scaling_factor * z` when decoding. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image Synthesis with Latent Diffusion Models paper. A short sketch of this scaling appears after this parameter list.
- force_upcast (`bool`, defaults to `True`) — If enabled, forces the VAE to run in float32 for high-resolution pipelines such as SD-XL. The VAE can be fine-tuned or trained to a lower range without losing too much precision, in which case `force_upcast` can be set to `False`; see https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
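As referenced in the `scaling_factor` entry above, here is a minimal sketch of how the factor is typically applied around a diffusion model in diffusers-style pipelines; `vae` and `latents` refer to the earlier snippet:

```python
# Scale encoded latents toward unit variance before feeding the diffusion model ...
latents = latents * vae.config.scaling_factor

# ... and undo the scaling before decoding.
decoded = vae.decode(latents / vae.config.scaling_factor).sample
```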
A VAE model with KL loss for encoding videos into latents and decoding latent representations into videos. Used in Allegro.
This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).
disable_slicing

< source >( )

Disable sliced VAE decoding. If `enable_slicing` was previously enabled, this method will go back to computing decoding in one step.

disable_tiling

< source >( )

Disable tiled VAE decoding. If `enable_tiling` was previously enabled, this method will go back to computing decoding in one step.

enable_slicing

< source >( )

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

enable_tiling

< source >( )

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
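Both options can be combined to trade speed for memory. A usage sketch with the `vae` and `latents` from the earlier snippets:

```python
vae.enable_slicing()  # split the batch into slices during decoding
vae.enable_tiling()   # split frames into spatial tiles during encoding/decoding

with torch.no_grad():
    reconstruction = vae.decode(latents).sample  # decoded in several smaller steps

# Restore single-step decoding once memory pressure is gone.
vae.disable_slicing()
vae.disable_tiling()
```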
forward
< source >( sample: Tensor sample_posterior: bool = False return_dict: bool = True generator: typing.Optional[torch._C.Generator] = None )
Parameters
- sample (`torch.Tensor`) — Input sample.
- sample_posterior (`bool`, optional, defaults to `False`) — Whether to sample from the posterior.
- return_dict (`bool`, optional, defaults to `True`) — Whether or not to return a `DecoderOutput` instead of a plain tuple.
- generator (`torch.Generator`, optional) — PyTorch random number generator.
AutoencoderKLOutput
class diffusers.models.modeling_outputs.AutoencoderKLOutput
< source >( latent_dist: DiagonalGaussianDistribution )
Parameters

- latent_dist (`DiagonalGaussianDistribution`) — Encoded outputs of the encoder, represented as the mean and log-variance of a `DiagonalGaussianDistribution`, which allows sampling latents from the distribution.

Output of AutoencoderKL encoding method.
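The `latent_dist` field is a `DiagonalGaussianDistribution`, so latents can be drawn stochastically or taken deterministically as the mode. A short sketch using the `vae` and `video` from above:

```python
with torch.no_grad():
    output = vae.encode(video)  # returns an AutoencoderKLOutput

latents = output.latent_dist.sample()     # stochastic draw from the posterior
latents_mode = output.latent_dist.mode()  # deterministic mode of the posterior
```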
DecoderOutput
class diffusers.models.autoencoders.vae.DecoderOutput
< source >( sample: Tensor commit_loss: typing.Optional[torch.FloatTensor] = None )
Parameters

- sample (`torch.Tensor`) — The decoded output sample from the last layer of the model.
- commit_loss (`torch.FloatTensor`, optional) — The commitment loss, returned only by quantized (VQ-style) autoencoders.

Output of decoding method.
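The decoded tensor is exposed via the `sample` attribute; with `return_dict=False` the call returns a plain tuple instead. A short sketch continuing from the `latents` above:

```python
with torch.no_grad():
    decoder_output = vae.decode(latents)  # returns a DecoderOutput
    video_out = decoder_output.sample     # decoded video tensor

    # Equivalent tuple-style access with return_dict=False:
    (video_out,) = vae.decode(latents, return_dict=False)
```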