
AutoencoderKLHunyuanVideo

The 3D variational autoencoder (VAE) model with KL loss used in HunyuanVideo, which was introduced in HunyuanVideo: A Systematic Framework For Large Video Generative Models by Tencent.

The model can be loaded with the following code snippet.

import torch
from diffusers import AutoencoderKLHunyuanVideo

vae = AutoencoderKLHunyuanVideo.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder="vae", torch_dtype=torch.float16)
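
Once loaded, the VAE can be used for a full encode/decode round trip. The sketch below is illustrative only: it assumes a dummy video tensor of shape (batch_size, num_channels, num_frames, height, width) and that encode/decode follow the same AutoencoderKLOutput/DecoderOutput conventions as other Diffusers VAEs; the tensor sizes are arbitrary.

vae = vae.to("cuda")

# Dummy clip: 1 video, 3 channels, 9 frames, 256x256 pixels (illustrative values).
video = torch.randn(1, 3, 9, 256, 256, dtype=torch.float16, device="cuda")

with torch.no_grad():
    posterior = vae.encode(video).latent_dist    # latent distribution over the clip
    latents = posterior.sample()                 # draw a latent sample
    reconstruction = vae.decode(latents).sample  # decode back to pixel space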

AutoencoderKLHunyuanVideo

class diffusers.AutoencoderKLHunyuanVideo

( in_channels: int = 3 out_channels: int = 3 latent_channels: int = 16 down_block_types: typing.Tuple[str, ...] = ('HunyuanVideoDownBlock3D', 'HunyuanVideoDownBlock3D', 'HunyuanVideoDownBlock3D', 'HunyuanVideoDownBlock3D') up_block_types: typing.Tuple[str, ...] = ('HunyuanVideoUpBlock3D', 'HunyuanVideoUpBlock3D', 'HunyuanVideoUpBlock3D', 'HunyuanVideoUpBlock3D') block_out_channels: typing.Tuple[int] = (128, 256, 512, 512) layers_per_block: int = 2 act_fn: str = 'silu' norm_num_groups: int = 32 scaling_factor: float = 0.476986 spatial_compression_ratio: int = 8 temporal_compression_ratio: int = 4 mid_block_add_attention: bool = True )

A VAE model with KL loss for encoding videos into latents and decoding latent representations into videos. Introduced in HunyuanVideo.

This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).

disable_slicing

( )

Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_tiling

( )

Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_slicing

( )

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
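
A minimal usage sketch, reusing the vae object loaded earlier; the work between the two toggles is whatever encoding/decoding you run:

vae.enable_slicing()    # decode the batch one sample at a time to lower peak memory
# ... run encoding / decoding with a batch size > 1 ...
vae.disable_slicing()   # return to decoding the whole batch in one step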

enable_tiling

( tile_sample_min_height: typing.Optional[int] = None tile_sample_min_width: typing.Optional[int] = None tile_sample_min_num_frames: typing.Optional[int] = None tile_sample_stride_height: typing.Optional[float] = None tile_sample_stride_width: typing.Optional[float] = None tile_sample_stride_num_frames: typing.Optional[float] = None )

Parameters

  • tile_sample_min_height (int, optional) — The minimum height required for a sample to be separated into tiles across the height dimension.
  • tile_sample_min_width (int, optional) — The minimum width required for a sample to be separated into tiles across the width dimension.
  • tile_sample_min_num_frames (int, optional) — The minimum number of frames required for a sample to be separated into tiles across the frame dimension.
  • tile_sample_stride_height (int, optional) — The stride between two consecutive vertical tiles. This is to ensure that there are no tiling artifacts produced across the height dimension.
  • tile_sample_stride_width (int, optional) — The stride between two consecutive horizontal tiles. This is to ensure that there are no tiling artifacts produced across the width dimension.
  • tile_sample_stride_num_frames (int, optional) — The stride between two consecutive frame tiles. This is to ensure that there are no tiling artifacts produced across the frame dimension.

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and for processing larger videos.
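
A minimal usage sketch with illustrative tile sizes and strides (any argument left as None falls back to the model's defaults); the values below are not tuned recommendations:

vae.enable_tiling(
    tile_sample_min_height=256,
    tile_sample_min_width=256,
    tile_sample_stride_height=192,
    tile_sample_stride_width=192,
)
# ... run tiled encoding / decoding on large frames ...
vae.disable_tiling()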

forward

( sample: Tensor sample_posterior: bool = False return_dict: bool = True generator: typing.Optional[torch._C.Generator] = None )

Parameters

  • sample (torch.Tensor) — Input sample.
  • sample_posterior (bool, optional, defaults to False) — Whether to sample from the posterior.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a DecoderOutput instead of a plain tuple.
  • generator (torch.Generator, optional) — PyTorch random number generator used when sampling from the posterior.
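
A minimal sketch of a reconstruction pass through forward, assuming it performs an encode/decode round trip as in other Diffusers autoencoders, and reusing the illustrative video tensor from the loading example:

generator = torch.Generator(device="cuda").manual_seed(0)
with torch.no_grad():
    output = vae(video, sample_posterior=True, generator=generator)
reconstruction = output.sample  # DecoderOutput.sample, same shape as the input video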

tiled_decode

( z: Tensor return_dict: bool = True ) ~models.vae.DecoderOutput or tuple

Parameters

  • z (torch.Tensor) — Input batch of latent vectors.
  • return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.vae.DecoderOutput instead of a plain tuple.

Returns

~models.vae.DecoderOutput or tuple

If return_dict is True, a ~models.vae.DecoderOutput is returned, otherwise a plain tuple is returned.

Decode a batch of video latents using a tiled decoder.
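
When tiling is enabled, decode typically dispatches to this method automatically; it can also be called directly. A minimal sketch with an illustrative latent tensor (with the 16 latent channels and 8x spatial / 4x temporal compression configured above, a 9-frame 256x256 clip maps to roughly this shape):

latents = torch.randn(1, 16, 3, 32, 32, dtype=torch.float16, device="cuda")
with torch.no_grad():
    video_out = vae.tiled_decode(latents, return_dict=True).sample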

tiled_encode

( x: Tensor ) torch.Tensor

Parameters

  • x (torch.Tensor) — Input batch of videos.

Returns

torch.Tensor

The latent representation of the encoded videos.

Encode a batch of videos using a tiled encoder.
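
A minimal sketch, reusing the illustrative video tensor from the loading example; per the return description above, the latent representation comes back as a plain tensor rather than a distribution object:

with torch.no_grad():
    latent_repr = vae.tiled_encode(video)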

DecoderOutput

class diffusers.models.autoencoders.vae.DecoderOutput

( sample: Tensor commit_loss: typing.Optional[torch.FloatTensor] = None )

Parameters

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width) for images, or (batch_size, num_channels, num_frames, height, width) for videos) — The decoded output sample from the last layer of the model.

Output of decoding method.
