Diffusers documentation

LoRA

LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MB) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the denoiser, the text encoder, or both. The denoiser usually corresponds to a UNet (UNet2DConditionModel, for example) or a Transformer (SD3Transformer2DModel, for example). There are several classes for loading LoRA weights:

  • StableDiffusionLoraLoaderMixin provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model.
  • StableDiffusionXLLoraLoaderMixin is a Stable Diffusion (SDXL) version of the StableDiffusionLoraLoaderMixin class for loading and saving LoRA weights. It can only be used with the SDXL model.
  • SD3LoraLoaderMixin provides similar functions for Stable Diffusion 3.
  • FluxLoraLoaderMixin provides similar functions for Flux.
  • CogVideoXLoraLoaderMixin provides similar functions for CogVideoX.
  • Mochi1LoraLoaderMixin provides similar functions for Mochi.
  • AmusedLoraLoaderMixin is for the AmusedPipeline.
  • LoraBaseMixin provides a base class with several utility methods to fuse, unfuse, and unload LoRAs, and more.

To learn more about how to load LoRA weights, see the LoRA loading guide.
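
For example, a typical workflow loads a checkpoint into a pipeline with load_lora_weights(). A minimal sketch, reusing the pixel-art checkpoint from the fuse_lora() example further down this page:

import torch
from diffusers import DiffusionPipeline

# Load the base SDXL pipeline
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Load the LoRA weights into the denoiser (and the text encoders, if the checkpoint contains them)
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
image = pipeline("a corgi astronaut, pixel art").images[0]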

StableDiffusionLoraLoaderMixin

class diffusers.loaders.StableDiffusionLoraLoaderMixin

( )

Load LoRA layers into Stable Diffusion UNet2DConditionModel and CLIPTextModel.

load_lora_into_text_encoder

( state_dict, network_alphas, text_encoder, prefix=None, lora_scale=1.0, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False )

Parameters

  • state_dict (dict) — A standard state dict containing the LoRA layer parameters. The keys should be prefixed with an additional text_encoder to distinguish them from the UNet LoRA layers.
  • network_alphas (Dict[str, float]) — The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the --network_alpha option in the kohya-ss trainer script.
  • text_encoder (CLIPTextModel) — The text encoder model to load the LoRA layers into.
  • prefix (str) — Expected prefix of the text_encoder in the state_dict.
  • lora_scale (float) — How much to scale the output of the LoRA linear layer before it is added to the output of the regular model layer.
  • adapter_name (str, optional) — Adapter name used to reference the loaded adapter model. If not specified, default_{i} is used, where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.

This will load the LoRA layers specified in state_dict into text_encoder.

load_lora_into_unet

( state_dict, network_alphas, unet, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False )

Parameters

  • state_dict (dict) — A standard state dict containing the LoRA layer parameters. The keys can either be indexed directly into the unet or prefixed with an additional unet, which can be used to distinguish them from the text encoder LoRA layers.
  • network_alphas (Dict[str, float]) — The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the --network_alpha option in the kohya-ss trainer script.
  • unet (UNet2DConditionModel) — The UNet model to load the LoRA layers into.
  • adapter_name (str, optional) — Adapter name used to reference the loaded adapter model. If not specified, default_{i} is used, where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.

This will load the LoRA layers specified in state_dict into unet.

load_lora_weights

( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]], adapter_name=None, **kwargs )

Parameters

  • pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — See lora_state_dict().
  • adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.
  • kwargs (dict, optional) — See lora_state_dict().

Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and self.text_encoder.

All kwargs are forwarded to self.lora_state_dict.

See lora_state_dict() for more details on how the state dict is loaded.

See load_lora_into_unet() for more details on how the state dict is loaded into self.unet.

See load_lora_into_text_encoder() for more details on how the state dict is loaded into self.text_encoder.

lora_state_dict

( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]], **kwargs )

Parameters

  • pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — Can be either:

    • A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on the Hub.
    • A path to a directory (for example ./my_model_directory) containing the model weights saved with ModelMixin.save_pretrained().
    • A torch state dict.
  • cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
  • token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
  • subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.
  • weight_name (str, optional, defaults to None) — Name of the serialized state dict file.

Return the state dict for the LoRA weights and the network alphas.

We support loading A1111 formatted LoRA checkpoints in a limited capacity.

This function is experimental and might change in the future.
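
A minimal sketch of fetching a state dict without loading it into a model; the local directory and weight file name below are illustrative:

from diffusers import StableDiffusionPipeline

# Returns the converted LoRA state dict and the network alphas
# without modifying any pipeline component
state_dict, network_alphas = StableDiffusionPipeline.lora_state_dict(
    "./my_lora_directory", weight_name="pytorch_lora_weights.safetensors"
)
print(f"{len(state_dict)} LoRA tensors")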

save_lora_weights

( save_directory: typing.Union[str, os.PathLike], unet_lora_layers: typing.Dict[str, typing.Union[torch.nn.Module, torch.Tensor]] = None, text_encoder_lora_layers: typing.Dict[str, torch.nn.Module] = None, is_main_process: bool = True, weight_name: str = None, save_function: typing.Callable = None, safe_serialization: bool = True )

Parameters

  • save_directory (str or os.PathLike) — Directory to save LoRA parameters to. Will be created if it doesn’t exist.
  • unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the unet.
  • text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers.
  • is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
  • save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
  • safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.

Save the LoRA parameters corresponding to the UNet and text encoder.
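
A minimal save sketch; unet_lora_state_dict and text_encoder_lora_state_dict are hypothetical state dicts produced by your own training loop:

from diffusers import StableDiffusionPipeline

# `unet_lora_state_dict` and `text_encoder_lora_state_dict` are assumed to exist,
# for example extracted from PEFT-wrapped models during training
StableDiffusionPipeline.save_lora_weights(
    save_directory="./my-lora",
    unet_lora_layers=unet_lora_state_dict,
    text_encoder_lora_layers=text_encoder_lora_state_dict,
    safe_serialization=True,
)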

StableDiffusionXLLoraLoaderMixin

class diffusers.loaders.StableDiffusionXLLoraLoaderMixin

( )

Load LoRA layers into Stable Diffusion XL UNet2DConditionModel, CLIPTextModel, and CLIPTextModelWithProjection.

load_lora_into_text_encoder

( state_dict, network_alphas, text_encoder, prefix=None, lora_scale=1.0, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False )

Parameters

  • state_dict (dict) — A standard state dict containing the LoRA layer parameters. The keys should be prefixed with an additional text_encoder to distinguish them from the UNet LoRA layers.
  • network_alphas (Dict[str, float]) — The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the --network_alpha option in the kohya-ss trainer script.
  • text_encoder (CLIPTextModel) — The text encoder model to load the LoRA layers into.
  • prefix (str) — Expected prefix of the text_encoder in the state_dict.
  • lora_scale (float) — How much to scale the output of the LoRA linear layer before it is added to the output of the regular model layer.
  • adapter_name (str, optional) — Adapter name used to reference the loaded adapter model. If not specified, default_{i} is used, where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.

This will load the LoRA layers specified in state_dict into text_encoder.

load_lora_into_unet

( state_dict, network_alphas, unet, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False )

Parameters

  • state_dict (dict) — A standard state dict containing the LoRA layer parameters. The keys can either be indexed directly into the unet or prefixed with an additional unet, which can be used to distinguish them from the text encoder LoRA layers.
  • network_alphas (Dict[str, float]) — The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the --network_alpha option in the kohya-ss trainer script.
  • unet (UNet2DConditionModel) — The UNet model to load the LoRA layers into.
  • adapter_name (str, optional) — Adapter name used to reference the loaded adapter model. If not specified, default_{i} is used, where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.

This will load the LoRA layers specified in state_dict into unet.

load_lora_weights

( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]], adapter_name: typing.Optional[str] = None, **kwargs )

Parameters

  • pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — See lora_state_dict().
  • adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.
  • kwargs (dict, optional) — See lora_state_dict().

Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and self.text_encoder.

All kwargs are forwarded to self.lora_state_dict.

See lora_state_dict() for more details on how the state dict is loaded.

See load_lora_into_unet() for more details on how the state dict is loaded into self.unet.

See load_lora_into_text_encoder() for more details on how the state dict is loaded into self.text_encoder.

lora_state_dict

( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]], **kwargs )

Parameters

  • pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — Can be either:

    • A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on the Hub.
    • A path to a directory (for example ./my_model_directory) containing the model weights saved with ModelMixin.save_pretrained().
    • A torch state dict.
  • cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
  • token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
  • subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.
  • weight_name (str, optional, defaults to None) — Name of the serialized state dict file.

Return the state dict for the LoRA weights and the network alphas.

We support loading A1111 formatted LoRA checkpoints in a limited capacity.

This function is experimental and might change in the future.

save_lora_weights

( save_directory: typing.Union[str, os.PathLike], unet_lora_layers: typing.Dict[str, typing.Union[torch.nn.Module, torch.Tensor]] = None, text_encoder_lora_layers: typing.Dict[str, typing.Union[torch.nn.Module, torch.Tensor]] = None, text_encoder_2_lora_layers: typing.Dict[str, typing.Union[torch.nn.Module, torch.Tensor]] = None, is_main_process: bool = True, weight_name: str = None, save_function: typing.Callable = None, safe_serialization: bool = True )

Parameters

  • save_directory (str or os.PathLike) — Directory to save LoRA parameters to. Will be created if it doesn’t exist.
  • unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the unet.
  • text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers.
  • text_encoder_2_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the text_encoder_2. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers.
  • is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
  • save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
  • safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.

Save the LoRA parameters corresponding to the UNet and text encoder.
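
The SDXL version additionally accepts the second text encoder's LoRA layers. A sketch, assuming all three state dicts come from your own training loop:

from diffusers import StableDiffusionXLPipeline

# The three state dicts below are hypothetical training outputs
StableDiffusionXLPipeline.save_lora_weights(
    save_directory="./my-sdxl-lora",
    unet_lora_layers=unet_lora_state_dict,
    text_encoder_lora_layers=text_encoder_one_lora_state_dict,
    text_encoder_2_lora_layers=text_encoder_two_lora_state_dict,
)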

SD3LoraLoaderMixin

class diffusers.loaders.SD3LoraLoaderMixin

( )

Load LoRA layers into SD3Transformer2DModel, CLIPTextModel, and CLIPTextModelWithProjection.

Specific to StableDiffusion3Pipeline.

load_lora_into_text_encoder

( state_dict, network_alphas, text_encoder, prefix=None, lora_scale=1.0, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False )

Parameters

  • state_dict (dict) — A standard state dict containing the LoRA layer parameters. The keys should be prefixed with an additional text_encoder to distinguish them from the denoiser LoRA layers.
  • network_alphas (Dict[str, float]) — The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the --network_alpha option in the kohya-ss trainer script.
  • text_encoder (CLIPTextModel) — The text encoder model to load the LoRA layers into.
  • prefix (str) — Expected prefix of the text_encoder in the state_dict.
  • lora_scale (float) — How much to scale the output of the LoRA linear layer before it is added to the output of the regular model layer.
  • adapter_name (str, optional) — Adapter name used to reference the loaded adapter model. If not specified, default_{i} is used, where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.

This will load the LoRA layers specified in state_dict into text_encoder.

load_lora_into_transformer

( state_dict, transformer, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False )

Parameters

  • state_dict (dict) — A standard state dict containing the LoRA layer parameters. The keys can either be indexed directly into the transformer or prefixed with an additional transformer, which can be used to distinguish them from the text encoder LoRA layers.
  • transformer (SD3Transformer2DModel) — The Transformer model to load the LoRA layers into.
  • adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.

This will load the LoRA layers specified in state_dict into transformer.

load_lora_weights

( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]], adapter_name=None, **kwargs )

Parameters

  • pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — See lora_state_dict().
  • adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.
  • kwargs (dict, optional) — See lora_state_dict().

Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.transformer and self.text_encoder.

All kwargs are forwarded to self.lora_state_dict.

See lora_state_dict() for more details on how the state dict is loaded.

See load_lora_into_transformer() for more details on how the state dict is loaded into self.transformer.
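
A minimal loading sketch; the LoRA repository id and weight file name are illustrative placeholders:

import torch
from diffusers import StableDiffusion3Pipeline

pipeline = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")
# Hypothetical LoRA checkpoint
pipeline.load_lora_weights("your-username/sd3-lora", weight_name="pytorch_lora_weights.safetensors")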

lora_state_dict

( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]], **kwargs )

Parameters

  • pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — Can be either:

    • A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on the Hub.
    • A path to a directory (for example ./my_model_directory) containing the model weights saved with ModelMixin.save_pretrained().
    • A torch state dict.
  • cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
  • token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
  • subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.

Return the state dict for the LoRA weights and the network alphas.

We support loading A1111 formatted LoRA checkpoints in a limited capacity.

This function is experimental and might change in the future.

save_lora_weights

( save_directory: typing.Union[str, os.PathLike], transformer_lora_layers: typing.Dict[str, torch.nn.Module] = None, text_encoder_lora_layers: typing.Dict[str, typing.Union[torch.nn.Module, torch.Tensor]] = None, text_encoder_2_lora_layers: typing.Dict[str, typing.Union[torch.nn.Module, torch.Tensor]] = None, is_main_process: bool = True, weight_name: str = None, save_function: typing.Callable = None, safe_serialization: bool = True )

Parameters

  • save_directory (str or os.PathLike) — Directory to save LoRA parameters to. Will be created if it doesn’t exist.
  • transformer_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the transformer.
  • text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers.
  • text_encoder_2_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the text_encoder_2. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers.
  • is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
  • save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
  • safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.

Save the LoRA parameters corresponding to the transformer and text encoders.

FluxLoraLoaderMixin

class diffusers.loaders.FluxLoraLoaderMixin

( )

Load LoRA layers into FluxTransformer2DModel and CLIPTextModel.

Specific to FluxPipeline.

load_lora_into_text_encoder

( state_dict, network_alphas, text_encoder, prefix=None, lora_scale=1.0, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False )

Parameters

  • state_dict (dict) — A standard state dict containing the LoRA layer parameters. The keys should be prefixed with an additional text_encoder to distinguish them from the denoiser LoRA layers.
  • network_alphas (Dict[str, float]) — The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the --network_alpha option in the kohya-ss trainer script.
  • text_encoder (CLIPTextModel) — The text encoder model to load the LoRA layers into.
  • prefix (str) — Expected prefix of the text_encoder in the state_dict.
  • lora_scale (float) — How much to scale the output of the LoRA linear layer before it is added to the output of the regular model layer.
  • adapter_name (str, optional) — Adapter name used to reference the loaded adapter model. If not specified, default_{i} is used, where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.

This will load the LoRA layers specified in state_dict into text_encoder.

load_lora_into_transformer

( state_dict, network_alphas, transformer, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False )

Parameters

  • state_dict (dict) — A standard state dict containing the LoRA layer parameters. The keys can either be indexed directly into the transformer or prefixed with an additional transformer, which can be used to distinguish them from the text encoder LoRA layers.
  • network_alphas (Dict[str, float]) — The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the --network_alpha option in the kohya-ss trainer script.
  • transformer (FluxTransformer2DModel) — The Transformer model to load the LoRA layers into.
  • adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.

This will load the LoRA layers specified in state_dict into transformer.

load_lora_weights

( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]], adapter_name=None, **kwargs )

Parameters

  • pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — See lora_state_dict().
  • kwargs (dict, optional) — See lora_state_dict().
  • adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.

Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.transformer and self.text_encoder.

All kwargs are forwarded to self.lora_state_dict.

See lora_state_dict() for more details on how the state dict is loaded.

See load_lora_into_transformer() for more details on how the state dict is loaded into self.transformer.
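
A minimal loading sketch; the LoRA repository id and weight file name are illustrative placeholders:

import torch
from diffusers import FluxPipeline

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# Hypothetical LoRA checkpoint
pipeline.load_lora_weights("your-username/flux-lora", weight_name="pytorch_lora_weights.safetensors")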

lora_state_dict

( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]], return_alphas: bool = False, **kwargs )

Parameters

  • pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — Can be either:

    • A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on the Hub.
    • A path to a directory (for example ./my_model_directory) containing the model weights saved with ModelMixin.save_pretrained().
    • A torch state dict.
  • cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
  • token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
  • subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.

Return the state dict for the LoRA weights and the network alphas.

We support loading A1111 formatted LoRA checkpoints in a limited capacity.

This function is experimental and might change in the future.

save_lora_weights

( save_directory: typing.Union[str, os.PathLike], transformer_lora_layers: typing.Dict[str, typing.Union[torch.nn.Module, torch.Tensor]] = None, text_encoder_lora_layers: typing.Dict[str, torch.nn.Module] = None, is_main_process: bool = True, weight_name: str = None, save_function: typing.Callable = None, safe_serialization: bool = True )

Parameters

  • save_directory (str or os.PathLike) — Directory to save LoRA parameters to. Will be created if it doesn’t exist.
  • transformer_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the transformer.
  • text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers.
  • is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
  • save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
  • safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.

Save the LoRA parameters corresponding to the transformer and text encoder.

unfuse_lora

( components: typing.List[str] = ['transformer', 'text_encoder'], **kwargs )

Parameters

  • components (List[str]) — List of LoRA-injectable components to unfuse LoRA from.

Reverses the effect of pipe.fuse_lora().

This is an experimental API.

CogVideoXLoraLoaderMixin

class diffusers.loaders.CogVideoXLoraLoaderMixin

( )

Load LoRA layers into CogVideoXTransformer3DModel. Specific to CogVideoXPipeline.

load_lora_into_transformer

( state_dict, transformer, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False )

Parameters

  • state_dict (dict) — A standard state dict containing the LoRA layer parameters. The keys can either be indexed directly into the transformer or prefixed with an additional transformer, which can be used to distinguish them from the text encoder LoRA layers.
  • transformer (CogVideoXTransformer3DModel) — The Transformer model to load the LoRA layers into.
  • adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.

This will load the LoRA layers specified in state_dict into transformer.

load_lora_weights

( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]], adapter_name=None, **kwargs )

Parameters

  • pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — See lora_state_dict().
  • adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.
  • kwargs (dict, optional) — See lora_state_dict().

Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.transformer and self.text_encoder.

All kwargs are forwarded to self.lora_state_dict.

See lora_state_dict() for more details on how the state dict is loaded.

See load_lora_into_transformer() for more details on how the state dict is loaded into self.transformer.
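
A minimal loading sketch; the LoRA repository id and weight file name are illustrative placeholders:

import torch
from diffusers import CogVideoXPipeline

pipeline = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
).to("cuda")
# Hypothetical LoRA checkpoint
pipeline.load_lora_weights("your-username/cogvideox-lora", weight_name="pytorch_lora_weights.safetensors")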

lora_state_dict

( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]], **kwargs )

Parameters

  • pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — Can be either:

    • A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on the Hub.
    • A path to a directory (for example ./my_model_directory) containing the model weights saved with ModelMixin.save_pretrained().
    • A torch state dict.
  • cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
  • token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
  • subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.

Return the state dict for the LoRA weights and the network alphas.

We support loading A1111 formatted LoRA checkpoints in a limited capacity.

This function is experimental and might change in the future.

save_lora_weights

( save_directory: typing.Union[str, os.PathLike], transformer_lora_layers: typing.Dict[str, typing.Union[torch.nn.Module, torch.Tensor]] = None, is_main_process: bool = True, weight_name: str = None, save_function: typing.Callable = None, safe_serialization: bool = True )

Parameters

  • save_directory (str or os.PathLike) — Directory to save LoRA parameters to. Will be created if it doesn’t exist.
  • transformer_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the transformer.
  • is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
  • save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
  • safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.

Save the LoRA parameters corresponding to the transformer.

unfuse_lora

( components: typing.List[str] = ['transformer', 'text_encoder'], **kwargs )

Parameters

  • components (List[str]) — List of LoRA-injectable components to unfuse LoRA from.
  • unfuse_transformer (bool, defaults to True) — Whether to unfuse the transformer LoRA parameters.
  • unfuse_text_encoder (bool, defaults to True) — Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the LoRA parameters then it won’t have any effect.

Reverses the effect of pipe.fuse_lora().

This is an experimental API.

Mochi1LoraLoaderMixin

class diffusers.loaders.Mochi1LoraLoaderMixin

( )

Load LoRA layers into MochiTransformer3DModel. Specific to MochiPipeline.

load_lora_into_transformer

( state_dict, transformer, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False )

Parameters

  • state_dict (dict) — A standard state dict containing the LoRA layer parameters. The keys can either be indexed directly into the transformer or prefixed with an additional transformer, which can be used to distinguish them from the text encoder LoRA layers.
  • transformer (MochiTransformer3DModel) — The Transformer model to load the LoRA layers into.
  • adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.

This will load the LoRA layers specified in state_dict into transformer.

load_lora_weights

( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]], adapter_name=None, **kwargs )

Parameters

  • pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — See lora_state_dict().
  • adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.
  • kwargs (dict, optional) — See lora_state_dict().

Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.transformer and self.text_encoder.

All kwargs are forwarded to self.lora_state_dict.

See lora_state_dict() for more details on how the state dict is loaded.

See load_lora_into_transformer() for more details on how the state dict is loaded into self.transformer.
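
A minimal loading sketch; the LoRA repository id and weight file name are illustrative placeholders:

import torch
from diffusers import MochiPipeline

pipeline = MochiPipeline.from_pretrained(
    "genmo/mochi-1-preview", torch_dtype=torch.bfloat16
).to("cuda")
# Hypothetical LoRA checkpoint
pipeline.load_lora_weights("your-username/mochi-lora", weight_name="pytorch_lora_weights.safetensors")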

lora_state_dict

( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]], **kwargs )

Parameters

  • pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — Can be either:

    • A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on the Hub.
    • A path to a directory (for example ./my_model_directory) containing the model weights saved with ModelMixin.save_pretrained().
    • A torch state dict.
  • cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
  • token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
  • subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.

Return the state dict for the LoRA weights and the network alphas.

We support loading A1111 formatted LoRA checkpoints in a limited capacity.

This function is experimental and might change in the future.

save_lora_weights

( save_directory: typing.Union[str, os.PathLike], transformer_lora_layers: typing.Dict[str, typing.Union[torch.nn.Module, torch.Tensor]] = None, is_main_process: bool = True, weight_name: str = None, save_function: typing.Callable = None, safe_serialization: bool = True )

Parameters

  • save_directory (str or os.PathLike) — Directory to save LoRA parameters to. Will be created if it doesn’t exist.
  • transformer_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the transformer.
  • is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
  • save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
  • safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.

Save the LoRA parameters corresponding to the transformer.

unfuse_lora

( components: typing.List[str] = ['transformer', 'text_encoder'], **kwargs )

Parameters

  • components (List[str]) — List of LoRA-injectable components to unfuse LoRA from.
  • unfuse_transformer (bool, defaults to True) — Whether to unfuse the transformer LoRA parameters.
  • unfuse_text_encoder (bool, defaults to True) — Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the LoRA parameters then it won’t have any effect.

Reverses the effect of pipe.fuse_lora().

This is an experimental API.

AmusedLoraLoaderMixin

class diffusers.loaders.AmusedLoraLoaderMixin

( )

load_lora_into_transformer

( state_dict, network_alphas, transformer, adapter_name=None, _pipeline=None, low_cpu_mem_usage=False )

Parameters

  • state_dict (dict) — A standard state dict containing the LoRA layer parameters. The keys can either be indexed directly into the transformer or prefixed with an additional transformer, which can be used to distinguish them from the text encoder LoRA layers.
  • network_alphas (Dict[str, float]) — The value of the network alpha used for stable learning and preventing underflow. This value has the same meaning as the --network_alpha option in the kohya-ss trainer script.
  • transformer (UVit2DModel) — The Transformer model to load the LoRA layers into.
  • adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
  • low_cpu_mem_usage (bool, optional) — Speed up model loading by only loading the pretrained LoRA weights and not initializing the random weights.

This will load the LoRA layers specified in state_dict into transformer.

LoraBaseMixin

class diffusers.loaders.lora_base.LoraBaseMixin

( )

Utility class for handling LoRAs.

delete_adapters

( adapter_names: typing.Union[typing.List[str], str] )

Parameters

  • adapter_names (Union[List[str], str]) — The names of the adapters to delete. Can be a single string or a list of strings.

Deletes the LoRA layers of adapter_names for the unet and text encoder(s).
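
Example (a sketch; the adapter names follow the examples elsewhere on this page):

# Assuming `pipeline` has adapters "toy" and "pixel" loaded via `adapter_name=...`
pipeline.delete_adapters("pixel")
# or delete several at once:
pipeline.delete_adapters(["toy", "pixel"])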

fuse_lora

( components: typing.List[str] = [], lora_scale: float = 1.0, safe_fusing: bool = False, adapter_names: typing.Optional[typing.List[str]] = None, **kwargs )

Parameters

  • components (List[str]) — List of LoRA-injectable components to fuse the LoRAs into.
  • lora_scale (float, defaults to 1.0) — Controls how much to influence the outputs with the LoRA parameters.
  • safe_fusing (bool, defaults to False) — Whether to check fused weights for NaN values before fusing, and to skip fusing if NaN values are present.
  • adapter_names (List[str], optional) — Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused.

Fuses the LoRA parameters into the original parameters of the corresponding blocks.

This is an experimental API.

Example:

from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipeline.fuse_lora(lora_scale=0.7)

get_active_adapters

( )

Gets the list of currently active adapters.

Example:

from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
).to("cuda")
pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
pipeline.get_active_adapters()

get_list_adapters

( )

Gets the current list of all available adapters in the pipeline.
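
Example (a sketch; the returned mapping is illustrative):

# Maps each LoRA-injectable component to the adapters loaded into it
pipeline.get_list_adapters()
# e.g. {"unet": ["toy", "pixel"], "text_encoder": ["toy"]}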

set_lora_device

( adapter_names: typing.List[str], device: typing.Union[torch.device, str, int] )

Parameters

  • adapter_names (List[str]) — List of adapter names to move to the device.
  • device (Union[torch.device, str, int]) — Device to send the adapters to. Can be either a torch device, a str or an integer.

Moves the LoRAs listed in adapter_names to a target device. Useful for offloading the LoRA to the CPU in case you want to load multiple adapters and free some GPU memory.
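
Example (a sketch, continuing with a previously loaded "pixel" adapter):

# Offload the adapter to the CPU to free GPU memory ...
pipeline.set_lora_device(adapter_names=["pixel"], device="cpu")
# ... and move it back when it is needed again
pipeline.set_lora_device(adapter_names=["pixel"], device="cuda")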

unfuse_lora

( components: typing.List[str] = [], **kwargs )

Parameters

  • components (List[str]) — List of LoRA-injectable components to unfuse LoRA from.
  • unfuse_unet (bool, defaults to True) — Whether to unfuse the UNet LoRA parameters.
  • unfuse_text_encoder (bool, defaults to True) — Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the LoRA parameters then it won’t have any effect.

Reverses the effect of pipe.fuse_lora().

This is an experimental API.
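
Example (continuing the fuse_lora() example above):

pipeline.fuse_lora(lora_scale=0.7)
# ... run inference with the fused weights ...
pipeline.unfuse_lora()  # restore the original, unfused parameters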

unload_lora_weights

( )

Unloads the LoRA parameters.

Examples:

>>> # Assuming `pipeline` is already loaded with the LoRA parameters.
>>> pipeline.unload_lora_weights()
>>> ...