Custom Layers and Utilities

This page lists all the custom layers used by the library, as well as the utility functions and classes it provides for modeling.

Most of these are only useful if you are studying the code of the models in the library.

Layers

class transformers.GradientCheckpointingLayer

( *args **kwargs )

Base class for layers with gradient checkpointing.

This class enables gradient checkpointing functionality for a layer. By default, gradient checkpointing is disabled (gradient_checkpointing = False). When model.set_gradient_checkpointing() is called, gradient checkpointing is enabled by setting gradient_checkpointing = True and assigning a checkpointing function to _gradient_checkpointing_func.

Important:

When using gradient checkpointing with use_reentrant=True, inputs that require gradients (e.g. hidden states) must be passed as positional arguments (*args) rather than keyword arguments to properly propagate gradients.

Example:

>>> # Correct - hidden_states passed as positional arg
>>> out = self.layer(hidden_states, attention_mask=attention_mask)

>>> # Incorrect - hidden_states passed as keyword arg
>>> out = self.layer(hidden_states=hidden_states, attention_mask=attention_mask)
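
A minimal sketch of a layer built on this base class (ToyDecoderLayer and its hidden_size argument are made up for illustration; real model layers follow the same pattern of subclassing and defining forward()):

import torch
from torch import nn
from transformers import GradientCheckpointingLayer

class ToyDecoderLayer(GradientCheckpointingLayer):
    def __init__(self, hidden_size=64):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden_states, attention_mask=None):
        # hidden_states arrives as a positional argument (see the note above) so that
        # gradients propagate correctly under reentrant checkpointing
        hidden_states = self.dense(hidden_states)
        if attention_mask is not None:
            hidden_states = hidden_states * attention_mask
        return hidden_states

layer = ToyDecoderLayer()
out = layer(torch.randn(2, 8, 64))  # hidden_states passed positionally

With gradient checkpointing disabled (the default), the layer behaves like a plain nn.Module; once enabled, the same call is routed through the assigned checkpointing function.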

Attention Functions

class transformers.AttentionInterface

( )

Dict-like object keeping track of allowed attention functions. You can easily add a new attention function with a call to register(). If a model needs to locally overwrite an existing attention function, say sdpa, it needs to declare a new instance of this class inside its modeling_<model>.py and register the overriding function on that instance.

register

( key: str value: typing.Callable )
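
A hedged sketch of registering a custom function (the wrapper below simply defers to the stock SDPA implementation in transformers.integrations.sdpa_attention, passing all arguments through, so no assumption is made about the exact attention signature; the key name "wrapped_sdpa" is arbitrary):

from transformers import AttentionInterface
from transformers.integrations.sdpa_attention import sdpa_attention_forward

def wrapped_sdpa(*args, **kwargs):
    # illustrative hook: do something extra, then fall back to the standard SDPA kernel
    print("entering attention")
    return sdpa_attention_forward(*args, **kwargs)

AttentionInterface.register("wrapped_sdpa", wrapped_sdpa)

Once registered, the key can be selected like any built-in implementation, e.g. attn_implementation="wrapped_sdpa" in from_pretrained().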

Attention Mask Functions

class transformers.AttentionMaskInterface

( )

Dict-like object keeping track of allowed attention mask functions, the counterpart of AttentionInterface for building attention masks. New mask functions are added with a call to register().

register

( key: str value: typing.Callable )
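
A hedged sketch mirroring the attention-function registration above (sdpa_mask is the mask builder defined in transformers.masking_utils; the pass-through wrapper avoids assuming its exact signature, and the key name is arbitrary):

from transformers import AttentionMaskInterface
from transformers.masking_utils import sdpa_mask

def logged_sdpa_mask(*args, **kwargs):
    # illustrative hook around the stock mask builder
    print("building attention mask")
    return sdpa_mask(*args, **kwargs)

AttentionMaskInterface.register("logged_sdpa_mask", logged_sdpa_mask)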

Rotary Position Embedding Functions

transformers.dynamic_rope_update

( rope_forward )

Parameters

  • rope_forward (Callable) — The forward pass of the RoPE implementation.

Decorator function to update the RoPE parameters in the forward pass, if the model is using a dynamic RoPE (i.e. a RoPE implementation that may recompute its frequencies in the forward pass).
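
A minimal sketch of how the decorator is applied. ToyRotaryEmbedding and its internals are illustrative, not a library API; the attribute conventions (rope_type, and for dynamic variants rope_init_fn plus the cached-length bookkeeping) mirror the library's rotary-embedding modules and are an assumption about the current source:

import torch
from torch import nn
from transformers.modeling_rope_utils import dynamic_rope_update

class ToyRotaryEmbedding(nn.Module):
    def __init__(self, dim=16, base=10000.0):
        super().__init__()
        # the decorator reads rope_type; variants containing "dynamic" (or "longrope") also
        # expect rope_init_fn, max_seq_len_cached, original_max_seq_len and original_inv_freq,
        # set up as in the library's rotary-embedding modules
        self.rope_type = "default"
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        self.register_buffer("inv_freq", inv_freq, persistent=False)
        self.attention_scaling = 1.0

    @torch.no_grad()
    @dynamic_rope_update  # a no-op for "default"; recomputes inv_freq for dynamic RoPE variants
    def forward(self, x, position_ids):
        inv_freq = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1)
        freqs = (inv_freq @ position_ids[:, None, :].float()).transpose(1, 2)
        emb = torch.cat((freqs, freqs), dim=-1)
        return emb.cos() * self.attention_scaling, emb.sin() * self.attention_scaling

rope = ToyRotaryEmbedding()
cos, sin = rope(torch.randn(1, 10, 64), torch.arange(10)[None, :])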

PyTorch custom modules

class transformers.Conv1D

( nf nx )

Parameters

  • nf (int) — The number of output features.
  • nx (int) — The number of input features.

1D-convolutional layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2).

Works like a linear layer, but with the weights transposed.
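
For example (the shapes in the comments follow the definitions above):

import torch
from transformers import Conv1D

layer = Conv1D(nf=12, nx=4)   # maps 4 input features to 12 output features
x = torch.randn(2, 7, 4)      # (batch, seq_len, nx)
print(layer(x).shape)         # torch.Size([2, 7, 12])
print(layer.weight.shape)     # torch.Size([4, 12]), transposed relative to nn.Linear(4, 12)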

PyTorch Helper Functions

transformers.apply_chunking_to_forward

( forward_fn: Callable[..., torch.Tensor] chunk_size: int chunk_dim: int *input_tensors ) torch.Tensor

Parameters

  • forward_fn (Callable[..., torch.Tensor]) — The forward function of the model.
  • chunk_size (int) — The chunk size of a chunked tensor: num_chunks = len(input_tensors[0]) / chunk_size.
  • chunk_dim (int) — The dimension over which the input_tensors should be chunked.
  • input_tensors (tuple[torch.Tensor]) — The input tensors of forward_fn which will be chunked.

Returns

torch.Tensor

A tensor with the same shape as forward_fn would have produced if applied directly to input_tensors.

This function chunks the input_tensors into smaller input tensor parts of size chunk_size over the dimension chunk_dim. It then applies a layer forward_fn to each chunk independently to save memory.

If forward_fn is independent across chunk_dim, this function yields the same result as applying forward_fn directly to input_tensors.

Examples:

# rename the usual forward() fn to forward_chunk()
def forward_chunk(self, hidden_states):
    hidden_states = self.decoder(hidden_states)
    return hidden_states


# implement a chunked forward function
def forward(self, hidden_states):
    return apply_chunking_to_forward(
        self.forward_chunk, self.chunk_size_lm_head, self.seq_len_dim, hidden_states
    )
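
A more self-contained sketch that checks the equivalence claim above (the layer and sizes are arbitrary):

import torch
from torch import nn
from transformers import apply_chunking_to_forward

dense = nn.Linear(16, 16)
hidden_states = torch.randn(2, 8, 16)   # (batch, seq_len, hidden)

def forward_chunk(hidden_states):
    return dense(hidden_states)

# chunk the sequence dimension (dim 1) into pieces of length 2 and apply the layer to each
chunked = apply_chunking_to_forward(forward_chunk, 2, 1, hidden_states)

# the layer acts position-wise, so the chunked result matches the direct call
assert torch.allclose(chunked, dense(hidden_states), atol=1e-6)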

transformers.pytorch_utils.find_pruneable_heads_and_indices

( heads: list[int] n_heads: int head_size: int already_pruned_heads: set[int] ) tuple[Set[int], torch.LongTensor]

Parameters

  • heads (list[int]) — List of the indices of heads to prune.
  • n_heads (int) — The number of heads in the model.
  • head_size (int) — The size of each head.
  • already_pruned_heads (Set[int]) — A set of already pruned heads.

Returns

tuple[Set[int], torch.LongTensor]

A tuple with the indices of heads to prune (taking already_pruned_heads into account) and the indices of rows/columns to keep in the layer weight.

Finds the heads and their indices taking already_pruned_heads into account.
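
For example, pruning heads 0 and 5 out of 12 heads of size 64, with nothing pruned so far (the numbers are arbitrary):

import torch
from transformers.pytorch_utils import find_pruneable_heads_and_indices

heads, index = find_pruneable_heads_and_indices(
    heads=[0, 5], n_heads=12, head_size=64, already_pruned_heads=set()
)
print(heads)        # {0, 5}
print(index.shape)  # torch.Size([640]), i.e. (12 - 2) * 64 rows/columns to keep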

transformers.prune_layer

( layer: nn.Linear | Conv1D index: torch.LongTensor dim: int | None = None ) torch.nn.Linear or Conv1D

Parameters

  • layer (Union[torch.nn.Linear, Conv1D]) — The layer to prune.
  • index (torch.LongTensor) — The indices to keep in the layer.
  • dim (int, optional) — The dimension on which to keep the indices.

Returns

torch.nn.Linear or Conv1D

The pruned layer as a new layer with requires_grad=True.

Prune a Conv1D or linear layer to keep only entries in index.

Used to remove heads.
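
For instance, keeping the first 640 output rows of a linear projection (in practice the index tensor comes from find_pruneable_heads_and_indices above; the sizes here are arbitrary):

import torch
from torch import nn
from transformers import prune_layer

linear = nn.Linear(768, 768)
pruned = prune_layer(linear, torch.arange(640), dim=0)
print(pruned.weight.shape)  # torch.Size([640, 768])
print(pruned.bias.shape)    # torch.Size([640])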

transformers.pytorch_utils.prune_conv1d_layer

( layer: Conv1D index: torch.LongTensor dim: int = 1 ) Conv1D

Parameters

  • layer (Conv1D) — The layer to prune.
  • index (torch.LongTensor) — The indices to keep in the layer.
  • dim (int, optional, defaults to 1) — The dimension on which to keep the indices.

Returns

Conv1D

The pruned layer as a new layer with requires_grad=True.

Prune a Conv1D layer to keep only entries in index. A Conv1D layer works like a linear layer but with the weights transposed (used e.g. in GPT-2).

Used to remove heads.
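
For example, keeping 640 of 768 output features of a Conv1D projection (note dim=1, because the Conv1D weight is stored transposed; sizes are arbitrary):

import torch
from transformers import Conv1D
from transformers.pytorch_utils import prune_conv1d_layer

conv = Conv1D(nf=768, nx=768)   # weight stored as (nx, nf)
pruned = prune_conv1d_layer(conv, torch.arange(640), dim=1)
print(pruned.weight.shape)      # torch.Size([768, 640])
print(pruned.bias.shape)        # torch.Size([640])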

transformers.pytorch_utils.prune_linear_layer

( layer: nn.Linear index: torch.LongTensor dim: int = 0 ) torch.nn.Linear

Parameters

  • layer (torch.nn.Linear) — The layer to prune.
  • index (torch.LongTensor) — The indices to keep in the layer.
  • dim (int, optional, defaults to 0) — The dimension on which to keep the indices.

Returns

torch.nn.Linear

The pruned layer as a new layer with requires_grad=True.

Prune a linear layer to keep only entries in index.

Used to remove heads.
