AIMv2

Overview

The AIMv2 model was proposed in Multimodal Autoregressive Pre-training of Large Vision Encoders by Enrico Fini, Mustafa Shukor, Xiujun Li, Philipp Dufter, Michal Klein, David Haldimann, Sai Aitharaju, Victor Guilherme Turrisi da Costa, Louis Béthune, Zhe Gan, Alexander T Toshev, Marcin Eichner, Moin Nabi, Yinfei Yang, Joshua M. Susskind, Alaaeldin El-Nouby.

The abstract from the paper is the following:

We introduce a novel method for pre-training of large-scale vision encoders. Building on recent advancements in autoregressive pre-training of vision models, we extend this framework to a multimodal setting, i.e., images and text. In this paper, we present AIMV2, a family of generalist vision encoders characterized by a straightforward pre-training process, scalability, and remarkable performance across a range of downstream tasks. This is achieved by pairing the vision encoder with a multimodal decoder that autoregressively generates raw image patches and text tokens. Our encoders excel not only in multimodal evaluations but also in vision benchmarks such as localization, grounding, and classification. Notably, our AIMV2-3B encoder achieves 89.5% accuracy on ImageNet-1k with a frozen trunk. Furthermore, AIMV2 consistently outperforms state-of-the-art contrastive models (e.g., CLIP, SigLIP) in multimodal image understanding across diverse settings.

This model was contributed by Yaswanth Gali. The original code can be found here.

Usage Example

Here is an example of image feature extraction with the apple/aimv2-large-patch14-native checkpoint, which accepts images at their native resolution (the fixed-resolution checkpoints are used the same way):

import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("apple/aimv2-large-patch14-native")
model = AutoModel.from_pretrained("apple/aimv2-large-patch14-native")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
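The image features can then be read from the model output. As a quick sketch (attribute names follow the BaseModelOutputWithPooling convention shown in the Aimv2VisionModel example later on this page):

last_hidden_state = outputs.last_hidden_state  # per-patch features
pooled_output = outputs.pooler_output  # pooled image features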

Here is an example of a checkpoint performing zero-shot classification:

import requests
from PIL import Image
from transformers import AutoProcessor, AutoModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = ["Picture of a dog.", "Picture of a cat.", "Picture of a horse."]

processor = AutoProcessor.from_pretrained("apple/aimv2-large-patch14-224-lit")
model = AutoModel.from_pretrained("apple/aimv2-large-patch14-224-lit")

inputs = processor(
    images=image,
    text=text,
    add_special_tokens=True,
    truncation=True,
    padding=True,
    return_tensors="pt",
)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)
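For illustration, the probabilities can be paired with the candidate captions (this follow-up is an addition, not part of the original example):

for label, prob in zip(text, probs[0]):
    print(f"{label}: {prob.item():.4f}")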

Aimv2Config

class transformers.Aimv2Config

( text_config = None vision_config = None projection_dim = 512 logit_scale_init_value = 2.6592 **kwargs )

Parameters

  • text_config (dict, optional) — Dictionary of configuration options used to initialize Aimv2TextConfig.
  • vision_config (dict, optional) — Dictionary of configuration options used to initialize Aimv2VisionConfig.
  • projection_dim (int, optional, defaults to 512) — Dimensionality of text and vision projection layers.
  • logit_scale_init_value (float, optional, defaults to 2.6592) — The initial value of the logit_scale parameter.
  • kwargs (optional) — Dictionary of keyword arguments.

Aimv2Config is the configuration class to store the configuration of an Aimv2Model. It is used to instantiate an AIMv2 model according to the specified arguments, defining the text model and vision model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the AIMv2 apple/aimv2-large-patch14-224-lit architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

>>> from transformers import Aimv2Config, Aimv2Model

>>> # Initializing an Aimv2Config with apple/aimv2-large-patch14-224-lit style configuration
>>> configuration = Aimv2Config()

>>> # Initializing an Aimv2Model (with random weights) from the apple/aimv2-large-patch14-224-lit style configuration
>>> model = Aimv2Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

>>> # We can also initialize an Aimv2Config from an Aimv2TextConfig and an Aimv2VisionConfig
>>> from transformers import Aimv2TextConfig, Aimv2VisionConfig

>>> # Initializing an AIMv2Text and an AIMv2Vision configuration
>>> config_text = Aimv2TextConfig()
>>> config_vision = Aimv2VisionConfig()

>>> config = Aimv2Config(text_config=config_text, vision_config=config_vision)

from_text_vision_configs

( text_config: Aimv2TextConfig vision_config: Aimv2VisionConfig **kwargs ) Aimv2Config

Returns

Aimv2Config

An instance of a configuration object

Instantiate an Aimv2Config (or a derived class) from an AIMv2 text model configuration and an AIMv2 vision model configuration.
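A minimal sketch of this helper, combining default sub-configurations:

>>> from transformers import Aimv2Config, Aimv2TextConfig, Aimv2VisionConfig

>>> config = Aimv2Config.from_text_vision_configs(Aimv2TextConfig(), Aimv2VisionConfig())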

Aimv2TextConfig

class transformers.Aimv2TextConfig

( vocab_size: int = 49408 hidden_size: int = 768 intermediate_size: int = 2048 num_hidden_layers: int = 12 num_attention_heads: int = 6 rms_norm_eps: float = 1e-05 attention_dropout: float = 0.0 qkv_bias: bool = False mlp_bias: bool = False hidden_act: str = 'silu' pad_token_id: typing.Optional[int] = None bos_token_id: typing.Optional[int] = None eos_token_id: int = 49407 max_position_embeddings: int = 77 initializer_range: bool = 0.02 **kwargs )

Parameters

  • vocab_size (int, optional, defaults to 49408) — Vocabulary size of the AIMv2 text model. Defines the number of different tokens that can be represented by the input_ids passed when calling Aimv2Model.
  • hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
  • intermediate_size (int, optional, defaults to 2048) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
  • num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
  • num_attention_heads (int, optional, defaults to 6) — Number of attention heads for each attention layer in the Transformer encoder.
  • rms_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the rms normalization layers.
  • attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
  • qkv_bias (bool, optional, defaults to False) — Whether to add a bias to the queries, keys and values.
  • mlp_bias (bool, optional, defaults to False) — Whether or not to add a bias to the linear layers.
  • hidden_act (str or function, optional, defaults to "silu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported.
  • pad_token_id (int, optional) — The id of the padding token in the vocabulary.
  • bos_token_id (int, optional) — The id of the beginning-of-sequence token in the vocabulary.
  • eos_token_id (int, optional, defaults to 49407) — The id of the end-of-sequence token in the vocabulary.
  • max_position_embeddings (int, optional, defaults to 77) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated normal initializer for initializing all weight matrices.

This is the configuration class to store the configuration of an Aimv2TextModel. It is used to instantiate an AIMv2 text encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the text encoder of the AIMv2 apple/aimv2-large-patch14-224-lit architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
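Example (a minimal sketch mirroring the vision configuration example below):

>>> from transformers import Aimv2TextConfig, Aimv2TextModel

>>> # Initializing an Aimv2TextConfig with apple/aimv2-large-patch14-224-lit style configuration
>>> configuration = Aimv2TextConfig()

>>> # Initializing an Aimv2TextModel (with random weights) from the configuration
>>> model = Aimv2TextModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config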

Aimv2VisionConfig

class transformers.Aimv2VisionConfig

( hidden_size: int = 1024 intermediate_size: int = 2816 num_hidden_layers: int = 24 num_attention_heads: int = 8 num_channels: int = 3 image_size: int = 224 patch_size: int = 14 rms_norm_eps: float = 1e-05 attention_dropout: float = 0.0 qkv_bias: bool = False mlp_bias: bool = False hidden_act: str = 'silu' initializer_range: float = 0.02 use_head: bool = True is_native: bool = False **kwargs )

Parameters

  • hidden_size (int, optional, defaults to 1024) — Dimensionality of the encoder layers and the pooler layer.
  • intermediate_size (int, optional, defaults to 2816) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
  • num_hidden_layers (int, optional, defaults to 24) — Number of hidden layers in the Transformer encoder.
  • num_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer encoder.
  • num_channels (int, optional, defaults to 3) — Number of channels in the input images.
  • image_size (int, optional, defaults to 224) — The size (resolution) of each image.
  • patch_size (int, optional, defaults to 14) — The size (resolution) of each patch.
  • rms_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the rms normalization layers.
  • attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
  • qkv_bias (bool, optional, defaults to False) — Whether to add a bias to the queries, keys and values.
  • mlp_bias (bool, optional, defaults to False) — Whether or not to add a bias to the linear layers.
  • hidden_act (str or function, optional, defaults to "silu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated normal initializer for initializing all weight matrices.
  • use_head (bool, optional, defaults to True) — Whether or not to use an attention pooling head.
  • is_native (bool, optional, defaults to False) — Whether or not the checkpoint was trained on native-resolution images.

This is the configuration class to store the configuration of an Aimv2VisionModel. It is used to instantiate an AIMv2 vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the vision encoder of the AIMv2 apple/aimv2-large-patch14-224 architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

>>> from transformers import Aimv2VisionConfig, Aimv2VisionModel

>>> # Initializing an Aimv2VisionConfig with apple/aimv2-large-patch14-224 style configuration
>>> configuration = Aimv2VisionConfig()

>>> # Initializing an Aimv2VisionModel (with random weights) from the apple/aimv2-large-patch14-224 style configuration
>>> model = Aimv2VisionModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

Aimv2Model

class transformers.Aimv2Model

( config: Aimv2Config )

Parameters

  • config (Aimv2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare Aimv2 Model outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( input_ids: typing.Optional[torch.LongTensor] = None pixel_values: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None ) transformers.models.aimv2.modeling_aimv2.Aimv2Output or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size), optional) — The tensors corresponding to the input images. Pixel values can be obtained using an image processor (e.g. AutoImageProcessor). See AutoImageProcessor.__call__() for details.
  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

Returns

transformers.models.aimv2.modeling_aimv2.Aimv2Output or tuple(torch.FloatTensor)

A transformers.models.aimv2.modeling_aimv2.Aimv2Output or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Aimv2Config) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity.
  • logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores.
  • logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores.
  • text_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The text embeddings obtained by applying the projection layer to the pooled output of Aimv2TextModel.
  • image_embeds (torch.FloatTensor of shape (batch_size, output_dim)) — The image embeddings obtained by applying the projection layer to the pooled output of Aimv2VisionModel.
  • text_model_output (BaseModelOutputWithPooling, defaults to None) — The output of the Aimv2TextModel.
  • vision_model_output (BaseModelOutputWithPooling, defaults to None) — The output of the Aimv2VisionModel.

The Aimv2Model forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, Aimv2Model

>>> model = Aimv2Model.from_pretrained("apple/aimv2-large-patch14-224-lit")
>>> processor = AutoProcessor.from_pretrained("apple/aimv2-large-patch14-224-lit")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(
...     text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
... )

>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
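For illustration, the most probable caption can then be read off (an added follow-up, not part of the original example):

>>> predicted_idx = probs.argmax(dim=-1)  # index into the text list of the best-matching caption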

Aimv2VisionModel

class transformers.Aimv2VisionModel

( config: Aimv2VisionConfig )

Parameters

  • config (Aimv2VisionConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The Vision model from AIMv2 without any head or projection on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( pixel_values attention_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None ) transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size)) — The tensors corresponding to the input images. Pixel values can be obtained using an image processor (e.g. AutoImageProcessor). See AutoImageProcessor.__call__() for details.
  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

Returns

transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Aimv2Config) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

  • pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The Aimv2VisionModel forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, Aimv2VisionModel

>>> model = Aimv2VisionModel.from_pretrained("apple/aimv2-large-patch14-native")
>>> processor = AutoProcessor.from_pretrained("apple/aimv2-large-patch14-native")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> inputs = processor(images=image, return_tensors="pt")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output  # pooled features

Aimv2TextModel

class transformers.Aimv2TextModel

( config: Aimv2TextConfig )

Parameters

  • config (Aimv2TextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The text model from AIMv2 without any head or projection on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( input_ids attention_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None ) transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.

Returns

transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Aimv2Config) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

  • pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The Aimv2TextModel forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
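Examples (a minimal usage sketch; it assumes the apple/aimv2-large-patch14-224-lit checkpoint also exposes the text tower weights and a tokenizer through AutoProcessor):

>>> from transformers import AutoProcessor, Aimv2TextModel

>>> model = Aimv2TextModel.from_pretrained("apple/aimv2-large-patch14-224-lit")
>>> processor = AutoProcessor.from_pretrained("apple/aimv2-large-patch14-224-lit")

>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooled_output = outputs.pooler_output  # pooled text features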
