AWS Trainium & Inferentia documentation

CLIP

Overview

The CLIP model was proposed in Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet for a given image without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and GPT-3.

Export to Neuron

To deploy 🤗 Transformers models on Neuron devices, you first need to compile the models and export them to a serialized format for inference. Below are two approaches to compiling the model; choose the one that best suits your needs. Here we take the feature-extraction task as an example:

Option 1: CLI

You can export the model using the Optimum command-line interface as follows:

optimum-cli export neuron --model openai/clip-vit-base-patch32 --task feature-extraction --text_batch_size 2 --sequence_length 77 --image_batch_size 1 --num_channels 3 --width 224 --height 224 clip_feature_extraction_neuronx/

Execute optimum-cli export neuron --help to display all command-line options and their descriptions.
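
The export writes the compiled model and its configuration to the output directory (clip_feature_extraction_neuronx/ above). As a quick sanity check, the exported artifacts can be loaded back with the Python API; the snippet below is a minimal sketch assuming the directory produced by the command above:

from optimum.neuron import NeuronCLIPModel

# Reload the Neuron-compiled model from the directory written by the CLI export
neuron_model = NeuronCLIPModel.from_pretrained("clip_feature_extraction_neuronx/")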

Option 2: Python API

from optimum.neuron import NeuronCLIPModel

input_shapes = {"text_batch_size": 2, "sequence_length": 77, "image_batch_size": 1, "num_channels": 3, "width": 224, "height": 224}
compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}
neuron_model = NeuronCLIPModel.from_pretrained(
    "openai/clip-vit-base-patch32",
    export=True,
    **input_shapes,
    **compiler_args,
)
# Save locally
neuron_model.save_pretrained("clip_feature_extraction_neuronx/")

# Upload to the Hugging Face Hub
neuron_model.push_to_hub(
    "clip_feature_extraction_neuronx/", repository_id="optimum/clip-vit-base-patch32-neuronx"  # Replace with your HF Hub repo id
)
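
Once pushed, the compiled model can be reloaded on any Neuron-equipped host directly from the Hub; the repository id below is the one used in the examples later on this page (replace it with your own if you pushed elsewhere):

from optimum.neuron import NeuronCLIPModel

# Load the Neuron-compiled model straight from the Hugging Face Hub
neuron_model = NeuronCLIPModel.from_pretrained("optimum/clip-vit-base-patch32-neuronx")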

NeuronCLIPModel

class optimum.neuron.NeuronCLIPModel


( model: ScriptModule config: PretrainedConfig model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None model_file_name: typing.Optional[str] = None preprocessors: typing.Optional[typing.List] = None neuron_config: typing.Optional[ForwardRef('NeuronDefaultConfig')] = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the optimum.neuron.modeling.NeuronTracedModel.from_pretrained method to load the model weights.
  • model (torch.jit._script.ScriptModule) — torch.jit._script.ScriptModule is the TorchScript module with the embedded NEFF (Neuron Executable File Format) compiled by the neuron(x) compiler.

Bare CLIP Model without any specific head on top, used for the task “feature-extraction”.

This model inherits from ~neuron.modeling.NeuronTracedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

forward


( input_ids: Tensor pixel_values: Tensor attention_mask: Tensor **kwargs )

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (Union[torch.Tensor, None] of shape (batch_size, sequence_length)) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
  • pixel_values (Union[torch.Tensor, None] of shape (batch_size, num_channels, height, width)) — Pixel values corresponding to the images in the current batch. Pixel values can be obtained from encoded images using AutoImageProcessor.

The NeuronCLIPModel forward method overrides the __call__ special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference are ignored. To include extra inputs, recompile the model with those inputs specified.

Example:

>>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor
>>> from optimum.neuron import NeuronCLIPModel

>>> processor = AutoProcessor.from_pretrained("optimum/clip-vit-base-patch32-neuronx")
>>> model = NeuronCLIPModel.from_pretrained("optimum/clip-vit-base-patch32-neuronx")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
>>> probs = logits_per_image.softmax(dim=1)
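
As a small extension of the snippet above (not part of the original example), the index of the best-matching caption can be read off the probabilities:

>>> best_idx = probs.argmax(dim=1).item()  # 0 -> "a photo of a cat", 1 -> "a photo of a dog"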

NeuronCLIPForImageClassification

class optimum.neuron.NeuronCLIPForImageClassification


( model: ScriptModule config: PretrainedConfig model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None model_file_name: typing.Optional[str] = None preprocessors: typing.Optional[typing.List] = None neuron_config: typing.Optional[ForwardRef('NeuronDefaultConfig')] = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the optimum.neuron.modeling.NeuronTracedModel.from_pretrained method to load the model weights.
  • model (torch.jit._script.ScriptModule) — torch.jit._script.ScriptModule is the TorchScript module with the embedded NEFF (Neuron Executable File Format) compiled by the neuron(x) compiler.

CLIP vision encoder with an image classification head on top (a linear layer on top of the pooled final hidden states of the patch tokens) e.g. for ImageNet.

This model inherits from ~neuron.modeling.NeuronTracedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

forward


( pixel_values: Tensor **kwargs )

Parameters

  • pixel_values (Union[torch.Tensor, None] of shape (batch_size, num_channels, height, width), defaults to None) — Pixel values corresponding to the images in the current batch. Pixel values can be obtained from encoded images using AutoImageProcessor.

The NeuronCLIPForImageClassification forward method overrides the __call__ special method. It accepts only the inputs traced during the compilation step; any additional inputs provided during inference are ignored. To include extra inputs, recompile the model with those inputs specified.

Example:

>>> import requests
>>> from PIL import Image
>>> from optimum.neuron import NeuronCLIPForImageClassification
>>> from transformers import AutoImageProcessor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> preprocessor = AutoImageProcessor.from_pretrained("optimum/clip-vit-base-patch32-image-classification-neuronx")
>>> model = NeuronCLIPForImageClassification.from_pretrained("optimum/clip-vit-base-patch32-image-classification-neuronx")

>>> inputs = preprocessor(images=image, return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> predicted_label = logits.argmax(-1).item()
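
To map the predicted index to a human-readable class name, the label mapping stored in the model configuration can be used; this sketch assumes the exported checkpoint carries the usual id2label mapping in its config:

>>> print(model.config.id2label[predicted_label])  # e.g. the ImageNet class name for the predicted index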