This model was released on 2021-10-05 and added to Hugging Face Transformers on 2022-06-29.
MobileViT
MobileViT is a lightweight vision transformer for mobile devices that combines the efficiency and inductive biases of CNNs with the global context modeling of transformers. It treats transformers as convolutions, which enables global information processing without the heavy computational cost of standard ViTs.

You can find all the original MobileViT checkpoints under the Apple organization.
- This model was contributed by matthijs.
Click on the MobileViT models in the right sidebar for more examples of how to apply MobileViT to different vision tasks.
The example below demonstrates how to do image classification with the Pipeline and the AutoModel class.
import torch
from transformers import pipeline
classifier = pipeline(
task="image-classification",
model="apple/mobilevit-small",
dtype=torch.float16, device=0,
)
preds = classifier("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg")
print(f"Prediction: {preds}\n")
Notes
- MobileViT does not operate on sequential data; it is designed purely for image tasks.
- Feature maps are used directly instead of token embeddings.
- Use MobileViTImageProcessor to preprocess images.
- If using custom preprocessing, ensure that images are in BGR format (not RGB), as expected by the pretrained weights (see the sketch after this list).
- The classification models are pretrained on ImageNet-1k.
- The segmentation models use a DeepLabV3 head and are pretrained on PASCAL VOC.
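The sketch below shows this in practice; it assumes the apple/mobilevit-small checkpoint with its default 256x256 crop size and lets MobileViTImageProcessor perform the RGB-to-BGR flip.
import requests
from PIL import Image
from transformers import MobileViTImageProcessor

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)  # PIL loads images as RGB

# do_flip_channel_order=True (the default) converts the channels to the BGR
# order expected by the pretrained weights, so no manual conversion is needed
processor = MobileViTImageProcessor.from_pretrained("apple/mobilevit-small")
inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 256, 256]) with the default crop size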
MobileViTConfig
class transformers.MobileViTConfig
< source >( num_channels = 3 image_size = 256 patch_size = 2 hidden_sizes = [144, 192, 240] neck_hidden_sizes = [16, 32, 64, 96, 128, 160, 640] num_attention_heads = 4 mlp_ratio = 2.0 expand_ratio = 4.0 hidden_act = 'silu' conv_kernel_size = 3 output_stride = 32 hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.0 classifier_dropout_prob = 0.1 initializer_range = 0.02 layer_norm_eps = 1e-05 qkv_bias = True aspp_out_channels = 256 atrous_rates = [6, 12, 18] aspp_dropout_prob = 0.1 semantic_loss_ignore_index = 255 **kwargs )
Parameters
- num_channels (int, optional, defaults to 3) — The number of input channels.
- image_size (int, optional, defaults to 256) — The size (resolution) of each image.
- patch_size (int, optional, defaults to 2) — The size (resolution) of each patch.
- hidden_sizes (list[int], optional, defaults to [144, 192, 240]) — Dimensionality (hidden size) of the Transformer encoders at each stage.
- neck_hidden_sizes (list[int], optional, defaults to [16, 32, 64, 96, 128, 160, 640]) — The number of channels for the feature maps of the backbone.
- num_attention_heads (int, optional, defaults to 4) — Number of attention heads for each attention layer in the Transformer encoder.
- mlp_ratio (float, optional, defaults to 2.0) — The ratio of the number of channels in the output of the MLP to the number of channels in the input.
- expand_ratio (float, optional, defaults to 4.0) — Expansion factor for the MobileNetv2 layers.
- hidden_act (str or function, optional, defaults to "silu") — The non-linear activation function (function or string) in the Transformer encoder and convolution layers.
- conv_kernel_size (int, optional, defaults to 3) — The size of the convolutional kernel in the MobileViT layer.
- output_stride (int, optional, defaults to 32) — The ratio of the spatial resolution of the output to the resolution of the input image.
- hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the Transformer encoder.
- attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
- classifier_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for attached classifiers.
- initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- layer_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers.
- qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values.
- aspp_out_channels (int, optional, defaults to 256) — Number of output channels used in the ASPP layer for semantic segmentation.
- atrous_rates (list[int], optional, defaults to [6, 12, 18]) — Dilation (atrous) factors used in the ASPP layer for semantic segmentation.
- aspp_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the ASPP layer for semantic segmentation.
- semantic_loss_ignore_index (int, optional, defaults to 255) — The index that is ignored by the loss function of the semantic segmentation model.
This is the configuration class to store the configuration of a MobileViTModel. It is used to instantiate a MobileViT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the MobileViT apple/mobilevit-small architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import MobileViTConfig, MobileViTModel
>>> # Initializing a mobilevit-small style configuration
>>> configuration = MobileViTConfig()
>>> # Initializing a model from the mobilevit-small style configuration
>>> model = MobileViTModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
MobileViTFeatureExtractor
Preprocesses a batch of images and optionally segmentation maps.
Overrides the __call__ method of the Preprocessor class so that both images and segmentation maps can be passed in as positional arguments.
post_process_semantic_segmentation
< source >( outputs target_sizes: typing.Optional[list[tuple]] = None ) → semantic_segmentation
Parameters
- outputs (MobileViTForSemanticSegmentation) — Raw outputs of the model.
- target_sizes (list[tuple] of length batch_size, optional) — List of tuples corresponding to the requested final size (height, width) of each prediction. If unset, predictions will not be resized.
Returns
semantic_segmentation
list[torch.Tensor] of length batch_size, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each torch.Tensor corresponds to a semantic class id.
Converts the output of MobileViTForSemanticSegmentation into semantic segmentation maps. Only supports PyTorch.
MobileViTImageProcessor
class transformers.MobileViTImageProcessor
< source >( do_resize: bool = True size: typing.Optional[dict[str, int]] = None resample: Resampling = <Resampling.BILINEAR: 2> do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_center_crop: bool = True crop_size: typing.Optional[dict[str, int]] = None do_flip_channel_order: bool = True do_reduce_labels: bool = False **kwargs )
Parameters
- do_resize (bool, optional, defaults to True) — Whether to resize the image's (height, width) dimensions to the specified size. Can be overridden by the do_resize parameter in the preprocess method.
- size (dict[str, int], optional, defaults to {"shortest_edge": 224}) — Controls the size of the output image after resizing. Can be overridden by the size parameter in the preprocess method.
- resample (PILImageResampling, optional, defaults to Resampling.BILINEAR) — Defines the resampling filter to use if resizing the image. Can be overridden by the resample parameter in the preprocess method.
- do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in the preprocess method.
- rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the preprocess method.
- do_center_crop (bool, optional, defaults to True) — Whether to crop the input at the center. If the input size is smaller than crop_size along any edge, the image is padded with 0's and then center cropped. Can be overridden by the do_center_crop parameter in the preprocess method.
- crop_size (dict[str, int], optional, defaults to {"height": 256, "width": 256}) — Desired output size (size["height"], size["width"]) when applying center-cropping. Can be overridden by the crop_size parameter in the preprocess method.
- do_flip_channel_order (bool, optional, defaults to True) — Whether to flip the color channels from RGB to BGR. Can be overridden by the do_flip_channel_order parameter in the preprocess method.
- do_reduce_labels (bool, optional, defaults to False) — Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0 is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). The background label will be replaced by 255. Can be overridden by the do_reduce_labels parameter in the preprocess method.
Constructs a MobileViT image processor.
preprocess
< source >( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']] segmentation_maps: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor'], NoneType] = None do_resize: typing.Optional[bool] = None size: typing.Optional[dict[str, int]] = None resample: Resampling = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Optional[float] = None do_center_crop: typing.Optional[bool] = None crop_size: typing.Optional[dict[str, int]] = None do_flip_channel_order: typing.Optional[bool] = None do_reduce_labels: typing.Optional[bool] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None )
Parameters
- images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.
- segmentation_maps (ImageInput, optional) — Segmentation map to preprocess.
- do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image.
- size (dict[str, int], optional, defaults to self.size) — Size of the image after resizing.
- resample (int, optional, defaults to self.resample) — Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only has an effect if do_resize is set to True.
- do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image by rescale factor.
- rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True.
- do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether to center crop the image.
- crop_size (dict[str, int], optional, defaults to self.crop_size) — Size of the center crop if do_center_crop is set to True.
- do_flip_channel_order (bool, optional, defaults to self.do_flip_channel_order) — Whether to flip the channel order of the image.
- do_reduce_labels (bool, optional, defaults to self.do_reduce_labels) — Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0 is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). The background label will be replaced by 255.
- return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of:
  - Unset: Return a list of np.ndarray.
  - TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor.
  - TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor.
  - TensorType.NUMPY or 'np': Return a batch of type np.ndarray.
  - TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray.
- data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of:
  - ChannelDimension.FIRST: image in (num_channels, height, width) format.
  - ChannelDimension.LAST: image in (height, width, num_channels) format.
- input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
  - "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
  - "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
  - "none" or ChannelDimension.NONE: image in (height, width) format.
Preprocess an image or batch of images.
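A short sketch of preprocessing an image together with a segmentation map; the zero-filled map and the checkpoint name are illustrative stand-ins for real annotations and your own model:
import numpy as np
import requests
from PIL import Image
from transformers import MobileViTImageProcessor

processor = MobileViTImageProcessor.from_pretrained("apple/deeplabv3-mobilevit-small")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Dummy segmentation map with the same spatial size as the image
segmentation_map = np.zeros((image.height, image.width), dtype=np.uint8)

encoding = processor(images=image, segmentation_maps=segmentation_map, return_tensors="pt")
print(encoding["pixel_values"].shape)  # preprocessed image
print(encoding["labels"].shape)        # preprocessed segmentation map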
post_process_semantic_segmentation
< source >( outputs target_sizes: typing.Optional[list[tuple]] = None ) → semantic_segmentation
Parameters
- outputs (MobileViTForSemanticSegmentation) — Raw outputs of the model.
- target_sizes (list[tuple] of length batch_size, optional) — List of tuples corresponding to the requested final size (height, width) of each prediction. If unset, predictions will not be resized.
Returns
semantic_segmentation
list[torch.Tensor] of length batch_size, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each torch.Tensor corresponds to a semantic class id.
Converts the output of MobileViTForSemanticSegmentation into semantic segmentation maps. Only supports PyTorch.
MobileViTImageProcessorFast
class transformers.MobileViTImageProcessorFast
< source >( **kwargs: typing_extensions.Unpack[transformers.models.mobilevit.image_processing_mobilevit_fast.MobileVitFastImageProcessorKwargs] )
Constructs a fast MobileViT image processor.
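A brief usage sketch, assuming a transformers version that ships this fast processor; use_fast=True requests it explicitly through AutoImageProcessor:
import requests
from PIL import Image
from transformers import AutoImageProcessor

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# use_fast=True selects the fast image processor when it is available
processor = AutoImageProcessor.from_pretrained("apple/mobilevit-small", use_fast=True)
inputs = processor(images=image, return_tensors="pt")
print(type(processor).__name__)
print(inputs["pixel_values"].shape)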
preprocess
< source >( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']] segmentation_maps: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor'], NoneType] = None **kwargs: typing_extensions.Unpack[transformers.models.mobilevit.image_processing_mobilevit_fast.MobileVitFastImageProcessorKwargs] ) → <class 'transformers.image_processing_base.BatchFeature'>
Parameters
- images (Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']]) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.
- segmentation_maps (ImageInput, optional) — The segmentation maps to preprocess.
- do_resize (bool, optional) — Whether to resize the image.
- size (dict[str, int], optional) — Describes the maximum input dimensions to the model.
- default_to_square (bool, optional) — Whether to default to a square image when resizing, if size is an int.
- resample (Union[PILImageResampling, F.InterpolationMode, NoneType]) — Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only has an effect if do_resize is set to True.
- do_center_crop (bool, optional) — Whether to center crop the image.
- crop_size (dict[str, int], optional) — Size of the output image after applying center_crop.
- do_rescale (bool, optional) — Whether to rescale the image.
- rescale_factor (Union[int, float, NoneType]) — Rescale factor to rescale the image by if do_rescale is set to True.
- do_normalize (bool, optional) — Whether to normalize the image.
- image_mean (Union[float, list[float], NoneType]) — Image mean to use for normalization. Only has an effect if do_normalize is set to True.
- image_std (Union[float, list[float], NoneType]) — Image standard deviation to use for normalization. Only has an effect if do_normalize is set to True.
- do_convert_rgb (bool, optional) — Whether to convert the image to RGB.
- return_tensors (Union[str, ~utils.generic.TensorType, NoneType]) — Returns stacked tensors if set to "pt", otherwise returns a list of tensors.
- data_format (~image_utils.ChannelDimension, optional) — Only ChannelDimension.FIRST is supported. Added for compatibility with slow processors.
- input_data_format (Union[str, ~image_utils.ChannelDimension, NoneType]) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
  - "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format.
  - "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format.
  - "none" or ChannelDimension.NONE: image in (height, width) format.
- device (torch.device, optional) — The device to process the images on. If unset, the device is inferred from the input images.
- disable_grouping (bool, optional) — Whether to disable grouping of images by size to process them individually and not in batches. If None, will be set to True if the images are on CPU, and False otherwise. This choice is based on empirical observations, as detailed here: https://github.com/huggingface/transformers/pull/38157
- do_flip_channel_order (bool, optional, defaults to self.do_flip_channel_order) — Whether to flip the color channels from RGB to BGR or vice versa.
- do_reduce_labels (bool, optional, defaults to self.do_reduce_labels) — Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0 is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). The background label will be replaced by 255.
Returns
<class 'transformers.image_processing_base.BatchFeature'>
- data (dict) — Dictionary of lists/arrays/tensors returned by the __call__ method ('pixel_values', etc.).
- tensor_type (Union[None, str, TensorType], optional) — You can give a tensor_type here to convert the lists of integers in PyTorch/TensorFlow/Numpy Tensors at initialization.
post_process_semantic_segmentation
< source >( outputs target_sizes: typing.Optional[list[tuple]] = None ) → semantic_segmentation
Parameters
- outputs (MobileViTForSemanticSegmentation) — Raw outputs of the model.
- target_sizes (list[tuple] of length batch_size, optional) — List of tuples corresponding to the requested final size (height, width) of each prediction. If unset, predictions will not be resized.
Returns
semantic_segmentation
list[torch.Tensor] of length batch_size, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each torch.Tensor corresponds to a semantic class id.
Converts the output of MobileViTForSemanticSegmentation into semantic segmentation maps. Only supports PyTorch.
MobileViTModel
class transformers.MobileViTModel
< source >( config: MobileViTConfig expand_output: bool = True )
Parameters
- config (MobileViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
- expand_output (bool, optional, defaults to True) — Whether to expand the output of the model using a 1x1 convolution. If True, the model will apply an additional 1x1 convolution to expand the output channels from config.neck_hidden_sizes[5] to config.neck_hidden_sizes[6].
The bare MobileViT model outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >( pixel_values: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
Parameters
- pixel_values (torch.Tensor of shape (batch_size, num_channels, image_size, image_size), optional) — The tensors corresponding to the input images. Pixel values can be obtained using MobileViTImageProcessor. See MobileViTImageProcessor.__call__() for details.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (MobileViTConfig) and inputs.
- last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
- pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, num_channels, height, width). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
The MobileViTModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
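A minimal usage sketch for the bare model; the checkpoint and image URL are illustrative:
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, MobileViTModel

image_processor = AutoImageProcessor.from_pretrained("apple/mobilevit-small")
model = MobileViTModel.from_pretrained("apple/mobilevit-small")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Feature maps rather than token embeddings: (batch_size, num_channels, height, width)
print(outputs.last_hidden_state.shape)
# Pooled over the spatial dimensions: (batch_size, hidden_size)
print(outputs.pooler_output.shape)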
MobileViTForImageClassification
class transformers.MobileViTForImageClassification
< source >( config: MobileViTConfig )
Parameters
- config (MobileViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
MobileViT model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >( pixel_values: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None labels: typing.Optional[torch.Tensor] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
Parameters
- pixel_values (torch.Tensor of shape (batch_size, num_channels, image_size, image_size), optional) — The tensors corresponding to the input images. Pixel values can be obtained using MobileViTImageProcessor. See MobileViTImageProcessor.__call__() for details.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (MobileViTConfig) and inputs.
- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
- logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the model at the output of each stage.
The MobileViTForImageClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import AutoImageProcessor, MobileViTForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> image_processor = AutoImageProcessor.from_pretrained("apple/mobilevit-small")
>>> model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-small")
>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
...
MobileViTForSemanticSegmentation
class transformers.MobileViTForSemanticSegmentation
< source >( config: MobileViTConfig )
Parameters
- config (MobileViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
MobileViT model with a semantic segmentation head on top, e.g. for Pascal VOC.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
< source >( pixel_values: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
Parameters
- pixel_values (torch.Tensor of shape (batch_size, num_channels, image_size, image_size), optional) — The tensors corresponding to the input images. Pixel values can be obtained using MobileViTImageProcessor. See MobileViTImageProcessor.__call__() for details.
- labels (torch.LongTensor of shape (batch_size, height, width), optional) — Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy).
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SemanticSegmenterOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (MobileViTConfig) and inputs.
- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
- logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel. The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is to avoid doing two interpolations and lose some quality when a user needs to resize the logits to the original image size as post-processing. You should always check your logits shape and resize as needed.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, patch_size, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The MobileViTForSemanticSegmentation forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, MobileViTForSemanticSegmentation
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image_processor = AutoImageProcessor.from_pretrained("apple/deeplabv3-mobilevit-small")
>>> model = MobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-small")
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> # logits are of shape (batch_size, num_labels, height, width)
>>> logits = outputs.logits
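The raw logits can then be converted into a per-pixel class map with the image processor, for example:
>>> segmentation = image_processor.post_process_semantic_segmentation(
...     outputs, target_sizes=[(image.height, image.width)]
... )[0]
>>> # (height, width) tensor where each entry is a semantic class id
>>> print(segmentation.shape)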