
HGNet-V2

Overview

An HGNet-V2 (High Performance GPU Net) image classification model. The HGNet architecture was proposed in HGNET: A Hierarchical Feature Guided Network for Occupancy Flow Field Prediction by Zhan Chen, Chen Tang, and Lu Xiong.

The abstract from the HGNET paper is the following:

Predicting the motion of multiple traffic participants has always been one of the most challenging tasks in autonomous driving. The recently proposed occupancy flow field prediction method has shown to be a more effective and scalable representation compared to general trajectory prediction methods. However, in complex multi-agent traffic scenarios, it remains difficult to model the interactions among various factors and the dependencies among prediction outputs at different time steps. In view of this, we propose a transformer-based hierarchical feature guided network (HGNET), which can efficiently extract features of agents and map information from visual and vectorized inputs, modeling multimodal interaction relationships. Second, we design the Feature-Guided Attention (FGAT) module to leverage the potential guiding effects between different prediction targets, thereby improving prediction accuracy. Additionally, to enhance the temporal consistency and causal relationships of the predictions, we propose a Time Series Memory framework to learn the conditional distribution models of the prediction outputs at future time steps from multivariate time series. The results demonstrate that our model exhibits competitive performance, which ranks 3rd in the 2024 Waymo Occupancy and Flow Prediction Challenge.

This model was contributed by VladOS95-cyber. The original code can be found here.

HGNetV2Config

class transformers.HGNetV2Config

( num_channels = 3 embedding_size = 64 depths = [3, 4, 6, 3] hidden_sizes = [256, 512, 1024, 2048] hidden_act = 'relu' out_features = None out_indices = None stem_channels = [3, 32, 48] stage_in_channels = [48, 128, 512, 1024] stage_mid_channels = [48, 96, 192, 384] stage_out_channels = [128, 512, 1024, 2048] stage_num_blocks = [1, 1, 3, 1] stage_downsample = [False, True, True, True] stage_light_block = [False, False, True, True] stage_kernel_size = [3, 3, 5, 5] stage_numb_of_layers = [6, 6, 6, 6] use_learnable_affine_block = False initializer_range = 0.02 **kwargs )

Parameters

  • num_channels (int, optional, defaults to 3) — The number of input channels.
  • embedding_size (int, optional, defaults to 64) — Dimensionality (hidden size) for the embedding layer.
  • depths (List[int], optional, defaults to [3, 4, 6, 3]) — Depth (number of layers) for each stage.
  • hidden_sizes (List[int], optional, defaults to [256, 512, 1024, 2048]) — Dimensionality (hidden size) at each stage.
  • hidden_act (str, optional, defaults to "relu") — The non-linear activation function in each block. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
  • out_features (List[str], optional) — If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc. (depending on how many stages the model has). If unset and out_indices is set, will default to the corresponding stages. If unset and out_indices is unset, will default to the last stage. Must be in the same order as defined in the stage_names attribute.
  • out_indices (List[int], optional) — If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and out_features is set, will default to the corresponding stages. If unset and out_features is unset, will default to the last stage. Must be in the same order as defined in the stage_names attribute.
  • stem_channels (List[int], optional, defaults to [3, 32, 48]) — Channel dimensions for the stem layers:
    • First number (3) is input image channels
    • Second number (32) is intermediate stem channels
    • Third number (48) is output stem channels
  • stage_in_channels (List[int], optional, defaults to [48, 128, 512, 1024]) — Input channel dimensions for each stage of the backbone. This defines how many channels the input to each stage will have.
  • stage_mid_channels (List[int], optional, defaults to [48, 96, 192, 384]) — Mid-channel dimensions for each stage of the backbone. This defines the number of channels used in the intermediate layers of each stage.
  • stage_out_channels (List[int], optional, defaults to [128, 512, 1024, 2048]) — Output channel dimensions for each stage of the backbone. This defines how many channels the output of each stage will have.
  • stage_num_blocks (List[int], optional, defaults to [1, 1, 3, 1]) — Number of blocks to be used in each stage of the backbone. This controls the depth of each stage by specifying how many convolutional blocks to stack.
  • stage_downsample (List[bool], optional, defaults to [False, True, True, True]) — Indicates whether to downsample the feature maps at each stage. If True, the spatial dimensions of the feature maps will be reduced.
  • stage_light_block (List[bool], optional, defaults to [False, False, True, True]) — Indicates whether to use light blocks in each stage. Light blocks are a variant of convolutional blocks that may have fewer parameters.
  • stage_kernel_size (List[int], optional, defaults to [3, 3, 5, 5]) — Kernel sizes for the convolutional layers in each stage.
  • stage_numb_of_layers (List[int], optional, defaults to [6, 6, 6, 6]) — Number of layers to be used in each block of the stage.
  • use_learnable_affine_block (bool, optional, defaults to False) — Whether to use Learnable Affine Blocks (LAB) in the network. LAB adds learnable scale and bias parameters after certain operations.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

This is the configuration class to store the configuration of a HGNetV2Backbone. It is used to instantiate a HGNet-V2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of D-FINE-X-COCO B4 ("ustc-community/dfine_x_coco"). Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
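
A minimal configuration sketch (following the usual Transformers configuration pattern; it is not taken verbatim from the original reference):

>>> from transformers import HGNetV2Config, HGNetV2Backbone

>>> # Initializing a default HGNet-V2 style configuration
>>> configuration = HGNetV2Config()

>>> # Initializing a backbone (with random weights) from that configuration
>>> model = HGNetV2Backbone(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config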

HGNetV2Backbone

class transformers.HGNetV2Backbone

( config: HGNetV2Config )

forward

( pixel_values: Tensor output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.modeling_outputs.BackboneOutput or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See RTDetrImageProcessor.__call__() for details.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.modeling_outputs.BackboneOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.BackboneOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (HGNetV2Config) and inputs.

  • feature_maps (tuple(torch.FloatTensor) of shape (batch_size, num_channels, height, width)) — Feature maps of the stages.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size) or (batch_size, num_channels, height, width), depending on the backbone.

    Hidden-states of the model at the output of each stage plus the initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Only applicable if the backbone uses attention.

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The HGNetV2Backbone forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

>>> from transformers import HGNetV2Config, HGNetV2Backbone
>>> import torch

>>> config = HGNetV2Config()
>>> model = HGNetV2Backbone(config)

>>> pixel_values = torch.randn(1, 3, 224, 224)

>>> with torch.no_grad():
...     outputs = model(pixel_values)

>>> feature_maps = outputs.feature_maps
>>> list(feature_maps[-1].shape)
[1, 2048, 7, 7]
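
To pull features from more than one stage, out_features (documented above) can be set on the configuration. The following is a hedged sketch of that usage, assuming the default four-stage layout:

>>> from transformers import HGNetV2Config, HGNetV2Backbone
>>> import torch

>>> # Request the outputs of the last two stages instead of only the final one
>>> config = HGNetV2Config(out_features=["stage3", "stage4"])
>>> model = HGNetV2Backbone(config)

>>> pixel_values = torch.randn(1, 3, 224, 224)
>>> with torch.no_grad():
...     outputs = model(pixel_values)

>>> len(outputs.feature_maps)
2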

HGNetV2ForImageClassification

class transformers.HGNetV2ForImageClassification

( config: HGNetV2Config )

Parameters

  • config (HGNetV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

HGNetV2 Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet.

This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
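
As a hedged sketch (not part of the original reference), the classification head can be re-sized for fine-tuning through the standard from_pretrained() arguments num_labels and ignore_mismatched_sizes; the checkpoint name is reused from the example further below:

>>> from transformers import HGNetV2ForImageClassification

>>> # Re-initialize the classification head for 10 classes; the backbone weights are kept
>>> model = HGNetV2ForImageClassification.from_pretrained(
...     "ustc-community/hgnet-v2",
...     num_labels=10,
...     ignore_mismatched_sizes=True,
... )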

forward

( pixel_values: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See RTDetrImageProcessor.__call__() for details.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
  • labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).

Returns

transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor)

A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (HGNetV2Config) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
  • logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the model at the output of each stage.

The HGNetV2ForImageClassification forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

>>> import torch
>>> import requests
>>> from transformers import HGNetV2ForImageClassification, AutoImageProcessor
>>> from PIL import Image

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> model = HGNetV2ForImageClassification.from_pretrained("ustc-community/hgnet-v2")
>>> processor = AutoImageProcessor.from_pretrained("ustc-community/hgnet-v2")

>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> outputs.logits.shape
torch.Size([1, 2])
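
Continuing the snippet above, passing labels additionally returns the classification loss described in the parameters; a brief sketch with a dummy label:

>>> labels = torch.tensor([0])  # dummy target in [0, config.num_labels - 1]
>>> with torch.no_grad():
...     outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss  # cross-entropy loss since config.num_labels > 1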