HMAR: Pretrained Multi-Scale Autoregressive Image Generation Models
Code | Paper | Project Website
Model Overview
Description:
Visual AutoRegressive modeling (VAR) has shown promise in bridging the speed and quality gap between autoregressive image models and diffusion models. VAR reformulates autoregressive modeling by decomposing an image into successive resolution scales. During inference, an image is generated by predicting all the tokens in the next (higher-resolution) scale, conditioned on all tokens in all previous (lower-resolution) scales. However, this formulation has three drawbacks: image quality suffers because all tokens in a scale are generated in parallel; sequence length grows superlinearly with image resolution; and changing the sampling schedule requires retraining.
We introduce Hierarchical Masked AutoRegressive modeling (HMAR), a new image generation algorithm that alleviates these issues, combining next-scale prediction with masked prediction to generate high-quality images with fast sampling. HMAR reformulates next-scale prediction as a Markovian process, in which the prediction of each resolution scale is conditioned only on the tokens in its immediate predecessor rather than on the tokens in all preceding scales. When predicting a resolution scale, HMAR uses a controllable multi-step masked generation procedure, generating a subset of the tokens at each step. On ImageNet 256x256 and 512x512 benchmarks, HMAR models match or outperform parameter-matched VAR, diffusion, and autoregressive baselines. We develop efficient IO-aware block-sparse attention kernels that make HMAR over 2.5x faster to train and over 1.75x faster at inference than VAR, with over 3x lower inference memory footprint. Finally, HMAR is more flexible than VAR: its sampling schedule can be changed without further training, and it can be applied to image editing tasks in a zero-shot manner.
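As a rough illustration of the algorithm described above, the following is a minimal, schematic sampling loop. It is not the released implementation: `model.predict_logits`, the random unmasking schedule, and the token shapes are all assumptions made for the sketch.

```python
import torch

def hmar_sample_sketch(model, class_label, scales, steps_per_scale):
    """Schematic HMAR sampling: Markovian next-scale prediction with
    multi-step masked generation within each scale (hypothetical API)."""
    prev_tokens = None  # the coarsest scale is conditioned on the class label alone
    for h, w in scales:  # e.g. [(1, 1), (2, 2), ..., (16, 16)]
        n = h * w
        tokens = torch.full((n,), -1, dtype=torch.long)  # -1 marks a masked position
        masked = torch.ones(n, dtype=torch.bool)
        for step in range(steps_per_scale):
            # Markovian conditioning: attend only to the immediately preceding
            # scale, not to all lower-resolution scales as in VAR.
            logits = model.predict_logits(class_label, prev_tokens, tokens)  # (n, vocab)
            sampled = torch.multinomial(logits.softmax(-1), 1).squeeze(-1)
            # Reveal a subset of the still-masked positions at each step; the
            # real schedule is configurable, a random choice is used here.
            remaining = masked.nonzero(as_tuple=True)[0]
            k = len(remaining) if step == steps_per_scale - 1 else max(1, n // steps_per_scale)
            chosen = remaining[torch.randperm(len(remaining))[:k]]
            tokens[chosen] = sampled[chosen]
            masked[chosen] = False
        prev_tokens = tokens  # sole conditioning context for the next scale
    return prev_tokens  # finest-scale tokens; a VQ decoder maps them to pixels
```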
This model is for research and development/non-commercial use only.
Model Developer: NVIDIA
Model Versions
We release four trained checkpoints for models of different sizes: hmar-d16, hmar-d20, hmar-d24, and hmar-d30, with 0.46B, 0.84B, 1.3B, and 2.4B trainable parameters, respectively.
- [hmar-dN] Given an ImageNet class label as input (an integer from 0 to 999), the model produces an image belonging to that class.
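Fetching one of these checkpoints might look like the sketch below; the repo_id and filename are illustrative placeholders, not confirmed paths.

```python
from huggingface_hub import hf_hub_download

# Parameter counts from the list above; keys match the released model names.
CHECKPOINTS = {"hmar-d16": "0.46B", "hmar-d20": "0.84B",
               "hmar-d24": "1.3B", "hmar-d30": "2.4B"}

# repo_id and filename below are assumptions for illustration only.
ckpt_path = hf_hub_download(repo_id="nvidia/HMAR", filename="hmar-d30.pth")
```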
License:
This model is released under the NVIDIA One-Way Noncommercial License (NSCLv1). For a custom license, please contact [email protected].
Under the NVIDIA One-Way Noncommercial License (NSCLv1), NVIDIA confirms:
- Models are not for commercial use.
- NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models.
Deployment Geography:
Global
Use Case:
Conditional Image Generation: Generation of images conditioned on a class label from the ImageNet dataset.
Release Date:
- Github: 07/08/2025
- Huggingface: 07/08/2025
Model Architecture:
Architecture Type: Transformer
Network Architecture: Block-wise attention DiT
This model was developed based on VAR.
Input
Input Type(s): Class label (integer between 0 and 999)
Input Format(s):
- Class label: Integer
Input Parameters:
- Class label: One-dimensional (1D)
Other Properties Related to Input:
- The sampling configuration can be modified at config/sampling/hmar-d30.yaml
- The number of masked sampling steps can be changed in utils/sampling_arg_util.py
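As a hedged sketch of how such an override could look in practice (the YAML key shown is an assumption; consult the files above for the actual names):

```python
import yaml  # pip install pyyaml

# Load the per-model sampling config shipped with the repository.
with open("config/sampling/hmar-d30.yaml") as f:
    cfg = yaml.safe_load(f)

# Hypothetical key: the actual name of the masked-steps setting lives in
# utils/sampling_arg_util.py and may differ from this illustration.
cfg["num_masked_steps"] = 2
```

Because HMAR's sampling schedule is decoupled from training, changes like this do not require retraining the model.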
Output
Output Type: Image
Output Format: PNG
Output Parameters: Image: Two-dimensional (2D)
Other Properties Related to Output: The generated images are RGB images of size 256x256.
- Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Software Integration
Runtime Engine(s):
Not Applicable (N/A)
Supported Hardware Microarchitecture Compatibility:
- NVIDIA Blackwell
- NVIDIA Hopper
Note: We have only tested inference with BF16 precision.
Operating System(s):
- Linux (We have not tested on other operating systems.)
Usage
See the HMAR repository for details.
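The repository defines the actual entry points; the snippet below is only a minimal sketch under assumed names (`load_hmar` and `model.generate` are hypothetical stand-ins):

```python
import torch
from torchvision.utils import save_image

# Hypothetical loader and generate() API; see the HMAR repository for the
# real entry points.
model = load_hmar("hmar-d30").to("cuda").eval()

label = torch.tensor([207], device="cuda")  # ImageNet class 207: golden retriever
with torch.no_grad(), torch.autocast("cuda", dtype=torch.bfloat16):  # BF16, as tested
    images = model.generate(label)  # assumed to return (B, 3, 256, 256) in [0, 1]

save_image(images, "sample.png")  # PNG output, matching the Output section
```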
Training, Testing, and Evaluation Datasets:
We use the ImageNet dataset in our experiments, for training, testing, and evaluation alike. ImageNet is a widely used dataset that spans 1,000 object classes and contains 1,281,167 training images, 50,000 validation images, and 100,000 test images. Each data sample is an image-label pair.
The total size (in number of data points): 1,431,167
Total number of datasets: 1
Dataset partition: Training [89.5 %], testing [7 %], validation [3.5 %]
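The partition percentages follow directly from the split sizes:

```python
# Recomputing the partition percentages quoted above.
splits = {"training": 1_281_167, "testing": 100_000, "validation": 50_000}
total = sum(splits.values())  # 1,431,167 data points
for name, n in splits.items():
    print(f"{name}: {100 * n / total:.1f}%")  # 89.5%, 7.0%, 3.5%
```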
Training Dataset: Official ImageNet training dataset
Link: https://www.image-net.org
Test Dataset: Official ImageNet test dataset
Link: https://www.image-net.org
Evaluation Dataset: Official ImageNet validation dataset
Link: https://www.image-net.org
Evaluation
Please see our technical paper for detailed evaluations.
Inference:
Acceleration Engine: PyTorch, FlashAttention
Test Hardware: H100, A100, GB200
- Minimum 1 GPU card; multi-node setups require an InfiniBand / RoCE connection
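HMAR ships custom IO-aware block-sparse kernels; as a generic, hedged illustration (standard PyTorch, not the repository's kernels), PyTorch's attention can be pinned to the FlashAttention backend on supported GPUs like this:

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

# Illustrative only: force the FlashAttention backend of PyTorch's
# scaled_dot_product_attention (requires a supported NVIDIA GPU, BF16/FP16).
q = k = v = torch.randn(1, 8, 256, 64, device="cuda", dtype=torch.bfloat16)
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v)
```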
Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.
For more detailed information on ethical considerations for this model, please see the Explainability, Bias, Safety & Security, and Privacy subcards below.
Please report security vulnerabilities or NVIDIA AI Concerns here.
Plus Plus (++) Promise
We value you, the datasets, the diversity they represent, and what we have been entrusted with. This model and its associated data have been:
- Verified to comply with current applicable disclosure laws, regulations, and industry standards.
- Verified to comply with applicable privacy labeling requirements.
- Annotated to describe the collector/source (NVIDIA or a third-party).
- Characterized for technical limitations.
- Reviewed to ensure proper disclosure is accessible to, maintained for, and in compliance with NVIDIA data subjects and their requests.
- Reviewed before release.
- Tagged for known restrictions and potential safety implications.
Bias
Field | Response |
---|---|
Participation considerations from adversely impacted groups [protected classes] in model design and testing: | None |
Measures taken to mitigate against unwanted bias: | None |
Explainability
Field | Response |
---|---|
Intended Application & Domain: | Image Generation |
Model Type: | Transformer |
Intended Users: | Research |
Output: | Image |
Describe how the model works: | Generates images based on a class label from ImageNet |
Technical Limitations: | Due to the stochastic nature of the model, it may at times fail to follow the class label on which generation is conditioned. |
Verified to have met prescribed NVIDIA quality standards: | Yes |
Performance Metrics: | We report standard metrics for conditional image generation models: FID, IS, Precision, and Recall (see our technical paper for the full results). In addition, we perform human verification of the generated outputs to validate image quality and adherence to the conditioning label. |
Potential Known Risks: | None Known |
Licensing: | NVIDIA One-Way Noncommercial License (NSCLv1) |
Privacy
Field | Response |
---|---|
Generatable or reverse engineerable personal data? | No |
Personal data used to create this model? | No |
How often is dataset reviewed? | Before Release |
Is there provenance for all datasets used in training? | Not Applicable. Only externally-sourced data was used. |
Does data labeling (annotation, metadata) comply with privacy laws? | Yes. |
Is data compliant with data subject requests for data correction or removal, if such a request was made? | No, not possible with externally-sourced data. |
Applicable Privacy Policy | https://www.nvidia.com/en-us/about-nvidia/privacy-policy/ |
Safety
Field | Response |
---|---|
Model Application(s): | Conditional Image Generation |
Describe the life critical impact (if present). | None Known |
Use Case Restrictions: | NVIDIA One-Way Noncommercial License (NSCLv1) |
Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Dataset access is restricted during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face and may become available on cloud providers' model catalogs. |