Qwen2.5-VL-72B-Instruct-quantized-FP8-Dynamic

Model Overview

  • Model Architecture: Qwen2.5-VL-72B-Instruct
    • Input: Vision-Text
    • Output: Text
  • Model Optimizations:
    • Weight quantization: FP8
    • Activation quantization: FP8
  • Release Date: 2/24/2025
  • Version: 1.0
  • Model Developers: Neural Magic

Quantized version of Qwen/Qwen2.5-VL-72B-Instruct.

Model Optimizations

This model was obtained by quantizing the weights and activations of Qwen/Qwen2.5-VL-72B-Instruct to the FP8 data type, making it ready for inference with vLLM >= 0.5.2. Weight scales are static and stored with the checkpoint, while activation scales are computed dynamically per token at runtime, which is what the "Dynamic" in the name refers to.
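
To illustrate what "dynamic" means here, the sketch below shows per-token FP8 activation quantization in plain PyTorch. This is only a conceptual sketch of the scheme, not the fused kernels that vLLM and llm-compressor actually use, and the helper name is made up:

import torch

def quantize_fp8_per_token(x: torch.Tensor):
    """Compute one scale per token (row) and cast activations to FP8 (e4m3)."""
    finfo = torch.finfo(torch.float8_e4m3fn)
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / finfo.max
    x_fp8 = (x / scale).clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)
    return x_fp8, scale  # downstream FP8 matmuls rescale their outputs by `scale`

x = torch.randn(4, 8192)
x_fp8, scale = quantize_fp8_per_token(x)
print(x_fp8.dtype, scale.shape)  # torch.float8_e4m3fn, (4, 1)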

Deployment

Use with vLLM

This model can be deployed efficiently using the vLLM backend, as shown in the example below.

from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams

# prepare model
llm = LLM(
    model="neuralmagic/Qwen2.5-VL-72B-Instruct-FP8-Dynamic",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# prepare inputs
question = "What is the content of this image?"
inputs = {
    "prompt": f"<|user|>\n<|image_1|>\n{question}<|end|>\n<|assistant|>\n",
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}

# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT  : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")

vLLM also supports OpenAI-compatible serving. See the documentation for more details.
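
For instance, after launching an OpenAI-compatible server for this checkpoint (e.g. vllm serve neuralmagic/Qwen2.5-VL-72B-Instruct-FP8-Dynamic --tensor-parallel-size 4; the tensor-parallel size and default port 8000 here are assumptions, adjust for your hardware), it can be queried with the standard openai client. This is a minimal sketch, and the image URL is a placeholder:

from openai import OpenAI

# Point the OpenAI client at the local vLLM server (default port 8000; no real key needed).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Qwen2.5-VL-72B-Instruct-FP8-Dynamic",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/cherry_blossom.jpg"}},  # placeholder URL
            {"type": "text", "text": "What is the content of this image?"},
        ],
    }],
    temperature=0.2,
    max_tokens=64,
)
print(response.choices[0].message.content)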

Creation

This model was created with llm-compressor by running the code snippet below, as part of a multimodal announcement blog.

Model Creation Code
import requests
import torch
from PIL import Image
from transformers import AutoProcessor
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import (
    TraceableQwen2_5_VLForConditionalGeneration,
)
from llmcompressor.modifiers.quantization import QuantizationModifier

# Load model.
model_id = "Qwen/Qwen2.5-VL-72B-Instruct"
model = TraceableQwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Recipe
recipe = [
    QuantizationModifier(
        targets="Linear",
        scheme="FP8_DYNAMIC",
        sequential_targets=["Qwen2_5_VLDecoderLayer"],
        ignore=["re:.*lm_head", "re:vision_tower.*", "re:multi_modal_projector.*"],
    ),
]

SAVE_DIR = f"{model_id.split('/')[1]}-FP8-Dynamic"

# Perform oneshot
oneshot(
    model=model,
    recipe=recipe,
    trust_remote_code_model=True,
    output_dir=SAVE_DIR
)
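
The oneshot call writes the compressed weights and config to SAVE_DIR. As a small follow-up sketch (an assumption, not part of the original recipe), saving the processor alongside them makes the output directory a self-contained checkpoint, and a quick listing confirms the files were written:

# Save the processor so SAVE_DIR can be loaded as a standalone checkpoint.
processor.save_pretrained(SAVE_DIR)

# Quick sanity check that the compressed checkpoint was written.
import os
print(sorted(os.listdir(SAVE_DIR)))  # expect config.json and *.safetensors shards, among others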

Evaluation

The model was evaluated using mistral-evals for vision-related tasks and using lm_evaluation_harness for select text-based benchmarks. The evaluations were conducted using the following commands:

Evaluation Commands

Vision Tasks

  • vqav2
  • docvqa
  • mathvista
  • mmmu
  • chartqa

vllm serve neuralmagic/Qwen2.5-VL-72B-Instruct-FP8-Dynamic --tensor_parallel_size <n> --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7

python -m eval.run eval_vllm \
        --model_name neuralmagic/Qwen2.5-VL-72B-Instruct-FP8-Dynamic \
        --url http://0.0.0.0:8000 \
        --output_dir ~/tmp \
        --eval_name <vision_task_name>

Text-based Tasks

MMLU

lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks mmlu \
  --num_fewshot 5 \
  --batch_size auto \
  --output_path output_dir

MGSM

lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \
  --tasks mgsm_cot_native \
  --num_fewshot 0 \
  --batch_size auto \
  --output_path output_dir

Accuracy

| Category | Benchmark | Metric | Qwen/Qwen2.5-VL-72B-Instruct | neuralmagic/Qwen2.5-VL-72B-Instruct-FP8-Dynamic | Recovery (%) |
|---|---|---|---|---|---|
| Vision | MMMU (val, CoT) | explicit_prompt_relaxed_correctness | 64.33 | 66.88 | 103.96% |
| Vision | VQAv2 (val) | vqa_match | 81.94 | 81.94 | 100.00% |
| Vision | DocVQA (val) | anls | 94.71 | 94.64 | 99.93% |
| Vision | ChartQA (test, CoT) | anywhere_in_answer_relaxed_correctness | 88.96 | 89.04 | 100.09% |
| Vision | Mathvista (testmini, CoT) | explicit_prompt_relaxed_correctness | 78.18 | 77.78 | 99.49% |
| Vision | Average Score | | 81.62 | 81.86 | 100.29% |
| Text | MGSM (CoT) | | 75.45 | 49.65 | 65.81% |
| Text | MMLU (5-shot) | | 86.16 | 86.12 | 99.95% |

Inference Performance

This model achieves up to 1.79x speedup in single-stream deployment and up to 1.84x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario. The following performance benchmarks were conducted with vLLM version 0.7.2 and GuideLLM.

Benchmarking Command

guidellm --model neuralmagic/Qwen2.5-VL-72B-Instruct-FP8-Dynamic --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max-seconds 120 --backend aiohttp_server

Single-stream performance (measured with vLLM version 0.7.2)

Use case profiles (Image Size (WxH) / prompt tokens / generation tokens):

  • Document Visual Question Answering: 1680W x 2240H, 64/128
  • Visual Reasoning: 640W x 480H, 128/128
  • Image Captioning: 480W x 360H, 0/128

| Hardware | Number of GPUs | Model | Average Cost Reduction | Document VQA Latency (s) | Document VQA QPD | Visual Reasoning Latency (s) | Visual Reasoning QPD | Image Captioning Latency (s) | Image Captioning QPD |
|---|---|---|---|---|---|---|---|---|---|
| A100 | 4 | Qwen/Qwen2.5-VL-72B-Instruct | | 6.4 | 78 | 4.5 | 111 | 4.4 | 113 |
| A100 | 2 | neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w8a8 | 1.85 | 7.0 | 143 | 4.9 | 205 | 4.8 | 211 |
| A100 | 1 | neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w4a16 | 3.33 | 9.4 | 213 | 5.1 | 396 | 4.8 | 420 |
| H100 | 4 | Qwen/Qwen2.5-VL-72B-Instruct | | 4.3 | 68 | 3.0 | 97 | 2.9 | 100 |
| H100 | 2 | neuralmagic/Qwen2.5-VL-72B-Instruct-FP8-Dynamic | 1.79 | 4.6 | 122 | 3.3 | 173 | 3.2 | 177 |
| H100 | 1 | neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w4a16 | 5.66 | 4.3 | 252 | 4.3 | 252 | 1.0 | 1065 |

QPD: Queries per dollar, based on on-demand cost at Lambda Labs (observed on 2/18/2025).

Multi-stream asynchronous performance (measured with vLLM version 0.7.2)

Use case profiles (Image Size (WxH) / prompt tokens / generation tokens):

  • Document Visual Question Answering: 1680W x 2240H, 64/128
  • Visual Reasoning: 640W x 480H, 128/128
  • Image Captioning: 480W x 360H, 0/128

| Hardware | Model | Average Cost Reduction | Document VQA Max Throughput (QPS) | Document VQA QPD | Visual Reasoning Max Throughput (QPS) | Visual Reasoning QPD | Image Captioning Max Throughput (QPS) | Image Captioning QPD |
|---|---|---|---|---|---|---|---|---|
| A100x4 | Qwen/Qwen2.5-VL-72B-Instruct | | 0.4 | 180 | 1.1 | 539 | 1.2 | 595 |
| A100x4 | neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w8a8 | 1.80 | 1.2 | 578 | 4.0 | 2040 | 4.6 | 2266 |
| A100x4 | neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w4a16 | 2.75 | 2.8 | 1364 | 12.8 | 6352 | 16.4 | 8148 |
| H100x4 | Qwen/Qwen2.5-VL-72B-Instruct | | 0.5 | 134 | 1.2 | 357 | 1.3 | 379 |
| H100x4 | neuralmagic/Qwen2.5-VL-72B-Instruct-FP8-Dynamic | 1.73 | 1.8 | 479 | 4.4 | 1203 | 4.8 | 1296 |
| H100x4 | neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w4a16 | 8.27 | 13.2 | 3652 | 13.2 | 3652 | 99.2 | 27108 |

QPS: Queries per second. QPD: Queries per dollar, based on on-demand cost at Lambda Labs (observed on 2/18/2025).
