---
tags:
- vllm
- vision
- w8a8
license: apache-2.0
license_link: >-
https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: Qwen/Qwen2-VL-72B-Instruct
library_name: transformers
---
# Qwen2-VL-72B-Instruct-quantized-w8a8
## Model Overview
- **Model Architecture:** Qwen/Qwen2-VL-72B-Instruct
- **Input:** Vision-Text
- **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT8
  - **Activation quantization:** INT8
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic
Quantized version of [Qwen/Qwen2-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct).
### Model Optimizations
This model was obtained by quantizing the weights and activations of [Qwen/Qwen2-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct) to the INT8 data type, ready for inference with vLLM >= 0.5.2.
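At a high level, a W8A8 scheme stores each linear layer's weights as INT8 (typically with per-channel scales) and quantizes activations to INT8 at runtime. The sketch below is a minimal, illustrative round-to-nearest example of symmetric per-channel INT8 weight quantization in plain PyTorch; the released checkpoint was instead produced with GPTQ-based calibration via llm-compressor (see the Creation section), so treat this only as an intuition aid.
```python
import torch

# Illustrative only: symmetric per-channel INT8 quantization of a weight matrix.
# The released checkpoint was produced with GPTQ calibration in llm-compressor,
# not with this naive round-to-nearest sketch.
def quantize_int8_per_channel(weight: torch.Tensor):
    # one scale per output channel (row), mapping the max magnitude to 127
    scales = weight.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(weight / scales), -128, 127).to(torch.int8)
    return q, scales

def dequantize(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scales

w = torch.randn(8, 16)
q, s = quantize_int8_per_channel(w)
print("max abs error:", (w - dequantize(q, s)).abs().max().item())
```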
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams
# prepare model
llm = LLM(
    model="neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)
# prepare inputs
question = "What is the content of this image?"
inputs = {
    "prompt": f"<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>{question}<|im_end|>\n<|im_start|>assistant\n",
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}
# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
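For example, once an OpenAI-compatible server has been started (e.g. with `vllm serve neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8 --tensor_parallel_size 4`), requests can be sent with the standard `openai` client. The snippet below is a sketch that assumes the server is reachable at `http://localhost:8000/v1` and uses a placeholder image URL.
```python
from openai import OpenAI

# Assumes a vLLM OpenAI-compatible server is already running, e.g.:
#   vllm serve neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8 --tensor_parallel_size 4
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8",
    messages=[
        {
            "role": "user",
            "content": [
                # Any reachable image URL works here; this one is a placeholder.
                {"type": "image_url", "image_url": {"url": "https://example.com/cherry_blossom.jpg"}},
                {"type": "text", "text": "What is the content of this image?"},
            ],
        }
    ],
    max_tokens=64,
    temperature=0.2,
)
print(response.choices[0].message.content)
```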
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below, as part of a multimodal announcement blog.
### Model Creation Code
```python
import base64
from io import BytesIO
import torch
from datasets import load_dataset
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import TraceableQwen2VLForConditionalGeneration
from llmcompressor.transformers.utils.data_collator import qwen2_vl_data_collator
# Load model.
model_id = "Qwen/Qwen2-VL-72B-Instruct"
model = TraceableQwen2VLForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
# Oneshot arguments
DATASET_ID = "lmms-lab/flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048
# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)
dampening_frac=0.01
# Apply chat template and tokenize inputs.
def preprocess_and_tokenize(example):
    # preprocess
    buffered = BytesIO()
    example["image"].save(buffered, format="PNG")
    encoded_image = base64.b64encode(buffered.getvalue())
    encoded_image_text = encoded_image.decode("utf-8")
    base64_qwen = f"data:image;base64,{encoded_image_text}"
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": base64_qwen},
                {"type": "text", "text": "What does the image show?"},
            ],
        }
    ]
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)

    # tokenize
    return processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
    )
ds = ds.map(preprocess_and_tokenize, remove_columns=ds["calibration"].column_names)
# Recipe
recipe = [
    GPTQModifier(
        targets="Linear",
        scheme="W8A8",
        sequential_targets=["Qwen2VLDecoderLayer"],
        ignore=["lm_head", "re:visual.*"],
        dampening_frac=dampening_frac,
    ),
]
# Perform oneshot
SAVE_DIR = model_id.split("/")[1] + "-quantized.w8a8"
oneshot(
    model=model,
    tokenizer=model_id,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    trust_remote_code_model=True,
    data_collator=qwen2_vl_data_collator,
    output_dir=SAVE_DIR,
)
```
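Note that `oneshot` writes the compressed model to `output_dir`, but depending on the llm-compressor version the multimodal processor files may not be saved alongside it. A small, optional follow-up step (assuming the `SAVE_DIR` defined above) makes the output directory self-contained:
```python
# Optional: save the multimodal processor (tokenizer + image processor) next to
# the quantized weights so SAVE_DIR can be loaded directly by vLLM/transformers.
processor.save_pretrained(SAVE_DIR)

# Quick sanity check that the saved directory loads back.
from transformers import AutoProcessor
reloaded = AutoProcessor.from_pretrained(SAVE_DIR, trust_remote_code=True)
print(type(reloaded).__name__)
```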
## Evaluation
### Vision Tasks
- vqav2
- docvqa
- mathvista
- mmmu
- chartqa

Evaluation commands:
```
vllm serve neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8 --tensor_parallel_size 4 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7

python -m eval.run eval_vllm \
  --model_name neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8 \
  --url http://0.0.0.0:8000 \
  --output_dir ~/tmp \
  --eval_name <vision_task_name>
```
### Accuracy

| Category | Metric | Qwen/Qwen2-VL-72B-Instruct | neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8 | Recovery (%) |
|----------|--------|----------------------------|--------------------------------------------------|--------------|
| Vision | MMMU (val, CoT)<br>explicit_prompt_relaxed_correctness | 62.11 | 61.78 | 99.47% |
| | VQAv2 (val)<br>vqa_match | 82.51 | 82.50 | 99.99% |
| | DocVQA (val)<br>anls | 95.01 | 94.90 | 99.88% |
| | ChartQA (test, CoT)<br>anywhere_in_answer_relaxed_correctness | 83.40 | 83.32 | 99.90% |
| | Mathvista (testmini, CoT)<br>explicit_prompt_relaxed_correctness | 66.57 | 69.57 | 104.51% |
| | **Average Score** | 77.12 | 77.21 | 100.12% |
| Text | MGSM (CoT) | 68.60 | 67.62 | 98.57% |
| | MMLU (5-shot) | 82.70 | 82.83 | 100.16% |
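For reference, the Recovery column is consistent with the ratio of the quantized score to the baseline score; a minimal check using the MMMU row above:
```python
# Recovery (%) = quantized score / baseline score * 100
baseline, quantized = 62.11, 61.78  # MMMU (val, CoT) from the table above
print(f"{quantized / baseline * 100:.2f}%")  # 99.47%, matching the table
```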
## Inference Performance
**Latency benchmarks**

| Hardware | Number of GPUs | Model | Average Cost Reduction | Document Visual Question Answering<br>1680W x 2240H<br>64/128<br>Latency (s) | QPD | Visual Reasoning<br>640W x 480H<br>128/128<br>Latency (s) | QPD | Image Captioning<br>480W x 360H<br>0/128<br>Latency (s) | QPD |
|---|---|---|---|---|---|---|---|---|---|
| A100 | 4 | Qwen/Qwen2-VL-72B-Instruct | | 6.5 | 77 | 4.6 | 110 | 4.4 | 113 |
| A100 | 2 | neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8 | 1.85 | 7.2 | 139 | 4.9 | 206 | 4.8 | 211 |
| A100 | 1 | neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16 | 3.32 | 10.0 | 202 | 5.0 | 398 | 4.8 | 419 |
| H100 | 4 | Qwen/Qwen2-VL-72B-Instruct | | 4.4 | 66 | 3.0 | 97 | 2.9 | 99 |
| H100 | 2 | neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic | 1.79 | 4.7 | 119 | 3.3 | 173 | 3.2 | 177 |
| H100 | 1 | neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16 | 2.60 | 6.4 | 172 | 4.3 | 253 | 4.2 | 259 |
**Maximum throughput benchmarks**

| Hardware | Model | Average Cost Reduction | Document Visual Question Answering<br>1680W x 2240H<br>64/128<br>Maximum throughput (QPS) | QPD | Visual Reasoning<br>640W x 480H<br>128/128<br>Maximum throughput (QPS) | QPD | Image Captioning<br>480W x 360H<br>0/128<br>Maximum throughput (QPS) | QPD |
|---|---|---|---|---|---|---|---|---|
| A100x4 | Qwen/Qwen2-VL-72B-Instruct | | 0.3 | 169 | 1.1 | 538 | 1.2 | 595 |
| A100x4 | neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8 | 1.84 | 1.2 | 586 | 4.0 | 2042 | 4.6 | 2270 |
| A100x4 | neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16 | 2.73 | 2.4 | 1256 | 12.8 | 6364 | 16.0 | 8076 |
| H100x4 | Qwen/Qwen2-VL-72B-Instruct | | 0.5 | 137 | 1.2 | 356 | 1.3 | 377 |
| H100x4 | neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic | 1.70 | 1.6 | 457 | 4.4 | 1207 | 4.8 | 1296 |
| H100x4 | neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16 | 2.35 | 5.2 | 1400 | 13.2 | 3640 | 14.4 | 3976 |