---
tags:
- vllm
- vision
- w4a16
license: apache-2.0
license_link: >-
  https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: Qwen/Qwen2-VL-72B-Instruct
library_name: transformers
---

# Qwen2-VL-72B-Instruct-quantized-w4a16

## Model Overview
- **Model Architecture:** Qwen/Qwen2-VL-72B-Instruct
  - **Input:** Vision-Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT4
  - **Activation quantization:** FP16
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [Qwen/Qwen2-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct).

### Model Optimizations

This model was obtained by quantizing the weights of [Qwen/Qwen2-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-72B-Instruct) to the INT4 data type, ready for inference with vLLM >= 0.5.2.
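
To make the scheme concrete, the short sketch below mimics symmetric, group-wise (group_size=128) INT4 weight quantization with activations left in FP16. The helper name and tensor shapes are illustrative assumptions; this is not the llm-compressor implementation (the actual recipe is in the Creation section below).

```python
import torch

def quantize_group_int4(weight: torch.Tensor, group_size: int = 128):
    # Illustrative W4A16 sketch: one scale per group of `group_size` weights,
    # symmetric 4-bit integer range [-8, 7]; activations are never quantized.
    out_features, in_features = weight.shape
    w = weight.float().reshape(out_features, in_features // group_size, group_size)
    scales = w.abs().amax(dim=-1, keepdim=True) / 7.0
    q = torch.clamp(torch.round(w / scales), -8, 7)
    # dequantized weights are what effectively enter the FP16 matmul
    w_dq = (q * scales).reshape(out_features, in_features).to(weight.dtype)
    return q.to(torch.int8), scales, w_dq

weight = torch.randn(256, 512, dtype=torch.float16)  # toy shapes for illustration
q, scales, w_dq = quantize_group_int4(weight)
print((weight - w_dq).abs().max())  # per-group quantization error
```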
33
+
34
+ ## Deployment
35
+
36
+ ### Use with vLLM
37
+
38
+ This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
39
+
```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams

# prepare model
llm = LLM(
    model="neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# prepare inputs
question = "What is the content of this image?"
inputs = {
    # Qwen2-VL chat-format prompt with a single image placeholder
    "prompt": f"<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>{question}<|im_end|>\n<|im_start|>assistant\n",
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}

# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
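
For example, a minimal OpenAI-compatible client call might look like the sketch below; the `vllm serve` flags, placeholder image URL, and sampling settings are illustrative assumptions rather than a prescribed configuration.

```python
# Launch an OpenAI-compatible server first (flags illustrative), e.g.:
#   vllm serve neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16 --max-model-len 4096
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16",
    messages=[
        {
            "role": "user",
            "content": [
                # placeholder URL; point this at a reachable image
                {"type": "image_url", "image_url": {"url": "https://example.com/cherry_blossom.jpg"}},
                {"type": "text", "text": "What is the content of this image?"},
            ],
        }
    ],
    max_tokens=64,
)
print(response.choices[0].message.content)
```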

## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below, as part of a multimodal announcement blog.

<details>
<summary>Model Creation Code</summary>

```python
import base64
from io import BytesIO

import torch
from datasets import load_dataset
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import TraceableQwen2VLForConditionalGeneration
from llmcompressor.transformers.utils.data_collator import qwen2_vl_data_collator
from compressed_tensors.quantization import QuantizationArgs, QuantizationType, QuantizationStrategy, ActivationOrdering, QuantizationScheme

# Load model.
model_id = "Qwen/Qwen2-VL-72B-Instruct"

model = TraceableQwen2VLForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Oneshot arguments
DATASET_ID = "lmms-lab/flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048

# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)
dampening_frac = 0.01

# Apply chat template and tokenize inputs.
def preprocess_and_tokenize(example):
    # preprocess
    buffered = BytesIO()
    example["image"].save(buffered, format="PNG")
    encoded_image = base64.b64encode(buffered.getvalue())
    encoded_image_text = encoded_image.decode("utf-8")
    base64_qwen = f"data:image;base64,{encoded_image_text}"
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": base64_qwen},
                {"type": "text", "text": "What does the image show?"},
            ],
        }
    ]
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)

    # tokenize
    return processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
    )

ds = ds.map(preprocess_and_tokenize, remove_columns=ds["calibration"].column_names)

# Recipe
recipe = GPTQModifier(
    targets="Linear",
    config_groups={
        "config_group": QuantizationScheme(
            targets=["Linear"],
            weights=QuantizationArgs(
                num_bits=4,
                type=QuantizationType.INT,
                strategy=QuantizationStrategy.GROUP,
                group_size=128,
                symmetric=True,
                dynamic=False,
                actorder=ActivationOrdering.WEIGHT,
            ),
        ),
    },
    sequential_targets=["Qwen2VLDecoderLayer"],
    ignore=["lm_head", "re:visual.*"],
    update_size=NUM_CALIBRATION_SAMPLES,
    dampening_frac=dampening_frac,
)

SAVE_DIR = f"{model_id.split('/')[1]}-quantized.w4a16"

# Perform oneshot
oneshot(
    model=model,
    tokenizer=model_id,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    trust_remote_code_model=True,
    data_collator=qwen2_vl_data_collator,
    output_dir=SAVE_DIR,
)
```
</details>
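
After the oneshot run, the exported directory carries a compressed-tensors quantization config inside `config.json`. The optional sketch below (not part of the original recipe; the path is assumed from `SAVE_DIR` above) shows one way to inspect it.

```python
import json
from pathlib import Path

# Path assumed from the creation script's SAVE_DIR
save_dir = Path("Qwen2-VL-72B-Instruct-quantized.w4a16")

config = json.loads((save_dir / "config.json").read_text())
# llm-compressor serializes the scheme (num_bits, group_size, ignored modules, ...) here
print(json.dumps(config.get("quantization_config", {}), indent=2))
```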

## Evaluation

The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard) and [V2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/), and on [HumanEval](https://github.com/neuralmagic/evalplus), using the following commands:

<details>
<summary>Evaluation Commands</summary>

```
```

</details>

### Accuracy

## Inference Performance

This model achieves up to xxx speedup in single-stream deployment and up to xxx speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2 and [GuideLLM](https://github.com/neuralmagic/guidellm).

<details>
<summary>Benchmarking Command</summary>

```
guidellm --model neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max-seconds 120 --backend aiohttp_server
```

</details>

### Single-stream performance (measured with vLLM version 0.7.2)

<table border="1" class="dataframe">
  <thead>
    <tr>
      <th></th>
      <th></th>
      <th></th>
      <th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
      <th style="text-align: center;" colspan="2" >Visual Reasoning<br>640W x 480H<br>128/128</th>
      <th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
    </tr>
    <tr>
      <th>Hardware</th>
      <th>Model</th>
      <th>Average Cost Reduction</th>
      <th>Latency (s)</th>
      <th>QPD</th>
      <th>Latency (s)</th>
      <th>QPD</th>
      <th>Latency (s)</th>
      <th>QPD</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>A100x4</td>
      <td>Qwen/Qwen2-VL-72B-Instruct</td>
      <td></td>
      <td>6.5</td>
      <td>77</td>
      <td>4.6</td>
      <td>110</td>
      <td>4.4</td>
      <td>113</td>
    </tr>
    <tr>
      <td>A100x2</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8</td>
      <td>1.85</td>
      <td>7.2</td>
      <td>139</td>
      <td>4.9</td>
      <td>206</td>
      <td>4.8</td>
      <td>211</td>
    </tr>
    <tr>
      <td>A100x1</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
      <td>3.32</td>
      <td>10.0</td>
      <td>202</td>
      <td>5.0</td>
      <td>398</td>
      <td>4.8</td>
      <td>419</td>
    </tr>
    <tr>
      <td>H100x4</td>
      <td>Qwen/Qwen2-VL-72B-Instruct</td>
      <td></td>
      <td>4.4</td>
      <td>66</td>
      <td>3.0</td>
      <td>97</td>
      <td>2.9</td>
      <td>99</td>
    </tr>
    <tr>
      <td>H100x2</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</td>
      <td>1.79</td>
      <td>4.7</td>
      <td>119</td>
      <td>3.3</td>
      <td>173</td>
      <td>3.2</td>
      <td>177</td>
    </tr>
    <tr>
      <td>H100x1</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
      <td>2.60</td>
      <td>6.4</td>
      <td>172</td>
      <td>4.3</td>
      <td>253</td>
      <td>4.2</td>
      <td>259</td>
    </tr>
  </tbody>
</table>

### Multi-stream asynchronous performance (measured with vLLM version 0.7.2)

<table border="1" class="dataframe">
  <thead>
    <tr>
      <th></th>
      <th></th>
      <th></th>
      <th style="text-align: center;" colspan="2" >Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
      <th style="text-align: center;" colspan="2" >Visual Reasoning<br>640W x 480H<br>128/128</th>
      <th style="text-align: center;" colspan="2" >Image Captioning<br>480W x 360H<br>0/128</th>
    </tr>
    <tr>
      <th>Hardware</th>
      <th>Model</th>
      <th>Average Cost Reduction</th>
      <th>Maximum throughput (QPS)</th>
      <th>QPD</th>
      <th>Maximum throughput (QPS)</th>
      <th>QPD</th>
      <th>Maximum throughput (QPS)</th>
      <th>QPD</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>A100x4</td>
      <td>Qwen/Qwen2-VL-72B-Instruct</td>
      <td></td>
      <td>0.3</td>
      <td>169</td>
      <td>1.1</td>
      <td>538</td>
      <td>1.2</td>
      <td>595</td>
    </tr>
    <tr>
      <td>A100x2</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8</td>
      <td>1.84</td>
      <td>0.6</td>
      <td>293</td>
      <td>2.0</td>
      <td>1021</td>
      <td>2.3</td>
      <td>1135</td>
    </tr>
    <tr>
      <td>A100x1</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
      <td>2.73</td>
      <td>0.6</td>
      <td>314</td>
      <td>3.2</td>
      <td>1591</td>
      <td>4.0</td>
      <td>2019</td>
    </tr>
    <tr>
      <td>H100x4</td>
      <td>Qwen/Qwen2-VL-72B-Instruct</td>
      <td></td>
      <td>0.5</td>
      <td>137</td>
      <td>1.2</td>
      <td>356</td>
      <td>1.3</td>
      <td>377</td>
    </tr>
    <tr>
      <td>H100x2</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-FP8-Dynamic</td>
      <td>1.70</td>
      <td>0.8</td>
      <td>236</td>
      <td>2.2</td>
      <td>623</td>
      <td>2.4</td>
      <td>669</td>
    </tr>
    <tr>
      <td>H100x1</td>
      <td>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w4a16</td>
      <td>2.35</td>
      <td>1.3</td>
      <td>350</td>
      <td>3.3</td>
      <td>910</td>
      <td>3.6</td>
      <td>994</td>
    </tr>
  </tbody>
</table>