## Evaluation

The model was evaluated using [mistral-evals](https://github.com/neuralmagic/mistral-evals) for vision-related tasks and using [lm_evaluation_harness](https://github.com/neuralmagic/lm-evaluation-harness) for select text-based benchmarks. The evaluations were conducted using the following commands:

<details>
<summary>Evaluation Commands</summary>

### Vision Tasks
- vqav2
- docvqa
- mathvista
- mmmu
- chartqa

```
vllm serve neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w4a16 --tensor_parallel_size 1 --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7

python -m eval.run eval_vllm \
  --model_name neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w4a16 \
  --url http://0.0.0.0:8000 \
  --output_dir ~/tmp \
  --eval_name <vision_task_name>
```
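
To run all five vision benchmarks back to back, the `eval.run` call can be wrapped in a shell loop. This is a minimal convenience sketch, not part of mistral-evals itself; it assumes the vLLM server started above is still listening on port 8000:

```
# Illustrative loop; each iteration evaluates one vision benchmark
# against the already-running vLLM server.
for task in vqav2 docvqa mathvista mmmu chartqa; do
  python -m eval.run eval_vllm \
    --model_name neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w4a16 \
    --url http://0.0.0.0:8000 \
    --output_dir ~/tmp \
    --eval_name "$task"
done
```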

### Text-based Tasks
#### MMLU

```
lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks mmlu \
  --num_fewshot 5 \
  --batch_size auto \
  --output_path output_dir
```
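
Here `<model_name>` is the Hugging Face ID of the checkpoint under evaluation (for this card, neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w4a16) and `<n>` is the number of GPUs to shard across; the same placeholders apply to the MGSM command below.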

#### MGSM

```
lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \
  --tasks mgsm_cot_native \
  --num_fewshot 0 \
  --batch_size auto \
  --output_path output_dir
```
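
As the task name suggests, `mgsm_cot_native` evaluates MGSM with chain-of-thought prompting in each question's native language, run zero-shot here per `--num_fewshot 0`.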
</details>

### Accuracy

<table>
  <thead>
    <tr>
      <th>Category</th>
      <th>Metric</th>
      <th>Qwen/Qwen2.5-VL-72B-Instruct</th>
      <th>neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w4a16</th>
      <th>Recovery (%)</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td rowspan="6"><b>Vision</b></td>
      <td>MMMU (val, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
      <td>64.33</td>
      <td>62.89</td>
      <td>97.76%</td>
    </tr>
    <tr>
      <td>VQAv2 (val)<br><i>vqa_match</i></td>
      <td>81.94</td>
      <td>81.87</td>
      <td>99.91%</td>
    </tr>
    <tr>
      <td>DocVQA (val)<br><i>anls</i></td>
      <td>94.71</td>
      <td>94.72</td>
      <td>100.01%</td>
    </tr>
    <tr>
      <td>ChartQA (test, CoT)<br><i>anywhere_in_answer_relaxed_correctness</i></td>
      <td>88.96</td>
      <td>88.96</td>
      <td>100.00%</td>
    </tr>
    <tr>
      <td>Mathvista (testmini, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
      <td>78.18</td>
      <td>77.68</td>
      <td>99.36%</td>
    </tr>
    <tr>
      <td><b>Average Score</b></td>
      <td><b>81.62</b></td>
      <td><b>81.22</b></td>
      <td><b>99.51%</b></td>
    </tr>
    <tr>
      <td rowspan="2"><b>Text</b></td>
      <td>MGSM (CoT)</td>
      <td>75.45</td>
      <td>75.13</td>
      <td>99.58%</td>
    </tr>
    <tr>
      <td>MMLU (5-shot)</td>
      <td>86.16</td>
      <td>85.36</td>
      <td>99.07%</td>
    </tr>
  </tbody>
</table>
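
Recovery is the quantized model's score expressed as a percentage of the unquantized baseline's score on the same benchmark; on MMMU, for example, 62.89 / 64.33 × 100 ≈ 97.76%.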

## Inference Performance