## Evaluation

The model was evaluated using [mistral-evals](https://github.com/neuralmagic/mistral-evals) for vision-related tasks and [lm_evaluation_harness](https://github.com/neuralmagic/lm-evaluation-harness) for select text-based benchmarks. The evaluations were conducted using the following commands:

<details>
<summary>Evaluation Commands</summary>

### Vision Tasks
- vqav2
- docvqa
- mathvista
- mmmu
- chartqa

```
vllm serve neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8 --tensor_parallel_size <n> --max_model_len 25000 --trust_remote_code --max_num_seqs 8 --gpu_memory_utilization 0.9 --dtype float16 --limit_mm_per_prompt image=7

python -m eval.run eval_vllm \
  --model_name neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8 \
  --url http://0.0.0.0:8000 \
  --output_dir ~/tmp \
  --eval_name <vision_task_name>
```
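
Before launching the vision evaluations, it can be worth confirming that the served model answers multimodal requests. A minimal sanity-check sketch against vLLM's OpenAI-compatible endpoint; the prompt and image URL are illustrative placeholders, not part of any benchmark:

```
# Sanity-check the vLLM server started above via its OpenAI-compatible API.
# The prompt and image URL are placeholders; any reachable image works.
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/sample.jpg"}},
        ],
    }],
    max_tokens=64,
)
print(response.choices[0].message.content)
```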

### Text-based Tasks
#### MMLU

```
lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks mmlu \
  --num_fewshot 5 \
  --batch_size auto \
  --output_path output_dir
```

#### MGSM

```
lm_eval \
  --model vllm \
  --model_args pretrained="<model_name>",dtype=auto,max_model_len=4096,max_gen_toks=2048,max_num_seqs=128,tensor_parallel_size=<n>,gpu_memory_utilization=0.9 \
  --tasks mgsm_cot_native \
  --num_fewshot 0 \
  --batch_size auto \
  --output_path output_dir
```
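
The text benchmarks can also be driven programmatically. A sketch mirroring the MMLU command above via lm-evaluation-harness's `simple_evaluate` entry point; `<model_name>` and `<n>` remain placeholders, as in the CLI invocation:

```
# Python equivalent of the MMLU command above, using the
# lm-evaluation-harness API. <model_name> and <n> are placeholders.
import lm_eval

results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=<model_name>,dtype=auto,add_bos_token=True,"
               "max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8",
    tasks=["mmlu"],
    num_fewshot=5,
    batch_size="auto",
)
print(results["results"])
```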

</details>

### Accuracy

<table>
<thead>
<tr>
<th>Category</th>
<th>Metric</th>
<th>Qwen/Qwen2-VL-72B-Instruct</th>
<th>neuralmagic/Qwen2-VL-72B-Instruct-quantized.w8a8</th>
<th>Recovery (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="6"><b>Vision</b></td>
<td>MMMU (val, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>62.11</td>
<td>61.78</td>
<td>99.47%</td>
</tr>
<tr>
<td>VQAv2 (val)<br><i>vqa_match</i></td>
<td>82.51</td>
<td>82.50</td>
<td>99.99%</td>
</tr>
<tr>
<td>DocVQA (val)<br><i>anls</i></td>
<td>95.01</td>
<td>94.90</td>
<td>99.88%</td>
</tr>
<tr>
<td>ChartQA (test, CoT)<br><i>anywhere_in_answer_relaxed_correctness</i></td>
<td>83.40</td>
<td>83.32</td>
<td>99.90%</td>
</tr>
<tr>
<td>MathVista (testmini, CoT)<br><i>explicit_prompt_relaxed_correctness</i></td>
<td>66.57</td>
<td>69.57</td>
<td>104.51%</td>
</tr>
<tr>
<td><b>Average Score</b></td>
<td><b>77.12</b></td>
<td><b>77.21</b></td>
<td><b>100.12%</b></td>
</tr>
<tr>
<td rowspan="2"><b>Text</b></td>
<td>MGSM (CoT)</td>
<td>68.60</td>
<td>67.62</td>
<td>98.57%</td>
</tr>
<tr>
<td>MMLU (5-shot)</td>
<td>82.70</td>
<td>82.83</td>
<td>100.16%</td>
</tr>
</tbody>
</table>
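
Recovery is the quantized model's score as a percentage of the unquantized baseline's score. A quick illustrative check using the MMMU row above:

```
# Recovery (%) = 100 * quantized_score / baseline_score.
# Checking the MMMU row: 100 * 61.78 / 62.11 ≈ 99.47
baseline, quantized = 62.11, 61.78
print(f"{100 * quantized / baseline:.2f}%")  # -> 99.47%
```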

## Inference Performance