noneUsername committed
Commit 309501d · verified · 1 Parent(s): 148c4b9

Update README.md

Files changed (1): README.md (+7, -1)
README.md CHANGED
@@ -12,6 +12,12 @@ I will verify the validity of the model and update the readme as soon as possibl
 
 edit: The performance in my ERP test was comparable to Mistral-Nemo-Instruct-2407-GPTQ-INT8, which I consider a successful quantization.
 
+vllm (pretrained=/root/autodl-tmp/Mistral-Nemo-Instruct-2407,add_bos_token=true,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
+|Tasks|Version| Filter |n-shot| Metric | |Value| |Stderr|
+|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
+|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.800|± |0.0253|
+| | |strict-match | 5|exact_match|↑ |0.784|± |0.0261|
+
 
 lm_eval --model vllm --model_args pretrained="/mnt/e/Code/models/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token",add_bos_token=true,dtype=half,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0 --tasks gsm8k --num_fewshot 5 --limit 250 --batch_size 1
 vllm (pretrained=/mnt/e/Code/models/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token,add_bos_token=true,dtype=half,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: 1
@@ -19,7 +25,7 @@ vllm (pretrained=/mnt/e/Code/models/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-
 |-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
 |gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.784|± |0.0261|
 | | |strict-match | 5|exact_match|↑ |0.768|± |0.0268|
-
+In gsm8k, still a bit worse than the original...
 
 
 lm_eval --model vllm \
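
A note on the "still a bit worse" remark added in this commit: with only 250 GSM8K samples, the reported standard errors (~0.026) are large relative to the 0.016-point gap, so on this subset the quantized model is statistically indistinguishable from the BF16 original. A quick check, using only the numbers from the two tables above (treating the runs as independent, which is approximate since both use the same 250 questions):

```python
import math

# gsm8k exact_match (value, stderr) over 250 samples, copied from the tables above
baseline = {"flexible-extract": (0.800, 0.0253), "strict-match": (0.784, 0.0261)}
w8a8 = {"flexible-extract": (0.784, 0.0261), "strict-match": (0.768, 0.0268)}

for flt in baseline:
    (b, b_se), (q, q_se) = baseline[flt], w8a8[flt]
    gap = b - q
    se = math.sqrt(b_se**2 + q_se**2)  # stderr of the difference of two means
    print(f"{flt}: gap={gap:.3f}, stderr={se:.4f}, z={gap / se:.2f}")
```

Both filters give a gap of 0.016 at roughly 0.4 standard errors, well inside noise for a 250-sample run.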
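For readers unfamiliar with the scheme in the model name: "W8A8-Dynamic-Per-Token" means INT8 weights plus INT8 activations whose scales are computed at inference time, one scale per token row. The sketch below (plain PyTorch, purely illustrative; it is not the code used to produce this checkpoint) shows the activation side of that scheme:

```python
import torch

def quantize_activations_per_token(x: torch.Tensor):
    """Symmetric INT8 quantization with one dynamic scale per token.

    x: activations of shape (num_tokens, hidden_dim). Each row's scale
    is derived on the fly from that row's max absolute value, which is
    what "dynamic per-token" refers to.
    """
    scales = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.round(x / scales).clamp(-128, 127).to(torch.int8)
    return q, scales

x = torch.randn(4, 5120)        # 4 tokens; 5120 is Mistral-Nemo's hidden size
q, scales = quantize_activations_per_token(x)
x_hat = q.float() * scales      # dequantize to inspect the round-trip error
print((x - x_hat).abs().max())  # bounded by scales/2 element-wise
```

Because the activation scales are recomputed per batch, only the weight scales are static in the checkpoint; no activation calibration statistics need to be stored.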
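The evals above use the lm_eval CLI; when scripting several quantized variants, the same run can be driven from Python via lm-evaluation-harness's simple_evaluate. A sketch with the arguments copied from the CLI invocation above (local path and settings taken from it, not verified here):

```python
import lm_eval

# Mirrors: lm_eval --model vllm --model_args ... --tasks gsm8k \
#          --num_fewshot 5 --limit 250 --batch_size 1
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args=(
        "pretrained=/mnt/e/Code/models/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token,"
        "add_bos_token=true,dtype=half,tensor_parallel_size=2,"
        "max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0"
    ),
    tasks=["gsm8k"],
    num_fewshot=5,
    limit=250,
    batch_size=1,
)
print(results["results"]["gsm8k"])  # exact_match values and stderrs
```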