---
base_model:
- unsloth/Mistral-Nemo-Instruct-2407
---
Note: this model is no longer my best W8A8 quantization; please consider using the improved quantization I released later:
noneUsername/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token-better

This is my first quantization. It uses the INT8 (W8A8) method provided by vLLM:

https://docs.vllm.ai/en/latest/quantization/int8.html

- NUM_CALIBRATION_SAMPLES = 2048
- MAX_SEQUENCE_LENGTH = 8192
- smoothing_strength = 0.8
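
For reference, a minimal sketch of that recipe, following the vLLM / llm-compressor INT8 (W8A8) guide linked above. Only the three parameters listed are from this run; the calibration dataset is an assumption (the card does not say which one was used), and the output path simply mirrors this repo's name.

```python
# Minimal sketch of the INT8 W8A8 recipe from the vLLM / llm-compressor guide.
# Calibration dataset and output path are assumptions; only the three
# parameters above (2048 samples, 8192 max length, 0.8 smoothing) are from this run.
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier

MODEL_ID = "unsloth/Mistral-Nemo-Instruct-2407"
NUM_CALIBRATION_SAMPLES = 2048
MAX_SEQUENCE_LENGTH = 8192

model = SparseAutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", torch_dtype="auto"
)

recipe = [
    # SmoothQuant folds activation outliers into the weights before quantizing.
    SmoothQuantModifier(smoothing_strength=0.8),
    # GPTQ then quantizes linear layers to INT8 weights and activations,
    # leaving lm_head in full precision.
    GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"]),
]

oneshot(
    model=model,
    dataset="open_platypus",  # assumption: the actual calibration set is not stated
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    output_dir="Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token",
)
```

The SmoothQuant step is what makes INT8 activations tolerable here: it shifts outlier magnitudes from activations into the weights, with `smoothing_strength` controlling how aggressively.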

I will verify the validity of the model and update the README as soon as possible.

Edit: in my ERP test, performance was comparable to Mistral-Nemo-Instruct-2407-GPTQ-INT8, so I consider this a successful quantization.

Baseline, the original model:

```
vllm (pretrained=/root/autodl-tmp/Mistral-Nemo-Instruct-2407,add_bos_token=true,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
```
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.800|±  |0.0253|
|     |       |strict-match    |     5|exact_match|↑  |0.784|±  |0.0261|


The quantized model:

```
lm_eval --model vllm \
  --model_args pretrained="/mnt/e/Code/models/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token",add_bos_token=true,dtype=half,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0 \
  --tasks gsm8k \
  --num_fewshot 5 \
  --limit 250 \
  --batch_size 1

vllm (pretrained=/mnt/e/Code/models/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token,add_bos_token=true,dtype=half,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: 1
```
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.784|±  |0.0261|
|     |       |strict-match    |     5|exact_match|↑  |0.768|±  |0.0268|

On gsm8k, the quantized model is still slightly worse than the original.


```
lm_eval --model vllm \
  --model_args pretrained="/mnt/e/Code/models/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token",add_bos_token=true,dtype=half,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0 \
  --tasks hellaswag \
  --limit 150 \
  --num_fewshot 10 \
  --batch_size 1
```

```
vllm (pretrained=/mnt/e/Code/models/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token,add_bos_token=true,dtype=half,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0), gen_kwargs: (None), limit: 150.0, num_fewshot: 10, batch_size: 1
```
|  Tasks  |Version|Filter|n-shot| Metric |   |Value |   |Stderr|
|---------|------:|------|-----:|--------|---|-----:|---|-----:|
|hellaswag|      1|none  |    10|acc     |↑  |0.5800|±  |0.0404|
|         |       |none  |    10|acc_norm|↑  |0.7533|±  |0.0353|
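
To try the checkpoint, a minimal loading sketch with vLLM is below. The repo id is assumed to be this repository's name; the prompt and sampling settings are illustrative assumptions, not values used in the evals above.

```python
# Minimal sketch: loading this W8A8 checkpoint with vLLM.
# Repo id assumed from this repository's name; prompt and sampling
# settings are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="noneUsername/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token",
    max_model_len=4096,           # same context limit used in the evals above
    gpu_memory_utilization=0.85,  # matches the eval settings
)

sampling = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Hello, who are you?"], sampling)
print(outputs[0].outputs[0].text)
```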