Note: This is no longer my best W8A8 quantization. Please consider using the improved quant I released later: noneUsername/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token-better

This is my first quantization, made with the INT8 (W8A8) method documented by vLLM:

https://docs.vllm.ai/en/latest/quantization/int8.html

Calibration settings:

- NUM_CALIBRATION_SAMPLES = 2048
- MAX_SEQUENCE_LENGTH = 8192
- smoothing_strength = 0.8
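For reference, here is a minimal sketch of what that recipe looks like with llm-compressor, the library the linked guide uses. Only the three settings above are taken from my actual run; the calibration dataset (ultrachat_200k) and everything else follow the guide's example and should be treated as assumptions.

```python
from datasets import load_dataset
from transformers import AutoTokenizer
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot

MODEL_ID = "mistralai/Mistral-Nemo-Instruct-2407"
SAVE_DIR = "Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token"
NUM_CALIBRATION_SAMPLES = 2048
MAX_SEQUENCE_LENGTH = 8192

model = SparseAutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Calibration set (assumed): chat samples rendered through the model's
# chat template, then tokenized to at most MAX_SEQUENCE_LENGTH tokens.
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))
ds = ds.map(
    lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)}
)
ds = ds.map(
    lambda ex: tokenizer(
        ex["text"],
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    ),
    remove_columns=ds.column_names,
)

# SmoothQuant migrates activation outliers into the weights, then GPTQ
# quantizes weights and activations to INT8 (dynamic per-token activations),
# keeping lm_head in the original precision.
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"]),
]

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```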

I will verify the model's validity and update this README as soon as possible.

Edit: In my ERP test, performance was comparable to Mistral-Nemo-Instruct-2407-GPTQ-INT8, so I consider the quantization successful.

Baseline, the original (unquantized) model on gsm8k:

vllm (pretrained=/root/autodl-tmp/Mistral-Nemo-Instruct-2407,add_bos_token=true,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto

| Tasks | Version | Filter           | n-shot | Metric        | Value | Stderr   |
|-------|--------:|------------------|-------:|---------------|------:|----------|
| gsm8k |       3 | flexible-extract |      5 | exact_match ↑ | 0.800 | ± 0.0253 |
|       |         | strict-match     |      5 | exact_match ↑ | 0.784 | ± 0.0261 |
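The baseline run header above corresponds to an lm_eval invocation of roughly this shape, reconstructed from that header and mirroring the quantized-model command below, so treat the exact flags as an assumption:

```
lm_eval --model vllm \
  --model_args pretrained="/root/autodl-tmp/Mistral-Nemo-Instruct-2407",add_bos_token=true,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0 \
  --tasks gsm8k --num_fewshot 5 --limit 250 --batch_size auto
```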

The quantized model, same gsm8k settings:

```
lm_eval --model vllm \
  --model_args pretrained="/mnt/e/Code/models/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token",add_bos_token=true,dtype=half,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0 \
  --tasks gsm8k --num_fewshot 5 --limit 250 --batch_size 1
```

vllm (pretrained=/mnt/e/Code/models/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token,add_bos_token=true,dtype=half,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: 1

| Tasks | Version | Filter           | n-shot | Metric        | Value | Stderr   |
|-------|--------:|------------------|-------:|---------------|------:|----------|
| gsm8k |       3 | flexible-extract |      5 | exact_match ↑ | 0.784 | ± 0.0261 |
|       |         | strict-match     |      5 | exact_match ↑ | 0.768 | ± 0.0268 |
On gsm8k, the quantized model is still slightly worse than the original...

The hellaswag run:

```
lm_eval --model vllm \
  --model_args pretrained="/mnt/e/Code/models/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token",add_bos_token=true,dtype=half,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0 \
  --tasks hellaswag \
  --limit 150 \
  --num_fewshot 10 \
  --batch_size 1
```

vllm (pretrained=/mnt/e/Code/models/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token,add_bos_token=true,dtype=half,tensor_parallel_size=2,max_model_len=4096,gpu_memory_utilization=0.85,swap_space=0), gen_kwargs: (None), limit: 150.0, num_fewshot: 10, batch_size: 1

| Tasks     | Version | Filter | n-shot | Metric       | Value  | Stderr   |
|-----------|--------:|--------|-------:|--------------|-------:|----------|
| hellaswag |       1 | none   |     10 | acc ↑        | 0.5800 | ± 0.0404 |
|           |         | none   |     10 | acc_norm ↑   | 0.7533 | ± 0.0353 |
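For inference, the checkpoint can be loaded directly in vLLM. A minimal sketch; the prompt and sampling settings are illustrative, and on pre-Ampere/consumer GPUs you may need dtype=half as in the eval commands above:

```python
from vllm import LLM, SamplingParams

# Load the W8A8 checkpoint; vLLM reads the quantization config from the repo.
llm = LLM(
    model="noneUsername/Mistral-Nemo-Instruct-2407-W8A8-Dynamic-Per-Token",
    max_model_len=4096,
    gpu_memory_utilization=0.85,
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain SmoothQuant in one paragraph."], params)
print(outputs[0].outputs[0].text)
```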