Meta-Llama-3.1-405B-Instruct-quantized.w4a16
Model Overview
- Model Architecture: Meta-Llama-3.1
- Input: Text
- Output: Text
- Model Optimizations:
  - Weight quantization: INT4
- Intended Use Cases: Intended for commercial and research use in English. Like Meta-Llama-3.1-405B-Instruct, this model is intended for assistant-like chat.
- Out-of-scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- Release Date: 8/9/2024
- Version: 1.0
- License(s): Llama3.1
- Model Developers: Neural Magic
This model is a quantized version of Meta-Llama-3.1-405B-Instruct. It was evaluated on several tasks to assess its quality in comparison to the unquantized model, including multiple-choice, math reasoning, and open-ended text generation. Meta-Llama-3.1-405B-Instruct-quantized.w4a16 achieves 98.7% recovery on the Arena-Hard evaluation, 100.0% on OpenLLM v1 (using Meta's prompting when available), 99.0% on OpenLLM v2, 98.0% on HumanEval pass@1, and 98.5% on HumanEval+ pass@1.
Model Optimizations
This model was obtained by quantizing the weights of Meta-Llama-3.1-405B-Instruct to the INT4 data type. This optimization reduces the number of bits per parameter from 16 to 4, reducing disk size and GPU memory requirements by approximately 75%. For a 405B-parameter model this means the weights shrink from roughly 810 GB at 16 bits per parameter to roughly a quarter of that.
Only the weights of the linear operators within transformer blocks are quantized. Symmetric per-channel quantization is applied: a separate linear scale per output dimension maps between the INT4 and floating-point representations of the quantized weights. Quantization uses the GPTQ algorithm, as implemented in the llm-compressor library, with a 1% damping factor and 512 calibration sequences of up to 8,192 tokens (see Creation below).
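For intuition, the sketch below shows the symmetric per-channel mapping in isolation. Note that it uses plain round-to-nearest only; GPTQ additionally uses calibration data to adjust the remaining weights and compensate for quantization error, so this is an illustration of the number format, not the recipe used to produce this model.

```python
import torch

def quantize_symmetric_per_channel_int4(weight: torch.Tensor):
    """Round-to-nearest symmetric per-channel INT4 quantization of a weight
    matrix of shape (out_features, in_features): one scale per output channel
    maps INT4 values in [-8, 7] to and from floating point."""
    qmax = 7
    scales = weight.abs().amax(dim=1, keepdim=True) / qmax
    scales = scales.clamp(min=1e-8)  # guard against all-zero rows
    q = torch.clamp(torch.round(weight / scales), -8, 7).to(torch.int8)
    dequantized = q.float() * scales  # what the kernel reconstructs at runtime
    return q, scales, dequantized
```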
Deployment
This model can be deployed efficiently using the vLLM backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16"
number_gpus = 8
max_model_len = 4096

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the chat messages into a single prompt string using the model's chat template.
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

# Shard the model across 8 GPUs via tensor parallelism.
llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the documentation for more details.
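As a minimal client sketch, assuming an OpenAI-compatible vLLM server is already running locally on the default port 8000 (the exact launch command depends on your vLLM version; the placeholder API key is an assumption, since vLLM does not require one by default):

```python
# Assumes a server started with something like:
#   vllm serve neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16 \
#     --tensor-parallel-size 8 --max-model-len 4096
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    temperature=0.6,
    top_p=0.9,
    max_tokens=256,
)
print(response.choices[0].message.content)
```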
Creation
This model was created by using the llm-compressor library as presented in the code snippet below.
```python
from transformers import AutoTokenizer
from datasets import load_dataset
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

model_id = "meta-llama/Meta-Llama-3.1-405B-Instruct"

num_samples = 512
max_seq_len = 8192

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Wrap each calibration example in a simple instruction-style prompt.
def preprocess_fn(example):
    return {"text": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n{text}".format_map(example)}

# Sample the calibration set from Neural Magic's calibration dataset.
dataset_name = "neuralmagic/LLM_compression_calibration"
dataset = load_dataset(dataset_name, split="train")
ds = dataset.shuffle().select(range(num_samples))
ds = ds.map(preprocess_fn)

# Quantize all Linear layers (except lm_head) to W4A16 with GPTQ,
# using a 1% dampening factor.
recipe = GPTQModifier(
    targets="Linear",
    scheme="W4A16",
    ignore=["lm_head"],
    dampening_frac=0.01,
)

model = SparseAutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

# One-shot (post-training) quantization: no fine-tuning, just calibration.
oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=max_seq_len,
    num_calibration_samples=num_samples,
)

model.save_pretrained("Meta-Llama-3.1-405B-Instruct-quantized.w4a16")
```
Evaluation
This model was evaluated on the well-known Arena-Hard, OpenLLM v1, OpenLLM v2, HumanEval, and HumanEval+ benchmarks. In all cases, model outputs were generated with the vLLM engine.
Arena-Hard evaluations were conducted using the Arena-Hard-Auto repository. The model generated a single answer for each prompt from Arena-Hard, and each answer was judged twice by GPT-4. We report below the scores obtained in each judgment and the average.
OpenLLM v1 and v2 evaluations were conducted using Neural Magic's fork of lm-evaluation-harness (branch llama_3.1_instruct). This version of the lm-evaluation-harness includes versions of MMLU, ARC-Challenge, and GSM-8K that match the prompting style of Meta-Llama-3.1-Instruct-evals, as well as a few fixes to OpenLLM v2 tasks.
HumanEval and HumanEval+ evaluations were conducted using Neural Magic's fork of the EvalPlus repository.
Detailed model outputs are available as HuggingFace datasets for Arena-Hard, OpenLLM v2, and HumanEval.
Note: Results have been updated after Meta modified the chat template.
Accuracy
Open LLM Leaderboard evaluation scores
| Benchmark | Meta-Llama-3.1-405B-Instruct | Meta-Llama-3.1-405B-Instruct-quantized.w4a16 (this model) | Recovery |
| --- | --- | --- | --- |
| Arena Hard | 67.4 (67.3 / 67.5) | 66.5 (66.5 / 66.4) | 98.7% |
| **OpenLLM v1** | | | |
| MMLU (5-shot) | 87.4 | 87.2 | 99.8% |
| ARC Challenge (0-shot) | 95.0 | 95.3 | 100.4% |
| GSM-8K (CoT, 8-shot, strict-match) | 96.4 | 96.3 | 99.8% |
| Hellaswag (10-shot) | 88.3 | 88.3 | 99.9% |
| Winogrande (5-shot) | 87.2 | 87.4 | 100.2% |
| TruthfulQA (0-shot) | 64.6 | 65.3 | 101.0% |
| **Average** | 86.8 | 86.8 | 100.0% |
| **OpenLLM v2** | | | |
| MMLU-Pro (5-shot) | 59.7 | 59.4 | 99.3% |
| IFEval (0-shot) | 87.7 | 88.0 | 100.4% |
| BBH (3-shot) | 67.0 | 67.5 | 100.7% |
| Math-lvl-5 (4-shot) | 39.0 | 37.6 | 96.5% |
| GPQA (0-shot) | 19.5 | 17.5 | 89.8% |
| MuSR (0-shot) | 19.5 | 19.4 | 99.5% |
| **Average** | 48.7 | 48.2 | 99.0% |
| **Coding** | | | |
| HumanEval pass@1 | 86.8 | 85.1 | 98.0% |
| HumanEval+ pass@1 | 80.1 | 78.9 | 98.5% |
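The Recovery column is simply the quantized model's score expressed as a percentage of the unquantized baseline's score; a minimal illustration:

```python
def recovery(quantized_score: float, baseline_score: float) -> float:
    """Quantized score as a percentage of the unquantized baseline."""
    return 100.0 * quantized_score / baseline_score

# e.g., HumanEval pass@1 from the table above
print(f"{recovery(85.1, 86.8):.1f}%")  # -> 98.0%
```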
Reproduction
The results were obtained using the following commands:
MMLU
```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16",dtype=auto,max_model_len=4096,max_gen_toks=10,tensor_parallel_size=8 \
  --tasks mmlu_llama_3.1_instruct \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --num_fewshot 5 \
  --batch_size auto
```
ARC-Challenge
```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16",dtype=auto,max_model_len=4096,tensor_parallel_size=8 \
  --tasks arc_challenge_llama_3.1_instruct \
  --apply_chat_template \
  --num_fewshot 0 \
  --batch_size auto
```
GSM-8K
```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16",dtype=auto,max_model_len=4096,tensor_parallel_size=8 \
  --tasks gsm8k_cot_llama_3.1_instruct \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --num_fewshot 8 \
  --batch_size auto
```
Hellaswag
```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=8 \
  --tasks hellaswag \
  --num_fewshot 10 \
  --batch_size auto
```
Winogrande
```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=8 \
  --tasks winogrande \
  --num_fewshot 5 \
  --batch_size auto
```
TruthfulQA
```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=8 \
  --tasks truthfulqa \
  --num_fewshot 0 \
  --batch_size auto
```
OpenLLM v2
```bash
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16",dtype=auto,max_model_len=4096,tensor_parallel_size=8,enable_chunked_prefill=True \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --tasks leaderboard \
  --batch_size auto
```
HumanEval and HumanEval+
Generation
```bash
python3 codegen/generate.py \
  --model neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16 \
  --bs 16 \
  --temperature 0.2 \
  --n_samples 50 \
  --root "." \
  --dataset humaneval \
  --tp 8
```
Sanitization
```bash
python3 evalplus/sanitize.py \
  humaneval/neuralmagic--Meta-Llama-3.1-405B-Instruct-quantized.w4a16_vllm_temp_0.2
```
Evaluation
```bash
evalplus.evaluate \
  --dataset humaneval \
  --samples humaneval/neuralmagic--Meta-Llama-3.1-405B-Instruct-quantized.w4a16_vllm_temp_0.2-sanitized
```
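The reported pass@1 is estimated from the 50 samples generated per problem above. A minimal sketch of the standard unbiased pass@k estimator from the Codex paper (Chen et al., 2021), which evaluation harnesses such as EvalPlus commonly implement:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    draws (without replacement) from n samples, c of which pass, is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n=50 samples, pass@1 reduces to the fraction of passing samples:
print(pass_at_k(n=50, c=49, k=1))  # 0.98
```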