Sparse-Llama-3.1-8B-evolcodealpaca-2of4-FP8-dynamic

Model Overview

  • Model Architecture: Llama-3.1-8B
    • Input: Text
    • Output: Text
  • Model Optimizations:
    • Sparsity: 2:4
    • Weight quantization: FP8
    • Activation quantization: FP8
  • Release Date: 11/15/2024
  • Version: 1.0
  • License(s): llama3.1
  • Model Developers: Neural Magic

This is a code-completion model obtained by fine-tuning the 2:4-sparse Sparse-Llama-3.1-8B-2of4 on the evol-codealpaca-v1 dataset, followed by FP8 quantization. On the HumanEval benchmark, it achieves a pass@1 of 49.0, compared to 48.5 for the fine-tuned dense model Llama-3.1-8B-evolcodealpaca, demonstrating over 100% accuracy recovery.
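For context, 2:4 sparsity means that in every contiguous group of four weights, at most two are nonzero, a semi-structured pattern that NVIDIA GPUs can accelerate. Below is a minimal PyTorch sketch of that structural constraint only; the magnitude-based mask is illustrative and is not the recipe used to produce the sparse base model.

```python
import torch

def prune_2of4(w: torch.Tensor) -> torch.Tensor:
    """Zero the 2 smallest-magnitude weights in every contiguous group of 4
    along the input dimension, yielding the 2:4 semi-structured pattern."""
    rows, cols = w.shape
    assert cols % 4 == 0, "input dimension must be divisible by 4"
    groups = w.reshape(rows, cols // 4, 4)
    # Indices of the 2 smallest-magnitude weights in each group of 4.
    drop = groups.abs().topk(2, dim=-1, largest=False).indices
    mask = torch.ones_like(groups)
    mask.scatter_(-1, drop, 0.0)
    return (groups * mask).reshape(rows, cols)

w = torch.randn(8, 16)
w_sparse = prune_2of4(w)
# Every group of 4 now has at most 2 nonzero entries.
assert (w_sparse.reshape(8, -1, 4) != 0).sum(-1).max() <= 2
```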

Model Optimizations

This model was obtained by quantizing the weights and activations of Sparse-Llama-3.1-8B-evolcodealpaca-2of4 to the FP8 data type. This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements by approximately 50% and increasing matrix-multiply compute throughput by approximately 2x. Weight quantization also reduces disk size requirements by approximately 50%.

Only the weights and activations of the linear operators within transformer blocks are quantized. Weights are quantized with a symmetric static per-channel scheme, in which a fixed linear scaling factor is applied between FP8 and BF16 representations for each output channel. The scaling factors are computed by minimizing the mean squared error (MSE). Activations are quantized with a symmetric dynamic per-token scheme, in which a linear scaling factor between FP8 and BF16 representations is computed at runtime for each token.
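To make these schemes concrete, here is a minimal PyTorch sketch with illustrative function names. For simplicity, the weight scales below are computed by absmax, whereas the model's actual weight scales are chosen by MSE minimization as described above.

```python
import torch

# Max representable magnitude of the float8 e4m3 format used by this model.
FP8_E4M3_MAX = 448.0

def quantize_weights_per_channel(w_bf16: torch.Tensor):
    """Symmetric static per-channel weight quantization: one scale per
    output channel (row of the linear layer's weight matrix)."""
    # Absmax scales shown for simplicity; the real scales minimize MSE
    # between the FP8 and BF16 representations.
    scale = w_bf16.abs().amax(dim=1, keepdim=True) / FP8_E4M3_MAX
    q = (w_bf16 / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q.to(torch.float8_e4m3fn), scale

def quantize_activations_per_token(x_bf16: torch.Tensor):
    """Symmetric dynamic per-token activation quantization: one scale per
    token (row), computed at runtime from the token's absmax."""
    scale = x_bf16.abs().amax(dim=-1, keepdim=True) / FP8_E4M3_MAX
    q = (x_bf16 / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q.to(torch.float8_e4m3fn), scale
```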

Deployment with vLLM

This model can be deployed efficiently using the vLLM backend. vLLM also supports OpenAI-compatible serving. See the documentation for more details.
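For example, a minimal offline-inference sketch using vLLM's Python API (the prompt and sampling settings are illustrative):

```python
from vllm import LLM, SamplingParams

# Model ID from this card; generation settings below are illustrative.
llm = LLM(model="neuralmagic/Sparse-Llama-3.1-8B-evolcodealpaca-2of4-FP8-dynamic")
sampling_params = SamplingParams(temperature=0.0, max_tokens=256)

prompt = "Write a Python function that checks whether a string is a palindrome."
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```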

Evaluation

This model was evaluated on Neural Magic's fork of EvalPlus.

Accuracy

HumanEval Benchmark

| Metric | Llama-3.1-8B-evolcodealpaca | Sparse-Llama-3.1-8B-evolcodealpaca-2of4 | Sparse-Llama-3.1-8B-evolcodealpaca-2of4-FP8-dynamic |
|---|---|---|---|
| HumanEval pass@1 | 48.5 | 49.1 | 49.0 |
| HumanEval+ pass@1 | 44.2 | 46.3 | 46.2 |