Get Started

Sparse-Llama-3.1 models use 2:4 semi-structured sparsity to deliver a 2x reduction in model size and compute. Explore the launch blog to learn more about Sparse-Llama-3.1 and its potential for efficient, scalable AI deployments. You can also find all available models in our Neural Magic HuggingFace collection.

Looking to build on top of sparse models? Whether you aim to reduce deployment costs, improve inference performance, or create highly optimized versions for your enterprise needs, Sparse Llama provides the ideal foundation. These models offer state-of-the-art efficiency with 2:4 semi-structured sparsity, enabling cost-effective scaling without sacrificing accuracy. Connect with us to explore how we can help integrate sparsity into your AI workflows.

Sparse-Llama-3.1-8B-2of4

Model Overview

  • Model Architecture: Llama-3.1-8B
    • Input: Text
    • Output: Text
  • Model Optimizations:
    • Sparsity: 2:4
  • Release Date: 11/20/2024
  • Version: 1.0
  • License(s): llama3.1
  • Model Developers: Neural Magic

This is the 2:4 sparse version of Llama-3.1-8B. On the OpenLLM benchmark (version 1), it achieves an average score of 62.16, compared to 63.19 for the dense model—demonstrating a 98.37% accuracy recovery. On the Mosaic Eval Gauntlet benchmark (version v0.3), it achieves an average score of 53.85, versus 55.34 for the dense model—representing a 97.3% accuracy recovery.

Model Optimizations

This model was obtained by pruning all linear operators within transformer blocks to the 2:4 sparsity pattern: in each group of four weights, two are retained while two are pruned. In addition to pruning, the sparse model was trained with knowledge distillation for 13B tokens to recover the accuracy loss incurred by pruning. For pruning, we utilize an optimized version of SparseGPT through LLM-Compressor, and for sparse training with knowledge distillation we utilize the SquareHead approach.
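To make the 2:4 pattern concrete, here is a minimal NumPy sketch of magnitude-based 2:4 pruning. It is illustrative only: the released model was pruned with an optimized SparseGPT via LLM-Compressor, which selects weights using second-order information rather than raw magnitude, and the function name below is hypothetical.

```python
import numpy as np

def apply_2of4_sparsity(weights: np.ndarray) -> np.ndarray:
    """Zero the two smallest-magnitude weights in every group of four.

    Illustrative magnitude pruning only; assumes weights.size is a
    multiple of 4. SparseGPT makes this choice with second-order info.
    """
    w = weights.reshape(-1, 4)                     # view rows as groups of four
    drop = np.argsort(np.abs(w), axis=1)[:, :2]    # two smallest |w| per group
    mask = np.ones_like(w, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)   # mark dropped slots
    return np.where(mask, w, 0.0).reshape(weights.shape)

# An 8-element row becomes 2:4 sparse: two zeros in each group of four.
row = np.array([0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.3, -0.8])
print(apply_2of4_sparsity(row))
# -> [ 0.9  0.   0.4  0.  -0.7  0.   0.  -0.8]
```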

Deployment with vLLM

This model can be deployed efficiently using the vLLM backend, as shown in the sketch below. vLLM also supports OpenAI-compatible serving; see the documentation for more details.
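A minimal offline-inference sketch using vLLM's Python API, assuming a single-GPU setup; the prompt and sampling parameters are placeholders to adjust for your workload.

```python
from vllm import LLM, SamplingParams

# Placeholder prompt and sampling settings; tune for your use case.
prompts = ["The key advantage of 2:4 semi-structured sparsity is"]
sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)

# Load the sparse checkpoint; vLLM handles weight loading and batching.
llm = LLM(model="neuralmagic/Sparse-Llama-3.1-8B-2of4")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```

For OpenAI-compatible serving, recent vLLM releases also provide a `vllm serve neuralmagic/Sparse-Llama-3.1-8B-2of4` command; check the vLLM documentation for the exact invocation supported by your version.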

Evaluation

This model was evaluated on the OpenLLM benchmark (version 1) using the vLLM engine for faster inference, as well as on the Mosaic Eval Gauntlet benchmark (version v0.3). A reproduction sketch follows, and the results are summarized in the tables below.
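As a rough sketch of how such an evaluation might be reproduced with lm-evaluation-harness and its vLLM backend (the `openllm` task-group name and the exact arguments are assumptions; verify them against your installed harness version):

```python
import lm_eval

# Run the Open LLM Leaderboard v1 task group through the vLLM backend.
# The "openllm" group name is an assumption; confirm it exists in your
# installed lm-evaluation-harness version before relying on it.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=neuralmagic/Sparse-Llama-3.1-8B-2of4,dtype=auto",
    tasks=["openllm"],
    batch_size="auto",
)
print(results["results"])
```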

Accuracy

Open LLM Leaderboard evaluation scores

| Benchmark              | Llama-3.1-8B | Sparse-Llama-3.1-8B-2of4 |
|------------------------|--------------|--------------------------|
| ARC-C (25-shot)        | 58.2         | 59.4                     |
| MMLU (5-shot)          | 65.4         | 60.6                     |
| HellaSwag (10-shot)    | 82.3         | 79.8                     |
| WinoGrande (5-shot)    | 78.3         | 75.9                     |
| GSM8K (5-shot)         | 50.7         | 56.3                     |
| TruthfulQA (0-shot)    | 44.2         | 40.9                     |
| Average Score          | 63.19        | 62.16                    |
| Accuracy Recovery (%)  | 100          | 98.37                    |

Mosaic Eval Gauntlet evaluation scores

| Benchmark                | Llama-3.1-8B | Sparse-Llama-3.1-8B-2of4 |
|--------------------------|--------------|--------------------------|
| World Knowledge          | 59.4         | 55.6                     |
| Commonsense Reasoning    | 49.3         | 50.0                     |
| Language Understanding   | 69.8         | 69.0                     |
| Symbolic Problem Solving | 40.0         | 37.1                     |
| Reading Comprehension    | 58.2         | 57.5                     |
| Average Score            | 55.34        | 53.85                    |
| Accuracy Recovery (%)    | 100          | 97.3                     |