|
--- |
|
library_name: transformers |
|
license: llama3.2 |
|
base_model: |
|
- meta-llama/Llama-3.2-3B |
|
metrics: |
|
- perplexity |
|
- precision |
|
--- |
|
|
|
# Model Card for Pruned Llama-3.2-3B (20% MLP Pruning)
|
|
|
|
This model is a pruned version of the Llama-3.2-3B model, with a 20% parameter reduction in the MLP layers.
|
The pruning process aims to enhance computational efficiency while maintaining acceptable performance across specific tasks. |
|
This model is not intended to be used directly, but rather to be fine-tuned for specific tasks, where it can match or exceed the performance obtained by fine-tuning the base model on the same task.
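
The checkpoint loads like any other `transformers` causal LM. The snippet below is a minimal sketch; the repository id is a placeholder to replace with this model's actual Hub id, and `device_map="auto"` assumes `accelerate` is installed.

```python
# Minimal loading sketch; "oopere/<this-model-id>" is a placeholder for the
# actual Hugging Face repository id of this pruned checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "oopere/<this-model-id>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # requires accelerate; remove to load on CPU
)

inputs = tokenizer("Structured pruning reduces model size by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```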
|
|
|
|
|
## Model Details |
|
|
|
- **Model Type:** Pruned version of Llama-3.2 using structured pruning

- **Original Model:** meta-llama/Llama-3.2-3B
|
- **Pruning Method:** Structured pruning of MLP layers using importance scores based on absolute maximum weights |
|
- **Size Reduction:** 13.1% (from 3.21B to 2.79B parameters) |
|
- **Architecture:** Same as the original Llama architecture, but with reduced MLP layer sizes
|
- **Language(s):** Same as original model |
|
- **License:** Same as original model |
|
- **Developed by:** [Pere Martra](https://huggingface.co/oopere) |
|
|
|
These models are part of the study "[Exploring GLU Expansion Ratios: Structured Pruning in Llama-3.2 Models](https://doi.org/10.31219/osf.io/qgxea)", which explores structured pruning in GLU-based architectures using Llama-3.2 (1B and 3B variants). The pruning experiments target optimal expansion ratios to balance performance, computational efficiency, and environmental sustainability. The models were evaluated across multiple benchmarks, including BoolQ, ARC-Easy, and MUSR, and demonstrate significant efficiency gains while maintaining robust task performance.
|
|
|
|
|
### Performance on Standard Benchmarks |
|
|
|
| Benchmark | Original Model | Pruned Model | Relative Change | |
|
| ---- | ---- | ---- | ---- | |
|
| ARC-Easy | 65.19% | 58.54% | -10.2% | |
|
| BoolQ | 64.16% | 39.97% | -37.7% | |
|
| LAMBADA-OpenAI | 62.20% | 54.94% | -11.7% | |
|
| LAMBADA-Standard | 53.46% | 49.25% | -7.9% | |
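
These figures can be reproduced with EleutherAI's lm-evaluation-harness. The snippet below is a minimal sketch using the harness's Python API (`lm_eval.simple_evaluate`, v0.4+); the repository id is a placeholder, and the exact settings (harness version, few-shot counts, batch size) used for the table above may differ.

```python
# Hedged sketch of re-running the benchmarks with lm-evaluation-harness
# (pip install lm-eval); the model id is a placeholder.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=oopere/<this-model-id>,dtype=auto",
    tasks=["arc_easy", "boolq", "lambada_openai", "lambada_standard"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```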
|
|
|
### Key Findings |
|
- The pruned model shows a moderate degradation on reasoning tasks (ARC-Easy) but maintains reasonable performance relative to its size reduction. |
|
- Performance on binary classification tasks (BoolQ) is more significantly impacted, indicating limitations for such use cases. |
|
- For language completion tasks (LAMBADA), the model experiences mild to moderate degradation but remains usable for less demanding applications. |
|
|
|
### Limitations |
|
- Reduced performance on tasks requiring complex reasoning or classification: Tasks such as BoolQ see significant drops in accuracy. |
|
- Impacts on long-range comprehension: While less severe than for BoolQ, tasks like LAMBADA show noticeable degradation.
|
- Limited utility for high-accuracy applications: The pruned model is less suitable for scenarios demanding peak performance in understanding or generating complex language. |
|
|
|
### Implementation Details |
|
- **Pruning Notebook:** [Detailed implementation and methodology](https://github.com/peremartra/Large-Language-Model-Notebooks-Course/blob/main/6-PRUNING/6_3_pruning_structured_llama3.2-1b_OK.ipynb) |
|
- **GitHub Repository:** [LLM Course](https://github.com/peremartra/Large-Language-Model-Notebooks-Course) |
|
- **Article explaining pruning methodology:** [How to Prune LLaMA 3.2 and Similar Large Language Models](https://medium.com/towards-data-science/how-to-prune-llama-3-2-and-similar-large-language-models-cf18e9a2afb6?sk=af4c5e40e967437325050f019b3ae606) |
|
|
|
### Pruning Method |
|
- **Technique:** Structured pruning targeting MLP layers |
|
- **Pruning Ratio:** 20% of neurons removed from MLP layers |
|
- **Selection Criteria:** Importance scoring based on absolute maximum weights (illustrated in the sketch after this list)
|
- **Architecture Specifics:** Maintained GLU structure during pruning |
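
For readers who want the gist of the procedure without opening the notebook, the sketch below shows one plausible reading of the criterion: score each GLU neuron by its absolute-maximum weight in `gate_proj`/`up_proj`, keep the top 80%, and slice `down_proj` to match. Names and details are illustrative and may differ from the actual notebook implementation.

```python
# Illustrative sketch (not the notebook's exact code): prune 20% of GLU
# neurons from a Llama MLP block using an absolute-maximum-weight score.
import torch
import torch.nn as nn

def prune_glu_mlp(mlp, prune_ratio=0.20):
    # gate_proj / up_proj weights have shape [intermediate_size, hidden_size];
    # score each intermediate neuron by the largest absolute weight it carries
    # in either projection of the GLU pair.
    gate_score = mlp.gate_proj.weight.abs().max(dim=1).values
    up_score = mlp.up_proj.weight.abs().max(dim=1).values
    importance = torch.maximum(gate_score, up_score)

    n_keep = int(importance.numel() * (1.0 - prune_ratio))
    keep = torch.topk(importance, n_keep).indices.sort().values

    def sliced(linear, rows=None, cols=None):
        # Rebuild a Linear layer keeping only the selected rows/columns.
        w = linear.weight.data
        w = w[rows, :] if rows is not None else w
        w = w[:, cols] if cols is not None else w
        new = nn.Linear(w.shape[1], w.shape[0], bias=False,
                        dtype=w.dtype, device=w.device)
        new.weight.data.copy_(w)
        return new

    # Remove the same neurons from gate_proj and up_proj (output rows) and the
    # matching input columns from down_proj, so the GLU pairing stays aligned.
    mlp.gate_proj = sliced(mlp.gate_proj, rows=keep)
    mlp.up_proj = sliced(mlp.up_proj, rows=keep)
    mlp.down_proj = sliced(mlp.down_proj, cols=keep)
    return n_keep  # the config's intermediate_size must be updated to this value
```

Applied to every decoder layer, a 20% cut of the MLP intermediate dimension accounts for the roughly 13% overall parameter reduction reported above, since embedding and attention weights are left untouched.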
|
|
|
### Hardware Requirements |
|
- Reduced memory footprint compared to the original model

- Can run on hardware with ~15% less memory than the original model
|
|
|
## Acknowledgments |
|
- Thanks to [Mariusz Kurman](https://huggingface.co/mkurman) for creating [llama-pruning](https://github.com/MedITSolutionsKurman/llama-pruning), a library that extends and improves this pruning methodology.