---
language: en
license: apache-2.0
library_name: transformers
---
# SQFT Base Model: sqft-mistral-7b-v0.3-50-base
- Source Model: [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3)
- Sparse Method: [Wanda](https://github.com/locuslab/wanda)
- Sparsity: 50%
- Quantization: No
## Model Sources
- **Repository:** [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT)
- **Paper:**
  - [SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models](https://arxiv.org/abs/2410.03750)
  - [Low-Rank Adapters Meet Neural Architecture Search for LLM Compression](https://arxiv.org/abs/2501.16372)
## How to get this model
Refer to the command in [SQFT/legacy/run_command/mistral-7b-v0.3/sparse_quantization.sh#11](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/SQFT/legacy/run_command/mistral-7b-v0.3/sparse_quantization.sh#11).
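Once obtained, the checkpoint loads like any `transformers` causal LM, since the pruned weights are simply stored as zeros in a dense layout. Below is a minimal sketch; the repo id is assumed from the model name on this card, and `device_map="auto"` additionally requires `accelerate`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the model name on this card.
model_id = "IntelLabs/sqft-mistral-7b-v0.3-50-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The 50% of weights removed by Wanda are stored as zeros, so the model
# loads and runs exactly like the dense Mistral-7B-v0.3 base model.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

inputs = tokenizer("Sparsity in large language models", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```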
## Citation
```bibtex
@inproceedings{munoz-etal-2024-sqft,
    title = "{SQFT}: Low-cost Model Adaptation in Low-precision Sparse Foundation Models",
    author = "Munoz, Juan Pablo and
      Yuan, Jinjie and
      Jain, Nilesh",
    editor = "Al-Onaizan, Yaser and
      Bansal, Mohit and
      Chen, Yun-Nung",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.749",
    pages = "12817--12832",
}
```
## Acknowledgement
Thanks to the authors of Wanda ([paper](https://arxiv.org/abs/2306.11695), [code](https://github.com/locuslab/wanda)) for their simple yet effective pruning approach.
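For intuition, Wanda scores each weight by its magnitude times the L2 norm of the corresponding input channel's activations (gathered from a small calibration set) and zeros the lowest-scoring 50% within each output row. A minimal sketch of that criterion follows; the function and variable names are illustrative and this is not SQFT's actual pruning pipeline:
```python
import torch

def wanda_prune(weight: torch.Tensor, act_norm: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Zero out the lowest-scoring weights per output row.

    weight:   (out_features, in_features) linear-layer weight
    act_norm: (in_features,) L2 norm of each input channel over calibration data
    """
    # Wanda importance score: |W_ij| * ||X_j||_2
    score = weight.abs() * act_norm
    # Within each output row, mask the `sparsity` fraction with the smallest scores.
    k = int(weight.shape[1] * sparsity)
    _, low_idx = torch.topk(score, k, dim=1, largest=False)
    mask = torch.ones_like(weight, dtype=torch.bool)
    mask.scatter_(1, low_idx, False)
    return weight * mask

# Toy usage: a 4x8 layer pruned to 50% unstructured sparsity per row.
w = torch.randn(4, 8)
norms = torch.rand(8)
print(wanda_prune(w, norms, sparsity=0.5))
```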
## License
Apache-2.0