---
language:
- en
library_name: transformers
tags:
- autoround
license: apache-2.0
base_model:
- Qwen/QwQ-32B
---
## Model Details
This is [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) quantized to 3-bit with [AutoRound](https://github.com/intel/auto-round/tree/main) (symmetric quantization) and serialized in the GPTQ format. The model has been created, tested, and evaluated by The Kaitchup.
The model is compatible with vLLM and Transformers.
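For example, the model can be loaded like any other GPTQ checkpoint with Transformers. The snippet below is a minimal sketch: the repository id is a placeholder and the prompt is arbitrary.
```python
# Minimal loading sketch with Transformers (a GPTQ backend such as gptqmodel
# must be installed). The repository id below is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-model-repo-id>"  # placeholder: replace with this model's repository id
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "How many r's are in the word 'strawberry'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```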

Details on the quantization process and on how to use the model are available here: [The Kaitchup](https://kaitchup.substack.com/). A rough sketch of the quantization call follows the list below.
- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
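For reference, a quantization recipe along these lines (AutoRound, symmetric, 3-bit, exported to the GPTQ format) would look roughly like the sketch below. The group size, calibration defaults, and output directory name are assumptions, not the exact settings used for this model.
```python
# Hedged sketch: 3-bit symmetric AutoRound quantization exported to GPTQ.
# group_size=128 and the default calibration settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "Qwen/QwQ-32B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Symmetric 3-bit quantization with AutoRound
autoround = AutoRound(model, tokenizer, bits=3, group_size=128, sym=True)
autoround.quantize()

# Serialize the quantized weights in the GPTQ format (example output directory)
autoround.save_quantized("QwQ-32B-AutoRound-GPTQ-3bit", format="auto_gptq")
```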
## How to Support My Work
Subscribe to [The Kaitchup](https://kaitchup.substack.com/subscribe). Your subscription helps me continue quantizing and evaluating models for free.