---
license: apache-2.0
language:
- en
---
This is a 2-bit quantization of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) using [QuIP#](https://cornell-relaxml.github.io/quip-sharp/).
## Model loading
Please follow the instructions in [QuIP-for-all](https://github.com/chu-tianxiang/QuIP-for-all) for usage.
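
For orientation, here is a minimal sketch of what loading could look like, assuming the checkpoint is loadable through transformers' `trust_remote_code` mechanism (the QuIP-for-all README is the authoritative reference; the repo id below is a placeholder):

```python
# A minimal sketch, assuming this checkpoint loads via transformers
# with trust_remote_code enabled. Check the QuIP-for-all README for
# the exact, supported entry point.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-repo"  # placeholder: replace with this model's repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # assumed requirement for the custom QuIP# layers
)

prompt = "[INST] Explain 2-bit quantization briefly. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```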
As an alternative, you can use the [vLLM branch](https://github.com/chu-tianxiang/vllm-gptq/tree/quip_gemv) for faster inference. QuIP# has to launch roughly five kernels for each linear layer, so vLLM's CUDA graph support is very helpful for reducing kernel-launch overhead. If you have problems installing fast-hadamard-transform from pip, you can also install it from [source](https://github.com/Dao-AILab/fast-hadamard-transform).
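
A rough sketch of offline inference with that branch, using vLLM's standard `LLM`/`SamplingParams` API; whether the fork needs an explicit `quantization` argument (and its exact name) is an assumption, so defer to the branch's README:

```python
# A minimal sketch using vLLM's offline inference API. The
# quantization argument below is assumed; the quip_gemv branch
# may detect the quant method from the checkpoint instead.
from vllm import LLM, SamplingParams

llm = LLM(
    model="path/to/this-repo",  # placeholder: replace with this model's repo id
    quantization="quip",        # assumed name of the quant method in the fork
    dtype="float16",
)
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["[INST] Hello! [/INST]"], params)
print(outputs[0].outputs[0].text)
```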
## Perplexity
Measured on WikiText with a context length of 4096.
| fp16 | 2-bit |
| ------- | ------- |
| 3.8825 | 5.2799 |
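
The exact evaluation script is not specified here; one common way to reproduce this kind of measurement is a chunked negative-log-likelihood pass over the test split, as in this sketch (dataset config and striding details are assumptions):

```python
# A minimal sketch of chunked perplexity evaluation; the exact setup
# behind the table above may differ (dataset split, striding, etc.).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-repo"  # placeholder: replace with this model's repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", trust_remote_code=True
)

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
input_ids = tokenizer(text, return_tensors="pt").input_ids

ctx = 4096  # context length used in the table above
nlls = []
for begin in range(0, input_ids.size(1) - 1, ctx):
    ids = input_ids[:, begin : begin + ctx].to(model.device)
    with torch.no_grad():
        # labels == input_ids makes the model return the mean NLL per token
        loss = model(ids, labels=ids).loss
    nlls.append(loss * ids.size(1))  # back to a total NLL for this chunk
ppl = torch.exp(torch.stack(nlls).sum() / input_ids.size(1))
print(f"perplexity: {ppl.item():.4f}")
```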
## Speed
Measured with the `examples/benchmark_latency.py` script from the vLLM repo.
At batch size 1, it generates at 16.3 tokens/s on a single RTX 3090.