---
license: other
language:
- en
---
[EXL2](https://github.com/turboderp/exllamav2/tree/master#exllamav2) quantization of [Putri's Megamix-A1](https://huggingface.co/gradientputri/Megamix-A1-13B).

GGUF quants from [Sao10K](https://huggingface.co/Sao10K) are available here: [MegaMix-L2-13B-GGUF](https://huggingface.co/Sao10K/MegaMix-L2-13B-GGUF)
## Model details
Quantized at 5.33 bits per weight (bpw).
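
For reference, below is a minimal loading sketch using the exllamav2 Python API, adapted from the upstream exllamav2 examples. The model path and sampler values are illustrative placeholders; adjust them for your setup.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point model_dir at your local copy of this repo (placeholder path).
config = ExLlamaV2Config()
config.model_dir = "/models/Megamix-A1-13B-exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split the weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Example sampler settings; tune to taste.
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

prompt = "### Instruction:\nSay hello.\n\n### Response:\n"
output = generator.generate_simple(prompt, settings, num_tokens=128)
print(output)
```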
## Prompt Format
I'm using the Alpaca format:
```
### Instruction:
{instruction}

### Response:
```
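
As a convenience, here is a tiny helper (hypothetical, not shipped with the model) that wraps a user message in the template above:

```python
def make_alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca template used by this model."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

# Usage example
prompt = make_alpaca_prompt("Summarize what EXL2 quantization is in one sentence.")
```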