---
license: other
language:
- en
---
[EXL2](https://github.com/turboderp/exllamav2/tree/master#exllamav2) quantization of [Putri's Megamix-A1](https://huggingface.co/gradientputri/Megamix-A1-13B).

GGUF quants from [Sao10K](https://huggingface.co/Sao10K) are available here: [MegaMix-L2-13B-GGUF](https://huggingface.co/Sao10K/MegaMix-L2-13B-GGUF)
## Model details | |
Quantized at 5.33 bpw.
## Prompt Format | |
I'm using the Alpaca prompt format:
```
### Instruction:
{instruction}

### Response:
```
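For reference, a minimal sketch of assembling a prompt in this format (the helper name and the blank line between sections follow the common Alpaca convention and are illustrative, not part of this repo):

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca prompt template."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

# The model's completion is then generated after the "### Response:" header.
prompt = build_alpaca_prompt("Summarize the plot of Hamlet in one sentence.")
print(prompt)
```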