license: other
license_name: other
license_link: LICENSE
language:
- en
quantized_by: TheMelonGod
pipeline_tag: text-generation
tags:
- quantized
- safetensors
- exllamav2
- mistral
- GRPO
base_model:
- Nitral-AI/Captain-Eris-BMO_Violent-GRPO-v0.420
base_model_relation: quantized
Original Model by: Nitral-AI
Original Model: Captain-Eris-BMO_Violent-GRPO-v0.420
For more information about the model, I highly recommend checking out the original model page, and the creator's other work while you're at it.
ExLlamaV2 Quantizations (hb = head bits, the precision used for the output head layer):
8.0bpw: 8hb | 6hb
7.5bpw: 8hb | 6hb
7.0bpw: 8hb | 6hb
6.5bpw: 8hb | 6hb
6.0bpw: 8hb | 6hb
5.5bpw: 8hb | 6hb
5.0bpw: 8hb | 6hb
4.5bpw: 8hb | 6hb
4.25bpw: 8hb | 6hb
4.0bpw: 8hb | 6hb
3.75bpw: 8hb | 6hb
3.5bpw: 8hb | 6hb
3.0bpw: 8hb | 6hb
2.75bpw: 8hb | 6hb
2.5bpw: 8hb | 6hb
2.25bpw: 8hb | 6hb
2.0bpw: 8hb | 6hb
Measurement File (the default, built-in calibration dataset was used)
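When picking a bits-per-weight (bpw) option from the list above, a quick back-of-the-envelope size estimate can help match a quant to your available VRAM: quantized weight size is roughly parameters × bpw ÷ 8. The sketch below is a minimal illustration of that arithmetic; the 12.2B parameter count is an assumption for example purposes only, not a claim about this model, and the estimate ignores cache, activations, and format overhead.

```python
# Rough EXL2 quant size estimate: parameters x bits-per-weight / 8 bytes.
# NOTE: n_params=12.2e9 below is a placeholder assumption; substitute the
# actual parameter count of the model you are downloading.
def est_quant_size_gb(n_params: float, bpw: float) -> float:
    """Approximate quantized weight size in GB (weights only, no overhead)."""
    return n_params * bpw / 8 / 1e9

for bpw in (8.0, 6.0, 4.0, 2.0):
    print(f"{bpw:.2f} bpw -> ~{est_quant_size_gb(12.2e9, bpw):.1f} GB of weights")
```

Remember to leave headroom beyond the weight size for the KV cache and context length you plan to run.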
If you need a specific model quantized or a particular bits-per-weight setting, please let me know. I'm happy to help.
Your feedback and suggestions are always welcome! They help me improve and make quantizations better for everyone.
Special thanks to turboderp for developing the tools that made these quantizations possible. Your contributions are greatly appreciated!