QuantFactory/magnaraxy-9b-GGUF

This is a quantized version of lodrick-the-lafted/magnaraxy-9b, created using llama.cpp.
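If it helps, here is a minimal sketch of fetching one of the GGUF files with huggingface_hub; the exact filename is an assumption, so check the repo's file listing for the quantization you want.

```python
# Minimal sketch: download one GGUF file from the repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="QuantFactory/magnaraxy-9b-GGUF",
    filename="magnaraxy-9b.Q4_K_M.gguf",  # hypothetical filename -- check the repo's file list
)
print(path)  # local path to the downloaded model file
```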

Original Model Card

A 50/50 slerp merge of anthracite-org/magnum-v3-9b-customgemma2 with lemon07r/Gemma-2-Ataraxy-9B.

I like the result; maybe you will too. Anthracite trained magnum-v3-9b-customgemma2 with system prompts, and they work here too:

```
<start_of_turn>system
Pretend to be a cat.<end_of_turn>
<start_of_turn>user
Hi there!<end_of_turn>
<start_of_turn>model
nya!<end_of_turn>
<start_of_turn>user
Do you want some nip?<end_of_turn>
<start_of_turn>model
```
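For reference, a minimal sketch of feeding this template to the GGUF with llama-cpp-python; the model filename and generation settings are assumptions.

```python
# Minimal sketch: run the Gemma-style turn template above with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="magnaraxy-9b.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,
)

# Build the prompt exactly as in the template, ending with an open model turn.
prompt = (
    "<start_of_turn>system\n"
    "Pretend to be a cat.<end_of_turn>\n"
    "<start_of_turn>user\n"
    "Hi there!<end_of_turn>\n"
    "<start_of_turn>model\n"
)

# Stop at the end-of-turn marker so generation ends with the model's reply.
out = llm(prompt, max_tokens=128, stop=["<end_of_turn>"])
print(out["choices"][0]["text"])
```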
Format: GGUF
Model size: 9.24B params
Architecture: gemma2
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
