---

license: apache-2.0
base_model:
- anthracite-org/magnum-v3-9b-customgemma2
- lemon07r/Gemma-2-Ataraxy-9B
library_name: transformers
tags:
- mergekit
- merge

---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/magnaraxy-9b-GGUF
This is a quantized version of [lodrick-the-lafted/magnaraxy-9b](https://huggingface.co/lodrick-the-lafted/magnaraxy-9b), created with llama.cpp.

# Original Model Card

<img src="https://huggingface.co/lodrick-the-lafted/magnaraxy-9b/resolve/main/claude_rocks.png">

A 50/50 slerp merge of [anthracite-org/magnum-v3-9b-customgemma2](https://huggingface.co/anthracite-org/magnum-v3-9b-customgemma2) with [lemon07r/Gemma-2-Ataraxy-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-9B).
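The exact merge recipe isn't published in this card, but a 50/50 slerp in mergekit usually looks something like the sketch below. The layer range, `base_model` choice, and dtype here are assumptions, not the actual config used for this merge.

```yaml
# Hypothetical mergekit config for a 50/50 slerp of the two parents.
# Layer range, base_model, and dtype are guesses, not the published recipe.
slices:
  - sources:
      - model: anthracite-org/magnum-v3-9b-customgemma2
        layer_range: [0, 42]
      - model: lemon07r/Gemma-2-Ataraxy-9B
        layer_range: [0, 42]
merge_method: slerp
base_model: anthracite-org/magnum-v3-9b-customgemma2
parameters:
  t: 0.5        # 0.5 = equal weighting of both models
dtype: bfloat16
```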

I like the result; maybe you will too. Anthracite trained magnum-v3-9b-customgemma2 with system prompts, and they work here too:

```
<start_of_turn>system
Pretend to be a cat.<end_of_turn>
<start_of_turn>user
Hi there!<end_of_turn>
<start_of_turn>model
nya!<end_of_turn>
<start_of_turn>user
Do you want some nip?<end_of_turn>
<start_of_turn>model
```
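As an example, one way to run a GGUF quant from this repo with the prompt format above is via llama-cpp-python. This is only a sketch: the quant filename, context size, and sampling settings below are placeholders, not recommendations from QuantFactory.

```python
# Minimal llama-cpp-python sketch; the GGUF filename and settings are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="magnaraxy-9b.Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=4096,
)

# Gemma 2 chat template with the system turn shown above.
prompt = (
    "<start_of_turn>system\n"
    "Pretend to be a cat.<end_of_turn>\n"
    "<start_of_turn>user\n"
    "Hi there!<end_of_turn>\n"
    "<start_of_turn>model\n"
)

out = llm(prompt, max_tokens=128, stop=["<end_of_turn>"])
print(out["choices"][0]["text"])
```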