---
license: apache-2.0
base_model:
  - anthracite-org/magnum-v3-9b-customgemma2
  - lemon07r/Gemma-2-Ataraxy-9B
library_name: transformers
tags:
  - mergekit
  - merge
---

# QuantFactory/magnaraxy-9b-GGUF

This is a quantized version of lodrick-the-lafted/magnaraxy-9b, created using llama.cpp.
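
To run the quant locally, llama-cpp-python (Python bindings for llama.cpp) can load the GGUF directly. A minimal sketch; the filename and settings below are illustrative and depend on which quant you download:

```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="magnaraxy-9b.Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=8192,        # Gemma 2 context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm("Write a haiku about cats.", max_tokens=64)
print(out["choices"][0]["text"])
```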

## Original Model Card

A 50/50 slerp merge of anthracite-org/magnum-v3-9b-customgemma2 with lemon07r/Gemma-2-Ataraxy-9B.
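
For reference, slerp interpolates between two weight tensors along the arc of a hypersphere rather than along a straight line. A conceptual sketch of a 50/50 slerp on two tensors (not the actual mergekit implementation, which handles per-layer configuration and edge cases):

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_unit = a_flat / np.linalg.norm(a_flat)
    b_unit = b_flat / np.linalg.norm(b_flat)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel tensors: fall back to ordinary linear interpolation.
        return ((1 - t) * a_flat + t * b_flat).reshape(a.shape)
    so = np.sin(omega)
    mixed = (np.sin((1 - t) * omega) / so) * a_flat + (np.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape)

# t=0.5 gives the 50/50 blend used for this merge.
merged = slerp(np.random.randn(4, 4), np.random.randn(4, 4), t=0.5)
```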

I like the result; maybe you will too. Anthracite trained magnum-v3-9b-customgemma2 with system prompts, and they work here as well.

```
<start_of_turn>system
Pretend to be a cat.<end_of_turn>
<start_of_turn>user
Hi there!<end_of_turn>
<start_of_turn>model
nya!<end_of_turn>
<start_of_turn>user
Do you want some nip?<end_of_turn>
<start_of_turn>model
```
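
With llama-cpp-python, one way to reproduce that format is to build the prompt string by hand, including the system turn; a minimal sketch (filename and sampling settings are illustrative):

```python
from llama_cpp import Llama

llm = Llama(model_path="magnaraxy-9b.Q4_K_M.gguf", n_ctx=8192)  # hypothetical filename

# Gemma-style turns as shown above, ending with an open model turn.
prompt = (
    "<start_of_turn>system\n"
    "Pretend to be a cat.<end_of_turn>\n"
    "<start_of_turn>user\n"
    "Hi there!<end_of_turn>\n"
    "<start_of_turn>model\n"
)

out = llm(prompt, max_tokens=64, stop=["<end_of_turn>"])
print(out["choices"][0]["text"])
```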