---
base_model:
- mistralai/Mistral-Small-Instruct-2409
- TheDrummer/Cydonia-22B-v1.1
library_name: transformers
tags:
- mergekit
- merge
license: other
---
"A balancing act between smart do-gooders and creative evil-doers."
The Drummer turns into a Joshi Youchien
This is a merge of pre-trained language models created using mergekit.
GGUF quants: [knifeayumu/Lite-Cydonia-22B-v1.1-50-50-GGUF](https://huggingface.co/knifeayumu/Lite-Cydonia-22B-v1.1-50-50-GGUF)
## Inspiration
I thought both the BeaverAI/Cydonia-22B-v1f-GGUF and BeaverAI/Cydonia-22B-v1e-GGUF versions were a bit too evil. Their sense of morality was too screwed up, and they were quite deterministic (swipes didn't offer much variety) compared to the base model. Then an idea popped into my mind: why not merge it back with the base model? That way it could regain a sense of "good" again, at least a little, and maybe this would also fix some of the deterministic generations.
Quick testing shows... it works? Zero-shot evil Q&A no longer works, but with a bit of persuasion it does answer. Compared to knifeayumu/Lite-Cydonia-22B-v1.1-75-25, this merge is more censored but also smarter.
Credits to TheDrummer and BeaverAI, who make such finetunes. "Lightly decensored" is a heavy understatement in this case.
## Merge Details
### Merge Method
This model was merged using the task arithmetic merge method, with mistralai/Mistral-Small-Instruct-2409 as the base.
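For intuition, task arithmetic builds each fine-tune's "task vector" (its parameter delta from the base) and adds a weighted sum of those deltas back onto the base weights. Below is a minimal sketch of that idea in plain PyTorch over state dicts; it is illustrative only, not mergekit's actual implementation, and the function name is made up.

```python
import torch


def task_arithmetic_merge(base: dict, finetunes: list[dict], weights: list[float]) -> dict:
    """Return base + sum_i w_i * (finetune_i - base), parameter by parameter."""
    merged = {}
    for name, base_param in base.items():
        delta = torch.zeros_like(base_param)
        for state_dict, w in zip(finetunes, weights):
            delta += w * (state_dict[name] - base_param)  # weighted task vector
        merged[name] = base_param + delta
    return merged
```

Note that the base model's task vector against itself is zero, so with the 0.5/0.5 weights in the configuration below the result effectively reduces to an even blend of Mistral-Small-Instruct-2409 and Cydonia-22B-v1.1 (presumably the "50-50" in the name).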
### Models Merged
The following models were included in the merge:
- TheDrummer/Cydonia-22B-v1.1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: mistralai/Mistral-Small-Instruct-2409
    parameters:
      weight: 0.5
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: 0.5
merge_method: task_arithmetic
base_model: mistralai/Mistral-Small-Instruct-2409
dtype: float16
```
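As a usage sketch, the merged weights should load like any other transformers causal LM. The repo id below is inferred from the GGUF repository named above and may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Inferred repo id; adjust if the merged weights live elsewhere.
repo_id = "knifeayumu/Lite-Cydonia-22B-v1.1-50-50"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16)

prompt = "[INST] Write a short scene set on Mars. [/INST]"  # Mistral instruct format
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```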