---
license: apache-2.0
datasets:
- AuriAetherwiing/Allura
- kalomaze/Opus_Instruct_25k
base_model:
- AuriAetherwiing/Yi-1.5-9B-32K-tokfix
---
**EVA Yi 1.5 9B v1**
A RP/storywriting focused model, a full-parameter finetune of Yi-1.5-9B-32K on a mixture of synthetic and natural data.
A continuation of nothingiisreal's Celeste 1.x series, made to improve stability and versatility without losing the unique, diverse writing style of Celeste.
Prompt format is ChatML; a minimal usage sketch is shown after the preset links below.
Recommended sampler values:
- Temperature: 1
- Min-P: 0.05
Recommended SillyTavern presets (via CalamitousFelicitousness):
- [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
- [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)
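For reference, here is a minimal generation sketch using the ChatML template and the recommended sampler values. It assumes a recent `transformers` version with `min_p` support; the example conversation is purely illustrative, and SillyTavern users can simply import the presets above instead.

```python
# Minimal sketch: ChatML prompting with the recommended samplers
# (temperature 1.0, min_p 0.05). Example messages are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# apply_chat_template renders the conversation with the ChatML
# <|im_start|>/<|im_end|> markers defined in the tokenizer config.
messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Describe a rainy evening in a coastal town."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.0,
    min_p=0.05,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```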
Training data:
- Celeste 70B 0.1 data mixture minus the Opus Instruct subset; see that model's card for details.
- Kalomaze's Opus_Instruct_25k dataset, filtered for refusals (a rough filtering sketch is shown below).
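As a rough illustration of the refusal-filtering step, the sketch below drops samples whose text contains common refusal phrases. The phrase list and the way each sample is flattened to text are assumptions, not the exact pipeline used for this model.

```python
# Hypothetical refusal-filtering sketch using the datasets library.
# REFUSAL_MARKERS and the field handling are assumptions; the actual
# filtering used for this model may differ.
from datasets import load_dataset

REFUSAL_MARKERS = [
    "i'm sorry, but i can't",
    "i cannot assist with",
    "as an ai language model",
]

def has_refusal(example):
    # Flatten all fields of the sample into one lowercase string.
    text = " ".join(str(v) for v in example.values()).lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

dataset = load_dataset("kalomaze/Opus_Instruct_25k", split="train")
filtered = dataset.filter(lambda ex: not has_refusal(ex))
print(f"Kept {len(filtered)} of {len(dataset)} samples")
```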
Hardware used:
Model was trained by Kearm and Auri.
Special thanks:
- to Lemmy, Gryphe, Kalomaze and Nopm for the data
- to ALK, Fizz and CalamitousFelicitousness for the Yi tokenizer fix
- and to InfermaticAI's community for their continued support of our endeavors