Dans-PersonalityEngine-V1.2.0-24b
This model series is intended to be multifarious in its capabilities: it should be quite capable at both co-writing and roleplay, and it should also find itself at home performing sentiment analysis or summarization as part of a pipeline.
It has been trained on a wide array of one-shot instructions, multi-turn instructions, tool use, roleplaying scenarios, text adventure games, co-writing, and much more.
Key Details
BASE MODEL: mistralai/Mistral-Small-24B-Base-2501
LICENSE: apache-2.0
LANGUAGE: English
CONTEXT LENGTH: 32768 tokens

Recommended Settings
TEMPERATURE: 1.0
TOP_P: 0.95
MIN_P: 0.05
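As a minimal sketch, assuming a recent version of Hugging Face transformers that supports min_p sampling (the prompt text is illustrative), these settings can be passed directly to generate():

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PocketDoc/Dans-PersonalityEngine-V1.2.0-24b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# The tokenizer normally prepends the BOS token here; see the note on BOS handling below.
inputs = tokenizer("Write a short scene set in a rainy harbor town.", return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.0,   # recommended temperature
    top_p=0.95,        # recommended top_p
    min_p=0.05,        # recommended min_p
    max_new_tokens=512,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```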
Prompting Format
The model uses standard "ChatML" format:
<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>

A word of caution: as of Feb 19, 2025, backends can't seem to agree on automatically adding a "bos" token to the start of the context, which they should! I'm investigating whether there is a way I can change the config to mitigate this, but for now, if you get incoherent outputs not typical of a 24b model (verbatim repeating what you said back to you, for instance), try adding "<s>" to the very beginning of your context.
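As a minimal sketch of both the format and the BOS workaround, assuming a text-completion backend where you assemble the context yourself (the helper function and messages below are illustrative, not part of the model's tooling):

```python
BOS = "<s>"  # drop this if your backend already prepends the BOS token

def build_chatml(system: str, user: str) -> str:
    """Assemble a ChatML-formatted context string, ending with an open assistant turn."""
    return (
        BOS
        + f"<|im_start|>system\n{system}<|im_end|>\n"
        + f"<|im_start|>user\n{user}<|im_end|>\n"
        + "<|im_start|>assistant\n"
    )

prompt = build_chatml("You are a helpful assistant.", "Hi there!")
# Send `prompt` to the completion endpoint and stop generation on "<|im_end|>".
```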
SillyTavern Templates
Context Template
{ "story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n", "example_separator": "", "chat_start": "", "use_stop_strings": false, "allow_jailbreak": false, "always_force_name2": false, "trim_sentences": false, "include_newline": false, "single_line": false, "name": "Dan-ChatML" }
Instruct Template
{ "system_prompt": "Write {{char}}'s actions and dialogue, user will write {{user}}'s.", "input_sequence": "<|im_start|>user\n", "output_sequence": "<|im_start|>assistant\n", "first_output_sequence": "", "last_output_sequence": "", "system_sequence_prefix": "", "system_sequence_suffix": "", "stop_sequence": "<|im_end|>", "wrap": false, "macro": true, "names": false, "names_force_groups": false, "activation_regex": "", "skip_examples": false, "output_suffix": "<|im_end|>\n", "input_suffix": "<|im_end|>\n", "system_sequence": "<|im_start|>system\n", "system_suffix": "<|im_end|>\n", "user_alignment_message": "", "last_system_sequence": "", "system_same_as_user": false, "first_input_sequence": "", "last_input_sequence": "", "name": "Dan-ChatML" }
A Chub.AI Sponsored Model
Character Hub supported this model with 65 hours on a 4x H200 144GB system. This is only some of what they've provided me for training, and I am very grateful for their contributions; this model especially would have been difficult without them.
Character Hub has been supporting model development for quite a while now, and they may be interested in your projects! Contact them through this Google form.
Support Development
Development is limited by funding and resources. To help support:
- Contact on HF
- Email: [email protected]