This is a two-stage fine-tune of Mistral Small 24B Base 2501.

Stage 1 (ToastyPigeon/ms3-base-roselily) was shoving ~30M tokens of human-written story content into it using completion training; that's about half of my WIP Roselily dataset (~60M tokens total).

Stage 2 (this model) was teaching it instruct, using a mix of public instruction-following data and a private instruct dataset from ZeusLabs.

This model should (in theory) accept any of the following instruct formats; a minimal prompt-building sketch follows the listings:

Tekken v7

[SYSTEM_PROMPT]{system prompt}[/SYSTEM_PROMPT][INST]{user message}[/INST]{assistant response}</s>

ChatML

<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
{assistant response}<|im_end|>

Fizzpaca

### System:
{system prompt}

### Instruction:
{user message}

### Response:
{assistant response}</s>
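
For illustration, here's a minimal sketch of building a single-turn prompt in each format by hand, using plain string formatting. The helper names are mine; the templates themselves come straight from the listings above.

```python
def tekken_v7(system: str, user: str) -> str:
    # Tekken v7: system prompt in [SYSTEM_PROMPT] tags, user turn in [INST] tags.
    # The model generates the assistant reply after [/INST] and ends it with </s>.
    return f"[SYSTEM_PROMPT]{system}[/SYSTEM_PROMPT][INST]{user}[/INST]"

def chatml(system: str, user: str) -> str:
    # ChatML: each turn is wrapped in <|im_start|>{role} ... <|im_end|>.
    # The prompt ends with an open assistant turn for the model to complete.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def fizzpaca(system: str, user: str) -> str:
    # Fizzpaca: Alpaca-style headers with a separate System section.
    return f"### System:\n{system}\n\n### Instruction:\n{user}\n\n### Response:\n"

prompt = chatml("You are a helpful writing assistant.", "Write an opening line.")
```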

The Tekken tokens were already in the tokenizer. Unused special tokens #20 and #21 were repurposed for the ChatML tokens. Fizzpaca required no new tokens.
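
A quick way to sanity-check that setup (a sketch, assuming the transformers library and the repo id from this card): if the repurposing worked, each marker should encode to a single token id rather than being split into pieces.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("ToastyPigeon/ms3-roselily-instruct")

# Each marker should come back as one id if it's a real special token.
for marker in ("<|im_start|>", "<|im_end|>", "</s>"):
    ids = tok.encode(marker, add_special_tokens=False)
    print(marker, "->", ids)
```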

You may need to add both </s> and <|im_end|> as stop tokens for it to work properly with all formats.
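
With transformers, for example, you can pass both token ids as eos_token_id so generation stops on whichever appears first. This is a minimal sketch, assuming the repo id from this card; other backends have their own stop-string settings.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ToastyPigeon/ms3-roselily-instruct"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Stop on either </s> (Tekken/Fizzpaca) or <|im_end|> (ChatML).
stop_ids = [
    tok.convert_tokens_to_ids("</s>"),
    tok.convert_tokens_to_ids("<|im_end|>"),
]

prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, eos_token_id=stop_ids)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```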

Model size: 23.6B params (Safetensors, FP16)