A LoRA trained in 4-bit with an 8k context length, using mistralai/Mistral-Nemo-Base-2407 as the base model, for 1 epoch.
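For reference, a minimal sketch of what such a 4-bit QLoRA setup looks like with Unsloth. The LoRA rank, alpha, and target modules below are assumptions (common Unsloth defaults), not the actual training settings:

```python
# Sketch of a 4-bit LoRA setup with Unsloth; r, lora_alpha and
# target_modules are assumed values, not the ones used for this model.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-nemo-base-2407-bnb-4bit",
    max_seq_length=8192,   # 8k context
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                  # assumed LoRA rank
    lora_alpha=16,         # assumed scaling factor
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)
```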

The dataset used is mpasila/LimaRP-PIPPA-freedom-rp-Mix-8K, which was made by combining grimulkan/LimaRP-augmented, KaraKaraWitch/PIPPA-ShareGPT-formatted and openerotica/freedom-rp.

Merged model: mpasila/Mistral-freeLiPPA-12B
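The merged model above can be loaded like any regular model. To use this adapter directly instead, it can be applied on top of the 4-bit base with PEFT, roughly like this sketch:

```python
# Sketch: load the base model in 4-bit and stack this LoRA adapter with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_4bit=True)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Nemo-Base-2407",
    quantization_config=bnb,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "mpasila/Mistral-freeLiPPA-LoRA-12B")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-Nemo-Base-2407")
```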

Prompt format: ChatML

The prompt format was changed to ChatML, since using the Llama 3 Instruct template on a Mistral model could be confusing.
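For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` tokens:

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
{assistant reply}<|im_end|>
```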

Uploaded model

  • Developed by: mpasila
  • License: apache-2.0
  • Finetuned from model: unsloth/mistral-nemo-base-2407-bnb-4bit

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
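Roughly, that pairing means Unsloth's patched model feeds into TRL's SFTTrainer. A sketch continuing from the loading example above; the batch sizes, learning rate, and "text" column name are assumptions, and note that newer TRL versions move max_seq_length onto SFTConfig:

```python
# Sketch: 1-epoch SFT run with TRL; all hyperparameters here are assumed.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("mpasila/LimaRP-PIPPA-freedom-rp-Mix-8K", split="train")

trainer = SFTTrainer(
    model=model,                      # from the Unsloth sketch above
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",        # assumed column name
    max_seq_length=8192,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,           # matches the 1 epoch stated above
        learning_rate=2e-4,
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```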
