---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: alpindale/Mistral-7B-v0.2
---

# Mistral-7B-v0.2-OpenHermes
SFT Training Params (a configuration sketch follows the list):
- Learning Rate: 2e-4
- Batch Size: 8
- Gradient Accumulation Steps: 4
- Dataset: teknium/OpenHermes-2.5 (200k-sample split; slightly biased toward roleplay and theory-of-life prompts)
- LoRA r: 16
- LoRA Alpha: 16
Training Time: 13 hours on an A100
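For readers who want to reproduce a similar run, here is a minimal sketch of how the hyperparameters above might map onto an Unsloth + TRL SFT setup. The `max_seq_length`, 4-bit loading, `target_modules`, epoch count, dataset slicing, and conversation formatting are assumptions not stated in this card.

```python
# Hypothetical reconstruction of the SFT run; values not listed in the
# card above are marked as assumptions in comments.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="alpindale/Mistral-7B-v0.2",
    max_seq_length=4096,   # assumption: sequence length not stated
    load_in_4bit=True,     # assumption: common Unsloth setting
)

# LoRA adapter with the r / alpha values from the list above.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=[       # assumption: typical Mistral projection layers
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)

# 200k-sample subset of OpenHermes-2.5; the exact selection used for
# this model is not documented, so a simple prefix slice stands in.
dataset = load_dataset("teknium/OpenHermes-2.5", split="train").select(range(200_000))

def to_text(example):
    # assumption: flatten the "conversations" turns into plain text
    return {"text": "\n".join(f"{t['from']}: {t['value']}"
                              for t in example["conversations"])}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        learning_rate=2e-4,
        per_device_train_batch_size=8,
        gradient_accumulation_steps=4,
        num_train_epochs=1,  # assumption: epoch count not stated
        output_dir="outputs",
    ),
)
trainer.train()
```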
- Developed by: macadeliccc
- License: apache-2.0
- Finetuned from model: alpindale/Mistral-7B-v0.2
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
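A minimal loading sketch with plain `transformers` follows. The repo id `macadeliccc/Mistral-7B-v0.2-OpenHermes` is inferred from the title and may differ, and the prompt format is an assumption.

```python
# Hypothetical usage example; the repo id and prompt style are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "macadeliccc/Mistral-7B-v0.2-OpenHermes"  # assumption: inferred from title

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain what supervised fine-tuning (SFT) is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```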