# mistral-7b-openhermes-sft
mistral-7b-openhermes-sft is a supervised fine-tuned (SFT) version of unsloth/mistral-7b-bnb-4bit, trained on the teknium/openhermes dataset.
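A minimal inference sketch, assuming the repository hosts a merged, standard transformers checkpoint (if only LoRA adapters were published, load them with peft instead); the repo id below is a placeholder for this model's actual namespace:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistral-7b-openhermes-sft"  # placeholder; replace with the full repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Explain LoRA fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```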
## Fine-tuning configuration

### LoRA
- r: 256
- LoRA alpha: 128
- LoRA dropout: 0.0
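As a sketch, this LoRA setup maps onto Unsloth's FastLanguageModel roughly as follows; the target_modules list is an assumption, since the card does not state which layers were adapted:

```python
from unsloth import FastLanguageModel

# Load the 4-bit bitsandbytes base model (matches the 4-bit and max seq
# length settings listed under Training arguments below).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters with the hyperparameters listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=256,
    lora_alpha=128,
    lora_dropout=0.0,
    # Assumption: the adapted layers are not listed in the card; the
    # attention and MLP projections below are a common choice.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
)
```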
### Training arguments
- Epochs: 1
- Batch size: 4
- Gradient accumulation steps: 6
- Optimizer: adamw_torch_fused
- Max steps: 100
- Learning rate: 0.0002
- Weight decay: 0.1
- Learning rate scheduler type: linear
- Max seq length: 2048
- 4-bit bnb: True
Trained with Unsloth and Hugging Face's TRL library.
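A minimal sketch of how the arguments above fit together with TRL's SFTTrainer, continuing from the LoRA sketch earlier. The dataset text field and prompt template are assumptions (the card does not say how teknium/openhermes was formatted), and this uses the older SFTTrainer signature; newer TRL versions move these options into SFTConfig:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("teknium/openhermes", split="train")

trainer = SFTTrainer(
    model=model,                 # LoRA-wrapped model from the sketch above
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumption: the actual field/template is not stated
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="mistral-7b-openhermes-sft",
        num_train_epochs=1,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=6,
        optim="adamw_torch_fused",
        max_steps=100,
        learning_rate=2e-4,
        weight_decay=0.1,
        lr_scheduler_type="linear",
    ),
)
trainer.train()
```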