---
license: apache-2.0
tags:
- sft
dataset:
- teknium/openhermes
base_model:
- unsloth/mistral-7b-bnb-4bit
---

# mistral-7b-openhermes-sft

mistral-7b-openhermes-sft is an SFT fine-tuned version of [unsloth/mistral-7b-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-bnb-4bit) using the [teknium/openhermes](https://huggingface.co/datasets/teknium/openhermes) dataset.

## Fine-tuning configuration
### LoRA
- r: 256
- LoRA alpha: 128
- LoRA dropout: 0.0

### Training arguments
- Epochs: 1
- Batch size: 4
- Gradient accumulation steps: 6
- Optimizer: adamw_torch_fused
- Max steps: 100
- Learning rate: 0.0002
- Weight decay: 0.1
- Learning rate scheduler type: linear
- Max seq length: 2048
- 4-bit bnb: True

Trained with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's [TRL](https://github.com/huggingface/trl) library.
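The configuration above can be sketched as a training script with Unsloth and TRL's `SFTTrainer`. This is an illustrative reconstruction, not the exact script used for this model: argument names such as `max_seq_length` on `SFTTrainer` and the fields of `get_peft_model` may differ between library versions, and the dataset formatting step (mapping OpenHermes conversations to a text field) is omitted.

```python
# Hypothetical sketch of the fine-tuning setup described above.
# All hyperparameters mirror the model card; the script itself is an assumption.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 2048

# Load the 4-bit base model (4-bit bnb: True)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters (r=256, alpha=128, dropout=0.0)
model = FastLanguageModel.get_peft_model(
    model,
    r=256,
    lora_alpha=128,
    lora_dropout=0.0,
)

dataset = load_dataset("teknium/openhermes", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        num_train_epochs=1,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=6,
        optim="adamw_torch_fused",
        max_steps=100,
        learning_rate=2e-4,
        weight_decay=0.1,
        lr_scheduler_type="linear",
        output_dir="outputs",
    ),
)
trainer.train()
```

Note that with a per-device batch size of 4 and 6 gradient accumulation steps, the effective batch size is 24 sequences per optimizer step.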

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)