---
license: apache-2.0
datasets:
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
language:
- en
library_name: transformers
tags:
- llama-factory
- unsloth
---
# h2o-danube2 with ChatML template

This model was first fine-tuned with [BAdam](https://arxiv.org/abs/2404.02827 "BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models") on [m-a-p/CodeFeedback-Filtered-Instruction](https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction) and [m-a-p/Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback), used unfiltered as in the latest [dolphin dataset](https://huggingface.co/datasets/cognitivecomputations/dolphin-2.9.3), with [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).

## Template

```jinja
<|im_start|>system
You are a helpful coding assistant.<|im_end|>
<|im_start|>user
{{instruction}}<|im_end|>
<|im_start|>assistant
{{response}}<|im_end|>
```
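
At inference time the same format can be produced with the tokenizer's chat template instead of building the string by hand. A minimal sketch, assuming the ChatML template is stored in the tokenizer config; the repository id below is a placeholder, not the actual repo name:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/code-feedback-chatml-badam"  # placeholder, replace with the real checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

# Renders the <|im_start|>/<|im_end|> markers shown above and appends the
# assistant prefix so generation starts directly with the reply.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```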

## BAdam config

**System prompt used in training:** You are a helpful coding assistant.

```yaml
### model
model_name_or_path: danube2-base-chatml

### method
stage: sft
do_train: true
finetuning_type: full
use_badam: true
badam_switch_mode: ascending
badam_switch_interval: 50
badam_verbose: 1
badam_start_block: 10
seed: 720

### dataset
dataset: codefeedback_instruct_unfiltered,codefeedback_unfiltered
template: hermes_chatml
cutoff_len: 8192
overwrite_cache: false
preprocessing_num_workers: 12

### output
output_dir: code-feedback-chatml-badam
logging_steps: 5
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false

### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 8
learning_rate: 0.00001
num_train_epochs: 1
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
flash_attn: fa2

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 2000
```
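
The `badam_*` keys configure block coordinate descent: only one transformer block is trainable at a time, and the active block advances every `badam_switch_interval` optimizer steps ("ascending" switch mode), starting from block 10. Below is a rough illustration of that idea in plain PyTorch, not LLaMA-Factory's implementation; the `model.model.layers` layout is an assumption that holds for LLaMA/Mistral-style models.

```python
import torch

def set_trainable_block(model, block_idx):
    """Freeze every transformer block except the one at block_idx
    (assumes a model.model.layers layout, as in LLaMA/Mistral-style models)."""
    for i, layer in enumerate(model.model.layers):
        for p in layer.parameters():
            p.requires_grad_(i == block_idx)

def train_with_block_switching(model, dataloader, switch_interval=50, start_block=10, lr=1e-5):
    """Illustrative BAdam-style loop: optimizer state only ever covers the single
    active block, which is where the memory saving over full-parameter Adam comes from."""
    num_blocks = len(model.model.layers)
    block_idx = start_block
    set_trainable_block(model, block_idx)
    optimizer = torch.optim.AdamW(
        [p for p in model.parameters() if p.requires_grad], lr=lr)

    for step, batch in enumerate(dataloader, start=1):
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        if step % switch_interval == 0:
            # "Ascending" switch mode: move to the next block and rebuild the
            # optimizer so its state tracks only the newly active parameters.
            block_idx = (block_idx + 1) % num_blocks
            set_trainable_block(model, block_idx)
            optimizer = torch.optim.AdamW(
                [p for p in model.parameters() if p.requires_grad], lr=lr)
```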

### BAdam training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.6181        | 0.1789 | 2000  | 0.6044          |
| 0.6835        | 0.3578 | 4000  | 0.5949          |
| 0.5649        | 0.5367 | 6000  | 0.5893          |
| 0.6559        | 0.7155 | 8000  | 0.5850          |
| 0.6591        | 0.8944 | 10000 | 0.5839          |