---
license: apache-2.0
datasets:
  - cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
language:
  - en
library_name: transformers
base_model: h2oai/h2o-danube2-1.8b-base
tags:
  - llama-factory
  - unsloth
---

# h2o-danube2 with ChatML template

This model was first fine-tuned with BAdam on cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split using LLaMA-Factory.
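BAdam is a block-coordinate optimization scheme: rather than updating all parameters at once, it keeps a single transformer block trainable at a time and switches blocks on a fixed schedule (here every 50 steps, ascending from block 6, per the config below). The following is a minimal conceptual sketch of that switching idea in plain PyTorch, not LLaMA-Factory's actual implementation; the assumption that the blocks live under `model.model.layers` holds for LLaMA/Mistral-style models, and the step count is illustrative.

```python
from transformers import AutoModelForCausalLM

# Conceptual sketch of BAdam's block switching (NOT LLaMA-Factory's
# implementation). Assumes a decoder-only model whose transformer
# blocks live under model.model.layers.
model = AutoModelForCausalLM.from_pretrained("h2oai/h2o-danube2-1.8b-base")
blocks = list(model.model.layers)

def activate_block(idx: int) -> None:
    """Freeze all parameters, then unfreeze only block `idx`."""
    for p in model.parameters():
        p.requires_grad = False
    for p in blocks[idx].parameters():
        p.requires_grad = True

switch_interval = 50  # badam_switch_interval
block_idx = 6         # badam_start_block, with ascending switch mode

for step in range(1000):  # step count is illustrative
    if step % switch_interval == 0:
        activate_block(block_idx % len(blocks))
        block_idx += 1  # ascending: advance to the next block
    # forward pass, loss.backward(), and an Adam step on the
    # currently active block would go here
```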

## Template

```
<|im_start|>system
You are a helpful assistant that gives long and detailed answers.<|im_end|>
<|im_start|>user
{{instruction}}<|im_end|>
<|im_start|>assistant
{{response}}<|im_end|>
```
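A minimal generation sketch with transformers, assuming the tokenizer ships the ChatML chat template above; the repo id is a placeholder for this model's repository, and the prompt and sampling settings are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with this model's repository id.
repo_id = "<this-model-repo>"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant that gives long and detailed answers."},
    {"role": "user", "content": "Explain block-coordinate optimization in simple terms."},
]

# apply_chat_template renders the ChatML format shown above,
# assuming the tokenizer's chat_template is set accordingly.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```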

## BAdam config

```yaml
### model
model_name_or_path: danube2-base-chatml

### method
stage: sft
do_train: true
finetuning_type: full
use_badam: true
badam_switch_mode: ascending
badam_switch_interval: 50
badam_verbose: 1
badam_start_block: 6
seed: 720

### dataset
dataset: wizardlm_evol_v2_196k_unfiltered
template: ninja_chatml
cutoff_len: 8192
overwrite_cache: false
preprocessing_num_workers: 12

### output
output_dir: wizardlm-evol-v2-chatml-badam
logging_steps: 5
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false

### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 8
learning_rate: 0.00001
num_train_epochs: 1
lr_scheduler_type: constant_with_warmup
warmup_ratio: 0.01
pure_bf16: true
flash_attn: fa2

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 1000
```
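Note the effective batch size implied by this config: 2 sequences per device × 8 gradient accumulation steps = 16 sequences per optimizer step. A config like this is typically launched with LLaMA-Factory's CLI; the sketch below invokes it from Python, and the YAML filename is a placeholder for the config shown above.

```python
import subprocess

# Launch the SFT run via LLaMA-Factory's CLI; "badam_config.yaml"
# is a placeholder filename for the config above.
subprocess.run(["llamafactory-cli", "train", "badam_config.yaml"], check=True)
```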

## BAdam training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6195        | 0.1050 | 1000 | 0.7363          |
| 0.6788        | 0.2100 | 2000 | 0.7252          |
| 0.689         | 0.3150 | 3000 | 0.7172          |
| 0.6707        | 0.4200 | 4000 | 0.7133          |
| 0.6674        | 0.5250 | 5000 | 0.7091          |
| 0.7365        | 0.6301 | 6000 | 0.7085          |
| 0.7037        | 0.7351 | 7000 | 0.7066          |
| 0.709         | 0.8401 | 8000 | 0.7041          |
| 0.6652        | 0.9451 | 9000 | 0.7042          |