---
library_name: peft
license: llama3
base_model: scb10x/llama-3-typhoon-v1.5-8b-instruct
tags:
  - axolotl
  - generated_from_trainer
model-index:
  - name: ba8111ef-38b6-4394-80e6-fea84ce06ab4
    results: []
---

Built with Axolotl

# ba8111ef-38b6-4394-80e6-fea84ce06ab4

This model is a fine-tuned version of [scb10x/llama-3-typhoon-v1.5-8b-instruct](https://huggingface.co/scb10x/llama-3-typhoon-v1.5-8b-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:

- Loss: 0.0022
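
Since this card describes a PEFT adapter rather than full model weights, it can be loaded with the `peft` library on top of the base model. A minimal inference sketch follows; the adapter repository id is an assumption inferred from this card, not a confirmed path:

```python
# Minimal sketch for loading the adapter (assumptions: the repo id below,
# and that this is a causal-LM adapter; adjust to the actual repository).
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "lesso18/ba8111ef-38b6-4394-80e6-fea84ce06ab4"  # assumed repo id

# AutoPeftModelForCausalLM reads the base model recorded in the adapter
# config (scb10x/llama-3-typhoon-v1.5-8b-instruct) and applies the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(
    "scb10x/llama-3-typhoon-v1.5-8b-instruct"
)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```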

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):

- learning_rate: 0.000218
- train_batch_size: 4
- eval_batch_size: 4
- seed: 180
- gradient_accumulation_steps: 2
- total_train_batch_size: 8 (train_batch_size × gradient_accumulation_steps)
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
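
These values map directly onto standard `transformers.TrainingArguments` fields. The sketch below is a hedged reconstruction for reference; the run was actually driven by an Axolotl config that is not included in this card, and the `output_dir` is an assumption:

```python
# Standard TrainingArguments fields corresponding to the hyperparameters
# listed above. This is not a reproduction of the original Axolotl setup.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ba8111ef-38b6-4394-80e6-fea84ce06ab4",  # assumed output path
    learning_rate=0.000218,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=180,
    gradient_accumulation_steps=2,   # effective train batch size: 4 * 2 = 8
    optim="adamw_bnb_8bit",          # bitsandbytes 8-bit AdamW
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    max_steps=500,
)
```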

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0000 | 1    | 0.4080          |
| 0.012         | 0.0020 | 50   | 0.0177          |
| 0.0225        | 0.0040 | 100  | 0.0250          |
| 0.0085        | 0.0060 | 150  | 0.0175          |
| 0.0166        | 0.0081 | 200  | 0.0136          |
| 0.012         | 0.0101 | 250  | 0.0087          |
| 0.0003        | 0.0121 | 300  | 0.0050          |
| 0.0007        | 0.0141 | 350  | 0.0043          |
| 0.0008        | 0.0161 | 400  | 0.0028          |
| 0.0006        | 0.0181 | 450  | 0.0023          |
| 0.0026        | 0.0201 | 500  | 0.0022          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1