---
license: apache-2.0
library_name: peft
tags:
  - generated_from_trainer
base_model: togethercomputer/evo-1-8k-base
model-index:
  - name: lora_evo_ta_all_layers_2
    results: []
---

# lora_evo_ta_all_layers_2

This model is a LoRA fine-tuned version of [togethercomputer/evo-1-8k-base](https://huggingface.co/togethercomputer/evo-1-8k-base) on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 3.1660
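
As a quick usage sketch (not part of the auto-generated card), the adapter can be loaded on top of the base model with PEFT. The adapter repo id below is an assumption based on this card's location, and Evo's custom StripedHyena code requires `trust_remote_code=True`:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the Evo base model; its custom architecture needs trust_remote_code.
base = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/evo-1-8k-base", trust_remote_code=True
)

# Attach this LoRA adapter; the repo id is assumed, adjust to the actual location.
model = PeftModel.from_pretrained(base, "lsmille/lora_evo_ta_all_layers_2")
```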

## Model description

LoRA adapter trained with the following settings:

- lora_alpha: 32
- lora_dropout: 0.05
- lora_r: 16
- epochs: 9 (note: validation loss bottoms out near epoch 4 and rises afterwards; see the training results below)
- learning_rate: 3e-4
- warmup_steps: 0.5
- gradient_accumulation_steps: 8
- train_batch_size: 1
- eval_batch_size: 1
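
For reference, a minimal sketch of this adapter configuration with PEFT (hedged: the card does not name the wrapped modules, only "all layers" in the adapter name, so `target_modules` is left to PEFT's defaults here):

```python
from peft import LoraConfig

# Sketch of the LoRA settings listed above; target_modules is omitted
# because the card does not state which modules the adapter wraps.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```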

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.5
- num_epochs: 9
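
As a hedged sketch (not taken from the original training script), these logged settings roughly map onto Hugging Face `TrainingArguments` as follows; `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lora_evo_ta_all_layers_2",  # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,  # effective train batch size of 8
    num_train_epochs=9,
    lr_scheduler_type="constant",
    warmup_steps=0,  # the card logs 0.5; a constant scheduler ignores warmup anyway
    seed=42,
)
```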

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.0681        | 0.9925 | 33   | 2.9815          |
| 2.9165        | 1.9850 | 66   | 2.9530          |
| 2.8091        | 2.9774 | 99   | 2.9446          |
| 2.6361        | 4.0    | 133  | 2.9406          |
| 2.6312        | 4.9925 | 166  | 2.9409          |
| 2.57          | 5.9850 | 199  | 2.9978          |
| 2.5215        | 6.9774 | 232  | 3.0450          |
| 2.4107        | 8.0    | 266  | 3.0763          |
| 2.4272        | 8.9323 | 297  | 3.1660          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1