---
license: apache-2.0
library_name: peft
tags:
  - generated_from_trainer
base_model: togethercomputer/evo-1-8k-base
model-index:
  - name: lora_evo_ta_all_layers_17
    results: []
---

# lora_evo_ta_all_layers_17

This model is a fine-tuned version of [togethercomputer/evo-1-8k-base](https://huggingface.co/togethercomputer/evo-1-8k-base) on the dataset described below. It achieves the following results on the evaluation set:

- Loss: 2.5577

## Model description

Trained on the single-ID-token 5K dataset, filtered to 10k sequences, with 30% (3,000 sequences) held out as test data.

LoRA and training settings for this run (a hedged PEFT sketch follows the list):

- lora_r = 128
- lora_alpha = 64 <--------------
- lora_dropout = 0.1
- epochs = 3
- learning_rate = 3e-4
- warmup_steps = 500
- gradient_accumulation_steps = 1
- train_batch = 2
- eval_batch = 2
- target modules: ALL linear layers
- changed the `'` token to `>` <--------------
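As a minimal sketch (not the author's exact script), the settings above map onto the `peft` API roughly as follows; `target_modules="all-linear"` is an assumption for how "ALL linear layers" was targeted:

```python
# Hedged reconstruction of the LoRA setup listed above; not the original script.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Evo ships custom modeling code, hence trust_remote_code=True.
base = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/evo-1-8k-base",
    trust_remote_code=True,
)

lora_config = LoraConfig(
    r=128,
    lora_alpha=64,
    lora_dropout=0.1,
    target_modules="all-linear",  # assumption: PEFT shortcut for all linear layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```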

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `Trainer` sketch follows the list):

- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
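These map onto `transformers.TrainingArguments`/`Trainer` roughly as sketched below; the dataset objects and `output_dir` are placeholders, and the eval cadence is inferred from the step-wise results table:

```python
# Hedged mapping of the hyperparameters above onto the HF Trainer API.
# `model` is the PEFT-wrapped model from the earlier sketch;
# `train_ds` / `eval_ds` are hypothetical tokenized datasets.
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="lora_evo_ta_all_layers_17",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=1,
    num_train_epochs=3,
    lr_scheduler_type="constant",
    warmup_steps=500,
    seed=42,
    evaluation_strategy="steps",  # assumption, based on the results table
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
)
trainer.train()
```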

### Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 2.7754        | 0.3749 | 1312  | 2.6704          |
| 2.6257        | 0.7497 | 2624  | 2.6140          |
| 2.576         | 1.1246 | 3936  | 2.5976          |
| 2.5475        | 1.4994 | 5248  | 2.5839          |
| 2.5424        | 1.8743 | 6560  | 2.5722          |
| 2.498         | 2.2491 | 7872  | 2.5708          |
| 2.4993        | 2.624  | 9184  | 2.5647          |
| 2.4939        | 2.9989 | 10496 | 2.5577          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
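To use the adapter, something like the following should work (the adapter repo id is an assumption; substitute the actual Hub path):

```python
# Hedged loading sketch; the adapter id below is an assumption.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/evo-1-8k-base",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "lsmille/lora_evo_ta_all_layers_17")
```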