---
license: apache-2.0
library_name: peft
tags:
  - trl
  - sft
  - generated_from_trainer
base_model: HuggingFaceTB/SmolLM-1.7B-Instruct
datasets:
  - generator
model-index:
  - name: SmolLM_1_7B_Instruct_qlora_nf4
    results: []
---

SmolLM_1_7B_Instruct_qlora_nf4

This model is a fine-tuned version of HuggingFaceTB/SmolLM-1.7B-Instruct on the generator dataset. It achieves the following results on the evaluation set:

  • Loss: 1.6524
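
Because this repository holds a PEFT (LoRA) adapter rather than full model weights, it is loaded on top of the base model. Below is a minimal loading sketch; it assumes the adapter is hosted at dmariko/SmolLM-1.7B-Instruct_fsdp_qlora_nf4_adapter (inferred from the repository name) and that inference uses the 4-bit NF4 quantization implied by the adapter's name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "HuggingFaceTB/SmolLM-1.7B-Instruct"
adapter_id = "dmariko/SmolLM-1.7B-Instruct_fsdp_qlora_nf4_adapter"  # assumed repo id

# 4-bit NF4 quantization, matching the "qlora_nf4" naming of this adapter.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)

# Attach the LoRA adapter on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "What is QLoRA?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```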

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto a Trainer configuration follows the list):

  • learning_rate: 0.001
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 16
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 256
  • total_eval_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.03
  • num_epochs: 10
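
For orientation, here is a hedged sketch of how the values above might map onto transformers.TrainingArguments for a TRL SFT run. The dataset pipeline, FSDP launch configuration, and LoRA setup are omitted, and the output path is illustrative.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="SmolLM_1_7B_Instruct_qlora_nf4",  # illustrative path
    learning_rate=1e-3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # 4 batch x 4 accum x 16 GPUs = 256 effective
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=42,
    optim="adamw_torch",  # betas=(0.9, 0.999) and eps=1e-8 are the defaults
)
```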

Training results

Training Loss   Epoch    Step   Validation Loss
2.0516          0.9756   10     1.8985
1.8456          1.9512   20     1.7876
1.7672          2.9268   30     1.7327
1.7120          4.0      41     1.6967
1.6874          4.9756   51     1.6761
1.6643          5.9512   61     1.6636
1.6426          6.9268   71     1.6565
1.6388          8.0      82     1.6530
1.6387          8.9756   92     1.6524
1.6343          9.7561   100    1.6524

Framework versions

  • PEFT 0.10.0
  • Transformers 4.40.0
  • PyTorch 2.1.0
  • Datasets 2.18.0
  • Tokenizers 0.19.1
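
To check that a local environment matches the pinned versions above, a small sketch (assuming only that these packages are installed; nearby compatible versions may also work):

```python
import peft, transformers, torch, datasets, tokenizers

# Print installed versions to compare against the list above.
for name, mod in [("PEFT", peft), ("Transformers", transformers),
                  ("PyTorch", torch), ("Datasets", datasets),
                  ("Tokenizers", tokenizers)]:
    print(f"{name}: {mod.__version__}")
```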