---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
  - trl
  - sft
  - generated_from_trainer
model-index:
  - name: rationale_model_e10_save5000
    results: []
---

# rationale_model_e10_save5000

This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 2.6975

## Model description

More information needed

## Intended uses & limitations

More information needed
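
No usage details were provided with this card. As a minimal sketch, the model can be loaded with the standard `transformers` causal-LM API; the repository id below is an assumption based on this card's name and may need to be adjusted.

```python
# Minimal usage sketch. The repo id "Heejindo/rationale_model_e10_save5000"
# is an assumption based on this card's name; replace it with the actual path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Heejindo/rationale_model_e10_save5000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a completion from a short prompt.
inputs = tokenizer("Explain your reasoning step by step:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```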

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
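
The `trl` and `sft` tags indicate the model was trained with TRL's SFT trainer. Below is a minimal sketch of a configuration matching the hyperparameters above, not the original training script; the dataset is unknown, so the `load_dataset` call is a hypothetical placeholder.

```python
# Sketch of an SFT setup matching the hyperparameters listed above.
# The training data is unknown; load_dataset(...) here is a hypothetical placeholder.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("json", data_files="train.json", split="train")  # hypothetical

config = SFTConfig(
    output_dir="training_output/rationale_model_e10_save5000",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",         # betas=(0.9, 0.999) and epsilon=1e-08 are adamw_torch defaults
    lr_scheduler_type="linear",
    num_train_epochs=10,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B",
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```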

### Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.3986        | 0.9538 | 5000  | 2.6975          |
| 0.1468        | 1.9077 | 10000 | 3.2221          |
| 0.1156        | 2.8615 | 15000 | 3.4922          |
| 0.0981        | 3.8153 | 20000 | 3.6490          |
| 0.0847        | 4.7692 | 25000 | 3.8345          |
| 0.0704        | 5.7230 | 30000 | 3.9968          |
| 0.0551        | 6.6768 | 35000 | 4.2504          |
| 0.0433        | 7.6307 | 40000 | 4.5271          |
| 0.0354        | 8.5845 | 45000 | 4.7534          |
| 0.0317        | 9.5383 | 50000 | 4.9696          |

Validation loss rises steadily after the first epoch while training loss keeps falling, which suggests overfitting; the evaluation loss of 2.6975 reported above corresponds to the step-5000 checkpoint referenced in the model name.

### Framework versions

- Transformers 4.46.3
- Pytorch 2.3.0
- Datasets 2.14.4
- Tokenizers 0.20.3