---
base_model: HuggingFaceTB/SmolLM2-360M
datasets:
  - arrow
library_name: transformers
license: apache-2.0
tags:
  - generated_from_trainer
model-index:
  - name: image-description_to_emotion
    results: []
---

# image-description_to_emotion

This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) on the arrow dataset. It achieves the following results on the evaluation set:

- Loss: 0.1650
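
A minimal inference sketch follows, using the standard `transformers` causal-LM API; the repo id and the prompt format are assumptions, not specified by this card:

```python
# Minimal inference sketch. The repo id below is hypothetical; substitute the
# actual Hub id or a local checkpoint path. The prompt format is also an
# assumption, since the card does not document it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepakshirkem/image-description_to_emotion"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Feed an image description and generate the model's emotion output.
prompt = "A child laughing while chasing a dog through the park."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```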

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
- mixed_precision_training: Native AMP
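
A minimal sketch of how these values map onto `transformers.TrainingArguments`; the output directory is a placeholder, and dataset loading and collation are omitted:

```python
# Sketch of a TrainingArguments setup matching the hyperparameters above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="image-description_to_emotion",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 4 * 4 = 16
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=4,
    fp16=True,                       # Native AMP mixed precision
)
```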

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5979        | 0.3361 | 50   | 0.5039          |
| 0.3262        | 0.6723 | 100  | 0.2783          |
| 0.2599        | 1.0084 | 150  | 0.2305          |
| 0.2211        | 1.3445 | 200  | 0.2071          |
| 0.2004        | 1.6807 | 250  | 0.1969          |
| 0.2094        | 2.0168 | 300  | 0.1840          |
| 0.1788        | 2.3529 | 350  | 0.1797          |
| 0.1709        | 2.6891 | 400  | 0.1739          |
| 0.1604        | 3.0252 | 450  | 0.1693          |
| 0.1410        | 3.3613 | 500  | 0.1671          |
| 0.1479        | 3.6975 | 550  | 0.1650          |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
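
A quick sketch for checking that an environment matches the pinned versions above (expected values are taken from this list):

```python
# Print installed library versions to compare against the pinned ones above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # expected 4.44.2
print("PyTorch:", torch.__version__)              # expected 2.4.1+cu121
print("Datasets:", datasets.__version__)          # expected 3.2.0
print("Tokenizers:", tokenizers.__version__)      # expected 0.19.1
```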