---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_3x_deit_small_rms_001_fold3
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.715
---

# smids_3x_deit_small_rms_001_fold3

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:

- Loss: 0.6470
- Accuracy: 0.715
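For reference, below is a minimal inference sketch using the `transformers` image-classification pipeline. The repo id `hkivancoral/smids_3x_deit_small_rms_001_fold3` and the example image path are assumptions; point `model=` at wherever this checkpoint is actually stored (a Hub repo id or a local directory).

```python
# Minimal inference sketch (assumptions: repo id and image path are placeholders).
from transformers import pipeline
from PIL import Image

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_3x_deit_small_rms_001_fold3",  # assumption: adjust to your checkpoint location
)

image = Image.open("example.jpg")  # replace with your own image path
predictions = classifier(image)
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.4f}")
```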

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
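The settings above map roughly onto the `transformers.TrainingArguments` sketch below. This is a reconstruction, not the original training script: the output directory and the per-epoch evaluation/save strategies are assumptions, and `Trainer`'s default AdamW optimizer stands in for the plain Adam listed above (same betas and epsilon).

```python
# Hedged reconstruction of the hyperparameters listed above; the original
# training script is not included in this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_3x_deit_small_rms_001_fold3",  # assumption: output path
    learning_rate=0.001,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption, consistent with the per-epoch results table below
    save_strategy="epoch",        # assumption
    # Optimizer: Trainer defaults to AdamW with betas=(0.9, 0.999) and eps=1e-8,
    # matching the values listed in the hyperparameters above.
)
```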

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1298        | 1.0   | 225   | 1.0984          | 0.34     |
| 1.1021        | 2.0   | 450   | 1.0452          | 0.4267   |
| 0.8891        | 3.0   | 675   | 0.8675          | 0.52     |
| 0.8324        | 4.0   | 900   | 0.9012          | 0.57     |
| 0.8729        | 5.0   | 1125  | 0.9895          | 0.4433   |
| 0.8594        | 6.0   | 1350  | 0.8010          | 0.605    |
| 1.2058        | 7.0   | 1575  | 0.8492          | 0.5583   |
| 0.8886        | 8.0   | 1800  | 0.8315          | 0.6067   |
| 0.825         | 9.0   | 2025  | 0.7910          | 0.6167   |
| 0.82          | 10.0  | 2250  | 0.8740          | 0.545    |
| 0.8151        | 11.0  | 2475  | 0.8350          | 0.54     |
| 0.8595        | 12.0  | 2700  | 0.8022          | 0.5917   |
| 0.7431        | 13.0  | 2925  | 0.7832          | 0.6233   |
| 0.7711        | 14.0  | 3150  | 0.8235          | 0.6017   |
| 0.743         | 15.0  | 3375  | 0.7910          | 0.6083   |
| 0.7919        | 16.0  | 3600  | 0.7423          | 0.645    |
| 0.7646        | 17.0  | 3825  | 0.7716          | 0.645    |
| 0.7563        | 18.0  | 4050  | 0.7602          | 0.6017   |
| 0.7776        | 19.0  | 4275  | 0.7391          | 0.6517   |
| 0.679         | 20.0  | 4500  | 0.9075          | 0.585    |
| 0.7215        | 21.0  | 4725  | 0.8407          | 0.5817   |
| 0.697         | 22.0  | 4950  | 0.7647          | 0.6367   |
| 0.6799        | 23.0  | 5175  | 0.7300          | 0.65     |
| 0.6618        | 24.0  | 5400  | 0.7249          | 0.6533   |
| 0.7275        | 25.0  | 5625  | 0.6970          | 0.6783   |
| 0.6922        | 26.0  | 5850  | 0.7048          | 0.66     |
| 0.6032        | 27.0  | 6075  | 0.7956          | 0.6433   |
| 0.6867        | 28.0  | 6300  | 0.7208          | 0.6633   |
| 0.7286        | 29.0  | 6525  | 0.7360          | 0.6533   |
| 0.5865        | 30.0  | 6750  | 0.7249          | 0.6833   |
| 0.6196        | 31.0  | 6975  | 0.7133          | 0.6933   |
| 0.6323        | 32.0  | 7200  | 0.7099          | 0.6617   |
| 0.6683        | 33.0  | 7425  | 0.6777          | 0.6967   |
| 0.6008        | 34.0  | 7650  | 0.7425          | 0.6517   |
| 0.6135        | 35.0  | 7875  | 0.6674          | 0.6967   |
| 0.6008        | 36.0  | 8100  | 0.6639          | 0.7033   |
| 0.6752        | 37.0  | 8325  | 0.6658          | 0.6867   |
| 0.5964        | 38.0  | 8550  | 0.6380          | 0.7067   |
| 0.57          | 39.0  | 8775  | 0.6573          | 0.7033   |
| 0.5546        | 40.0  | 9000  | 0.6537          | 0.71     |
| 0.556         | 41.0  | 9225  | 0.6444          | 0.72     |
| 0.5972        | 42.0  | 9450  | 0.6277          | 0.7217   |
| 0.4929        | 43.0  | 9675  | 0.6416          | 0.7217   |
| 0.5311        | 44.0  | 9900  | 0.6558          | 0.72     |
| 0.5177        | 45.0  | 10125 | 0.6499          | 0.7183   |
| 0.5402        | 46.0  | 10350 | 0.6436          | 0.7283   |
| 0.5836        | 47.0  | 10575 | 0.6389          | 0.7133   |
| 0.531         | 48.0  | 10800 | 0.6442          | 0.7133   |
| 0.5194        | 49.0  | 11025 | 0.6460          | 0.7117   |
| 0.5631        | 50.0  | 11250 | 0.6470          | 0.715    |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2