---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_deit_small_sgd_0001_fold5
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: test
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.775
---

smids_3x_deit_small_sgd_0001_fold5

This model is a fine-tuned version of facebook/deit-small-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5879
  • Accuracy: 0.775

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
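
As a rough sketch, these settings map onto the standard transformers Trainer API (consistent with the generated_from_trainer tag). This is not the original training script; output_dir is a placeholder, and the evaluation/save strategies are assumptions based on the per-epoch results table below:

```python
# Sketch of an equivalent TrainingArguments setup; not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_3x_deit_small_sgd_0001_fold5",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: per-epoch evaluation, matching the results table
    save_strategy="epoch",        # assumption
)
```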

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0612        | 1.0   | 225   | 1.0496          | 0.4583   |
| 1.0331        | 2.0   | 450   | 1.0241          | 0.4917   |
| 0.9976        | 3.0   | 675   | 0.9994          | 0.5317   |
| 0.9646        | 4.0   | 900   | 0.9754          | 0.5567   |
| 0.9242        | 5.0   | 1125  | 0.9523          | 0.5833   |
| 0.9274        | 6.0   | 1350  | 0.9296          | 0.6183   |
| 0.9164        | 7.0   | 1575  | 0.9074          | 0.6367   |
| 0.9203        | 8.0   | 1800  | 0.8860          | 0.6517   |
| 0.8456        | 9.0   | 2025  | 0.8654          | 0.6783   |
| 0.8517        | 10.0  | 2250  | 0.8458          | 0.6767   |
| 0.8446        | 11.0  | 2475  | 0.8273          | 0.685    |
| 0.8321        | 12.0  | 2700  | 0.8097          | 0.6933   |
| 0.8204        | 13.0  | 2925  | 0.7928          | 0.695    |
| 0.8011        | 14.0  | 3150  | 0.7770          | 0.7017   |
| 0.737         | 15.0  | 3375  | 0.7621          | 0.7017   |
| 0.7399        | 16.0  | 3600  | 0.7486          | 0.7067   |
| 0.7567        | 17.0  | 3825  | 0.7359          | 0.715    |
| 0.7583        | 18.0  | 4050  | 0.7243          | 0.7167   |
| 0.7119        | 19.0  | 4275  | 0.7132          | 0.7233   |
| 0.6839        | 20.0  | 4500  | 0.7031          | 0.7317   |
| 0.6897        | 21.0  | 4725  | 0.6934          | 0.7317   |
| 0.6996        | 22.0  | 4950  | 0.6842          | 0.7333   |
| 0.6814        | 23.0  | 5175  | 0.6758          | 0.7417   |
| 0.6885        | 24.0  | 5400  | 0.6680          | 0.7433   |
| 0.6315        | 25.0  | 5625  | 0.6607          | 0.7417   |
| 0.6519        | 26.0  | 5850  | 0.6539          | 0.7417   |
| 0.6951        | 27.0  | 6075  | 0.6475          | 0.7467   |
| 0.6243        | 28.0  | 6300  | 0.6416          | 0.75     |
| 0.6218        | 29.0  | 6525  | 0.6361          | 0.7533   |
| 0.5941        | 30.0  | 6750  | 0.6309          | 0.7533   |
| 0.5704        | 31.0  | 6975  | 0.6263          | 0.755    |
| 0.5836        | 32.0  | 7200  | 0.6219          | 0.7583   |
| 0.6485        | 33.0  | 7425  | 0.6178          | 0.76     |
| 0.5854        | 34.0  | 7650  | 0.6142          | 0.76     |
| 0.5905        | 35.0  | 7875  | 0.6108          | 0.7617   |
| 0.5617        | 36.0  | 8100  | 0.6076          | 0.7633   |
| 0.5964        | 37.0  | 8325  | 0.6047          | 0.7683   |
| 0.5721        | 38.0  | 8550  | 0.6021          | 0.7683   |
| 0.5681        | 39.0  | 8775  | 0.5996          | 0.7683   |
| 0.5364        | 40.0  | 9000  | 0.5974          | 0.7683   |
| 0.5643        | 41.0  | 9225  | 0.5955          | 0.7683   |
| 0.6152        | 42.0  | 9450  | 0.5938          | 0.77     |
| 0.5824        | 43.0  | 9675  | 0.5924          | 0.7717   |
| 0.627         | 44.0  | 9900  | 0.5911          | 0.7733   |
| 0.5753        | 45.0  | 10125 | 0.5900          | 0.7733   |
| 0.5992        | 46.0  | 10350 | 0.5892          | 0.7733   |
| 0.6048        | 47.0  | 10575 | 0.5886          | 0.775    |
| 0.5934        | 48.0  | 10800 | 0.5882          | 0.775    |
| 0.5665        | 49.0  | 11025 | 0.5880          | 0.775    |
| 0.5873        | 50.0  | 11250 | 0.5879          | 0.775    |

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.1.0+cu121
  • Datasets 2.12.0
  • Tokenizers 0.13.2
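
To help reproduce the results, the installed packages can be checked against the versions above. A small, hypothetical check script (expected versions copied from the list):

```python
# Hypothetical environment check: compares installed versions to those listed above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.32.1",
    "torch": "2.1.0+cu121",
    "datasets": "2.12.0",
    "tokenizers": "0.13.2",
}

installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}

for name, want in expected.items():
    have = installed[name]
    status = "ok" if have == want else f"mismatch (found {have})"
    print(f"{name}=={want}: {status}")
```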