---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_1x_deit_small_sgd_0001_fold1
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.6193656093489148
---

smids_1x_deit_small_sgd_0001_fold1

This model is a fine-tuned version of facebook/deit-small-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9238
  • Accuracy: 0.6194
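
For quick experimentation, the snippet below is a minimal inference sketch and is not part of the original card. It assumes the checkpoint is published under the repository id hkivancoral/smids_1x_deit_small_sgd_0001_fold1 and uses a placeholder image path.

```python
# Minimal inference sketch; the repository id and image path are assumptions.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "hkivancoral/smids_1x_deit_small_sgd_0001_fold1"  # assumed repository id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])  # label names come from the fine-tuned config
```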

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
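
The card does not describe the data beyond the metadata, which points to the imagefolder dataset builder. The sketch below shows how such a dataset is typically loaded with the datasets library; the directory path and class-per-subfolder layout are assumptions, not details from the card.

```python
# Hypothetical loading of an image-classification dataset via the "imagefolder"
# builder; the data_dir path and folder layout are assumptions.
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="path/to/images")
print(dataset)                                    # available splits
print(dataset["train"].features["label"].names)   # class names inferred from folder names
```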

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
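
As a reference point only, the sketch below maps these settings onto Hugging Face TrainingArguments; the output_dir and evaluation strategy are assumptions, and the Adam betas/epsilon listed above match the Trainer defaults.

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments;
# output_dir and evaluation_strategy are assumptions, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_1x_deit_small_sgd_0001_fold1",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumed: the results table reports metrics once per epoch
)
```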

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1708 | 1.0 | 76 | 1.1671 | 0.2154 |
| 1.1329 | 2.0 | 152 | 1.1541 | 0.2421 |
| 1.1491 | 3.0 | 228 | 1.1427 | 0.2538 |
| 1.1383 | 4.0 | 304 | 1.1321 | 0.2654 |
| 1.095 | 5.0 | 380 | 1.1227 | 0.2838 |
| 1.1047 | 6.0 | 456 | 1.1138 | 0.3055 |
| 1.1073 | 7.0 | 532 | 1.1055 | 0.3322 |
| 1.1015 | 8.0 | 608 | 1.0974 | 0.3456 |
| 1.0955 | 9.0 | 684 | 1.0899 | 0.3639 |
| 1.0349 | 10.0 | 760 | 1.0825 | 0.3706 |
| 1.0617 | 11.0 | 836 | 1.0752 | 0.3957 |
| 1.0611 | 12.0 | 912 | 1.0681 | 0.4140 |
| 1.0517 | 13.0 | 988 | 1.0610 | 0.4240 |
| 1.0458 | 14.0 | 1064 | 1.0541 | 0.4357 |
| 1.0495 | 15.0 | 1140 | 1.0471 | 0.4391 |
| 1.032 | 16.0 | 1216 | 1.0402 | 0.4457 |
| 1.0199 | 17.0 | 1292 | 1.0334 | 0.4608 |
| 1.0216 | 18.0 | 1368 | 1.0267 | 0.4691 |
| 1.0137 | 19.0 | 1444 | 1.0202 | 0.4825 |
| 1.0117 | 20.0 | 1520 | 1.0136 | 0.5058 |
| 0.9951 | 21.0 | 1596 | 1.0073 | 0.5142 |
| 0.9978 | 22.0 | 1672 | 1.0011 | 0.5175 |
| 0.9715 | 23.0 | 1748 | 0.9952 | 0.5242 |
| 0.9775 | 24.0 | 1824 | 0.9895 | 0.5392 |
| 0.9696 | 25.0 | 1900 | 0.9841 | 0.5409 |
| 0.9601 | 26.0 | 1976 | 0.9789 | 0.5492 |
| 0.9807 | 27.0 | 2052 | 0.9740 | 0.5593 |
| 0.9357 | 28.0 | 2128 | 0.9694 | 0.5626 |
| 0.9396 | 29.0 | 2204 | 0.9650 | 0.5710 |
| 0.9629 | 30.0 | 2280 | 0.9608 | 0.5743 |
| 0.9473 | 31.0 | 2356 | 0.9570 | 0.5826 |
| 0.9153 | 32.0 | 2432 | 0.9532 | 0.5860 |
| 0.9343 | 33.0 | 2508 | 0.9497 | 0.5927 |
| 0.953 | 34.0 | 2584 | 0.9465 | 0.5993 |
| 0.949 | 35.0 | 2660 | 0.9435 | 0.6027 |
| 0.9108 | 36.0 | 2736 | 0.9407 | 0.6043 |
| 0.9432 | 37.0 | 2812 | 0.9382 | 0.6060 |
| 0.9019 | 38.0 | 2888 | 0.9358 | 0.6093 |
| 0.9269 | 39.0 | 2964 | 0.9337 | 0.6093 |
| 0.9369 | 40.0 | 3040 | 0.9318 | 0.6110 |
| 0.8967 | 41.0 | 3116 | 0.9301 | 0.6110 |
| 0.9379 | 42.0 | 3192 | 0.9286 | 0.6144 |
| 0.8809 | 43.0 | 3268 | 0.9273 | 0.6160 |
| 0.9188 | 44.0 | 3344 | 0.9263 | 0.6177 |
| 0.8925 | 45.0 | 3420 | 0.9254 | 0.6177 |
| 0.8985 | 46.0 | 3496 | 0.9247 | 0.6177 |
| 0.9127 | 47.0 | 3572 | 0.9242 | 0.6194 |
| 0.9099 | 48.0 | 3648 | 0.9239 | 0.6194 |
| 0.9072 | 49.0 | 3724 | 0.9238 | 0.6194 |
| 0.9001 | 50.0 | 3800 | 0.9238 | 0.6194 |
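
The accuracy values above are plain classification accuracy. A common way to compute this metric with Trainer is sketched below; the card does not show its metric code, so this is an assumption about the tooling used.

```python
# Hedged sketch of an accuracy compute_metrics function for Trainer;
# the original card does not include its metric code.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```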

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.15.0
  • Tokenizers 0.15.0