---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_1x_deit_small_sgd_0001_fold2
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.7054908485856906
---

# smids_1x_deit_small_sgd_0001_fold2

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.7825
- Accuracy: 0.7055
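
As a usage illustration (not part of the original card), the sketch below runs inference with this checkpoint through the `transformers` image-classification pipeline. The repo id is inferred from the model name on this card, and the image path is a placeholder.

```python
# Minimal inference sketch; the repo id is inferred from this card's model name
# and "example.jpg" is a placeholder image path.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_1x_deit_small_sgd_0001_fold2",  # assumed repo id
)

# Prints the top predicted classes with scores, e.g. [{"label": "...", "score": ...}, ...]
print(classifier("example.jpg"))
```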

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
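
A hedged reconstruction of this configuration with `transformers.TrainingArguments` is sketched below. It is not the author's actual training script; the output directory is a placeholder and the per-epoch evaluation/save strategy is an assumption based on the results table that follows.

```python
# Hedged reconstruction of the listed hyperparameters; not the author's actual script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_1x_deit_small_sgd_0001_fold2",  # placeholder output path
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: the results table reports one eval per epoch
    save_strategy="epoch",        # assumption
    # The optimizer is left at the Trainer default (AdamW), which uses the betas
    # and epsilon listed above.
)
```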

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0783        | 1.0   | 75   | 1.0542          | 0.4509   |
| 1.0523        | 2.0   | 150  | 1.0379          | 0.4892   |
| 1.0294        | 3.0   | 225  | 1.0237          | 0.5225   |
| 1.0365        | 4.0   | 300  | 1.0109          | 0.5391   |
| 1.0245        | 5.0   | 375  | 0.9996          | 0.5524   |
| 0.9804        | 6.0   | 450  | 0.9885          | 0.5674   |
| 1.0004        | 7.0   | 525  | 0.9780          | 0.5857   |
| 0.979         | 8.0   | 600  | 0.9674          | 0.5957   |
| 0.9543        | 9.0   | 675  | 0.9573          | 0.5973   |
| 0.9564        | 10.0  | 750  | 0.9477          | 0.6023   |
| 0.9543        | 11.0  | 825  | 0.9383          | 0.6190   |
| 0.9466        | 12.0  | 900  | 0.9293          | 0.6256   |
| 0.9322        | 13.0  | 975  | 0.9204          | 0.6273   |
| 0.8987        | 14.0  | 1050 | 0.9121          | 0.6356   |
| 0.9075        | 15.0  | 1125 | 0.9040          | 0.6356   |
| 0.9086        | 16.0  | 1200 | 0.8961          | 0.6456   |
| 0.8923        | 17.0  | 1275 | 0.8885          | 0.6473   |
| 0.8806        | 18.0  | 1350 | 0.8814          | 0.6506   |
| 0.8958        | 19.0  | 1425 | 0.8744          | 0.6522   |
| 0.8609        | 20.0  | 1500 | 0.8676          | 0.6572   |
| 0.9085        | 21.0  | 1575 | 0.8614          | 0.6656   |
| 0.8274        | 22.0  | 1650 | 0.8554          | 0.6689   |
| 0.8481        | 23.0  | 1725 | 0.8495          | 0.6739   |
| 0.8248        | 24.0  | 1800 | 0.8441          | 0.6739   |
| 0.8342        | 25.0  | 1875 | 0.8389          | 0.6722   |
| 0.8564        | 26.0  | 1950 | 0.8341          | 0.6805   |
| 0.8458        | 27.0  | 2025 | 0.8294          | 0.6822   |
| 0.7955        | 28.0  | 2100 | 0.8250          | 0.6839   |
| 0.8045        | 29.0  | 2175 | 0.8208          | 0.6839   |
| 0.8063        | 30.0  | 2250 | 0.8168          | 0.6822   |
| 0.8139        | 31.0  | 2325 | 0.8131          | 0.6855   |
| 0.8204        | 32.0  | 2400 | 0.8097          | 0.6855   |
| 0.7688        | 33.0  | 2475 | 0.8065          | 0.6889   |
| 0.8146        | 34.0  | 2550 | 0.8035          | 0.6938   |
| 0.7717        | 35.0  | 2625 | 0.8006          | 0.6938   |
| 0.7969        | 36.0  | 2700 | 0.7981          | 0.6955   |
| 0.805         | 37.0  | 2775 | 0.7957          | 0.6955   |
| 0.8385        | 38.0  | 2850 | 0.7936          | 0.6988   |
| 0.7682        | 39.0  | 2925 | 0.7916          | 0.6988   |
| 0.7759        | 40.0  | 3000 | 0.7898          | 0.7005   |
| 0.8019        | 41.0  | 3075 | 0.7883          | 0.7005   |
| 0.7801        | 42.0  | 3150 | 0.7869          | 0.7005   |
| 0.7773        | 43.0  | 3225 | 0.7857          | 0.6988   |
| 0.788         | 44.0  | 3300 | 0.7847          | 0.7005   |
| 0.7811        | 45.0  | 3375 | 0.7839          | 0.7022   |
| 0.7761        | 46.0  | 3450 | 0.7833          | 0.7022   |
| 0.7855        | 47.0  | 3525 | 0.7828          | 0.7038   |
| 0.7857        | 48.0  | 3600 | 0.7826          | 0.7055   |
| 0.7597        | 49.0  | 3675 | 0.7825          | 0.7055   |
| 0.7828        | 50.0  | 3750 | 0.7825          | 0.7055   |
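
The held-out accuracy above could in principle be re-checked with a sketch like the following, assuming a local copy of the evaluation images in an `imagefolder` layout (the card does not publish the data). The repo id, the data directory, and the extra `evaluate` dependency are assumptions.

```python
# Hedged sketch for re-checking the reported accuracy; data_dir and the repo id
# are assumptions, and the evaluation images are not published with this card.
import evaluate
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_1x_deit_small_sgd_0001_fold2",  # assumed repo id
    top_k=1,
)

# Hypothetical local copy of the evaluation split in imagefolder layout: test/<class>/<image>.
dataset = load_dataset("imagefolder", data_dir="path/to/smids_fold2", split="test")

# Map the pipeline's predicted label names back to integer ids for the accuracy metric.
label2id = classifier.model.config.label2id
predictions = [label2id[classifier(image)[0]["label"]] for image in dataset["image"]]

accuracy = evaluate.load("accuracy")
print(accuracy.compute(predictions=predictions, references=dataset["label"]))
```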

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
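
As a small convenience (not part of the original card), the check below compares a local environment against the versions listed above.

```python
# Compare locally installed library versions against those listed on this card.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.35.2",
    "torch": "2.1.0+cu118",
    "datasets": "2.15.0",
    "tokenizers": "0.15.0",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    status = "matches" if installed[name] == want else "differs"
    print(f"{name}: installed {installed[name]}, card lists {want} ({status})")
```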