---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_1x_deit_small_adamax_00001_fold3
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.6511627906976745
---

hushem_1x_deit_small_adamax_00001_fold3

This model is a fine-tuned version of facebook/deit-small-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set (a short usage sketch follows the results):

  • Loss: 0.6553
  • Accuracy: 0.6512
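
A minimal inference sketch is shown below. The repository id (hkivancoral/hushem_1x_deit_small_adamax_00001_fold3) and the image path are assumptions, not details stated on this card.

```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumed repository id; adjust if the checkpoint lives elsewhere.
model_id = "hkivancoral/hushem_1x_deit_small_adamax_00001_fold3"

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id).eval()

# Hypothetical input image.
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])
```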

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
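
The card only records that an imagefolder dataset was used. As a rough illustration (the directory layout and path below are assumptions), such a dataset is typically organized as one subfolder per class and loaded like this:

```python
from datasets import load_dataset

# Hypothetical layout: data/train/<class_name>/*.jpg and data/test/<class_name>/*.jpg
dataset = load_dataset("imagefolder", data_dir="data")

print(dataset)                                   # DatasetDict with the discovered splits
print(dataset["train"].features["label"].names)  # class names inferred from folder names
```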

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a rough TrainingArguments sketch follows the list:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
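
As a hedged sketch only, the settings above map roughly onto the TrainingArguments below; the output directory and the per-epoch evaluation strategy are assumptions rather than details taken from the card.

```python
from transformers import TrainingArguments

# Rough reconstruction of the reported settings; output_dir and the
# evaluation strategy are assumptions.
training_args = TrainingArguments(
    output_dir="hushem_1x_deit_small_adamax_00001_fold3",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumed: the log shows one validation pass per epoch
)
```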

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.3641 | 0.3953 |
| 1.3358 | 2.0 | 12 | 1.2934 | 0.4186 |
| 1.3358 | 3.0 | 18 | 1.2307 | 0.4419 |
| 1.1053 | 4.0 | 24 | 1.1728 | 0.5814 |
| 0.9503 | 5.0 | 30 | 1.1200 | 0.5814 |
| 0.9503 | 6.0 | 36 | 1.0691 | 0.5814 |
| 0.8249 | 7.0 | 42 | 1.0268 | 0.6047 |
| 0.8249 | 8.0 | 48 | 1.0002 | 0.6279 |
| 0.6991 | 9.0 | 54 | 0.9588 | 0.6279 |
| 0.62 | 10.0 | 60 | 0.9254 | 0.6279 |
| 0.62 | 11.0 | 66 | 0.8988 | 0.6744 |
| 0.5003 | 12.0 | 72 | 0.8718 | 0.6279 |
| 0.5003 | 13.0 | 78 | 0.8636 | 0.6279 |
| 0.4251 | 14.0 | 84 | 0.8486 | 0.6279 |
| 0.3584 | 15.0 | 90 | 0.8228 | 0.6279 |
| 0.3584 | 16.0 | 96 | 0.8029 | 0.6512 |
| 0.2955 | 17.0 | 102 | 0.7980 | 0.6279 |
| 0.2955 | 18.0 | 108 | 0.7871 | 0.6047 |
| 0.2345 | 19.0 | 114 | 0.7646 | 0.6279 |
| 0.2022 | 20.0 | 120 | 0.7571 | 0.6279 |
| 0.2022 | 21.0 | 126 | 0.7433 | 0.6512 |
| 0.1667 | 22.0 | 132 | 0.7314 | 0.6744 |
| 0.1667 | 23.0 | 138 | 0.7263 | 0.6279 |
| 0.1461 | 24.0 | 144 | 0.7221 | 0.6744 |
| 0.1251 | 25.0 | 150 | 0.7120 | 0.6512 |
| 0.1251 | 26.0 | 156 | 0.6954 | 0.6512 |
| 0.1033 | 27.0 | 162 | 0.6904 | 0.6512 |
| 0.1033 | 28.0 | 168 | 0.6870 | 0.6744 |
| 0.0941 | 29.0 | 174 | 0.6821 | 0.6744 |
| 0.0792 | 30.0 | 180 | 0.6785 | 0.6744 |
| 0.0792 | 31.0 | 186 | 0.6761 | 0.6744 |
| 0.0681 | 32.0 | 192 | 0.6723 | 0.6744 |
| 0.0681 | 33.0 | 198 | 0.6679 | 0.6744 |
| 0.0621 | 34.0 | 204 | 0.6648 | 0.6512 |
| 0.0554 | 35.0 | 210 | 0.6628 | 0.6512 |
| 0.0554 | 36.0 | 216 | 0.6584 | 0.6744 |
| 0.0533 | 37.0 | 222 | 0.6569 | 0.6744 |
| 0.0533 | 38.0 | 228 | 0.6569 | 0.6512 |
| 0.0487 | 39.0 | 234 | 0.6565 | 0.6512 |
| 0.0478 | 40.0 | 240 | 0.6552 | 0.6512 |
| 0.0478 | 41.0 | 246 | 0.6553 | 0.6512 |
| 0.0459 | 42.0 | 252 | 0.6553 | 0.6512 |
| 0.0459 | 43.0 | 258 | 0.6553 | 0.6512 |
| 0.0488 | 44.0 | 264 | 0.6553 | 0.6512 |
| 0.0454 | 45.0 | 270 | 0.6553 | 0.6512 |
| 0.0454 | 46.0 | 276 | 0.6553 | 0.6512 |
| 0.0445 | 47.0 | 282 | 0.6553 | 0.6512 |
| 0.0445 | 48.0 | 288 | 0.6553 | 0.6512 |
| 0.0487 | 49.0 | 294 | 0.6553 | 0.6512 |
| 0.0463 | 50.0 | 300 | 0.6553 | 0.6512 |
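
For reference, a rough sketch of how the reported accuracy could be re-computed with the evaluate library is shown below; the repository id, data path, and batch size are assumptions, since the actual fold-3 test split is not documented on this card.

```python
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumed identifiers; replace with the real checkpoint and data location.
model_id = "hkivancoral/hushem_1x_deit_small_adamax_00001_fold3"
test_set = load_dataset("imagefolder", data_dir="data", split="test")

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id).eval()

accuracy = evaluate.load("accuracy")
for batch in test_set.iter(batch_size=32):
    inputs = processor(images=batch["image"], return_tensors="pt")
    with torch.no_grad():
        preds = model(**inputs).logits.argmax(-1)
    accuracy.add_batch(predictions=preds.tolist(), references=batch["label"])

print(accuracy.compute())  # {'accuracy': ...}
```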

Framework versions

  • Transformers 4.35.0
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.6
  • Tokenizers 0.14.1