---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_1x_deit_small_sgd_00001_fold5
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.49833333333333335
---

# smids_1x_deit_small_sgd_00001_fold5

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 1.0422
- Accuracy: 0.4983
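
For reference, below is a minimal inference sketch. It assumes the checkpoint is published on the Hub under the repo id `hkivancoral/smids_1x_deit_small_sgd_00001_fold5` (inferred from the model name) and that a local RGB image is available; adjust both to your setup.

```python
# Minimal inference sketch; the repo id and image path are assumptions.
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_1x_deit_small_sgd_00001_fold5",  # assumed repo id
)

image = Image.open("example.jpg")  # placeholder path to a local RGB image
for prediction in classifier(image):
    print(prediction["label"], round(prediction["score"], 4))
```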

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
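
As a rough sketch, these settings map onto a `TrainingArguments` configuration like the one below. The output directory and evaluation strategy are assumptions, not taken from the original training script, which is not included in this card.

```python
# Sketch of a TrainingArguments setup matching the listed hyperparameters.
# output_dir and evaluation_strategy are assumptions, not from the original run.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_1x_deit_small_sgd_00001_fold5",  # assumed
    learning_rate=1e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumed; the results table shows one eval per epoch
)
```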

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0773        | 1.0   | 75   | 1.0761          | 0.4267   |
| 1.0923        | 2.0   | 150  | 1.0743          | 0.43     |
| 1.0827        | 3.0   | 225  | 1.0725          | 0.4317   |
| 1.0592        | 4.0   | 300  | 1.0709          | 0.4333   |
| 1.0688        | 5.0   | 375  | 1.0693          | 0.43     |
| 1.0722        | 6.0   | 450  | 1.0678          | 0.4317   |
| 1.0759        | 7.0   | 525  | 1.0664          | 0.4317   |
| 1.0583        | 8.0   | 600  | 1.0650          | 0.4333   |
| 1.0565        | 9.0   | 675  | 1.0637          | 0.4383   |
| 1.0589        | 10.0  | 750  | 1.0624          | 0.44     |
| 1.0608        | 11.0  | 825  | 1.0612          | 0.4483   |
| 1.0706        | 12.0  | 900  | 1.0600          | 0.4517   |
| 1.0517        | 13.0  | 975  | 1.0589          | 0.4567   |
| 1.0525        | 14.0  | 1050 | 1.0579          | 0.4567   |
| 1.0257        | 15.0  | 1125 | 1.0569          | 0.4583   |
| 1.0608        | 16.0  | 1200 | 1.0559          | 0.4617   |
| 1.0548        | 17.0  | 1275 | 1.0550          | 0.46     |
| 1.0482        | 18.0  | 1350 | 1.0541          | 0.4617   |
| 1.0606        | 19.0  | 1425 | 1.0533          | 0.4633   |
| 1.0832        | 20.0  | 1500 | 1.0524          | 0.4667   |
| 1.0387        | 21.0  | 1575 | 1.0517          | 0.4717   |
| 1.0524        | 22.0  | 1650 | 1.0510          | 0.4733   |
| 1.043         | 23.0  | 1725 | 1.0503          | 0.4733   |
| 1.0404        | 24.0  | 1800 | 1.0496          | 0.475    |
| 1.0507        | 25.0  | 1875 | 1.0490          | 0.4767   |
| 1.026         | 26.0  | 1950 | 1.0484          | 0.48     |
| 1.0409        | 27.0  | 2025 | 1.0478          | 0.48     |
| 1.0569        | 28.0  | 2100 | 1.0473          | 0.4867   |
| 1.0416        | 29.0  | 2175 | 1.0468          | 0.4867   |
| 1.0319        | 30.0  | 2250 | 1.0463          | 0.4867   |
| 1.0368        | 31.0  | 2325 | 1.0459          | 0.49     |
| 1.0498        | 32.0  | 2400 | 1.0455          | 0.4933   |
| 1.0315        | 33.0  | 2475 | 1.0451          | 0.4933   |
| 1.0281        | 34.0  | 2550 | 1.0447          | 0.49     |
| 1.0165        | 35.0  | 2625 | 1.0444          | 0.4917   |
| 1.0233        | 36.0  | 2700 | 1.0441          | 0.4933   |
| 1.0217        | 37.0  | 2775 | 1.0438          | 0.4967   |
| 1.0413        | 38.0  | 2850 | 1.0435          | 0.4967   |
| 1.0419        | 39.0  | 2925 | 1.0433          | 0.4967   |
| 1.0408        | 40.0  | 3000 | 1.0431          | 0.4967   |
| 1.0269        | 41.0  | 3075 | 1.0429          | 0.4983   |
| 1.0155        | 42.0  | 3150 | 1.0428          | 0.4983   |
| 1.0319        | 43.0  | 3225 | 1.0426          | 0.4983   |
| 1.015         | 44.0  | 3300 | 1.0425          | 0.4983   |
| 1.0304        | 45.0  | 3375 | 1.0424          | 0.4983   |
| 1.037         | 46.0  | 3450 | 1.0424          | 0.4983   |
| 1.0444        | 47.0  | 3525 | 1.0423          | 0.4983   |
| 1.0465        | 48.0  | 3600 | 1.0423          | 0.4983   |
| 1.0337        | 49.0  | 3675 | 1.0423          | 0.4983   |
| 1.0221        | 50.0  | 3750 | 1.0422          | 0.4983   |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
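
To check a local environment against the versions listed above, a small sketch using only the standard library:

```python
# Compare installed package versions against those listed in this card.
from importlib.metadata import version

expected = {
    "transformers": "4.35.2",
    "torch": "2.1.0+cu118",
    "datasets": "2.15.0",
    "tokenizers": "0.15.0",
}
for package, wanted in expected.items():
    installed = version(package)
    status = "OK" if installed == wanted else f"differs (trained with {wanted})"
    print(f"{package}: {installed} {status}")
```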