license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_1x_deit_tiny_sgd_lr001_fold4
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.47619047619047616

hushem_1x_deit_tiny_sgd_lr001_fold4

This model is a fine-tuned version of facebook/deit-tiny-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2453
  • Accuracy: 0.4762
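
For reference, a minimal sketch of how the checkpoint could be loaded for inference with the transformers image-classification pipeline. The Hub repo id and the example image path below are assumptions; adjust them to wherever the weights actually live.

```python
# Minimal inference sketch (repo id assumed from the model name; point `model=`
# at a local checkpoint directory instead if the weights are stored locally).
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_1x_deit_tiny_sgd_lr001_fold4",
)
print(classifier("example.jpg"))  # path or URL to an input image
```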

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reconstruction sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
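
A rough reconstruction of this setup with the Trainer API is sketched below. The dataset path, split names, number of classes, and image preprocessing are assumptions not specified by this card; only the hyperparameters mirror the list above, and Trainer's default AdamW optimizer (betas=(0.9, 0.999), eps=1e-08) stands in for the Adam settings listed.

```python
# Rough reconstruction of the training configuration from the list above.
# The imagefolder path and split names are placeholders, and the image
# preprocessing (resizing/normalising to pixel_values) is omitted for brevity,
# so this documents the configuration rather than a complete training script.
from datasets import load_dataset
from transformers import AutoModelForImageClassification, Trainer, TrainingArguments

dataset = load_dataset("imagefolder", data_dir="path/to/hushem_fold4")  # placeholder path

model = AutoModelForImageClassification.from_pretrained(
    "facebook/deit-tiny-patch16-224",
    num_labels=dataset["train"].features["label"].num_classes,
    ignore_mismatched_sizes=True,  # replace the 1000-class ImageNet head
)

training_args = TrainingArguments(
    output_dir="hushem_1x_deit_tiny_sgd_lr001_fold4",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],      # image transforms omitted here
    eval_dataset=dataset["validation"],  # placeholder split name
)
# trainer.train()
```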

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 6    | 1.5316          | 0.1905   |
| 1.5831        | 2.0   | 12   | 1.5015          | 0.1905   |
| 1.5831        | 3.0   | 18   | 1.4762          | 0.1667   |
| 1.5346        | 4.0   | 24   | 1.4541          | 0.1905   |
| 1.5081        | 5.0   | 30   | 1.4366          | 0.2381   |
| 1.5081        | 6.0   | 36   | 1.4200          | 0.2857   |
| 1.4598        | 7.0   | 42   | 1.4054          | 0.2857   |
| 1.4598        | 8.0   | 48   | 1.3912          | 0.2857   |
| 1.4326        | 9.0   | 54   | 1.3788          | 0.3095   |
| 1.3952        | 10.0  | 60   | 1.3675          | 0.3571   |
| 1.3952        | 11.0  | 66   | 1.3571          | 0.3810   |
| 1.3596        | 12.0  | 72   | 1.3480          | 0.3810   |
| 1.3596        | 13.0  | 78   | 1.3393          | 0.3810   |
| 1.363         | 14.0  | 84   | 1.3316          | 0.3810   |
| 1.3301        | 15.0  | 90   | 1.3251          | 0.4048   |
| 1.3301        | 16.0  | 96   | 1.3178          | 0.4048   |
| 1.3095        | 17.0  | 102  | 1.3113          | 0.4048   |
| 1.3095        | 18.0  | 108  | 1.3061          | 0.4048   |
| 1.3044        | 19.0  | 114  | 1.3014          | 0.4048   |
| 1.2995        | 20.0  | 120  | 1.2970          | 0.4048   |
| 1.2995        | 21.0  | 126  | 1.2921          | 0.4048   |
| 1.2717        | 22.0  | 132  | 1.2882          | 0.4048   |
| 1.2717        | 23.0  | 138  | 1.2838          | 0.4048   |
| 1.2926        | 24.0  | 144  | 1.2801          | 0.4048   |
| 1.2458        | 25.0  | 150  | 1.2760          | 0.4048   |
| 1.2458        | 26.0  | 156  | 1.2723          | 0.4286   |
| 1.2592        | 27.0  | 162  | 1.2686          | 0.4286   |
| 1.2592        | 28.0  | 168  | 1.2659          | 0.4286   |
| 1.2355        | 29.0  | 174  | 1.2631          | 0.4286   |
| 1.2526        | 30.0  | 180  | 1.2605          | 0.4286   |
| 1.2526        | 31.0  | 186  | 1.2579          | 0.4524   |
| 1.2439        | 32.0  | 192  | 1.2557          | 0.4524   |
| 1.2439        | 33.0  | 198  | 1.2536          | 0.4524   |
| 1.1949        | 34.0  | 204  | 1.2519          | 0.4524   |
| 1.2285        | 35.0  | 210  | 1.2501          | 0.4524   |
| 1.2285        | 36.0  | 216  | 1.2488          | 0.4524   |
| 1.2118        | 37.0  | 222  | 1.2477          | 0.4524   |
| 1.2118        | 38.0  | 228  | 1.2468          | 0.4762   |
| 1.2136        | 39.0  | 234  | 1.2462          | 0.4762   |
| 1.2259        | 40.0  | 240  | 1.2457          | 0.4762   |
| 1.2259        | 41.0  | 246  | 1.2454          | 0.4762   |
| 1.2204        | 42.0  | 252  | 1.2453          | 0.4762   |
| 1.2204        | 43.0  | 258  | 1.2453          | 0.4762   |
| 1.2061        | 44.0  | 264  | 1.2453          | 0.4762   |
| 1.2146        | 45.0  | 270  | 1.2453          | 0.4762   |
| 1.2146        | 46.0  | 276  | 1.2453          | 0.4762   |
| 1.2137        | 47.0  | 282  | 1.2453          | 0.4762   |
| 1.2137        | 48.0  | 288  | 1.2453          | 0.4762   |
| 1.2227        | 49.0  | 294  | 1.2453          | 0.4762   |
| 1.2027        | 50.0  | 300  | 1.2453          | 0.4762   |

Framework versions

  • Transformers 4.35.0
  • PyTorch 2.1.0+cu118
  • Datasets 2.14.6
  • Tokenizers 0.14.1
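
To check that a local environment matches these versions, the installed libraries can be printed directly:

```python
# Print the locally installed versions of the libraries listed above.
import datasets
import tokenizers
import torch
import transformers

for name, module in [
    ("Transformers", transformers),
    ("PyTorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```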