---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_1x_deit_tiny_sgd_0001_fold4
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.2857142857142857
---

# hushem_1x_deit_tiny_sgd_0001_fold4

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 1.5092
- Accuracy: 0.2857
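
For a quick check of the fine-tuned checkpoint, it can be loaded with the standard `transformers` image-classification pipeline. The snippet below is a minimal sketch: the repository id is assumed from the model name and uploader namespace, and the image path is a placeholder; adjust both to your setup.

```python
from transformers import pipeline

# Minimal sketch: repository id assumed from the model name and uploader
# namespace; replace it (and the image path) as needed.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_1x_deit_tiny_sgd_0001_fold4",
)

predictions = classifier("path/to/image.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```

The pipeline uses the image processor saved with the checkpoint, so no separate preprocessing step is required.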

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
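
As a rough guide only, the hyperparameters above map onto Hugging Face `TrainingArguments` as sketched below. The output directory and the per-epoch evaluation setting are assumptions (the latter inferred from the per-epoch validation rows in the results table), not the original training script; the Adam settings listed above match the `Trainer` defaults, so they are left implicit.

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters as TrainingArguments; output_dir is a
# placeholder. Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer
# default optimizer configuration, so it is not set explicitly here.
training_args = TrainingArguments(
    output_dir="hushem_1x_deit_tiny_sgd_0001_fold4",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # per-epoch validation, as in the results table
)
```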

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 6    | 1.6866          | 0.2857   |
| 1.7029        | 2.0   | 12   | 1.6755          | 0.2857   |
| 1.7029        | 3.0   | 18   | 1.6648          | 0.2857   |
| 1.6819        | 4.0   | 24   | 1.6543          | 0.2857   |
| 1.7084        | 5.0   | 30   | 1.6452          | 0.2857   |
| 1.7084        | 6.0   | 36   | 1.6365          | 0.2857   |
| 1.661         | 7.0   | 42   | 1.6277          | 0.2857   |
| 1.661         | 8.0   | 48   | 1.6195          | 0.2857   |
| 1.6506        | 9.0   | 54   | 1.6113          | 0.2857   |
| 1.6321        | 10.0  | 60   | 1.6035          | 0.2857   |
| 1.6321        | 11.0  | 66   | 1.5969          | 0.2857   |
| 1.605         | 12.0  | 72   | 1.5900          | 0.2857   |
| 1.605         | 13.0  | 78   | 1.5837          | 0.2857   |
| 1.6205        | 14.0  | 84   | 1.5775          | 0.2857   |
| 1.6128        | 15.0  | 90   | 1.5717          | 0.2857   |
| 1.6128        | 16.0  | 96   | 1.5663          | 0.2857   |
| 1.5818        | 17.0  | 102  | 1.5613          | 0.2857   |
| 1.5818        | 18.0  | 108  | 1.5566          | 0.2857   |
| 1.6012        | 19.0  | 114  | 1.5522          | 0.2857   |
| 1.6068        | 20.0  | 120  | 1.5482          | 0.2857   |
| 1.6068        | 21.0  | 126  | 1.5443          | 0.2857   |
| 1.5674        | 22.0  | 132  | 1.5409          | 0.2857   |
| 1.5674        | 23.0  | 138  | 1.5376          | 0.2857   |
| 1.565         | 24.0  | 144  | 1.5344          | 0.2857   |
| 1.5842        | 25.0  | 150  | 1.5314          | 0.2857   |
| 1.5842        | 26.0  | 156  | 1.5286          | 0.2857   |
| 1.5593        | 27.0  | 162  | 1.5260          | 0.2857   |
| 1.5593        | 28.0  | 168  | 1.5236          | 0.2857   |
| 1.5824        | 29.0  | 174  | 1.5216          | 0.2857   |
| 1.537         | 30.0  | 180  | 1.5196          | 0.2857   |
| 1.537         | 31.0  | 186  | 1.5181          | 0.2857   |
| 1.5437        | 32.0  | 192  | 1.5165          | 0.2857   |
| 1.5437        | 33.0  | 198  | 1.5150          | 0.2857   |
| 1.5369        | 34.0  | 204  | 1.5137          | 0.2857   |
| 1.5371        | 35.0  | 210  | 1.5125          | 0.2857   |
| 1.5371        | 36.0  | 216  | 1.5116          | 0.2857   |
| 1.5229        | 37.0  | 222  | 1.5109          | 0.2857   |
| 1.5229        | 38.0  | 228  | 1.5102          | 0.2857   |
| 1.5623        | 39.0  | 234  | 1.5097          | 0.2857   |
| 1.5343        | 40.0  | 240  | 1.5094          | 0.2857   |
| 1.5343        | 41.0  | 246  | 1.5093          | 0.2857   |
| 1.5211        | 42.0  | 252  | 1.5092          | 0.2857   |
| 1.5211        | 43.0  | 258  | 1.5092          | 0.2857   |
| 1.5618        | 44.0  | 264  | 1.5092          | 0.2857   |
| 1.5309        | 45.0  | 270  | 1.5092          | 0.2857   |
| 1.5309        | 46.0  | 276  | 1.5092          | 0.2857   |
| 1.5362        | 47.0  | 282  | 1.5092          | 0.2857   |
| 1.5362        | 48.0  | 288  | 1.5092          | 0.2857   |
| 1.5728        | 49.0  | 294  | 1.5092          | 0.2857   |
| 1.5244        | 50.0  | 300  | 1.5092          | 0.2857   |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1