---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_1x_deit_tiny_sgd_00001_fold4
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.2857142857142857
---

# hushem_1x_deit_tiny_sgd_00001_fold4

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6751
- Accuracy: 0.2857
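
As a quick usage sketch (not part of the original card), the checkpoint can be loaded with the standard Transformers image-classification classes. The repository id `hkivancoral/hushem_1x_deit_tiny_sgd_00001_fold4` and the image path are assumptions; adjust them to your setup.

```python
# Minimal inference sketch. The repo id is assumed from the model name and
# author; replace "example.jpg" with a real image path.
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "hkivancoral/hushem_1x_deit_tiny_sgd_00001_fold4"  # assumed Hub repo id
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its class name.
print(model.config.id2label[logits.argmax(-1).item()])
```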

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch is shown after the list):

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
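
For reference, the sketch below shows `TrainingArguments` that match the settings listed above. The original training script is not part of this card, so the output directory and evaluation strategy are assumptions; the listed Adam betas and epsilon correspond to the Trainer's default optimizer settings, so no optimizer-specific arguments should be needed.

```python
# Sketch of TrainingArguments matching the hyperparameters above; names not in
# the list (output_dir, evaluation_strategy) are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hushem_1x_deit_tiny_sgd_00001_fold4",  # assumed output directory
    learning_rate=1e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer
    # defaults, so they are not set explicitly here.
    evaluation_strategy="epoch",  # assumed: the results table logs one eval per epoch
)
```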

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 6    | 1.6974          | 0.2857   |
| 1.71          | 2.0   | 12   | 1.6962          | 0.2857   |
| 1.71          | 3.0   | 18   | 1.6951          | 0.2857   |
| 1.7036        | 4.0   | 24   | 1.6940          | 0.2857   |
| 1.7465        | 5.0   | 30   | 1.6930          | 0.2857   |
| 1.7465        | 6.0   | 36   | 1.6921          | 0.2857   |
| 1.709         | 7.0   | 42   | 1.6911          | 0.2857   |
| 1.709         | 8.0   | 48   | 1.6901          | 0.2857   |
| 1.712         | 9.0   | 54   | 1.6892          | 0.2857   |
| 1.7048        | 10.0  | 60   | 1.6882          | 0.2857   |
| 1.7048        | 11.0  | 66   | 1.6874          | 0.2857   |
| 1.6828        | 12.0  | 72   | 1.6866          | 0.2857   |
| 1.6828        | 13.0  | 78   | 1.6858          | 0.2857   |
| 1.7139        | 14.0  | 84   | 1.6850          | 0.2857   |
| 1.719         | 15.0  | 90   | 1.6842          | 0.2857   |
| 1.719         | 16.0  | 96   | 1.6835          | 0.2857   |
| 1.6904        | 17.0  | 102  | 1.6828          | 0.2857   |
| 1.6904        | 18.0  | 108  | 1.6821          | 0.2857   |
| 1.7154        | 19.0  | 114  | 1.6815          | 0.2857   |
| 1.7326        | 20.0  | 120  | 1.6809          | 0.2857   |
| 1.7326        | 21.0  | 126  | 1.6804          | 0.2857   |
| 1.6942        | 22.0  | 132  | 1.6799          | 0.2857   |
| 1.6942        | 23.0  | 138  | 1.6794          | 0.2857   |
| 1.6945        | 24.0  | 144  | 1.6789          | 0.2857   |
| 1.728         | 25.0  | 150  | 1.6784          | 0.2857   |
| 1.728         | 26.0  | 156  | 1.6780          | 0.2857   |
| 1.7026        | 27.0  | 162  | 1.6776          | 0.2857   |
| 1.7026        | 28.0  | 168  | 1.6772          | 0.2857   |
| 1.7403        | 29.0  | 174  | 1.6769          | 0.2857   |
| 1.6716        | 30.0  | 180  | 1.6766          | 0.2857   |
| 1.6716        | 31.0  | 186  | 1.6764          | 0.2857   |
| 1.6806        | 32.0  | 192  | 1.6761          | 0.2857   |
| 1.6806        | 33.0  | 198  | 1.6759          | 0.2857   |
| 1.6988        | 34.0  | 204  | 1.6757          | 0.2857   |
| 1.6893        | 35.0  | 210  | 1.6755          | 0.2857   |
| 1.6893        | 36.0  | 216  | 1.6754          | 0.2857   |
| 1.6718        | 37.0  | 222  | 1.6753          | 0.2857   |
| 1.6718        | 38.0  | 228  | 1.6752          | 0.2857   |
| 1.7279        | 39.0  | 234  | 1.6751          | 0.2857   |
| 1.6803        | 40.0  | 240  | 1.6751          | 0.2857   |
| 1.6803        | 41.0  | 246  | 1.6751          | 0.2857   |
| 1.6785        | 42.0  | 252  | 1.6751          | 0.2857   |
| 1.6785        | 43.0  | 258  | 1.6751          | 0.2857   |
| 1.7169        | 44.0  | 264  | 1.6751          | 0.2857   |
| 1.6924        | 45.0  | 270  | 1.6751          | 0.2857   |
| 1.6924        | 46.0  | 276  | 1.6751          | 0.2857   |
| 1.6961        | 47.0  | 282  | 1.6751          | 0.2857   |
| 1.6961        | 48.0  | 288  | 1.6751          | 0.2857   |
| 1.7415        | 49.0  | 294  | 1.6751          | 0.2857   |
| 1.681         | 50.0  | 300  | 1.6751          | 0.2857   |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
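
One possible way to reproduce this environment, assuming a pip-based install; the extra index URL selects the CUDA 11.8 PyTorch build indicated by `2.1.0+cu118`:

```bash
# Assumed pip-based setup matching the versions listed above.
pip install torch==2.1.0 --index-url https://download.pytorch.org/whl/cu118
pip install transformers==4.35.2 datasets==2.15.0 tokenizers==0.15.0
```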