---
library_name: transformers
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window8-256
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: swinv2-tiny-patch4-window8-256-DMAE-da2-colab
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.7608695652173914
---

swinv2-tiny-patch4-window8-256-DMAE-da2-colab

This model is a fine-tuned version of microsoft/swinv2-tiny-patch4-window8-256 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9368
  • Accuracy: 0.7609
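
This checkpoint can be loaded like any other image-classification model in transformers. A minimal inference sketch, assuming the model is published on the Hugging Face Hub under the repo id Augusto777/swinv2-tiny-patch4-window8-256-DMAE-da2-colab and that path/to/image.jpg is a placeholder for your own image (both are assumptions, not stated in this card):

```python
from transformers import pipeline

# Assumed Hub repo id; replace with the actual location of this checkpoint.
classifier = pipeline(
    "image-classification",
    model="Augusto777/swinv2-tiny-patch4-window8-256-DMAE-da2-colab",
)

# The image path is a placeholder; any local path, URL, or PIL image works.
predictions = classifier("path/to/image.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```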

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
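
The dataset is only identified as an imagefolder dataset. A minimal sketch of how such a dataset is typically loaded with the datasets library, assuming a standard one-sub-folder-per-class layout (the data_dir path and split names are assumptions):

```python
from datasets import load_dataset

# Assumed layout: data/train/<class_name>/*.jpg and data/test/<class_name>/*.jpg;
# class labels are inferred from the sub-folder names.
dataset = load_dataset("imagefolder", data_dir="data")

print(dataset)                    # DatasetDict with the discovered splits
print(dataset["train"].features)  # "image" plus an automatically inferred "label"
```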

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch mirroring them follows the list):

  • learning_rate: 4e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 40
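
A TrainingArguments sketch that mirrors the values listed above; the output directory, evaluation/save strategy, and best-model settings are assumptions, since this card does not record them:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swinv2-tiny-patch4-window8-256-DMAE-da2-colab",  # assumed
    learning_rate=4e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,    # effective train batch size: 16 * 4 = 64
    num_train_epochs=40,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    eval_strategy="epoch",            # assumed; matches the per-epoch table below
    save_strategy="epoch",            # assumed
    load_best_model_at_end=True,      # assumed
    metric_for_best_model="accuracy", # assumed
)
```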

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.4149        | 0.9565  | 11   | 1.3905          | 0.2174   |
| 1.3431        | 1.9348  | 22   | 1.3828          | 0.3043   |
| 1.2396        | 2.9130  | 33   | 1.2675          | 0.4348   |
| 1.1377        | 3.9783  | 45   | 1.2067          | 0.3478   |
| 1.0144        | 4.9565  | 56   | 0.9060          | 0.6087   |
| 0.9016        | 5.9348  | 67   | 0.8025          | 0.6739   |
| 0.7941        | 6.9130  | 78   | 0.7812          | 0.6957   |
| 0.6986        | 7.9783  | 90   | 0.9441          | 0.5870   |
| 0.6245        | 8.9565  | 101  | 0.8641          | 0.6957   |
| 0.6044        | 9.9348  | 112  | 0.8648          | 0.6087   |
| 0.536         | 10.9130 | 123  | 0.8800          | 0.5870   |
| 0.4825        | 11.9783 | 135  | 0.8388          | 0.7391   |
| 0.4972        | 12.9565 | 146  | 0.8763          | 0.7174   |
| 0.4284        | 13.9348 | 157  | 0.8228          | 0.6957   |
| 0.3961        | 14.9130 | 168  | 0.8260          | 0.7174   |
| 0.3877        | 15.9783 | 180  | 0.9368          | 0.7609   |
| 0.3744        | 16.9565 | 191  | 1.1221          | 0.6304   |
| 0.3266        | 17.9348 | 202  | 1.0177          | 0.6739   |
| 0.3257        | 18.9130 | 213  | 1.0300          | 0.6957   |
| 0.3164        | 19.9783 | 225  | 1.1344          | 0.6957   |
| 0.2965        | 20.9565 | 236  | 0.9283          | 0.7391   |
| 0.293         | 21.9348 | 247  | 1.0128          | 0.6957   |
| 0.2929        | 22.9130 | 258  | 1.0450          | 0.7609   |
| 0.2878        | 23.9783 | 270  | 1.1482          | 0.7174   |
| 0.2447        | 24.9565 | 281  | 1.0716          | 0.7174   |
| 0.2601        | 25.9348 | 292  | 1.0770          | 0.6957   |
| 0.2299        | 26.9130 | 303  | 1.1769          | 0.7391   |
| 0.2401        | 27.9783 | 315  | 1.1407          | 0.7174   |
| 0.2347        | 28.9565 | 326  | 1.1929          | 0.6957   |
| 0.2584        | 29.9348 | 337  | 1.0957          | 0.6739   |
| 0.2204        | 30.9130 | 348  | 1.1721          | 0.6739   |
| 0.2031        | 31.9783 | 360  | 1.0843          | 0.6739   |
| 0.2241        | 32.9565 | 371  | 1.1350          | 0.6957   |
| 0.1798        | 33.9348 | 382  | 1.2419          | 0.6957   |
| 0.2435        | 34.9130 | 393  | 1.1522          | 0.6957   |
| 0.1857        | 35.9783 | 405  | 1.1207          | 0.6957   |
| 0.1889        | 36.9565 | 416  | 1.1711          | 0.6957   |
| 0.2043        | 37.9348 | 427  | 1.1978          | 0.6957   |
| 0.1951        | 38.9130 | 438  | 1.2107          | 0.7174   |
| 0.1901        | 39.1087 | 440  | 1.2108          | 0.7174   |

Framework versions

  • Transformers 4.46.2
  • PyTorch 2.5.1+cu121
  • Datasets 3.1.0
  • Tokenizers 0.20.3