vit-base-patch16-224-in21k-finetuned

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on a local image-classification dataset loaded with the Datasets `imagefolder` builder. It achieves the following results on the evaluation set:

  • Loss: 0.1228
  • Accuracy: 1.0
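
The fine-tuned checkpoint can be loaded for inference with the Transformers image-classification pipeline. A minimal sketch, assuming the repo id shown on this card; the image path is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
classifier = pipeline(
    "image-classification",
    model="Towen/vit-base-patch16-224-in21k-finetuned",
)

# "example.jpg" is a placeholder; any local path, URL, or PIL image works.
predictions = classifier("example.jpg")
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.4f}")
```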

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
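
These settings map one-to-one onto Hugging Face `TrainingArguments`. A minimal sketch reproducing them, assuming a single GPU (so total_train_batch_size = 32 × 4 = 128); `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-in21k-finetuned",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,   # 32 * 4 = effective batch of 128
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
)
```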

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.5651        | 0.9816  | 40   | 0.7021          | 0.5      |
| 0.3002        | 1.9877  | 81   | 0.7162          | 0.625    |
| 0.251         | 2.9939  | 122  | 0.8250          | 0.625    |
| 0.1628        | 4.0     | 163  | 0.8735          | 0.625    |
| 0.1763        | 4.9816  | 203  | 0.7803          | 0.625    |
| 0.1694        | 5.9877  | 244  | 0.3916          | 0.6875   |
| 0.1572        | 6.9939  | 285  | 0.6275          | 0.8125   |
| 0.1343        | 8.0     | 326  | 1.3112          | 0.625    |
| 0.1629        | 8.9816  | 366  | 0.5798          | 0.625    |
| 0.1675        | 9.9877  | 407  | 0.4662          | 0.8125   |
| 0.1254        | 10.9939 | 448  | 0.4484          | 0.8125   |
| 0.136         | 12.0    | 489  | 0.3055          | 0.8125   |
| 0.1303        | 12.9816 | 529  | 0.2235          | 0.875    |
| 0.177         | 13.9877 | 570  | 0.4362          | 0.8125   |
| 0.125         | 14.9939 | 611  | 0.5964          | 0.625    |
| 0.1059        | 16.0    | 652  | 0.5711          | 0.6875   |
| 0.1012        | 16.9816 | 692  | 0.1228          | 1.0      |
| 0.0945        | 17.9877 | 733  | 0.1478          | 1.0      |
| 0.1169        | 18.9939 | 774  | 0.2164          | 0.9375   |
| 0.0968        | 19.6319 | 800  | 0.2333          | 0.875    |
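
The table ends at 800 optimizer steps, so with warmup_ratio 0.1 the cosine schedule warms up over the first 80 steps. A sketch of the equivalent scheduler (the tiny parameter list and optimizer here are stand-ins just to instantiate it):

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Stand-in optimizer matching the hyperparameters above.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=5e-5, betas=(0.9, 0.999), eps=1e-8)

total_steps = 800                      # final step count from the table
warmup_steps = int(0.1 * total_steps)  # warmup_ratio 0.1 -> 80 steps

scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=warmup_steps,
    num_training_steps=total_steps,
)
```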

Framework versions

  • Transformers 4.46.2
  • PyTorch 2.5.1+cu121
  • Datasets 3.1.0
  • Tokenizers 0.20.3
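
A quick way to confirm a matching environment (a minimal sketch; expected versions are the ones listed above):

```python
import datasets
import tokenizers
import torch
import transformers

# Expected per this card: transformers 4.46.2, torch 2.5.1+cu121,
# datasets 3.1.0, tokenizers 0.20.3
for name, module in [
    ("transformers", transformers),
    ("torch", torch),
    ("datasets", datasets),
    ("tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```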