---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
  - image-classification
  - vision
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: vit-base-beans
    results: []
---

# vit-base-beans

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set:

- Loss: 0.0079
- Accuracy: 1.0
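
The snippet below is a minimal inference sketch using the `transformers` image-classification pipeline; the repo id `zqTensor/vit-base-beans` and the image path are assumptions for illustration, not confirmed by this card.

```python
# Minimal inference sketch; "zqTensor/vit-base-beans" is the assumed repo id for
# this checkpoint and "leaf.jpg" is a placeholder image path.
from transformers import pipeline

classifier = pipeline("image-classification", model="zqTensor/vit-base-beans")

# The beans dataset has three classes: angular_leaf_spot, bean_rust, healthy.
predictions = classifier("leaf.jpg")
print(predictions)  # list of {'label': ..., 'score': ...} dicts, highest score first
```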

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
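
The sketch below maps the hyperparameters above onto `TrainingArguments`; the original training script is not part of this card, so `output_dir` and the epoch-based evaluation/logging cadence are assumptions.

```python
# Sketch of TrainingArguments mirroring the listed hyperparameters (not the
# original training script; output_dir and the epoch-based cadence are assumed).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-beans",      # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,    # 2 GPUs -> total train batch size 16
    per_device_eval_batch_size=8,     # 2 GPUs -> total eval batch size 16
    seed=1337,
    num_train_epochs=5.0,             # as listed above
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="epoch",            # assumed from the per-epoch results below
    logging_strategy="epoch",
)
```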

### Training results

| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.2859        | 1.0   | 130  | 0.9624   | 0.2189          |
| 0.1316        | 2.0   | 260  | 0.9699   | 0.1334          |
| 0.1438        | 3.0   | 390  | 0.9699   | 0.0981          |
| 0.0833        | 4.0   | 520  | 0.9925   | 0.0656          |
| 0.1107        | 5.0   | 650  | 0.9774   | 0.0817          |
| 0.098         | 11.0  | 715  | 0.9925   | 0.0570          |
| 0.0935        | 12.0  | 780  | 1.0      | 0.0418          |
| 0.0907        | 13.0  | 845  | 0.9699   | 0.1093          |
| 0.0947        | 14.0  | 910  | 1.0      | 0.0347          |
| 0.1259        | 15.0  | 975  | 0.9850   | 0.0710          |
| 0.0325        | 16.0  | 1040 | 0.9774   | 0.0587          |
| 0.1397        | 17.0  | 1105 | 0.9925   | 0.0495          |
| 0.0456        | 18.0  | 1170 | 0.9774   | 0.0519          |
| 0.0439        | 19.0  | 1235 | 1.0      | 0.0216          |
| 0.0484        | 20.0  | 1300 | 0.9925   | 0.0316          |
| 0.0276        | 21.0  | 1365 | 1.0      | 0.0192          |
| 0.0348        | 22.0  | 1430 | 1.0      | 0.0177          |
| 0.0326        | 23.0  | 1495 | 1.0      | 0.0175          |
| 0.1014        | 24.0  | 1560 | 0.9925   | 0.0235          |
| 0.0395        | 25.0  | 1625 | 0.9850   | 0.0451          |
| 0.0265        | 26.0  | 1690 | 0.9925   | 0.0297          |
| 0.0569        | 27.0  | 1755 | 0.9925   | 0.0263          |
| 0.0666        | 28.0  | 1820 | 0.9850   | 0.0245          |
| 0.0285        | 29.0  | 1885 | 0.9774   | 0.0418          |
| 0.0892        | 30.0  | 1950 | 0.9925   | 0.0204          |
| 0.0371        | 31.0  | 2015 | 0.9850   | 0.0339          |
| 0.0105        | 32.0  | 2080 | 1.0      | 0.0143          |
| 0.0563        | 33.0  | 2145 | 1.0      | 0.0140          |
| 0.0573        | 34.0  | 2210 | 1.0      | 0.0102          |
| 0.0409        | 35.0  | 2275 | 1.0      | 0.0096          |
| 0.0523        | 36.0  | 2340 | 0.9925   | 0.0149          |
| 0.0131        | 37.0  | 2405 | 0.9925   | 0.0197          |
| 0.0329        | 38.0  | 2470 | 1.0      | 0.0109          |
| 0.0577        | 39.0  | 2535 | 1.0      | 0.0096          |
| 0.0085        | 40.0  | 2600 | 0.9925   | 0.0147          |
| 0.0618        | 41.0  | 2665 | 1.0      | 0.0094          |
| 0.0847        | 42.0  | 2730 | 0.9925   | 0.0197          |
| 0.0291        | 43.0  | 2795 | 1.0      | 0.0089          |
| 0.0568        | 44.0  | 2860 | 1.0      | 0.0087          |
| 0.0077        | 45.0  | 2925 | 1.0      | 0.0104          |
| 0.008         | 46.0  | 2990 | 1.0      | 0.0138          |
| 0.0272        | 47.0  | 3055 | 1.0      | 0.0081          |
| 0.008         | 48.0  | 3120 | 1.0      | 0.0084          |
| 0.0112        | 49.0  | 3185 | 1.0      | 0.0082          |
| 0.013         | 50.0  | 3250 | 1.0      | 0.0079          |
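
To sanity-check the reported numbers, the sketch below re-evaluates a checkpoint on the beans validation split; the repo id is an assumption, and accuracy is computed from the pipeline's top-1 prediction rather than the original evaluation code, so results may differ slightly.

```python
# Re-evaluation sketch on the beans validation split (repo id assumed; the
# original run's preprocessing and metric code may differ slightly).
from datasets import load_dataset
from transformers import pipeline

dataset = load_dataset("beans", split="validation")
classifier = pipeline("image-classification", model="zqTensor/vit-base-beans")

labels = dataset.features["labels"]  # ClassLabel with the three bean classes
correct = 0
for example in dataset:
    top1 = classifier(example["image"])[0]["label"]
    correct += int(top1 == labels.int2str(example["labels"]))

print(f"validation accuracy: {correct / len(dataset):.4f}")
```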

### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1