---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: meat_calssify_fresh_crop_fixed_V_0_2
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.7884615384615384
---

meat_calssify_fresh_crop_fixed_V_0_2

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5986
  • Accuracy: 0.7885
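
For reference, here is a minimal inference sketch using the transformers image-classification pipeline. The repository id below is assumed from this card's author and model name, and the image path is a hypothetical placeholder:

```python
# Sketch: single-image inference with this checkpoint via the pipeline API.
from transformers import pipeline
from PIL import Image

repo_id = "talli96123/meat_calssify_fresh_crop_fixed_V_0_2"  # assumed Hub repo id
classifier = pipeline("image-classification", model=repo_id)

image = Image.open("example_meat_crop.jpg")  # hypothetical local image path
for prediction in classifier(image):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```

The pipeline applies the image processor saved with the checkpoint and returns the class labels with their softmax scores.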

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
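
The data layout itself is not documented. For orientation, the imagefolder builder expects one sub-directory per class label; below is a minimal loading sketch in which the ./data path and the 80/20 hold-out split are assumptions rather than details of this run:

```python
# Sketch: load a class-per-subfolder image dataset with the "imagefolder" builder.
from datasets import load_dataset

# "imagefolder" infers class labels from sub-directory names under data_dir.
dataset = load_dataset("imagefolder", data_dir="./data")  # assumed path

# The card does not document how the evaluation split was produced;
# an 80/20 split with the training seed is only an illustration.
splits = dataset["train"].train_test_split(test_size=0.2, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
print(train_ds.features["label"].names)  # class names inferred from folder names
```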

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 64
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
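
For orientation, these values map onto a transformers TrainingArguments configuration roughly as sketched below; the output directory and every argument not listed above are assumptions:

```python
# Sketch: TrainingArguments mirroring the hyperparameters listed above.
# Assumes single-device training, so the listed train_batch_size is the
# per-device batch size; output_dir and unlisted arguments are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="meat_calssify_fresh_crop_fixed_V_0_2",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    adam_beta1=0.9,    # listed betas (also the optimizer defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```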

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0979 | 1.0 | 10 | 1.0867 | 0.3910 |
| 1.0723 | 2.0 | 20 | 1.0606 | 0.4487 |
| 1.0368 | 3.0 | 30 | 1.0202 | 0.4936 |
| 0.968 | 4.0 | 40 | 0.9396 | 0.5449 |
| 0.8927 | 5.0 | 50 | 0.8491 | 0.6410 |
| 0.8256 | 6.0 | 60 | 0.8543 | 0.6282 |
| 0.7477 | 7.0 | 70 | 0.8216 | 0.6410 |
| 0.6567 | 8.0 | 80 | 0.7805 | 0.6282 |
| 0.6121 | 9.0 | 90 | 0.7005 | 0.7308 |
| 0.6303 | 10.0 | 100 | 0.7170 | 0.6923 |
| 0.5335 | 11.0 | 110 | 0.7192 | 0.7051 |
| 0.5375 | 12.0 | 120 | 0.6438 | 0.7436 |
| 0.4651 | 13.0 | 130 | 0.7292 | 0.7115 |
| 0.5207 | 14.0 | 140 | 0.6449 | 0.7244 |
| 0.4692 | 15.0 | 150 | 0.6545 | 0.7244 |
| 0.4146 | 16.0 | 160 | 0.6789 | 0.7372 |
| 0.383 | 17.0 | 170 | 0.6214 | 0.7564 |
| 0.3612 | 18.0 | 180 | 0.6287 | 0.7372 |
| 0.3444 | 19.0 | 190 | 0.7465 | 0.6987 |
| 0.3562 | 20.0 | 200 | 0.6255 | 0.7756 |
| 0.3149 | 21.0 | 210 | 0.5088 | 0.8141 |
| 0.2883 | 22.0 | 220 | 0.6508 | 0.7179 |
| 0.2829 | 23.0 | 230 | 0.7362 | 0.7179 |
| 0.2713 | 24.0 | 240 | 0.5616 | 0.7692 |
| 0.2562 | 25.0 | 250 | 0.7014 | 0.7244 |
| 0.2819 | 26.0 | 260 | 0.6033 | 0.7628 |
| 0.2237 | 27.0 | 270 | 0.5719 | 0.7885 |
| 0.2486 | 28.0 | 280 | 0.7404 | 0.7179 |
| 0.2049 | 29.0 | 290 | 0.6897 | 0.75 |
| 0.2185 | 30.0 | 300 | 0.6415 | 0.7564 |
| 0.239 | 31.0 | 310 | 0.6182 | 0.7821 |
| 0.2315 | 32.0 | 320 | 0.7067 | 0.75 |
| 0.1775 | 33.0 | 330 | 0.6307 | 0.7628 |
| 0.1829 | 34.0 | 340 | 0.5605 | 0.8205 |
| 0.1712 | 35.0 | 350 | 0.6619 | 0.7692 |
| 0.1896 | 36.0 | 360 | 0.5419 | 0.7949 |
| 0.1961 | 37.0 | 370 | 0.6204 | 0.7885 |
| 0.1825 | 38.0 | 380 | 0.5401 | 0.8013 |
| 0.1986 | 39.0 | 390 | 0.5964 | 0.7821 |
| 0.1623 | 40.0 | 400 | 0.5319 | 0.8269 |
| 0.1356 | 41.0 | 410 | 0.6096 | 0.7821 |
| 0.1615 | 42.0 | 420 | 0.6163 | 0.7692 |
| 0.1515 | 43.0 | 430 | 0.5757 | 0.7821 |
| 0.1655 | 44.0 | 440 | 0.6040 | 0.7756 |
| 0.1353 | 45.0 | 450 | 0.6121 | 0.7564 |
| 0.1133 | 46.0 | 460 | 0.4764 | 0.8141 |
| 0.1073 | 47.0 | 470 | 0.6337 | 0.7821 |
| 0.1266 | 48.0 | 480 | 0.5615 | 0.8077 |
| 0.1156 | 49.0 | 490 | 0.5092 | 0.8205 |
| 0.1344 | 50.0 | 500 | 0.5986 | 0.7885 |
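
The accuracy column above is the classification accuracy on the evaluation split. Below is a minimal sketch of the kind of compute_metrics function that produces it with the evaluate library; whether the run used exactly this function is an assumption:

```python
# Sketch: argmax accuracy as reported in the table above, via the evaluate library.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```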

Framework versions

  • Transformers 4.41.2
  • PyTorch 2.3.1+cu121
  • Datasets 2.19.2
  • Tokenizers 0.19.1