
# vit-base-patch16-224-dmae-va-U5-100-iN

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the Augusto777/dmae-ve-U5 dataset. It achieves the following results on the evaluation set:

- Loss: 0.6381
- Accuracy: 0.8667
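
For quick experimentation, the checkpoint can be loaded through the standard `transformers` image-classification pipeline. The snippet below is a minimal sketch; the image path is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub.
classifier = pipeline(
    "image-classification",
    model="Augusto777/vit-base-patch16-224-dmae-va-U5-100-iN",
)

# "example.jpg" is a placeholder; pass any RGB image path or PIL image.
predictions = classifier("example.jpg")
print(predictions)  # [{"label": ..., "score": ...}, ...]
```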

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
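
Until the card is filled in, the dataset named above can at least be inspected directly from the Hub with the `datasets` library. This is a minimal sketch; split names and label features are not documented here.

```python
from datasets import load_dataset

# Pull the fine-tuning dataset from the Hugging Face Hub.
dataset = load_dataset("Augusto777/dmae-ve-U5")

# Inspect the available splits and features before training.
print(dataset)
```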

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 100
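
These settings map onto a `TrainingArguments` configuration roughly like the sketch below. This is a reconstruction, not the original training script; `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the `transformers` defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-dmae-va-U5-100-iN",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 x 4 = 128 effective train batch
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=100,
)
```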

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.9   | 7    | 1.3812          | 0.45     |
| 1.3848        | 1.94  | 15   | 1.3606          | 0.5      |
| 1.3686        | 2.97  | 23   | 1.3075          | 0.5333   |
| 1.2965        | 4.0   | 31   | 1.2370          | 0.4667   |
| 1.2965        | 4.9   | 38   | 1.1168          | 0.5333   |
| 1.1753        | 5.94  | 46   | 1.0310          | 0.5667   |
| 1.0294        | 6.97  | 54   | 0.9316          | 0.6      |
| 0.902         | 8.0   | 62   | 0.8728          | 0.6833   |
| 0.902         | 8.9   | 69   | 0.8129          | 0.7667   |
| 0.7812        | 9.94  | 77   | 0.7006          | 0.8      |
| 0.6419        | 10.97 | 85   | 0.6381          | 0.8667   |
| 0.5109        | 12.0  | 93   | 0.6327          | 0.8167   |
| 0.3838        | 12.9  | 100  | 0.5442          | 0.8667   |
| 0.3838        | 13.94 | 108  | 0.6755          | 0.75     |
| 0.285         | 14.97 | 116  | 0.7756          | 0.7167   |
| 0.2672        | 16.0  | 124  | 0.8107          | 0.7167   |
| 0.2466        | 16.9  | 131  | 0.5219          | 0.8333   |
| 0.2466        | 17.94 | 139  | 0.7041          | 0.7833   |
| 0.2312        | 18.97 | 147  | 0.7879          | 0.75     |
| 0.1933        | 20.0  | 155  | 0.7090          | 0.8      |
| 0.1692        | 20.9  | 162  | 0.5395          | 0.8333   |
| 0.1578        | 21.94 | 170  | 0.6419          | 0.8167   |
| 0.1578        | 22.97 | 178  | 0.5736          | 0.8333   |
| 0.1321        | 24.0  | 186  | 0.7471          | 0.75     |
| 0.1114        | 24.9  | 193  | 0.6447          | 0.7667   |
| 0.1385        | 25.94 | 201  | 0.6158          | 0.8167   |
| 0.1385        | 26.97 | 209  | 0.6467          | 0.8      |
| 0.1136        | 28.0  | 217  | 0.6180          | 0.85     |
| 0.0997        | 28.9  | 224  | 0.8578          | 0.75     |
| 0.1064        | 29.94 | 232  | 0.6778          | 0.8167   |
| 0.0775        | 30.97 | 240  | 0.8124          | 0.8      |
| 0.0775        | 32.0  | 248  | 0.7783          | 0.8      |
| 0.0921        | 32.9  | 255  | 0.8320          | 0.7333   |
| 0.0919        | 33.94 | 263  | 0.8310          | 0.7833   |
| 0.0888        | 34.97 | 271  | 0.6576          | 0.85     |
| 0.0888        | 36.0  | 279  | 0.7044          | 0.8333   |
| 0.0693        | 36.9  | 286  | 0.7608          | 0.8167   |
| 0.061         | 37.94 | 294  | 0.7802          | 0.8      |
| 0.0699        | 38.97 | 302  | 0.7762          | 0.8167   |
| 0.0652        | 40.0  | 310  | 0.7579          | 0.8      |
| 0.0652        | 40.9  | 317  | 0.9985          | 0.75     |
| 0.0562        | 41.94 | 325  | 0.8027          | 0.8167   |
| 0.0534        | 42.97 | 333  | 0.9705          | 0.7833   |
| 0.0519        | 44.0  | 341  | 0.7301          | 0.8333   |
| 0.0519        | 44.9  | 348  | 0.8433          | 0.8      |
| 0.0529        | 45.94 | 356  | 0.8534          | 0.8      |
| 0.0772        | 46.97 | 364  | 0.8562          | 0.8      |
| 0.0644        | 48.0  | 372  | 0.8419          | 0.8      |
| 0.0644        | 48.9  | 379  | 1.1251          | 0.7667   |
| 0.0467        | 49.94 | 387  | 0.7537          | 0.8333   |
| 0.0576        | 50.97 | 395  | 0.7517          | 0.8333   |
| 0.0344        | 52.0  | 403  | 0.8343          | 0.8      |
| 0.0663        | 52.9  | 410  | 0.7636          | 0.8      |
| 0.0663        | 53.94 | 418  | 0.8253          | 0.8167   |
| 0.0353        | 54.97 | 426  | 0.9348          | 0.8      |
| 0.0524        | 56.0  | 434  | 0.8217          | 0.8167   |
| 0.0479        | 56.9  | 441  | 0.7586          | 0.8167   |
| 0.0479        | 57.94 | 449  | 0.8147          | 0.8      |
| 0.0595        | 58.97 | 457  | 1.0000          | 0.7833   |
| 0.0475        | 60.0  | 465  | 0.9291          | 0.7833   |
| 0.049         | 60.9  | 472  | 0.9588          | 0.7833   |
| 0.0398        | 61.94 | 480  | 0.9501          | 0.8      |
| 0.0398        | 62.97 | 488  | 0.9499          | 0.8      |
| 0.0496        | 64.0  | 496  | 0.9279          | 0.8      |
| 0.0354        | 64.9  | 503  | 0.9677          | 0.75     |
| 0.0325        | 65.94 | 511  | 0.8371          | 0.8333   |
| 0.0325        | 66.97 | 519  | 0.9683          | 0.8      |
| 0.0335        | 68.0  | 527  | 1.0455          | 0.7833   |
| 0.0375        | 68.9  | 534  | 0.9027          | 0.8167   |
| 0.0424        | 69.94 | 542  | 0.8043          | 0.85     |
| 0.0383        | 70.97 | 550  | 0.9035          | 0.7833   |
| 0.0383        | 72.0  | 558  | 0.9360          | 0.7833   |
| 0.0295        | 72.9  | 565  | 0.9841          | 0.7833   |
| 0.0307        | 73.94 | 573  | 0.9300          | 0.8      |
| 0.0376        | 74.97 | 581  | 0.9630          | 0.7833   |
| 0.0376        | 76.0  | 589  | 0.9777          | 0.7833   |
| 0.0259        | 76.9  | 596  | 0.9323          | 0.8      |
| 0.0345        | 77.94 | 604  | 0.9075          | 0.8      |
| 0.0346        | 78.97 | 612  | 0.8951          | 0.8      |
| 0.0319        | 80.0  | 620  | 0.9676          | 0.8      |
| 0.0319        | 80.9  | 627  | 0.9884          | 0.8      |
| 0.0226        | 81.94 | 635  | 0.9851          | 0.7833   |
| 0.033         | 82.97 | 643  | 0.9710          | 0.7833   |
| 0.0262        | 84.0  | 651  | 0.9851          | 0.7833   |
| 0.0262        | 84.9  | 658  | 0.9868          | 0.7833   |
| 0.0345        | 85.94 | 666  | 0.9702          | 0.7833   |
| 0.0299        | 86.97 | 674  | 0.9889          | 0.7833   |
| 0.0347        | 88.0  | 682  | 1.0003          | 0.7833   |
| 0.0347        | 88.9  | 689  | 0.9913          | 0.7833   |
| 0.0288        | 89.94 | 697  | 0.9859          | 0.7833   |
| 0.0198        | 90.32 | 700  | 0.9858          | 0.7833   |
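
The reported evaluation results (loss 0.6381, accuracy 0.8667) match the epoch 10.97 row above, which suggests the best checkpoint by validation accuracy was retained rather than the final one. The accuracy column was presumably produced by an argmax-based `compute_metrics` callback; a minimal sketch using the `evaluate` library (an assumption about the original script) looks like this:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # The Trainer passes a (logits, labels) tuple at evaluation time.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```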

## Framework versions

- Transformers 4.36.2
- PyTorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0