VideoMAE_WLASL_100_200_epochs_p20_SR_8_kinetics

This model is a fine-tuned version of MCG-NJU/videomae-base-finetuned-kinetics on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.6213
  • Top 1 Accuracy: 0.6716
  • Top 5 Accuracy: 0.8994
  • Top 10 Accuracy: 0.9379
  • Accuracy: 0.6716
  • Precision: 0.7449
  • Recall: 0.6716
  • F1: 0.6675
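For reference, the top-k metrics above count a prediction as correct when the true label appears among the k highest-scoring classes. A minimal, framework-free sketch of that computation (illustrative only, using synthetic scores — this is not the evaluation code used for this card):

```python
def topk_accuracy(scores, labels, k):
    # Fraction of samples whose true label is among the k highest-scoring classes.
    hits = 0
    for row, label in zip(scores, labels):
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

# Tiny synthetic batch: 3 samples, 4 classes.
scores = [[0.1, 0.9, 0.00, 0.00],
          [0.8, 0.1, 0.06, 0.04],
          [0.2, 0.3, 0.40, 0.10]]
labels = [1, 2, 2]
print(topk_accuracy(scores, labels, 1))  # 2 of 3 correct at top-1
print(topk_accuracy(scores, labels, 3))  # 1.0: every true label is in the top 3
```

Note that top-1 accuracy coincides with plain accuracy, which is why the "Top 1 Accuracy" and "Accuracy" rows above report the same value.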

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 36000
  • mixed_precision_training: Native AMP
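To make the schedule concrete: with lr_scheduler_type=linear and warmup_ratio=0.1 over 36000 steps, the learning rate ramps linearly up to 5e-05 over the first 3600 steps, then decays linearly toward zero. The sketch below illustrates this standard linear schedule and the effective batch size arithmetic; it is a plain-Python illustration, not the training code itself:

```python
BASE_LR = 5e-5
TOTAL_STEPS = 36_000
WARMUP_STEPS = int(0.1 * TOTAL_STEPS)  # warmup_ratio 0.1 -> 3600 steps

def linear_lr(step):
    # Linear warmup to BASE_LR, then linear decay to zero at TOTAL_STEPS.
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS
    return BASE_LR * max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))

# The effective batch size is train_batch_size * gradient_accumulation_steps.
effective_batch = 2 * 4
print(effective_batch)          # 8, matching total_train_batch_size above
print(linear_lr(WARMUP_STEPS))  # peak learning rate at the end of warmup
```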

Training results

| Training Loss | Epoch   | Step | Validation Loss | Top 1 Accuracy | Top 5 Accuracy | Top 10 Accuracy | Accuracy | Precision | Recall | F1     |
|---------------|---------|------|-----------------|----------------|----------------|-----------------|----------|-----------|--------|--------|
| 18.5163       | 0.005   | 180  | 4.6097          | 0.0089         | 0.0562         | 0.1124          | 0.0089   | 0.0005    | 0.0089 | 0.0008 |
| 18.4047       | 1.0050  | 360  | 4.5832          | 0.0178         | 0.0740         | 0.1361          | 0.0178   | 0.0025    | 0.0178 | 0.0043 |
| 18.096        | 2.0050  | 540  | 4.4750          | 0.0444         | 0.1420         | 0.2219          | 0.0444   | 0.0259    | 0.0444 | 0.0210 |
| 17.0336       | 3.0050  | 721  | 4.1906          | 0.0917         | 0.3166         | 0.4556          | 0.0917   | 0.0847    | 0.0917 | 0.0612 |
| 15.1957       | 4.005   | 901  | 3.8256          | 0.2633         | 0.5207         | 0.6213          | 0.2633   | 0.2334    | 0.2633 | 0.1964 |
| 13.6256       | 5.0050  | 1081 | 3.4320          | 0.3254         | 0.6479         | 0.7574          | 0.3254   | 0.3160    | 0.3254 | 0.2719 |
| 11.5827       | 6.0050  | 1261 | 3.0442          | 0.4112         | 0.7337         | 0.8609          | 0.4112   | 0.3981    | 0.4112 | 0.3537 |
| 9.4052        | 7.0050  | 1442 | 2.6789          | 0.5266         | 0.8136         | 0.8876          | 0.5266   | 0.5014    | 0.5266 | 0.4744 |
| 7.3195        | 8.005   | 1622 | 2.3768          | 0.5562         | 0.8639         | 0.9172          | 0.5562   | 0.6125    | 0.5562 | 0.5358 |
| 5.6096        | 9.0050  | 1802 | 2.0526          | 0.6331         | 0.8728         | 0.9349          | 0.6331   | 0.6811    | 0.6331 | 0.6143 |
| 4.1271        | 10.0050 | 1982 | 1.8377          | 0.6746         | 0.8846         | 0.9467          | 0.6746   | 0.7145    | 0.6746 | 0.6560 |
| 2.8909        | 11.0050 | 2163 | 1.6035          | 0.6864         | 0.9053         | 0.9527          | 0.6864   | 0.7437    | 0.6864 | 0.6756 |
| 2.11          | 12.005  | 2343 | 1.4429          | 0.6893         | 0.9053         | 0.9497          | 0.6893   | 0.7392    | 0.6893 | 0.6790 |
| 1.3243        | 13.0050 | 2523 | 1.2918          | 0.7130         | 0.9290         | 0.9704          | 0.7130   | 0.7461    | 0.7130 | 0.6986 |
| 0.9066        | 14.0050 | 2703 | 1.2568          | 0.7041         | 0.9349         | 0.9734          | 0.7041   | 0.7495    | 0.7041 | 0.6946 |
| 0.5573        | 15.0050 | 2884 | 1.1904          | 0.7101         | 0.9290         | 0.9675          | 0.7071   | 0.7494    | 0.7071 | 0.7009 |
| 0.4602        | 16.005  | 3064 | 1.1545          | 0.7337         | 0.9231         | 0.9556          | 0.7337   | 0.7812    | 0.7337 | 0.7277 |
| 0.2747        | 17.0050 | 3244 | 1.2449          | 0.6805         | 0.9142         | 0.9497          | 0.6805   | 0.7267    | 0.6805 | 0.6706 |
| 0.241         | 18.0050 | 3424 | 1.1410          | 0.6953         | 0.9290         | 0.9645          | 0.6953   | 0.7501    | 0.6953 | 0.6932 |
| 0.2258        | 19.0050 | 3605 | 1.0789          | 0.7130         | 0.9201         | 0.9556          | 0.7130   | 0.7400    | 0.7130 | 0.6938 |
| 0.1156        | 20.005  | 3785 | 1.1841          | 0.7130         | 0.9172         | 0.9408          | 0.7130   | 0.7546    | 0.7130 | 0.7053 |
| 0.1339        | 21.0050 | 3965 | 1.1466          | 0.7130         | 0.9053         | 0.9556          | 0.7130   | 0.7382    | 0.7130 | 0.6982 |
| 0.0827        | 22.0050 | 4145 | 1.1599          | 0.7189         | 0.9231         | 0.9615          | 0.7219   | 0.7461    | 0.7219 | 0.7062 |
| 0.0987        | 23.0050 | 4326 | 1.2831          | 0.7160         | 0.9201         | 0.9615          | 0.7160   | 0.7579    | 0.7160 | 0.7049 |
| 0.1273        | 24.005  | 4506 | 1.2927          | 0.7041         | 0.9172         | 0.9586          | 0.7041   | 0.7634    | 0.7041 | 0.7014 |
| 0.086         | 25.0050 | 4686 | 1.3708          | 0.7041         | 0.8994         | 0.9408          | 0.7041   | 0.7368    | 0.7041 | 0.6908 |
| 0.1436        | 26.0050 | 4866 | 1.2470          | 0.7189         | 0.9172         | 0.9586          | 0.7189   | 0.7526    | 0.7189 | 0.7097 |
| 0.0837        | 27.0050 | 5047 | 1.3399          | 0.7041         | 0.9201         | 0.9615          | 0.7041   | 0.7554    | 0.7041 | 0.6993 |
| 0.0605        | 28.005  | 5227 | 1.3397          | 0.7219         | 0.9172         | 0.9586          | 0.7219   | 0.7781    | 0.7219 | 0.7160 |
| 0.0508        | 29.0050 | 5407 | 1.2904          | 0.7130         | 0.9083         | 0.9645          | 0.7130   | 0.7471    | 0.7130 | 0.7022 |
| 0.1116        | 30.0050 | 5587 | 1.5462          | 0.6746         | 0.9112         | 0.9556          | 0.6746   | 0.7478    | 0.6746 | 0.6694 |
| 0.0343        | 31.0050 | 5768 | 1.4960          | 0.7012         | 0.9112         | 0.9586          | 0.7012   | 0.7383    | 0.7012 | 0.6832 |
| 0.0886        | 32.005  | 5948 | 1.5807          | 0.6716         | 0.8876         | 0.9290          | 0.6716   | 0.6948    | 0.6716 | 0.6494 |
| 0.0852        | 33.0050 | 6128 | 1.4407          | 0.6953         | 0.9201         | 0.9527          | 0.6953   | 0.7420    | 0.6953 | 0.6886 |
| 0.111         | 34.0050 | 6308 | 1.6302          | 0.6598         | 0.8994         | 0.9379          | 0.6598   | 0.7054    | 0.6598 | 0.6543 |
| 0.1465        | 35.0050 | 6489 | 1.4035          | 0.7071         | 0.8905         | 0.9556          | 0.7071   | 0.7528    | 0.7071 | 0.7023 |
| 0.1215        | 36.005  | 6669 | 1.6213          | 0.6716         | 0.8994         | 0.9379          | 0.6716   | 0.7449    | 0.6716 | 0.6675 |
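
VideoMAE checkpoints typically consume fixed-length clips of 16 frames sampled from the input video. Below is a minimal frame-index sampler in that spirit; note that the stride of 8 is an assumption based on the "SR_8" suffix in the model name, which this card does not otherwise explain:

```python
def sample_frame_indices(video_len, clip_len=16, stride=8):
    # Pick clip_len frame indices at a fixed temporal stride, clamping to the
    # last frame for short videos. (stride=8 is assumed from the "SR_8" name.)
    return [min(i * stride, video_len - 1) for i in range(clip_len)]

print(sample_frame_indices(200)[:4])  # [0, 8, 16, 24]
print(sample_frame_indices(50)[-3:])  # short video: trailing indices clamp to 49
```

The selected frames would then be passed through the model's image processor before classification.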

Framework versions

  • Transformers 4.46.1
  • Pytorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.1
Model size

  • 86.3M parameters (F32, Safetensors)