VideoMAE_WLASL_2000_200_epochs_p20_SR_8_kinetics

This model is a fine-tuned version of MCG-NJU/videomae-base-finetuned-kinetics, apparently on the WLASL-2000 sign-language dataset (the model name indicates WLASL-2000; the auto-generated card records the dataset as unknown). It achieves the following results on the evaluation set:

  • Loss: 3.3164
  • Top 1 Accuracy: 0.3721
  • Top 5 Accuracy: 0.6987
  • Top 10 Accuracy: 0.7911
  • Accuracy: 0.3718
  • Precision: 0.3566
  • Recall: 0.3718
  • F1: 0.3366
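The top-k metrics above count a prediction as correct when the true class is among the model's k highest-scoring classes. A minimal, dependency-free sketch of that computation (a generic illustration, not the exact evaluation script used for this card):

```python
def top_k_accuracy(logits, labels, k):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    hits = 0
    for scores, label in zip(logits, labels):
        # Indices of the k largest scores, highest first.
        topk = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        if label in topk:
            hits += 1
    return hits / len(labels)

# Toy example: 3 samples, 4 classes.
logits = [[0.1, 0.5, 0.2, 0.2],
          [0.7, 0.1, 0.1, 0.1],
          [0.1, 0.2, 0.3, 0.4]]
labels = [1, 2, 3]
print(top_k_accuracy(logits, labels, 1))  # 2 of 3 correct at top-1
print(top_k_accuracy(logits, labels, 3))  # all 3 within the top 3
```

With 2000 WLASL glosses, top-5 and top-10 accuracy are the more forgiving views of the same predictions, which is why they sit well above top-1 in the results above.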

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 357200
  • mixed_precision_training: Native AMP
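Some of the values above are derived from the others, and the arithmetic can be checked with a short sketch (the single-device assumption is mine — the card does not record the device count):

```python
# Sketch: derive the effective batch size and warmup length from the
# hyperparameters listed above.
train_batch_size = 2             # per-device batch size from the card
gradient_accumulation_steps = 4
num_devices = 1                  # assumption: not recorded in the card

# Effective (total) train batch size, matching total_train_batch_size above.
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices

# Linear-warmup length implied by warmup_ratio over the full step budget.
training_steps = 357200
lr_scheduler_warmup_ratio = 0.1
warmup_steps = int(training_steps * lr_scheduler_warmup_ratio)

print(total_train_batch_size)  # 8
print(warmup_steps)            # 35720
```

Note that the results table below ends around step 64k, well short of the 357,200-step budget, so the linear schedule would not have fully decayed by the last reported checkpoint.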

Training results

| Training Loss | Epoch | Step | Validation Loss | Top 1 Accuracy | Top 5 Accuracy | Top 10 Accuracy | Accuracy | Precision | Recall | F1 |
|---|---|---|---|---|---|---|---|---|---|---|
| 30.4419 | 0.005 | 1786 | 7.5999 | 0.0013 | 0.0041 | 0.0077 | 0.0013 | 0.0004 | 0.0013 | 0.0003 |
| 30.3231 | 1.0050 | 3572 | 7.5706 | 0.0018 | 0.0077 | 0.0117 | 0.0018 | 0.0007 | 0.0018 | 0.0006 |
| 29.5546 | 2.0050 | 5358 | 7.3846 | 0.0074 | 0.0240 | 0.0416 | 0.0074 | 0.0017 | 0.0074 | 0.0019 |
| 28.1321 | 3.0050 | 7145 | 7.0972 | 0.0240 | 0.0707 | 0.1093 | 0.0240 | 0.0088 | 0.0240 | 0.0087 |
| 26.7987 | 4.005 | 8931 | 6.7656 | 0.0465 | 0.1272 | 0.1913 | 0.0465 | 0.0170 | 0.0465 | 0.0184 |
| 24.7006 | 5.0050 | 10717 | 6.4074 | 0.0702 | 0.1982 | 0.2865 | 0.0702 | 0.0305 | 0.0702 | 0.0330 |
| 22.9951 | 6.0050 | 12503 | 6.0314 | 0.1034 | 0.2694 | 0.3685 | 0.1034 | 0.0511 | 0.1034 | 0.0538 |
| 21.0796 | 7.0050 | 14290 | 5.5934 | 0.1456 | 0.3455 | 0.4597 | 0.1456 | 0.0804 | 0.1456 | 0.0840 |
| 18.8279 | 8.005 | 16076 | 5.1535 | 0.1729 | 0.4091 | 0.5299 | 0.1729 | 0.1020 | 0.1729 | 0.1060 |
| 16.0168 | 9.0050 | 17862 | 4.6872 | 0.2270 | 0.4870 | 0.6124 | 0.2270 | 0.1483 | 0.2270 | 0.1535 |
| 13.4662 | 10.0050 | 19648 | 4.2174 | 0.2704 | 0.5585 | 0.6803 | 0.2707 | 0.1825 | 0.2707 | 0.1926 |
| 10.8825 | 11.0050 | 21435 | 3.7937 | 0.3159 | 0.6182 | 0.7288 | 0.3166 | 0.2341 | 0.3166 | 0.2405 |
| 8.493 | 12.005 | 23221 | 3.4529 | 0.3401 | 0.6619 | 0.7602 | 0.3407 | 0.2685 | 0.3407 | 0.2747 |
| 6.3864 | 13.0050 | 25007 | 3.1427 | 0.3631 | 0.6974 | 0.7852 | 0.3631 | 0.3058 | 0.3631 | 0.3063 |
| 4.8598 | 14.0050 | 26793 | 2.9818 | 0.3670 | 0.6997 | 0.8008 | 0.3672 | 0.3237 | 0.3672 | 0.3195 |
| 3.5894 | 15.0050 | 28580 | 2.7900 | 0.3925 | 0.7360 | 0.8207 | 0.3927 | 0.3493 | 0.3927 | 0.3457 |
| 2.7053 | 16.005 | 30366 | 2.7045 | 0.3920 | 0.7403 | 0.8297 | 0.3920 | 0.3579 | 0.3920 | 0.3486 |
| 2.0517 | 17.0050 | 32152 | 2.7339 | 0.3884 | 0.7344 | 0.8205 | 0.3879 | 0.3598 | 0.3879 | 0.3481 |
| 1.9862 | 18.0050 | 33938 | 2.7749 | 0.3820 | 0.7285 | 0.8172 | 0.3818 | 0.3614 | 0.3818 | 0.3446 |
| 1.9352 | 19.0050 | 35725 | 2.8157 | 0.3634 | 0.7135 | 0.8156 | 0.3634 | 0.3396 | 0.3634 | 0.3256 |
| 1.6389 | 20.005 | 37511 | 2.7968 | 0.3800 | 0.7247 | 0.8144 | 0.3800 | 0.3532 | 0.3800 | 0.3403 |
| 1.4166 | 21.0050 | 39297 | 2.8414 | 0.3739 | 0.7132 | 0.8067 | 0.3741 | 0.3498 | 0.3741 | 0.3358 |
| 1.3113 | 22.0050 | 41083 | 2.9111 | 0.3667 | 0.7033 | 0.8041 | 0.3670 | 0.3508 | 0.3670 | 0.3340 |
| 1.4698 | 23.0050 | 42870 | 2.9282 | 0.3675 | 0.7071 | 0.7947 | 0.3677 | 0.3466 | 0.3677 | 0.3296 |
| 1.1594 | 24.005 | 44656 | 2.9186 | 0.3899 | 0.7125 | 0.7988 | 0.3899 | 0.3662 | 0.3899 | 0.3532 |
| 0.8815 | 25.0050 | 46442 | 3.0210 | 0.3828 | 0.7053 | 0.7965 | 0.3825 | 0.3640 | 0.3825 | 0.3469 |
| 1.348 | 26.0050 | 48228 | 3.0267 | 0.3772 | 0.7074 | 0.7978 | 0.3772 | 0.3537 | 0.3772 | 0.3397 |
| 1.0531 | 27.0050 | 50015 | 3.0055 | 0.3805 | 0.7196 | 0.8108 | 0.3807 | 0.3656 | 0.3807 | 0.3464 |
| 1.2516 | 28.005 | 51801 | 3.1702 | 0.3634 | 0.6841 | 0.7883 | 0.3631 | 0.3353 | 0.3631 | 0.3244 |
| 1.2142 | 29.0050 | 53587 | 3.1537 | 0.3744 | 0.6948 | 0.7972 | 0.3746 | 0.3627 | 0.3746 | 0.3409 |
| 1.1783 | 30.0050 | 55373 | 3.2329 | 0.3659 | 0.6951 | 0.7916 | 0.3659 | 0.3400 | 0.3659 | 0.3278 |
| 1.2075 | 31.0050 | 57160 | 3.2251 | 0.3695 | 0.6971 | 0.7880 | 0.3693 | 0.3521 | 0.3693 | 0.3348 |
| 1.1369 | 32.005 | 58946 | 3.3422 | 0.3458 | 0.6803 | 0.7737 | 0.3460 | 0.3288 | 0.3460 | 0.3114 |
| 1.1948 | 33.0050 | 60732 | 3.3476 | 0.3634 | 0.6726 | 0.7735 | 0.3634 | 0.3375 | 0.3634 | 0.3240 |
| 1.0164 | 34.0050 | 62518 | 3.3188 | 0.3687 | 0.6969 | 0.7855 | 0.3687 | 0.3434 | 0.3687 | 0.3312 |
| 0.8987 | 35.0050 | 64305 | 3.3164 | 0.3721 | 0.6987 | 0.7911 | 0.3718 | 0.3566 | 0.3718 | 0.3366 |

Framework versions

  • Transformers 4.46.1
  • Pytorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.1

Model size

  • 87.8M params (Safetensors, F32 tensors)

Model tree for Shawon16/VideoMAE_WLASL_2000_200_epochs_p20_SR_8_kinetics

  • Fine-tuned from MCG-NJU/videomae-base-finetuned-kinetics