# finetuned-Accident-SingleLabel-Final
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows these results):
- Loss: 1.0015
- Accuracy: 0.6176
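
The card ships no usage snippet, so here is a minimal inference sketch, assuming the standard VideoMAE classes from `transformers` and a 16-frame, 224x224 clip (the base model's defaults); the dummy frames below stand in for frames decoded from a real video, and if this repository does not include a preprocessor config, the processor can be loaded from `MCG-NJU/videomae-base` instead.

```python
import numpy as np
import torch
from transformers import AutoImageProcessor, VideoMAEForVideoClassification

repo = "pavitemple/finetuned-Accident-SingleLabel-Final"

# Placeholder clip: 16 frames, channel-first 3x224x224 (VideoMAE defaults).
# In practice, sample 16 frames from the video you want to classify.
video = list(np.random.randint(0, 256, (16, 3, 224, 224), dtype=np.uint8))

processor = AutoImageProcessor.from_pretrained(repo)  # fall back to "MCG-NJU/videomae-base" if missing
model = VideoMAEForVideoClassification.from_pretrained(repo)

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
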
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 50
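
These values map onto a standard `transformers.Trainer` run. The sketch below is a hypothetical reconstruction, not the author's script: the `output_dir`, evaluation/save strategies, and the `remove_unused_columns` flag are assumptions, and the Adam betas/epsilon above are the `TrainingArguments` defaults, so they need no explicit setting.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction: only the values listed above come from the card,
# everything else is an assumption.
training_args = TrainingArguments(
    output_dir="finetuned-Accident-SingleLabel-Final",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # 2 GPUs -> total train batch size 16
    per_device_eval_batch_size=8,    # 2 GPUs -> total eval batch size 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=50,                    # training_steps: 50
    evaluation_strategy="epoch",     # assumption, matching the per-epoch rows below
    save_strategy="epoch",           # assumption
    remove_unused_columns=False,     # typically needed so video pixel_values reach the model
)
```
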
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.08  | 4    | 1.7644          | 0.1304   |
| No log        | 1.08  | 8    | 1.6450          | 0.4783   |
| 1.6076        | 2.08  | 12   | 1.4210          | 0.5652   |
| 1.6076        | 3.08  | 16   | 1.1925          | 0.6087   |
| 1.0244        | 4.08  | 20   | 1.1087          | 0.6087   |
| 1.0244        | 5.08  | 24   | 0.9824          | 0.5652   |
| 1.0244        | 6.08  | 28   | 1.0297          | 0.5217   |
| 0.9684        | 7.08  | 32   | 1.0348          | 0.6522   |
| 0.9684        | 8.08  | 36   | 0.9426          | 0.6522   |
| 0.7826        | 9.08  | 40   | 1.0071          | 0.6087   |
| 0.7826        | 10.08 | 44   | 0.9811          | 0.6087   |
| 0.7826        | 11.08 | 48   | 0.9040          | 0.6087   |
| 0.7829        | 12.04 | 50   | 0.8987          | 0.6087   |
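
The accuracy column is the kind of metric usually produced by a `compute_metrics` callback passed to the `Trainer`; a minimal sketch, assuming the `evaluate` library (not listed in the framework versions below), might look like this:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair; pick the highest-scoring class per clip.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```
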
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1