---
license: apache-2.0
base_model: facebook/wav2vec2-base-960h
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: audioclass-alpha
    results: []
---

# audioclass-alpha

This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unspecified dataset. It achieves the following results on the evaluation set (a brief usage sketch follows these figures):

- Loss: 0.0998
- Accuracy: 0.9660
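
A minimal inference sketch, assuming the checkpoint is published as `giovaldir/audioclass-alpha` (inferred from this repository, not documented here) and that input audio is mono 16 kHz as wav2vec2 expects:

```python
from transformers import pipeline

# Sketch only: the repo id below is an assumption (owner/name inferred from
# this repository); replace it and the file path with your own values.
classifier = pipeline("audio-classification", model="giovaldir/audioclass-alpha")

predictions = classifier("example.wav")  # any file soundfile/ffmpeg can decode
print(predictions)  # list of {"label": ..., "score": ...} dicts
```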

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch mirroring them follows this list):

- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
- mixed_precision_training: Native AMP
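
For reference, these settings map roughly onto Hugging Face `TrainingArguments` as below; the `output_dir` is a placeholder, and any argument not listed above keeps its default.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments mirroring the hyperparameters listed above.
# "audioclass-alpha" is a placeholder output directory; the effective train
# batch size is 16 * 4 gradient-accumulation steps = 64.
training_args = TrainingArguments(
    output_dir="audioclass-alpha",
    learning_rate=3e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    fp16=True,  # "Native AMP" mixed-precision training
)
```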

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.4273        | 1.0   | 62   | 3.4237          | 0.0431   |
| 3.4084        | 2.0   | 124  | 3.3905          | 0.1179   |
| 3.3119        | 3.0   | 186  | 3.2305          | 0.2789   |
| 3.0335        | 4.0   | 248  | 2.8316          | 0.3537   |
| 2.5734        | 5.0   | 310  | 2.3766          | 0.4308   |
| 2.1768        | 6.0   | 372  | 1.9373          | 0.5760   |
| 1.8634        | 7.0   | 434  | 1.6130          | 0.6712   |
| 1.6591        | 8.0   | 496  | 1.3387          | 0.7347   |
| 1.3715        | 9.0   | 558  | 1.1461          | 0.7868   |
| 1.1765        | 10.0  | 620  | 0.9773          | 0.8027   |
| 1.0398        | 11.0  | 682  | 0.7819          | 0.8481   |
| 0.845         | 12.0  | 744  | 0.7010          | 0.8549   |
| 0.7978        | 13.0  | 806  | 0.6215          | 0.8662   |
| 0.6727        | 14.0  | 868  | 0.5388          | 0.8707   |
| 0.6023        | 15.0  | 930  | 0.4660          | 0.8844   |
| 0.6219        | 16.0  | 992  | 0.4607          | 0.8844   |
| 0.4981        | 17.0  | 1054 | 0.3918          | 0.8889   |
| 0.4693        | 18.0  | 1116 | 0.3890          | 0.8753   |
| 0.421         | 19.0  | 1178 | 0.3288          | 0.8866   |
| 0.4219        | 20.0  | 1240 | 0.3367          | 0.8934   |
| 0.3775        | 21.0  | 1302 | 0.3176          | 0.8866   |
| 0.3618        | 22.0  | 1364 | 0.3077          | 0.9002   |
| 0.3046        | 23.0  | 1426 | 0.3206          | 0.9093   |
| 0.3092        | 24.0  | 1488 | 0.2413          | 0.9320   |
| 0.2793        | 25.0  | 1550 | 0.2777          | 0.9252   |
| 0.3082        | 26.0  | 1612 | 0.2795          | 0.9274   |
| 0.2147        | 27.0  | 1674 | 0.2467          | 0.9388   |
| 0.1996        | 28.0  | 1736 | 0.2538          | 0.9388   |
| 0.3012        | 29.0  | 1798 | 0.1885          | 0.9501   |
| 0.2203        | 30.0  | 1860 | 0.1916          | 0.9524   |
| 0.2663        | 31.0  | 1922 | 0.2053          | 0.9501   |
| 0.1942        | 32.0  | 1984 | 0.1698          | 0.9524   |
| 0.1852        | 33.0  | 2046 | 0.1689          | 0.9569   |
| 0.2856        | 34.0  | 2108 | 0.1276          | 0.9615   |
| 0.2095        | 35.0  | 2170 | 0.1376          | 0.9592   |
| 0.1791        | 36.0  | 2232 | 0.1346          | 0.9615   |
| 0.1528        | 37.0  | 2294 | 0.1452          | 0.9569   |
| 0.1584        | 38.0  | 2356 | 0.1326          | 0.9524   |
| 0.1868        | 39.0  | 2418 | 0.1309          | 0.9615   |
| 0.1764        | 40.0  | 2480 | 0.1314          | 0.9592   |
| 0.1877        | 41.0  | 2542 | 0.1546          | 0.9546   |
| 0.1319        | 42.0  | 2604 | 0.1505          | 0.9546   |
| 0.1196        | 43.0  | 2666 | 0.1386          | 0.9569   |
| 0.1606        | 44.0  | 2728 | 0.1305          | 0.9569   |
| 0.2146        | 45.0  | 2790 | 0.1088          | 0.9615   |
| 0.1377        | 46.0  | 2852 | 0.1167          | 0.9592   |
| 0.1544        | 47.0  | 2914 | 0.1027          | 0.9637   |
| 0.1396        | 48.0  | 2976 | 0.0998          | 0.9660   |
| 0.1702        | 49.0  | 3038 | 0.0991          | 0.9660   |
| 0.1555        | 50.0  | 3100 | 0.0980          | 0.9660   |

### Framework versions

- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1