wav2vec2-Y_speed_pause

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 4.3872
  • CER: 84.5042 (character error rate, in %)

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 3
  • mixed_precision_training: Native AMP
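The linear schedule with 50 warmup steps can be sketched as a plain function. This is a minimal sketch mirroring the behavior of a linear warmup/decay scheduler, not the library call itself; the function name is mine, and the total step count (~4653) is an assumption inferred from the results table (step 4600 ≈ epoch 2.97 of 3 epochs).

```python
def linear_warmup_lr(step, base_lr=1e-4, warmup_steps=50, total_steps=4653):
    """Learning rate at a given optimizer step under linear warmup
    followed by linear decay to zero (sketch of lr_scheduler_type=linear).

    total_steps=4653 is an assumption derived from the results table.
    """
    if step < warmup_steps:
        # ramp linearly from 0 up to base_lr over the warmup phase
        return base_lr * step / warmup_steps
    # then decay linearly from base_lr down to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

For example, the rate is half of `learning_rate` (5e-5) at step 25, peaks at 1e-4 at step 50, and reaches zero at the final step.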

Training results

Training Loss   Epoch    Step   Validation Loss   CER
34.8148         0.1289    200   5.1751            100.0
5.0795          0.2579    400   4.7170            100.0
4.8952          0.3868    600   4.8290            100.0
4.8428          0.5158    800   4.6400            100.0
4.7434          0.6447   1000   4.8620            100.0
4.7062          0.7737   1200   4.7443             97.9500
4.6626          0.9026   1400   4.8160             98.8898
4.5942          1.0316   1600   4.6878             93.9145
4.5619          1.1605   1800   5.0049             95.1598
4.5120          1.2895   2000   4.9230             93.9380
4.4579          1.4184   2200   5.0219             94.1377
4.4503          1.5474   2400   4.5823             89.5383
4.3500          1.6763   2600   4.6256             91.7704
4.3369          1.8053   2800   4.5154             93.3858
4.2476          1.9342   3000   4.4953             89.7968
4.1991          2.0632   3200   4.4719             83.4234
4.1593          2.1921   3400   4.4864             91.2359
4.0994          2.3211   3600   4.3783             89.6264
4.0728          2.4500   3800   4.4647             87.2122
4.0264          2.5790   4000   4.4085             87.1182
3.9909          2.7079   4200   4.3572             85.7025
3.9374          2.8369   4400   4.3802             84.3574
3.9498          2.9658   4600   4.3872             84.5042
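The CER values above appear to be percentages (they start at 100.0 for an untrained model and fall as training progresses). As a reference for how this metric is computed, here is a minimal self-contained sketch; in practice libraries such as `jiwer` or Hugging Face `evaluate` provide an equivalent `cer` metric.

```python
def levenshtein(a, b):
    """Edit distance between strings a and b (classic DP, O(len(a)*len(b)))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            # deletion, insertion, or substitution (free if characters match)
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cer(reference, hypothesis):
    """Character error rate as a percentage of the reference length."""
    return 100.0 * levenshtein(reference, hypothesis) / len(reference)
```

For instance, a hypothesis with one wrong character out of four gives a CER of 25.0, and an empty hypothesis gives 100.0, matching the early rows of the table.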

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.19.1
Model size: 317M params (F32, Safetensors)

Model tree for Gummybear05/wav2vec2-Y_speed_pause

  • Fine-tuned from facebook/wav2vec2-xls-r-300m