# wav2vec-large-en
This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 2.8842
- Wer: 1.0
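A Wer of 1.0 means none of the reference words were recovered (together with the validation loss plateauing near 2.88, this is often a sign of a CTC model collapsing to blank predictions). Word error rate is the word-level edit distance between hypothesis and reference, divided by the number of reference words; a minimal sketch of the metric (a hypothetical helper, not part of this repo):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sag"))  # one substitution out of three words
```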
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
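The hyperparameters above map onto a transformers `TrainingArguments` object roughly as follows (a hedged sketch for reproduction; `output_dir` and the surrounding `Trainer`/dataset wiring are assumptions, not taken from this repo):

```python
from transformers import TrainingArguments

# Reconstruction of the training configuration listed above.
args = TrainingArguments(
    output_dir="wav2vec-large-en",   # placeholder, not from this repo
    learning_rate=5e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,                        # Native AMP mixed-precision training
)
```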
### Training results

Training Loss | Epoch | Step | Validation Loss | Wer |
---|---|---|---|---|
No log | 0.3521 | 50 | 6.2209 | 1.0 |
7.7114 | 0.7042 | 100 | 5.8125 | 1.0 |
7.7114 | 1.0563 | 150 | 5.5788 | 1.0 |
6.2512 | 1.4085 | 200 | 5.5017 | 1.0 |
6.2512 | 1.7606 | 250 | 5.0778 | 1.0 |
5.152 | 2.1127 | 300 | 3.0842 | 1.0 |
5.152 | 2.4648 | 350 | 2.9082 | 1.0 |
2.9862 | 2.8169 | 400 | 2.9040 | 1.0 |
2.9862 | 3.1690 | 450 | 2.9172 | 1.0 |
2.9346 | 3.5211 | 500 | 3.0595 | 1.0 |
2.9346 | 3.8732 | 550 | 2.8957 | 1.0 |
2.9037 | 4.2254 | 600 | 2.9380 | 1.0 |
2.9037 | 4.5775 | 650 | 2.8769 | 1.0 |
2.8894 | 4.9296 | 700 | 2.9011 | 1.0 |
2.8894 | 5.2817 | 750 | 2.8993 | 1.0 |
2.8886 | 5.6338 | 800 | 2.8978 | 1.0 |
2.8886 | 5.9859 | 850 | 2.8762 | 1.0 |
2.8831 | 6.3380 | 900 | 2.8726 | 1.0 |
2.8831 | 6.6901 | 950 | 2.8747 | 1.0 |
2.8772 | 7.0423 | 1000 | 3.0238 | 1.0 |
2.8772 | 7.3944 | 1050 | 2.8764 | 1.0 |
2.8913 | 7.7465 | 1100 | 2.8801 | 1.0 |
2.8913 | 8.0986 | 1150 | 2.8926 | 1.0 |
2.8677 | 8.4507 | 1200 | 2.9241 | 1.0 |
2.8677 | 8.8028 | 1250 | 2.9400 | 1.0 |
2.8713 | 9.1549 | 1300 | 2.9158 | 1.0 |
2.8713 | 9.5070 | 1350 | 2.8834 | 1.0 |
2.8702 | 9.8592 | 1400 | 2.8683 | 1.0 |
2.8702 | 10.2113 | 1450 | 2.8976 | 1.0 |
2.8817 | 10.5634 | 1500 | 2.9263 | 1.0 |
2.8817 | 10.9155 | 1550 | 2.8732 | 1.0 |
2.8657 | 11.2676 | 1600 | 2.9270 | 1.0 |
2.8657 | 11.6197 | 1650 | 2.8860 | 1.0 |
2.8618 | 11.9718 | 1700 | 2.8889 | 1.0 |
2.8618 | 12.3239 | 1750 | 2.8942 | 1.0 |
2.8846 | 12.6761 | 1800 | 2.8856 | 1.0 |
2.8846 | 13.0282 | 1850 | 2.9049 | 1.0 |
2.8635 | 13.3803 | 1900 | 2.8727 | 1.0 |
2.8635 | 13.7324 | 1950 | 2.8900 | 1.0 |
2.8634 | 14.0845 | 2000 | 2.9005 | 1.0 |
2.8634 | 14.4366 | 2050 | 2.9035 | 1.0 |
2.8624 | 14.7887 | 2100 | 2.9030 | 1.0 |
2.8624 | 15.1408 | 2150 | 2.8754 | 1.0 |
2.8627 | 15.4930 | 2200 | 2.8944 | 1.0 |
2.8627 | 15.8451 | 2250 | 2.8651 | 1.0 |
2.8595 | 16.1972 | 2300 | 2.9012 | 1.0 |
2.8595 | 16.5493 | 2350 | 2.9074 | 1.0 |
2.8591 | 16.9014 | 2400 | 2.8764 | 1.0 |
2.8591 | 17.2535 | 2450 | 2.8917 | 1.0 |
2.8615 | 17.6056 | 2500 | 2.8757 | 1.0 |
2.8615 | 17.9577 | 2550 | 2.8945 | 1.0 |
2.861 | 18.3099 | 2600 | 2.9263 | 1.0 |
2.861 | 18.6620 | 2650 | 2.8842 | 1.0 |
### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Tokenizers 0.20.3