---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v-bert-punjabi_v2
  results: []
---

# w2v-bert-punjabi_v2

This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2031
- Wer: 0.1135

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch reproducing them is included below):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 60000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer    |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 0.4419        | 0.2174 | 2000  | 0.3828          | 0.2268 |
| 0.3492        | 0.4348 | 4000  | 0.3401          | 0.1836 |
| 0.3205        | 0.6522 | 6000  | 0.2932          | 0.1712 |
| 0.2813        | 0.8696 | 8000  | 0.2844          | 0.1590 |
| 0.255         | 1.0870 | 10000 | 0.2562          | 0.1469 |
| 0.2451        | 1.3043 | 12000 | 0.2431          | 0.1386 |
| 0.2305        | 1.5217 | 14000 | 0.2299          | 0.1312 |
| 0.2156        | 1.7391 | 16000 | 0.2191          | 0.1274 |
| 0.2119        | 1.9565 | 18000 | 0.2269          | 0.1205 |
| 0.182         | 2.1739 | 20000 | 0.2091          | 0.1181 |
| 0.1789        | 2.3913 | 22000 | 0.1980          | 0.1136 |
| 0.1766        | 2.6087 | 24000 | 0.1945          | 0.1092 |
| 0.1657        | 2.8261 | 26000 | 0.1881          | 0.1079 |
| 0.1461        | 3.0435 | 28000 | 0.1809          | 0.1050 |
| 0.1454        | 3.2609 | 30000 | 0.1810          | 0.1029 |
| 0.1697        | 3.4783 | 32000 | 0.2085          | 0.1210 |
| 0.1763        | 3.6957 | 34000 | 0.2017          | 0.1172 |
| 0.1642        | 3.9130 | 36000 | 0.2031          | 0.1135 |

### Framework versions

- Transformers 4.48.0
- PyTorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
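
### Training configuration sketch

The hyperparameters above map onto `transformers.TrainingArguments` roughly as follows. This is a minimal sketch, not the script that produced this checkpoint: the `output_dir` and the 2000-step eval cadence (inferred from the results table) are assumptions, and the model, processor, datasets, and data collator are not recorded on this card.

```python
from transformers import TrainingArguments

# Sketch of the configuration listed under "Training hyperparameters".
# output_dir and eval_steps are assumptions; the rest is taken from the card.
training_args = TrainingArguments(
    output_dir="w2v-bert-punjabi_v2",  # assumed
    learning_rate=5e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,     # total train batch size: 2 * 4 = 8
    optim="adamw_torch",               # AdamW with default betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=60000,
    fp16=True,                         # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=2000,                   # inferred from the 2000-step results cadence
)
```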
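
## Inference example

The card does not include usage instructions, so here is a minimal sketch using the `transformers` automatic-speech-recognition pipeline. The repo id and audio filename are placeholders; point them at wherever this checkpoint is hosted.

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual location of this checkpoint.
asr = pipeline("automatic-speech-recognition", model="your-username/w2v-bert-punjabi_v2")

# Transcribe a local audio file (anything ffmpeg can decode).
result = asr("punjabi_sample.wav")
print(result["text"])
```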