---
license: mit
base_model: Harveenchadha/vakyansh-wav2vec2-hindi-him-4200
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: hindi_wav2vec2_optimized_2
    results: []
---

# hindi_wav2vec2_optimized_2

This model is a fine-tuned version of Harveenchadha/vakyansh-wav2vec2-hindi-him-4200 on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 1.4408
- Wer: 0.3122
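
A minimal transcription sketch in Python, assuming the checkpoint is published as `TheAIchemist13/hindi_wav2vec2_optimized_2` (an id inferred from this card, not confirmed) and that the repository bundles a `Wav2Vec2Processor`:

```python
# Sketch only: the model id and audio path below are assumptions, not confirmed by this card.
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "TheAIchemist13/hindi_wav2vec2_optimized_2"  # assumed repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# wav2vec2 expects 16 kHz mono audio
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```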

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 100
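
As a sketch, these settings map onto `transformers.TrainingArguments` roughly as follows; the `output_dir` is a placeholder, and the dataset/model wiring (omitted here) is not described by this card:

```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters listed above;
# output_dir is a placeholder, not the author's actual path.
training_args = TrainingArguments(
    output_dir="hindi_wav2vec2_optimized_2",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # 16 x 2 = effective train batch size of 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=100,
)
```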

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2061        | 5.56  | 25   | 1.5744          | 0.5338 |
| 0.4727        | 11.11 | 50   | 1.4735          | 0.4409 |
| 0.2901        | 16.67 | 75   | 1.6240          | 0.4536 |
| 0.1643        | 22.22 | 100  | 1.4403          | 0.4726 |
| 0.0991        | 27.78 | 125  | 1.3516          | 0.4114 |
| 0.0512        | 33.33 | 150  | 1.3740          | 0.4409 |
| 0.0645        | 38.89 | 175  | 1.3560          | 0.3903 |
| 0.0461        | 44.44 | 200  | 1.3505          | 0.3840 |
| 0.0372        | 50.0  | 225  | 1.5078          | 0.4114 |
| 0.0277        | 55.56 | 250  | 1.5161          | 0.3629 |
| 0.0201        | 61.11 | 275  | 1.4834          | 0.3819 |
| 0.016         | 66.67 | 300  | 1.4693          | 0.3608 |
| 0.0122        | 72.22 | 325  | 1.4352          | 0.3439 |
| 0.0147        | 77.78 | 350  | 1.4382          | 0.3291 |
| 0.0098        | 83.33 | 375  | 1.4329          | 0.3122 |
| 0.015         | 88.89 | 400  | 1.4408          | 0.3122 |
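
The Wer column can be reproduced with the Hugging Face `evaluate` library (which wraps `jiwer`); a minimal sketch, with hypothetical transcript strings standing in for the unpublished eval set:

```python
import evaluate  # pip install evaluate jiwer

# WER = (substitutions + insertions + deletions) / reference word count.
wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    predictions=["नमस्ते दुनिया"],        # hypothetical model output
    references=["नमस्ते आप दुनिया"],      # hypothetical ground truth
)
print(wer)  # one deletion over three reference words -> ~0.333
```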

### Framework versions

- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1