whisper-large-v3-Assamese-Version1

This model is a fine-tuned version of openai/whisper-large-v3 on the FLEURS dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2353
  • WER (word error rate, %): 62.8123
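
A minimal inference sketch follows. PEFT appears under "Framework versions" below, so this assumes the repository hosts a PEFT (LoRA) adapter on top of openai/whisper-large-v3; the FLEURS config name "as_in" (Assamese) is also an assumption, not stated in this card.

```python
# Hedged inference sketch: load the base model, attach the adapter, transcribe one sample.
import torch
from datasets import load_dataset
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3", torch_dtype=torch.float16, device_map="auto"
)
# Assumption: the repo contains a PEFT adapter rather than full model weights.
model = PeftModel.from_pretrained(
    base, "khushi1234455687/whisper-large-v3-Assamese-Version1"
)

# Grab one 16 kHz Assamese sample from FLEURS (config name "as_in" is an assumption).
sample = next(iter(load_dataset("google/fleurs", "as_in", split="test", streaming=True)))
inputs = processor(
    sample["audio"]["array"],
    sampling_rate=sample["audio"]["sampling_rate"],
    return_tensors="pt",
).to(model.device, dtype=torch.float16)

generated_ids = model.generate(
    input_features=inputs.input_features, language="as", task="transcribe"
)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```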

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-06
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • training_steps: 20000
  • mixed_precision_training: Native AMP

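For context, the settings above correspond roughly to the following Seq2SeqTrainingArguments sketch; output_dir is a hypothetical path, and the evaluation cadence is inferred from the results table below rather than stated among the hyperparameters.

```python
# Rough reconstruction of the listed hyperparameters as transformers Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-assamese",  # hypothetical path
    learning_rate=3e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    max_steps=20000,
    fp16=True,                 # "Native AMP" mixed-precision training
    eval_strategy="steps",     # eval every 2000 steps, inferred from the results table
    eval_steps=2000,
    predict_with_generate=True,
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are the transformers defaults.
)
```
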
Training results

| Training Loss | Epoch   | Step  | Validation Loss | WER     |
|---------------|---------|-------|-----------------|---------|
| 0.3803        | 5.0505  | 2000  | 0.3681          | 78.7302 |
| 0.295         | 10.1010 | 4000  | 0.2985          | 71.4589 |
| 0.277         | 15.1515 | 6000  | 0.2724          | 68.1526 |
| 0.2493        | 20.2020 | 8000  | 0.2586          | 66.3248 |
| 0.2316        | 25.2525 | 10000 | 0.2492          | 64.9954 |
| 0.2236        | 30.3030 | 12000 | 0.2435          | 63.9927 |
| 0.2351        | 35.3535 | 14000 | 0.2401          | 63.2306 |
| 0.2089        | 40.4040 | 16000 | 0.2372          | 62.8295 |
| 0.2205        | 45.4545 | 18000 | 0.2358          | 62.5086 |
| 0.2253        | 50.5051 | 20000 | 0.2353          | 62.8123 |
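
The WER values above are percentages. A minimal sketch of how such figures are typically computed with the `evaluate` library is shown below; the exact evaluation script is not part of this card, and the strings are placeholders rather than real model outputs.

```python
# Hedged example: compute word error rate (in %) from transcriptions and references.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["hypothetical model transcription"]   # placeholder model outputs
references = ["hypothetical reference transcript"]   # placeholder ground truth
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```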

Framework versions

  • PEFT 0.12.1.dev0
  • Transformers 4.45.0.dev0
  • Pytorch 2.4.1+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1