---
language:
- smj
license: apache-2.0
base_model: openai/whisper-medium
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
model-index:
- name: salmon-whisper-medium-smj
  results: []
---
# salmon-whisper-medium-smj
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the NbAiLab/salmon-asr-smj dataset.
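A minimal inference sketch with the 🤗 Transformers `pipeline` API is shown below; the Hub repository id (`NbAiLab/salmon-whisper-medium-smj`) and the audio file name are illustrative assumptions, not values stated in this card.

```python
# Hedged usage sketch: transcribe an audio file with this checkpoint.
# The repository id and the audio path below are illustrative assumptions;
# replace them with the actual model id and your own recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="NbAiLab/salmon-whisper-medium-smj",  # assumed Hub repo id
    chunk_length_s=30,                          # chunk long-form audio into 30 s windows
)

result = asr("sample.wav")
print(result["text"])
```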
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2.5e-05
- lr_scheduler_type: linear
- per_device_train_batch_size: 8
- total_train_batch_size_per_node: 64
- total_train_batch_size: 64
- total_optimization_steps: 10,000
- starting_optimization_step: None
- finishing_optimization_step: 10,000
- num_train_dataset_workers: 32
- num_hosts: 1
- total_num_training_examples: 640,000
- steps_per_epoch: 287
- num_beams: None
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.98
- adam_epsilon: 1e-06
- dropout: True
- bpe_dropout_probability: 0.2
- activation_dropout_probability: 0.1
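For orientation only, the list above maps roughly onto a `Seq2SeqTrainingArguments` object. This is a hedged sketch, not the original training script (fields such as `total_train_batch_size_per_node` and `num_hosts` suggest a custom multi-host setup); `output_dir`, `warmup_steps`, and the gradient-accumulation choice are assumptions, since the total batch size of 64 could equally come from eight devices with no accumulation.

```python
# Hedged sketch: the listed hyperparameters expressed as Seq2SeqTrainingArguments.
# Not the original training configuration; values not listed in the card
# (output_dir, gradient_accumulation_steps) are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./salmon-whisper-medium-smj",   # assumed
    learning_rate=2.5e-5,
    lr_scheduler_type="linear",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,              # assumed: 8 x 8 = total batch size 64
    max_steps=10_000,                           # total_optimization_steps
    weight_decay=0.01,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    dataloader_num_workers=32,                  # num_train_dataset_workers
    predict_with_generate=True,
)
# Dropout settings (activation_dropout, BPE dropout) are model/tokenizer
# options rather than trainer arguments and are not reproduced here.
```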
### Training results
| step | validation_loss | train_loss | validation_wer | validation_cer | validation_exact_wer | validation_exact_cer |
|---|---|---|---|---|---|---|
| 0 | 6.7175 | 4.8002 | 101.9947 | 49.5329 | 103.4574 | 50.6274 |
| 1000 | 1.8389 | 0.4886 | 22.2074 | 6.0723 | 26.1968 | 6.6321 |
| 2000 | 1.2170 | 0.2845 | 18.75 | 5.6661 | 22.8723 | 6.1741 |
| 3000 | 1.3422 | 0.3153 | 15.5585 | 4.7116 | 18.0851 | 5.0588 |
| 4000 | 1.2924 | 0.2716 | 16.4894 | 5.1178 | 20.7447 | 5.7558 |
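The WER/CER columns above are percentages. Below is a hedged sketch of how such scores can be computed with the `evaluate` library; this is an assumption about tooling, not the evaluation code used for this card.

```python
# Hedged sketch: word and character error rates as percentages via `evaluate`.
# The reference/prediction strings are placeholders, not real SALMON examples.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["placeholder reference transcript"]
predictions = ["placeholder predicted transcript"]

wer = 100 * wer_metric.compute(references=references, predictions=predictions)
cer = 100 * cer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.2f}%  CER: {cer:.2f}%")
```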
### Framework versions
- Transformers 4.35.0.dev0
- Datasets 2.14.6
- Tokenizers 0.14.1