---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
  - generated_from_trainer
datasets:
  - audiofolder
metrics:
  - wer
model-index:
  - name: wav2vec2-large-mms-1b-nhi-ft-3hrs
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: audiofolder
          type: audiofolder
          config: default
          split: train
          args: default
        metrics:
          - name: Wer
            type: wer
            value: 0.6467391304347826
---

wav2vec2-large-mms-1b-nhi-ft-3hrs

This model is a fine-tuned version of facebook/mms-1b-all on the audiofolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7237
  • Wer: 0.6467
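
Below is a minimal inference sketch using the Transformers `pipeline` API. The repository id and the audio path are assumptions inferred from the card title and owner, not taken from the original card.

```python
# Minimal inference sketch (assumed usage, not from the original card).
# The repo id below is inferred from the card title/owner; adjust as needed.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Lguyogiro/wav2vec2-large-mms-1b-nhi-ft-3hrs",  # assumed repo id
)

# "audio.wav" is a placeholder path to a 16 kHz mono recording.
print(asr("audio.wav")["text"])
```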

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 0.001
  • train_batch_size: 8
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 2
  • mixed_precision_training: Native AMP
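
The sketch below restates these hyperparameters as a `TrainingArguments` object. It assumes the standard Hugging Face `Trainer` setup; the `output_dir` value is a placeholder.

```python
# Configuration sketch reproducing the hyperparameters listed above.
# Assumes the standard Hugging Face Trainer setup; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-mms-1b-nhi-ft-3hrs",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed-precision training
)
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the Trainer defaults.
```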

Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 4.6463        | 0.4149 | 100  | 1.0969          | 0.7919 |
| 1.1669        | 0.8299 | 200  | 0.8578          | 0.7023 |
| 0.987         | 1.2448 | 300  | 0.7607          | 0.6603 |
| 0.9324        | 1.6598 | 400  | 0.7237          | 0.6467 |
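
The Wer column above is a word error rate (lower is better). A minimal sketch of how such a score can be computed with the `evaluate` library follows; whether the training script used exactly this metric implementation is an assumption, and the strings are placeholders.

```python
# Sketch of computing a word error rate (Wer) with the `evaluate` library;
# the strings below are placeholders, not taken from the training data.
import evaluate

wer_metric = evaluate.load("wer")
score = wer_metric.compute(
    predictions=["this is a model transcript"],       # hypothetical hypothesis
    references=["this is the reference transcript"],  # hypothetical gold text
)
print(score)
```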

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.4.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1