---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
  - generated_from_trainer
datasets:
  - fleurs
metrics:
  - wer
model-index:
  - name: w2v2_bert-Wolof-10-hours-alffa-plus-fleurs-dataset
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: fleurs
          type: fleurs
          config: wo_sn
          split: None
          args: wo_sn
        metrics:
          - name: Wer
            type: wer
            value: 0.4684414448193976
---


# w2v2_bert-Wolof-10-hours-alffa-plus-fleurs-dataset

This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the FLEURS dataset (Wolof, `wo_sn` config). It achieves the following results on the evaluation set (an inference sketch follows the metrics):

- Loss: 1.7861
- Wer: 0.4684
- Cer: 0.1628
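
The card ships no usage example, so here is a minimal inference sketch. It assumes the repo follows the standard layout for w2v-bert-2.0 CTC fine-tunes (a `Wav2Vec2BertForCTC` checkpoint plus a processor) and 16 kHz mono input; the audio file name is a placeholder.

```python
# Minimal inference sketch -- assumes the repo ships a processor and a
# Wav2Vec2BertForCTC checkpoint, the usual layout for w2v-bert-2.0
# fine-tunes; verify against the actual repo files before relying on it.
import torch
import torchaudio
from transformers import AutoProcessor, Wav2Vec2BertForCTC

model_id = "asr-africa/w2v2_bert-Wolof-10-hours-alffa-plus-fleurs-dataset"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2BertForCTC.from_pretrained(model_id)
model.eval()

# Load audio and resample to the 16 kHz mono input the model expects.
waveform, sample_rate = torchaudio.load("example.wav")  # hypothetical file
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: argmax per frame, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```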

## Model description

This checkpoint adapts the w2v-BERT 2.0 speech encoder to Wolof automatic speech recognition by fine-tuning it with a CTC head. Further details beyond the metadata above are not documented.

## Intended uses & limitations

The model is intended for transcribing Wolof speech from 16 kHz audio. With an evaluation WER of roughly 47%, transcripts will contain frequent errors and should be reviewed before downstream use.

## Training and evaluation data

As the model name indicates, training used roughly 10 hours of Wolof speech drawn from the ALFFA and FLEURS corpora; evaluation used the FLEURS `wo_sn` data (see the loading sketch below). Exact preprocessing and split details are not documented.
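
A minimal loading sketch for the FLEURS portion, assuming the Hub id `google/fleurs`; the ALFFA portion is omitted because the card does not name a Hub id for it.

```python
# Sketch of loading the FLEURS Wolof config named in the card's metadata;
# assumes the Hub id google/fleurs. The ALFFA portion of the training mix
# is not loaded here, as the card gives no Hub id for it.
from datasets import load_dataset

fleurs_wo = load_dataset("google/fleurs", "wo_sn")
print(fleurs_wo)                                  # available splits and sizes
print(fleurs_wo["train"][0]["transcription"])     # one reference transcript
```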

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):

- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
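
For reference, a hedged mapping of these settings onto `transformers.TrainingArguments`; this is a reconstruction, not the author's training script, and unlisted arguments are placeholders.

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments.
# Only values named in the card are real; output_dir is a placeholder and
# the eval/save/logging cadence is unknown.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="w2v2_bert-wolof",        # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,       # effective train batch size: 8 * 4 = 32
    num_train_epochs=50,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    adam_beta1=0.9,                      # the card's Adam betas/epsilon
    adam_beta2=0.999,                    # (also the transformers defaults)
    adam_epsilon=1e-8,
)
```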

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer    | Cer    |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| 1.4511        | 2.3704  | 400  | 0.9136          | 0.6117 | 0.2060 |
| 0.6242        | 4.7407  | 800  | 1.1236          | 0.7233 | 0.2499 |
| 0.6014        | 7.1111  | 1200 | 1.1048          | 0.6495 | 0.2343 |
| 0.4898        | 9.4815  | 1600 | 1.0724          | 0.6610 | 0.2389 |
| 0.4124        | 11.8519 | 2000 | 0.9146          | 0.5919 | 0.2216 |
| 0.3378        | 14.2222 | 2400 | 1.0265          | 0.5888 | 0.2079 |
| 0.2931        | 16.5926 | 2800 | 0.8130          | 0.5017 | 0.1818 |
| 0.2369        | 18.9630 | 3200 | 1.0162          | 0.5872 | 0.2286 |
| 0.1975        | 21.3333 | 3600 | 0.7969          | 0.4896 | 0.1744 |
| 0.1432        | 23.7037 | 4000 | 0.8140          | 0.5291 | 0.1880 |
| 0.1176        | 26.0741 | 4400 | 0.8178          | 0.5812 | 0.2064 |
| 0.0864        | 28.4444 | 4800 | 1.0055          | 0.4963 | 0.1741 |
| 0.0674        | 30.8148 | 5200 | 0.8577          | 0.5019 | 0.1770 |
| 0.0494        | 33.1852 | 5600 | 0.9468          | 0.5139 | 0.1766 |
| 0.0356        | 35.5556 | 6000 | 1.0305          | 0.4718 | 0.1671 |
| 0.0213        | 37.9259 | 6400 | 1.1650          | 0.4986 | 0.1750 |
| 0.0144        | 40.2963 | 6800 | 1.2664          | 0.4763 | 0.1697 |
| 0.0077        | 42.6667 | 7200 | 1.3433          | 0.4687 | 0.1620 |
| 0.0039        | 45.0370 | 7600 | 1.5958          | 0.4776 | 0.1664 |
| 0.0021        | 47.4074 | 8000 | 1.7292          | 0.4729 | 0.1649 |
| 0.0009        | 49.7778 | 8400 | 1.7861          | 0.4684 | 0.1628 |
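
To score your own transcripts with the same metrics, a sketch using the `evaluate` library (the example strings are invented placeholders, not drawn from this model's evaluation set):

```python
# Sketch of computing WER and CER with the `evaluate` library; the strings
# below are invented placeholders, not data from this model's eval set.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["ndax mën nga ma dimbali"]    # hypothetical Wolof reference
predictions = ["ndax mën nga ma dimbal"]    # hypothetical model output

print("WER:", wer_metric.compute(references=references, predictions=predictions))
print("CER:", cer_metric.compute(references=references, predictions=predictions))
```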

### Framework versions

- Transformers 4.44.1
- Pytorch 2.1.0+cu118
- Datasets 2.17.0
- Tokenizers 0.19.1