ngia committed on
Commit 54fe194 · verified · 1 parent: d2ccfb2

End of training

README.md CHANGED
@@ -19,13 +19,11 @@ model-index:
     dataset:
       name: ASR Wolof Dataset
       type: IndabaxSenegal/asr-wolof-dataset
-      config: wo_sn
-      split: test
       args: 'config: wo, split: test'
     metrics:
     - name: Wer
       type: wer
-      value: 43.507061617297836
+      value: 52.95489047928865
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -35,8 +33,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the ASR Wolof Dataset dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.9866
-- Wer: 43.5071
+- Loss: 1.0929
+- Wer: 52.9549
 
 ## Model description
 
@@ -59,18 +57,20 @@ The following hyperparameters were used during training:
 - train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 32
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
-- lr_scheduler_type: linear
-- num_epochs: 3.0
+- lr_scheduler_type: cosine
+- num_epochs: 3
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-----:|:----:|:---------------:|:-------:|
-| 1.1308 | 1.0 | 142 | 0.9861 | 43.7820 |
-| 0.6317 | 2.0 | 284 | 0.9646 | 43.4946 |
-| 0.4205 | 3.0 | 426 | 0.9866 | 43.5071 |
+| 0.1788 | 1.0 | 450 | 1.0593 | 54.4766 |
+| 0.1082 | 2.0 | 900 | 1.0710 | 53.9887 |
+| 0.0663 | 3.0 | 1350 | 1.0929 | 52.9549 |
 
 
 ### Framework versions
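The new hyperparameters in this diff fit together arithmetically. A minimal sketch (values taken from the diff, step counts from the results table) checks that the effective batch size and per-epoch optimizer steps are internally consistent:

```python
# Values from the updated model card (assumed to be per-device settings,
# as is conventional for the Hugging Face Trainer).
per_device_train_batch_size = 16
gradient_accumulation_steps = 2

# With gradient accumulation, gradients from N micro-batches are summed
# before each optimizer step, so the effective train batch size is:
total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching total_train_batch_size in the card

# The results table logs optimizer steps 450, 900, 1350 at epochs 1-3,
# i.e. a constant 450 optimizer steps per epoch.
steps = [450, 900, 1350]
steps_per_epoch = [b - a for a, b in zip([0] + steps, steps)]
print(steps_per_epoch)  # [450, 450, 450]
```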
runs/Dec01_11-10-10_d08f9ecc9e77/events.out.tfevents.1733051419.d08f9ecc9e77.30.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:70ec440a5e643c270e6354211dde0531d11fae9700f912c47c40d43f9faab280
-size 13564
+oid sha256:663d9e27814e93075c63385cd53d217c86cdf96c06e9200e5dd72afb1539e692
+size 14236
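For reference, the Wer column in the model-card diff is the word error rate scaled to percent: word-level edit distance divided by the number of reference words. A self-contained sketch of that standard computation (an illustration only; the Trainer's actual metric likely comes from a library such as `evaluate` or `jiwer`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level Levenshtein distance
    (substitutions + insertions + deletions) over reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)
```

So the reported Wer of 52.9549 means roughly one word-level error for every two reference words on the evaluation set.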