ngia committed 2a8f585 (verified) · 1 parent: 350d2b5

End of training

Files changed (2):
  1. README.md +13 -13
  2. model.safetensors +1 -1
README.md CHANGED
@@ -19,11 +19,13 @@ model-index:
       dataset:
         name: ASR Wolof Dataset
         type: IndabaxSenegal/asr-wolof-dataset
+        config: wo_sn
+        split: test
         args: 'config: wo, split: test'
       metrics:
       - name: Wer
         type: wer
-        value: 52.14391822141652
+        value: 43.507061617297836
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -33,8 +35,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the ASR Wolof Dataset dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.9443
-- Wer: 52.1439
+- Loss: 0.9866
+- Wer: 43.5071
 
 ## Model description
 
@@ -54,28 +56,26 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 48
-- eval_batch_size: 16
+- train_batch_size: 16
+- eval_batch_size: 8
 - seed: 42
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 500
-- num_epochs: 4
+- num_epochs: 3.0
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer     |
 |:-------------:|:-----:|:----:|:---------------:|:-------:|
-| 0.2474        | 1.0   | 967  | 0.8705          | 51.7197 |
-| 0.1774        | 2.0   | 1934 | 0.8798          | 52.3592 |
-| 0.1011        | 3.0   | 2901 | 0.9154          | 51.8179 |
-| 0.0541        | 4.0   | 3868 | 0.9443          | 52.1439 |
+| 1.1308        | 1.0   | 142  | 0.9861          | 43.7820 |
+| 0.6317        | 2.0   | 284  | 0.9646          | 43.4946 |
+| 0.4205        | 3.0   | 426  | 0.9866          | 43.5071 |
 
 
 ### Framework versions
 
 - Transformers 4.46.3
-- Pytorch 2.1.0+cu118
+- Pytorch 2.4.0
 - Datasets 3.1.0
-- Tokenizers 0.20.3
+- Tokenizers 0.20.0
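The Wer figures in the diff above are word error rates in percent. The card does not say which library computed them (Hugging Face cards typically use `evaluate` or `jiwer`), but the metric itself is just word-level Levenshtein distance divided by reference length. A minimal, dependency-free sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count, in percent."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)
```

For example, dropping one word from a three-word reference gives `wer("jàmm nga am", "jàmm nga")` ≈ 33.33. Production evaluation also normalizes casing and punctuation before scoring, which this sketch omits.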
model.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:6447b0eb5e44d7962db882a2316781d6ea5a1b3a7d7b45ba0b7f0dc458063e8a
3
  size 966995080
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:53289b0744bcee57815395931d3cabb227e4c6732cb3a21c2324f4b1b3405d0f
3
  size 966995080
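The model.safetensors change above is a Git LFS pointer file, not the weights themselves: the repository stores a three-line text stub recording the SHA-256 of the real blob and its byte size, so only the `oid` line changes when the weights are retrained. A sketch of how such a pointer is assembled (the `lfs_pointer` helper is made up here; the field layout follows the git-lfs spec v1 shown in the diff):

```python
import hashlib

def lfs_pointer(blob: bytes) -> str:
    """Build a Git LFS spec-v1 pointer file for a blob: version, oid, size lines."""
    oid = hashlib.sha256(blob).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(blob)}\n"
    )
```

Because the two pointers in the diff report the same `size 966995080`, the retrained checkpoint has identical tensor shapes and dtypes; only the parameter values differ.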