amitkayal committed
Commit f3dab40 · 1 Parent(s): 7f069e6

update model card README.md

Files changed (1)
  1. README.md +12 -11
README.md CHANGED
@@ -10,7 +10,7 @@ datasets:
  metrics:
  - wer
  model-index:
- - name: WhsprTiny-hi
+ - name: WhpTiny-hi-v2
    results:
    - task:
        name: Automatic Speech Recognition
@@ -24,18 +24,18 @@ model-index:
      metrics:
      - name: Wer
        type: wer
-       value: 43.83526309755818
+       value: 43.666169895678095
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
 
- # WhsprTiny-hi
+ # WhpTiny-hi-v2
 
  This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.9060
- - Wer: 43.8353
+ - Loss: 1.0825
+ - Wer: 43.6662
 
  ## Model description
 
@@ -58,9 +58,10 @@ The following hyperparameters were used during training:
  - train_batch_size: 16
  - eval_batch_size: 8
  - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 32
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 400
  - training_steps: 5000
  - mixed_precision_training: Native AMP
 
@@ -68,11 +69,11 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | Wer |
  |:-------------:|:-----:|:----:|:---------------:|:-------:|
- | 0.2265 | 3.04 | 1000 | 0.5764 | 43.5028 |
- | 0.12 | 7.02 | 2000 | 0.6292 | 42.1271 |
- | 0.0375 | 10.05 | 3000 | 0.7510 | 43.5229 |
- | 0.0119 | 14.04 | 4000 | 0.8505 | 43.6203 |
- | 0.0094 | 18.02 | 5000 | 0.9060 | 43.8353 |
+ | 0.1627 | 7.01 | 1000 | 0.5714 | 40.9378 |
+ | 0.0275 | 14.02 | 2000 | 0.7620 | 42.5943 |
+ | 0.0032 | 22.0 | 3000 | 0.9561 | 43.0443 |
+ | 0.0012 | 29.01 | 4000 | 1.0517 | 43.4426 |
+ | 0.0008 | 36.02 | 5000 | 1.0825 | 43.6662 |
 
 
  ### Framework versions
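
For readers reproducing the updated setup, here is a minimal sketch of how the hyperparameters listed in the diff above could be expressed with `Seq2SeqTrainingArguments` from transformers. The `output_dir` and `predict_with_generate` values are assumptions, not taken from this commit, and `learning_rate`/warmup settings are omitted because they fall outside the changed lines.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters listed in the card.
# output_dir and predict_with_generate are assumptions; learning_rate
# and warmup are not shown in this diff and are therefore left out.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-hi",   # placeholder path
    per_device_train_batch_size=16,   # train_batch_size: 16
    per_device_eval_batch_size=8,     # eval_batch_size: 8
    gradient_accumulation_steps=2,    # gives a total train batch size of 32
    seed=42,
    adam_beta1=0.9,                   # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # epsilon=1e-08
    lr_scheduler_type="linear",
    max_steps=5000,                   # training_steps: 5000
    fp16=True,                        # mixed_precision_training: Native AMP
    predict_with_generate=True,       # assumed, so WER can be reported at eval time
)
```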
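The `Wer` value in the metrics block and the results table is a word error rate expressed as a percentage. A minimal sketch of how such a score is typically computed with the `evaluate` library follows; the transcript strings are made-up placeholders, not data from this model.

```python
import evaluate

# Sketch only: compute a percentage WER like the values reported above.
# The reference and prediction strings are hypothetical examples.
wer_metric = evaluate.load("wer")
score = 100 * wer_metric.compute(
    references=["यह एक उदाहरण वाक्य है"],  # hypothetical reference transcript
    predictions=["यह उदाहरण वाक्य है"],    # hypothetical model output
)
print(f"WER: {score:.4f}")
```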