Commit 360a56d (verified) · Parent(s): eccfb40
Author: Thaihoa

End of training
README.md CHANGED
@@ -7,7 +7,7 @@ base_model: openai/whisper-small
 tags:
 - generated_from_trainer
 datasets:
-- mozilla-foundation/common_voice_16_0
+- mozilla-foundation/common_voice_11_0
 model-index:
 - name: Whisper Small Vi - Thai Hoa
   results: []
@@ -18,7 +18,16 @@ should probably proofread and complete it, then remove this comment. -->

 # Whisper Small Vi - Thai Hoa

-This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 16 dataset.
+This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11 dataset.
+It achieves the following results on the evaluation set:
+- eval_loss: 1.0220
+- eval_wer_ortho: 31.6430
+- eval_wer: 27.3131
+- eval_runtime: 296.4189
+- eval_samples_per_second: 4.298
+- eval_steps_per_second: 0.135
+- epoch: 27.7778
+- step: 500

 ## Model description

@@ -37,7 +46,7 @@ More information needed
 ### Training hyperparameters

 The following hyperparameters were used during training:
-- learning_rate: 1e-05
+- learning_rate: 1.4444000000000001e-05
 - train_batch_size: 64
 - eval_batch_size: 32
 - seed: 42
@@ -46,7 +55,7 @@ The following hyperparameters were used during training:
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: constant_with_warmup
 - lr_scheduler_warmup_steps: 50
-- training_steps: 1400
+- training_steps: 2000
 - mixed_precision_training: Native AMP

 ### Framework versions
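The hyperparameters in the README diff specify `lr_scheduler_type: constant_with_warmup` with `lr_scheduler_warmup_steps: 50` and the new base rate `1.4444e-05`. A minimal sketch of that schedule's shape — linear ramp from 0 over the warmup steps, then constant, matching the usual semantics of transformers' constant-with-warmup schedule; the function name and defaults here are illustrative, not from this repo:

```python
def lr_at_step(step: int, base_lr: float = 1.4444e-05, warmup_steps: int = 50) -> float:
    """Learning rate under a constant-with-warmup schedule.

    During warmup the rate ramps linearly from 0 toward base_lr;
    from warmup_steps onward it stays flat at base_lr.
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr


# Example: halfway through warmup the rate is half the base rate,
# and it stays at base_lr for the remaining training_steps (2000 here).
print(lr_at_step(25))    # 7.222e-06
print(lr_at_step(2000))  # 1.4444e-05
```

With only 50 warmup steps out of 2000 total, the schedule is effectively a constant learning rate with a short stabilizing ramp at the start.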
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:113960f379cca08fb5c1c823bb5906c7de040d02cb3a4122c3e1efaf2038ca6e
+oid sha256:666340dc74f8d0a0fcd6595c5de366df052cb76a481d4ce65c24dc036f8c2933
 size 966995080
runs/Dec23_12-01-57_b3d14713af75/events.out.tfevents.1734955328.b3d14713af75.1662.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7181a01c6a97e42de3d9bcd4e28ae595c3a799b7348849d7cc5b29adb8781dd1
-size 10544
+oid sha256:08350d0ed61299652f7d9d5ff9a1d992a1d714c96b45103251be6c30ab412688
+size 12021
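The `eval_wer` figure reported in the README diff (27.3131, expressed as a percentage) is word error rate: the word-level edit distance between reference and model transcripts divided by the number of reference words (`eval_wer_ortho` is typically the same metric computed on unnormalized, orthographic text). A self-contained sketch of the metric, with hypothetical Vietnamese example strings; it assumes a non-empty reference:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER as a fraction: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(
                dist[i - 1][j] + 1,        # deletion
                dist[i][j - 1] + 1,        # insertion
                dist[i - 1][j - 1] + sub,  # substitution or match
            )
    return dist[len(ref)][len(hyp)] / len(ref)


# One dropped word out of four reference words -> WER 0.25 (25%).
print(word_error_rate("xin chao viet nam", "xin chao nam"))  # 0.25
```

In practice this is computed with a library such as `jiwer` or the Hugging Face `evaluate` package's `wer` metric rather than by hand, and `eval_wer` is usually reported after normalization (lowercasing, punctuation removal), which is why it is lower than `eval_wer_ortho` here.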