JulieHinge committed
Commit bacf2bd (parent: 003a588)

End of training

Files changed (1): README.md (+14, -11)
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: transformers
 language:
-- en
+- dk
 license: apache-2.0
 base_model: openai/whisper-large
 tags:
@@ -24,7 +24,7 @@ model-index:
     metrics:
     - name: Wer
       type: wer
-      value: 55.483870967741936
+      value: 24.476331512025737
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -34,8 +34,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the ftspeech dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.8143
-- Wer: 55.4839
+- Loss: 0.3820
+- Wer: 24.4763
 
 ## Model description
 
@@ -55,23 +55,26 @@
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 16
+- train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 16
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 500
-- training_steps: 4000
+- lr_scheduler_warmup_steps: 200
+- training_steps: 1000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:------:|:----:|:---------------:|:-------:|
-| 0.0 | 1000.0 | 1000 | 1.2419 | 47.0968 |
-| 0.0 | 2000.0 | 2000 | 1.5725 | 50.9677 |
-| 0.0 | 3000.0 | 3000 | 1.7241 | 54.8387 |
-| 0.0 | 4000.0 | 4000 | 1.8143 | 55.4839 |
+| 0.5793 | 0.0032 | 200 | 0.5536 | 30.4519 |
+| 0.4187 | 0.0064 | 400 | 0.4508 | 27.5208 |
+| 0.3587 | 0.0096 | 600 | 0.4125 | 25.5569 |
+| 0.3477 | 0.0129 | 800 | 0.3907 | 24.9318 |
+| 0.3786 | 0.0161 | 1000 | 0.3820 | 24.4763 |
 
 
 ### Framework versions
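
For reference, the updated hyperparameter list maps onto transformers' Seq2SeqTrainingArguments roughly as sketched below. This is a reconstruction from the card, not the author's actual training script; output_dir is a placeholder.

```python
# Hedged reconstruction of the card's hyperparameters as
# Seq2SeqTrainingArguments (output_dir is a placeholder, not a real path).
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-ftspeech",  # hypothetical
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # 8 * 2 = total_train_batch_size of 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=200,
    max_steps=1000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```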
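The Wer values in the metadata and results table are word error rates scaled to percentages. A quick way to compute such a number with the evaluate library; the Danish sentences below are purely illustrative.

```python
# WER as reported in the card is word error rate * 100.
import evaluate

wer = evaluate.load("wer")
preds = ["det er en god dag"]  # illustrative model transcript
refs = ["det var en god dag"]  # illustrative reference transcript
print(100 * wer.compute(predictions=preds, references=refs))  # 20.0
```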
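Finally, a minimal sketch of running the resulting checkpoint for Danish speech recognition via the pipeline API; the model id below is a placeholder, since the card does not state the checkpoint's Hub path.

```python
# Minimal inference sketch; the model id is hypothetical.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-namespace/whisper-large-ftspeech",  # placeholder repo id
)
print(asr("sample.wav")["text"])  # the pipeline handles audio loading/resampling
```

For recordings longer than 30 seconds, passing chunk_length_s=30 to the pipeline call enables chunked long-form transcription.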