mmtg committed
Commit c77b0a4 · verified · 1 Parent(s): 602dab7

End of training

Files changed (2):
  1. README.md +12 -11
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,8 +1,8 @@
 ---
 license: apache-2.0
+base_model: openai/whisper-base
 tags:
 - generated_from_trainer
-base_model: openai/whisper-base
 metrics:
 - wer
 model-index:
@@ -17,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.1366
-- Wer: 73.2710
+- Loss: 1.2197
+- Wer: 73.5930
 
 ## Model description
 
@@ -38,23 +38,24 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 3e-05
-- train_batch_size: 8
+- train_batch_size: 12
 - eval_batch_size: 4
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
 - training_steps: 5000
+- mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer |
-|:-------------:|:-------:|:----:|:---------------:|:-------:|
-| 0.015 | 16.1290 | 1000 | 0.9646 | 78.6843 |
-| 0.0014 | 32.2581 | 2000 | 1.0654 | 74.4364 |
-| 0.0002 | 48.3871 | 3000 | 1.1048 | 70.1733 |
-| 0.0001 | 64.5161 | 4000 | 1.1278 | 75.6939 |
-| 0.0001 | 80.6452 | 5000 | 1.1366 | 73.2710 |
+| Training Loss | Epoch | Step | Validation Loss | Wer |
+|:-------------:|:--------:|:----:|:---------------:|:-------:|
+| 0.0002 | 23.8095 | 1000 | 1.1097 | 77.0127 |
+| 0.0001 | 47.6190 | 2000 | 1.1609 | 74.0991 |
+| 0.0001 | 71.4286 | 3000 | 1.1919 | 73.0716 |
+| 0.0 | 95.2381 | 4000 | 1.2115 | 73.3783 |
+| 0.0 | 119.0476 | 5000 | 1.2197 | 73.5930 |
 
 
 ### Framework versions
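
For context on the hyperparameter list in the updated card, here is a minimal sketch of how those values would map onto `transformers` `Seq2SeqTrainingArguments`. This is not the author's training script (which is not part of this commit); the `output_dir`, the eval cadence, and the `predict_with_generate` flag are assumptions, and only the values visible in the README diff above come from the source.

```python
# Hypothetical reconstruction of the training configuration from the card's
# hyperparameter list; not the actual script behind this commit.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-finetuned",  # assumption: output path is not shown in the card
    learning_rate=3e-5,                     # learning_rate: 3e-05
    per_device_train_batch_size=12,         # train_batch_size: 12 (was 8 before this commit)
    per_device_eval_batch_size=4,           # eval_batch_size: 4
    seed=42,                                # seed: 42
    adam_beta1=0.9,                         # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                      # and epsilon=1e-08
    lr_scheduler_type="linear",             # lr_scheduler_type: linear
    warmup_steps=500,                       # lr_scheduler_warmup_steps: 500
    max_steps=5000,                         # training_steps: 5000
    fp16=True,                              # mixed_precision_training: Native AMP
    eval_strategy="steps",                  # named evaluation_strategy in older transformers releases
    eval_steps=1000,                        # assumption: the results table logs an eval every 1000 steps
    predict_with_generate=True,             # assumption: generation is needed to score WER during eval
)
```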
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3abdbb026a607c101a7e177b26ff251ac6b6506f28c4a78a8b6b1bfb69aa0fce
+oid sha256:14a35f723b6006534afb4b1f6439257469a32bcc5a1fd019a0170dbef1a9d7d7
 size 290403936
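
A hedged usage sketch for the checkpoint this commit updates: the card's Wer reads like a percentage, consistent with the `evaluate` library's `wer` metric multiplied by 100, though the exact evaluation pipeline is not shown here. The repository id and file names below are placeholders, since neither appears in this diff.

```python
# Transcribe audio with the fine-tuned checkpoint and score word error rate.
# "<user>/<repo>" and "sample.wav" are placeholders: the actual Hub id of this
# model is not visible in the commit diff.
from transformers import pipeline
import evaluate

asr = pipeline("automatic-speech-recognition", model="<user>/<repo>")
prediction = asr("sample.wav")["text"]

wer_metric = evaluate.load("wer")
reference = "the expected transcript of sample.wav"  # placeholder reference text
wer = 100 * wer_metric.compute(predictions=[prediction], references=[reference])
print(f"WER: {wer:.4f}")  # the card reports 73.5930 on its own evaluation set
```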