Gabi00 committed on
Commit 2f8ac44
1 Parent(s): 5372934

End of training

README.md ADDED
@@ -0,0 +1,95 @@
+ ---
+ base_model: openai/whisper-large-v3
+ datasets:
+ - Gabi00/english-mistakes
+ language:
+ - eng
+ library_name: peft
+ license: apache-2.0
+ metrics:
+ - wer
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: Whisper Small Eng - Gabriel Mora
+   results:
+   - task:
+       type: automatic-speech-recognition
+       name: Automatic Speech Recognition
+     dataset:
+       name: English-mistakes
+       type: Gabi00/english-mistakes
+       config: default
+       split: validation
+       args: 'config: eng, split: test'
+     metrics:
+     - type: wer
+       value: 12.326814527624153
+       name: Wer
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Whisper Small Eng - Gabriel Mora
+
+ This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the English-mistakes dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3590
+ - Wer: 12.3268
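+
+ Below is a minimal inference sketch. It assumes this repository hosts a PEFT (LoRA) adapter for `openai/whisper-large-v3`, as the metadata indicates; the adapter repo id and the dataset's `audio` column are assumptions, not taken from this card.
+
+ ```python
+ import torch
+ from datasets import load_dataset
+ from peft import PeftModel
+ from transformers import WhisperForConditionalGeneration, WhisperProcessor
+
+ base_id = "openai/whisper-large-v3"
+ adapter_id = "Gabi00/whisper-eng-mistakes"  # hypothetical id; replace with this repo's actual id
+
+ processor = WhisperProcessor.from_pretrained(base_id)
+ model = WhisperForConditionalGeneration.from_pretrained(base_id)
+ model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter weights
+ model.eval()
+
+ # Pull one validation sample; an `audio` column (HF Audio feature) is assumed.
+ sample = load_dataset("Gabi00/english-mistakes", split="validation")[0]["audio"]
+ features = processor(
+     sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
+ ).input_features
+
+ with torch.no_grad():
+     predicted_ids = model.generate(input_features=features, language="en", task="transcribe")
+ print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
+ ```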
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a sketch of this configuration follows the list):
+ - learning_rate: 1e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 50
+ - training_steps: 100000
+ - mixed_precision_training: Native AMP
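+
+ A hedged sketch of the above as transformers' `Seq2SeqTrainingArguments` (the Adam betas and epsilon are the library defaults; `output_dir` and the 500-step eval cadence, read off the results table below, are assumptions):
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="./whisper-eng-mistakes",  # assumed
+     learning_rate=1e-5,
+     per_device_train_batch_size=8,
+     per_device_eval_batch_size=8,
+     seed=42,
+     lr_scheduler_type="linear",
+     warmup_steps=50,
+     max_steps=100_000,
+     fp16=True,              # "Native AMP" mixed precision
+     eval_strategy="steps",  # assumed: card reports eval every 500 steps
+     eval_steps=500,
+ )
+ ```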
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Wer |
+ |:-------------:|:------:|:----:|:---------------:|:-------:|
+ | 0.9094 | 0.1270 | 500 | 0.6347 | 24.3686 |
+ | 0.5517 | 0.2541 | 1000 | 0.4835 | 18.0769 |
+ | 0.5364 | 0.3811 | 1500 | 0.4330 | 15.1149 |
+ | 0.5503 | 0.5081 | 2000 | 0.4113 | 13.6524 |
+ | 0.6521 | 0.6352 | 2500 | 0.3987 | 13.5897 |
+ | 0.6044 | 0.7622 | 3000 | 0.3912 | 13.0538 |
+ | 0.5487 | 0.8892 | 3500 | 0.3835 | 12.6119 |
+ | 0.5297 | 1.0163 | 4000 | 0.3791 | 12.4408 |
+ | 0.46 | 1.1433 | 4500 | 0.3751 | 12.3525 |
+ | 0.4947 | 1.2703 | 5000 | 0.3721 | 12.1415 |
+ | 0.524 | 1.3974 | 5500 | 0.3682 | 13.0139 |
+ | 0.4743 | 1.5244 | 6000 | 0.3649 | 13.3388 |
+ | 0.5338 | 1.6514 | 6500 | 0.3621 | 12.9397 |
+ | 0.5162 | 1.7785 | 7000 | 0.3597 | 13.3246 |
+ | 0.5004 | 1.9055 | 7500 | 0.3590 | 12.3268 |
+
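+ The Wer values are word error rates in percent. A minimal sketch of how WER is conventionally computed with the `evaluate` library (the card does not include the metric code, so this is illustrative):
+
+ ```python
+ import evaluate
+
+ # Load the word-error-rate metric.
+ wer_metric = evaluate.load("wer")
+
+ # Toy example: one substitution in a six-word reference -> 1/6, i.e. about 16.67% WER.
+ predictions = ["the cat sat on the mat"]
+ references = ["the cat sat on a mat"]
+
+ wer = 100 * wer_metric.compute(predictions=predictions, references=references)
+ print(f"WER: {wer:.2f}")
+ ```
+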
+ ### Framework versions
+
+ - PEFT 0.11.1
+ - Transformers 4.42.4
+ - Pytorch 2.1.0+cu118
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8c93c737a92eaab2fd2f729caaf56c70b3bd2d7dea018938879a6e49e4a32272
+ oid sha256:cccaf752b6ce5bcdfc89ceaa5d08d309110304129cf85c5b4c116a182b0fc139
  size 31512224
runs/Jul22_07-54-00_cac7b9a8f3bc/events.out.tfevents.1721634842.cac7b9a8f3bc CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b2cf85ea41db73930ccdbf45d9b0f11915d6bc0f4e5f7a2d34917c20d652414c
- size 68835
+ oid sha256:870fd6828b861e0130176b1bb400e7136127334da01bd35754da29f3e49766b7
+ size 74045