Rziane committed
Commit 351f1c2 · verified · 1 Parent(s): 1fa8755

End of training

Files changed (3):
  1. README.md +27 -9
  2. generation_config.json +1 -0
  3. model.safetensors +1 -1
README.md CHANGED
@@ -8,9 +8,22 @@ tags:
 - generated_from_trainer
 datasets:
 - AT
+metrics:
+- wer
 model-index:
 - name: Whisper medium AT
-  results: []
+  results:
+  - task:
+      name: Automatic Speech Recognition
+      type: automatic-speech-recognition
+    dataset:
+      name: AT
+      type: AT
+      args: 'config: aeb, split: test'
+    metrics:
+    - name: Wer
+      type: wer
+      value: 65.98418372874012
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -20,13 +33,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the AT dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 1.3971
-- eval_wer: 77.0447
-- eval_runtime: 560.1849
-- eval_samples_per_second: 2.085
-- eval_steps_per_second: 0.13
-- epoch: 1.0
-- step: 293
+- Loss: 0.9915
+- Wer: 65.9842
 
 ## Model description
 
@@ -52,9 +60,19 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 4000
+- num_epochs: 4
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Wer     |
+|:-------------:|:-----:|:----:|:---------------:|:-------:|
+| No log        | 1.0   | 293  | 1.3198          | 74.6073 |
+| 1.7949        | 2.0   | 586  | 1.0108          | 70.6316 |
+| 1.7949        | 3.0   | 879  | 0.9583          | 65.9517 |
+| 0.5076        | 4.0   | 1172 | 0.9915          | 65.9842 |
+
+
 ### Framework versions
 
 - Transformers 4.45.1
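The Wer figures above are word error rates in percent: the word-level edit distance between hypothesis and reference, divided by the reference length. A minimal sketch of the metric, using made-up example strings rather than the AT evaluation data:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level Levenshtein distance
    (substitutions + insertions + deletions) over reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# Two words dropped out of six reference words -> 33.33% WER
print(round(wer("the cat sat on the mat", "the cat sat mat"), 2))  # 33.33
```

The Trainer's reported WER is typically computed the same way via the `evaluate`/`jiwer` packages; this standalone version is just to make the number interpretable.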
generation_config.json CHANGED
@@ -134,6 +134,7 @@
     "<|yo|>": 50325,
     "<|zh|>": 50260
   },
+  "language": "arabic",
   "max_initial_timestamp_index": 50,
   "max_length": 448,
   "no_timestamps_token_id": 50363,
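The one-line addition above pins Whisper's default decoding language instead of leaving it to auto-detection. A minimal sketch of making the same edit programmatically with only the standard library (the surrounding keys are copied from the hunk; treating the config as a plain dict is illustrative, not how `transformers` loads it):

```python
import json

# Fragment of generation_config.json around the edited hunk
# (values taken from the diff above).
config = {
    "max_initial_timestamp_index": 50,
    "max_length": 448,
    "no_timestamps_token_id": 50363,
}

# The commit's change: fix the default transcription language.
config["language"] = "arabic"

print(json.dumps(config, indent=2))
```

In practice the same effect can be had at call time by passing a `language` argument to the model's `generate`, but baking it into `generation_config.json` makes it the repo-wide default.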
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3735706231ef6634f4be4d3ca520992d61facb435a3f32d0af0b9c2c388718f1
+oid sha256:a90dbeef71405f1a24839c0a1052fbf131f598cab5cbd7b38d600f06f888af6a
 size 3055544304
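The model.safetensors entry is a Git LFS pointer: the repository stores only a three-line `key value` stub (spec version, SHA-256 object id, byte size), while the ~3 GB weight file itself lives in LFS storage, so the diff only swaps the hash. A minimal sketch of parsing such a pointer (the values are the new pointer from this commit):

```python
# Git LFS pointer files are plain text: one "key value" pair per line.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:a90dbeef71405f1a24839c0a1052fbf131f598cab5cbd7b38d600f06f888af6a
size 3055544304
"""

# Split each line on the first space only, since values may contain colons.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())

algo, digest = fields["oid"].split(":", 1)
assert algo == "sha256" and len(digest) == 64  # sha256 hex digest is 64 chars

print(int(fields["size"]))  # 3055544304 bytes, ~3.06 GB
```

This is why the "size" line is unchanged between the two pointers: the tensor shapes are identical, only the trained weight values (and hence the hash) differ.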