KevinKibe committed
Commit da1a92b
1 Parent(s): 0c6c8fb

Model save

Files changed (1):
  1. README.md +18 -17
README.md CHANGED
@@ -1,9 +1,11 @@
 ---
-license: apache-2.0
+base_model: openai/whisper-small
+datasets:
+- common_voice_16_1
 library_name: peft
+license: apache-2.0
 tags:
 - generated_from_trainer
-base_model: openai/whisper-small
 model-index:
 - name: whisper-small-finetuned
   results: []
@@ -12,19 +14,19 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/keviinkibe/huggingface/runs/zbv26o2q)
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/keviinkibe/huggingface/runs/zbv26o2q)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/keviinkibe/huggingface/runs/h79p39mv)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/keviinkibe/huggingface/runs/h79p39mv)
 # whisper-small-finetuned
 
-This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
+This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_16_1 dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 2.2990
-- eval_wer: 94.3081
-- eval_runtime: 602.9832
-- eval_samples_per_second: 0.415
-- eval_steps_per_second: 0.053
-- epoch: 0.25
-- step: 500
+- eval_loss: 4.5110
+- eval_wer: 134.4828
+- eval_runtime: 13.445
+- eval_samples_per_second: 0.744
+- eval_steps_per_second: 0.074
+- epoch: 4.1
+- step: 10
 
 ## Model description
 
@@ -43,14 +45,13 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.001
-- train_batch_size: 32
-- eval_batch_size: 8
+- learning_rate: 0.0001
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 20
-- training_steps: 2000
+- training_steps: 20
 - mixed_precision_training: Native AMP
 
 ### Framework versions
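The new run reports eval_wer: 134.4828, i.e. a word error rate above 100%. This is possible because WER divides the word-level edit distance by the length of the reference, so a hypothesis with many insertions or substitutions can accumulate more errors than the reference has words. A minimal sketch of the standard word-level Levenshtein WER (an illustration of the metric's definition, not the Trainer's actual implementation, which typically comes from a metrics library such as `evaluate`/`jiwer`):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER (%) = (substitutions + deletions + insertions) / #reference words,
    computed as a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat", "the cat sat"))        # 0.0
print(word_error_rate("the cat sat", "a dog sat on a mat")) # exceeds 100
```

A WER this high after only 10 steps (epoch 4.1 of a 20-step run) mainly indicates the model has not trained long enough to produce usable transcriptions yet.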
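For the `lr_scheduler_type: linear` entries above: in the old configuration the learning rate ramps up over `lr_scheduler_warmup_steps: 20` and then decays linearly to zero at `training_steps: 2000`; the new configuration drops warmup and decays over just 20 steps. A sketch of that schedule, assuming the usual `transformers` linear-with-warmup shape (this helper is my own illustration, not part of the training code):

```python
def linear_lr(step: int, base_lr: float, total_steps: int, warmup_steps: int = 0) -> float:
    """Linear warmup from 0 to base_lr over warmup_steps,
    then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# Old config: peaks at 0.001 after 20 warmup steps, reaches 0 at step 2000.
old = [linear_lr(s, 0.001, 2000, 20) for s in (0, 10, 20, 2000)]
# New config: no warmup, so it starts at 0.0001 and hits 0 at step 20.
new = [linear_lr(s, 0.0001, 20, 0) for s in (0, 10, 20)]
```

With only 20 total steps, the new run spends its entire budget inside what the old run treated as warmup, which is consistent with the degraded eval metrics above.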