KevinKibe committed on
Commit f7ea690 · verified · 1 Parent(s): 62dff2f

Model save

Files changed (1):
  1. README.md +5 -13
README.md CHANGED
@@ -14,19 +14,11 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/keviinkibe/huggingface/runs/lxf5zeja)
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/keviinkibe/huggingface/runs/lxf5zeja)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/keviinkibe/huggingface/runs/lb835u75)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/keviinkibe/huggingface/runs/lb835u75)
 # whisper-large-v3-finetuned
 
 This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the common_voice_16_1 dataset.
-It achieves the following results on the evaluation set:
-- eval_loss: 1.3368
-- eval_wer: 67.1074
-- eval_runtime: 2686.5777
-- eval_samples_per_second: 0.744
-- eval_steps_per_second: 0.047
-- epoch: 1.0
-- step: 100
 
 ## Model description
 
@@ -46,12 +38,12 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
-- train_batch_size: 16
-- eval_batch_size: 16
+- train_batch_size: 4
+- eval_batch_size: 4
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: constant_with_warmup
-- lr_scheduler_warmup_steps: 50
+- lr_scheduler_warmup_steps: 10
 - training_steps: 100
 - mixed_precision_training: Native AMP
 
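The `eval_wer: 67.1074` figure in the removed results block is a word error rate: word-level edit distance between reference and hypothesis transcripts, divided by the reference word count, scaled to percent. The following is a minimal pure-Python sketch of the metric itself — not the library implementation that produced the reported number:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level edit distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over words (Levenshtein).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / max(len(ref), 1)
```

Note that WER can exceed 100% when the hypothesis contains many insertions, which is common for under-trained speech models.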
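The updated hyperparameter list maps naturally onto a `transformers` `Seq2SeqTrainingArguments` configuration. A hedged sketch of what that setup might look like — `output_dir` is hypothetical, `fp16=True` is an assumed stand-in for "Native AMP", and the Adam betas/epsilon shown on the card are the library defaults:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the configuration implied by the card's hyperparameter list.
# Only the listed values come from the commit; everything else is assumed.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-finetuned",  # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=10,
    max_steps=100,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```

With only 100 training steps and a warmup of 10, this run is a short smoke-test-scale fine-tune rather than full convergence, which is consistent with the high eval loss and WER removed from the card.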