dtrifuno committed
Commit 31d6aaa
1 Parent(s): f784355

Update README.md

Files changed (1)
  1. README.md +7 -10
README.md CHANGED
@@ -8,6 +8,8 @@ datasets:
 model-index:
 - name: wav2vec2-xls-r-300m-fleurs-mk
   results: []
+language:
+- mk
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,15 +17,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # wav2vec2-xls-r-300m-fleurs-mk
 
-This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the fleurs dataset.
+This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for the Macedonian language using the train and validation splits of the FLEURS dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 0.1416
-- eval_wer: 0.1565
-- eval_runtime: 214.3232
-- eval_samples_per_second: 4.54
-- eval_steps_per_second: 0.569
-- epoch: 9.3
-- step: 1600
+- Loss: 0.1416
+- WER: 0.1565
 
 ## Model description
 
@@ -51,7 +48,7 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- num_epochs: 20
+- num_epochs: 9.3
 - mixed_precision_training: Native AMP
 
 ### Framework versions
@@ -59,4 +56,4 @@ The following hyperparameters were used during training:
 - Transformers 4.35.2
 - Pytorch 2.1.0+cu121
 - Datasets 2.17.0
-- Tokenizers 0.15.1
+- Tokenizers 0.15.1
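For readers of the updated card, a minimal sketch of loading the checkpoint and scoring it with the same WER metric the card reports. The repo id `dtrifuno/wav2vec2-xls-r-300m-fleurs-mk`, the FLEURS config name `mk_mk`, and the `transcription` column name are assumptions inferred from the commit author and model name; none of them are stated in the diff.

```python
# Hedged inference + WER sketch; repo id and dataset config are assumptions.
import torch
import evaluate
from datasets import Audio, load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "dtrifuno/wav2vec2-xls-r-300m-fleurs-mk"  # assumed repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id).eval()

# Macedonian FLEURS; the card says training used train + validation,
# so the test split is the natural held-out choice here.
ds = load_dataset("google/fleurs", "mk_mk", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def transcribe(example):
    # Greedy CTC decoding of a single 16 kHz utterance.
    inputs = processor(example["audio"]["array"], sampling_rate=16_000,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    example["prediction"] = processor.batch_decode(
        torch.argmax(logits, dim=-1))[0]
    return example

ds = ds.map(transcribe)

# Word error rate over the split, comparable in kind (though not necessarily
# in split) to the card's reported WER of 0.1565.
wer = evaluate.load("wer")
print(wer.compute(predictions=ds["prediction"],
                  references=ds["transcription"]))
```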
 
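The hyperparameters listed in the last two hunks map onto `transformers.TrainingArguments` roughly as in the sketch below. This is a hedged reconstruction, not the author's script: any value not shown in the diff (output directory, learning rate, batch size) is a placeholder, and the optimizer name is Trainer's AdamW variant matching the card's stated betas and epsilon.

```python
# Hedged sketch mapping the card's listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-fleurs-mk",  # placeholder; not in the diff
    learning_rate=3e-4,                          # placeholder; not in the diff
    per_device_train_batch_size=8,               # placeholder; not in the diff
    optim="adamw_torch",         # "Adam" per the card; exact variant assumed
    adam_beta1=0.9,              # betas=(0.9,0.999) from the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,           # epsilon=1e-08 from the card
    lr_scheduler_type="linear",  # from the card
    warmup_steps=500,            # lr_scheduler_warmup_steps from the card
    num_train_epochs=9.3,        # the value this commit corrects the card to
    fp16=True,                   # mixed_precision_training: Native AMP
)
```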