juanjucm committed
Commit 5c94f4d · verified · 1 Parent(s): f65cfd9

Update README.md

Files changed (1):
  1. README.md +6 -16
README.md CHANGED
@@ -9,6 +9,10 @@ metrics:
 model-index:
 - name: nllb-200-distilled-600M-FLEURS-GL-EN
   results: []
+datasets:
+- juanjucm/FLEURS-SpeechT-GL-EN
+language:
+- gl
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,25 +20,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # nllb-200-distilled-600M-FLEURS-GL-EN
 
-This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the None dataset.
+This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on [juanjucm/FLEURS-SpeechT-GL-EN](https://huggingface.co/datasets/juanjucm/FLEURS-SpeechT-GL-EN).
 It achieves the following results on the evaluation set:
 - Loss: 0.0877
 - Bleu: 43.7516
 
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
-
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -67,4 +57,4 @@ The following hyperparameters were used during training:
 - Transformers 4.47.1
 - Pytorch 2.4.1+cu121
 - Datasets 3.2.0
-- Tokenizers 0.21.0
+- Tokenizers 0.21.0
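The updated card describes a Galician-to-English translation fine-tune of NLLB-200-distilled-600M. A minimal inference sketch with transformers follows; the full repo id (committer namespace plus the model-index name) and the FLORES-200 language codes glg_Latn/eng_Latn are assumptions based on NLLB conventions, not stated in the diff.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed repo id (committer namespace + model-index name); adjust if the
# checkpoint lives elsewhere.
model_id = "juanjucm/nllb-200-distilled-600M-FLEURS-GL-EN"

# NLLB tokenizers take FLORES-200 language codes; glg_Latn (Galician) and
# eng_Latn (English) follow NLLB conventions, not the card itself.
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="glg_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "O tempo hoxe está soleado."  # "The weather today is sunny."
inputs = tokenizer(text, return_tensors="pt")

# Force the decoder to start generating in English.
outputs = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
    max_new_tokens=64,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```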
 
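The card reports Bleu 43.7516 on the evaluation set. Below is a short sketch of how a corpus-level BLEU like this is commonly computed with the evaluate library's sacreBLEU metric; the backend choice is an assumption (the card does not name it), and the sentences are illustrative rather than FLEURS data.

```python
import evaluate

# sacreBLEU via the evaluate library, a common choice for Trainer-based
# translation fine-tunes; the card does not say which backend produced 43.7516.
bleu = evaluate.load("sacrebleu")

predictions = ["The weather today is sunny."]   # model outputs
references = [["The weather is sunny today."]]  # one or more references each

result = bleu.compute(predictions=predictions, references=references)
print(f"BLEU: {result['score']:.2f}")
```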