carlosep93 committed
Commit f357c4e
1 Parent(s): e205871

Update README.md

Files changed (1):
  1. README.md +9 -10
README.md CHANGED

@@ -121,7 +121,7 @@ The model was trained using shards of 10 million sentences, for a total of 13.00
 
 ### Variable and metrics
 
-We use the BLEU score for evaluation on test sets: [Flores-101](https://github.com/facebookresearch/flores), [TaCon](https://elrc-share.eu/repository/browse/tacon-spanish-constitution-mt-test-set/84a96138b98611ec9c1a00155d02670628f3e6857b0f422abd82abc3795ec8c2/), [United Nations](https://zenodo.org/record/3888414#.Y33-_tLMIW0), [Cybersecurity](https://elrc-share.eu/repository/browse/cyber-mt-test-set/2bd93faab98c11ec9c1a00155d026706b96a490ed3e140f0a29a80a08c46e91e/), [wmt19 biomedical test set](), [wmt13 news test set](https://elrc-share.eu/repository/browse/catalan-wmt2013-machine-translation-shared-task-test-set/84a96139b98611ec9c1a00155d0267061a0aa1b62e2248e89aab4952f3c230fc/), [aina aapp]()
+We use the BLEU score for evaluation on test sets: [Flores-101](https://github.com/facebookresearch/flores), [TaCon](https://elrc-share.eu/repository/browse/tacon-spanish-constitution-mt-test-set/84a96138b98611ec9c1a00155d02670628f3e6857b0f422abd82abc3795ec8c2/), [United Nations](https://zenodo.org/record/3888414#.Y33-_tLMIW0), [Cybersecurity](https://elrc-share.eu/repository/browse/cyber-mt-test-set/2bd93faab98c11ec9c1a00155d026706b96a490ed3e140f0a29a80a08c46e91e/), [wmt19 biomedical test set](), [wmt13 news test set](https://elrc-share.eu/repository/browse/catalan-wmt2013-machine-translation-shared-task-test-set/84a96139b98611ec9c1a00155d0267061a0aa1b62e2248e89aab4952f3c230fc/)
 
 ### Evaluation results
 
@@ -129,15 +129,14 @@ Below are the evaluation results on the machine translation from Catalan to Span
 
 | Test set             | SoftCatalà | Google Translate | mt-aina-ca-es |
 |----------------------|------------|------------------|---------------|
-| Spanish Constitution | 66,2       | **77,1**         | 75,5          |
-| United Nations       | 72,0       | 84,3             | **86,3**      |
-| aina_aapp            | 78,1       | 80,8             | **81,8**      |
-| Flores 101 dev       | 23,8       | 24               | **24,1**      |
-| Flores 101 devtest   | 23,9       | 24,2             | **24,4**      |
-| Cybersecurity        | 73,5       | **76,9**         | 75,1          |
-| wmt 19 biomedical    | 60,0       | 62,7             | **63,0**      |
-| wmt 13 news          | 22,7       | 23,1             | **23,4**      |
-| Average              | 52,5       | 56,6             | **56,7**      |
+| Spanish Constitution | 70,7       | **77,1**         | 75,5          |
+| United Nations       | 78,1       | 84,3             | **86,3**      |
+| Flores 101 dev       | 23,5       | 24               | **24,1**      |
+| Flores 101 devtest   | 24,1       | 24,2             | **24,4**      |
+| Cybersecurity        | 67,3       | **76,9**         | 75,1          |
+| wmt 19 biomedical    | 60,4       | 62,7             | **63,0**      |
+| wmt 13 news          | 22,5       | 23,1             | **23,4**      |
+| Average              | 49,5       | 53,2             | **53,1**      |
 
 
 ## Additional information
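As a reference for how the scores in the table are defined, below is a minimal stdlib-only sketch of corpus-level BLEU (the metric named in the diff). This is an illustration, not the card's actual evaluation pipeline: real evaluations normally use a standard implementation such as sacrebleu, and details like tokenization and smoothing change the numbers. The sentences are placeholders, not Flores-101 or TaCon data.

```python
import math
from collections import Counter

def _ngrams(tokens, n):
    # Multiset of n-grams in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Unsmoothed corpus-level BLEU (0-100), one reference per hypothesis.

    Illustrative only: whitespace tokenization, no smoothing, single reference.
    """
    matches = [0] * max_n   # clipped n-gram matches, per order
    totals = [0] * max_n    # hypothesis n-gram counts, per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            # Counter intersection implements clipping against the reference.
            matches[n - 1] += sum((_ngrams(h, n) & _ngrams(r, n)).values())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if 0 in matches:
        return 0.0  # unsmoothed BLEU is zero if any n-gram order has no match
    log_precision = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    # Brevity penalty: punish hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100.0 * bp * math.exp(log_precision)

if __name__ == "__main__":
    # Placeholder Catalan->Spanish style example, not real test-set data.
    hyps = ["El presidente anunció nuevas medidas económicas ."]
    refs = ["El presidente anunció nuevas medidas económicas hoy ."]
    print(f"BLEU = {corpus_bleu(hyps, refs):.1f}")
```

An identical hypothesis and reference score 100.0; shorter or divergent hypotheses are penalized by the brevity penalty and the clipped n-gram precisions.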