Column schema for the records below (⌀ = column may be null):

| column | dtype | range / classes | nullable |
|---|---|---|---|
| repo_id | string | length 4–122 | no |
| author | string | length 2–38 | yes |
| model_type | string | length 2–33 | yes |
| files_per_repo | int64 | 2–39k | no |
| downloads_30d | int64 | 0–33.7M | no |
| library | string | length 2–37 | yes |
| likes | int64 | 0–4.87k | no |
| pipeline | string | length 5–30 | yes |
| pytorch | bool | 2 classes | no |
| tensorflow | bool | 2 classes | no |
| jax | bool | 2 classes | no |
| license | string | length 2–33 | yes |
| languages | string | length 2–1.63k | yes |
| datasets | string | length 2–2.58k | yes |
| co2 | string | length 6–258 | yes |
| prs_count | int64 | 0–125 | no |
| prs_open | int64 | 0–120 | no |
| prs_merged | int64 | 0–46 | no |
| prs_closed | int64 | 0–34 | no |
| discussions_count | int64 | 0–218 | no |
| discussions_open | int64 | 0–148 | no |
| discussions_closed | int64 | 0–70 | no |
| tags | string | length 2–513 | no |
| has_model_index | bool | 2 classes | no |
| has_metadata | bool | 2 classes | no |
| has_text | bool | 1 class | no |
| text_length | int64 | 201–598k | no |
| readme | string | length 0–598k | no |

**Helsinki-NLP/opus-mt-en-tiv** · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 12 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776
### opus-mt-en-tiv
* source languages: en
* target languages: tiv
* OPUS readme: [en-tiv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tiv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tiv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tiv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tiv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tiv | 31.6 | 0.497 |
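Cards like the one above describe Marian checkpoints that load directly into the `transformers` MarianMT classes. A minimal sketch, assuming `transformers` and `sentencepiece` are installed (weights are fetched from the Hub on first use; the import is deferred so the helper can be defined without them):

```python
# Hedged sketch: translate English to Tiv with the checkpoint documented above.
# Requires `pip install transformers sentencepiece` before translate() is called.

MODEL_ID = "Helsinki-NLP/opus-mt-en-tiv"  # repo id from the record above

def translate(texts, model_id=MODEL_ID):
    """Translate a list of English sentences with a Marian model."""
    from transformers import MarianMTModel, MarianTokenizer  # deferred import
    tokenizer = MarianTokenizer.from_pretrained(model_id)
    model = MarianMTModel.from_pretrained(model_id)
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(t, skip_special_tokens=True) for t in generated]

# Usage (downloads the checkpoint on first call):
#   translate(["How are you?"])
```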

**Helsinki-NLP/opus-mt-en-tl** · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 67 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 791
### opus-mt-en-tl
* source languages: en
* target languages: tl
* OPUS readme: [en-tl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tl/README.md)
* dataset: opus+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tl/opus+bt-2020-02-26.zip)
* test set translations: [opus+bt-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tl/opus+bt-2020-02-26.test.txt)
* test set scores: [opus+bt-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tl/opus+bt-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.tl | 26.6 | 0.577 |

**Helsinki-NLP/opus-mt-en-tll** · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 9 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776
### opus-mt-en-tll
* source languages: en
* target languages: tll
* OPUS readme: [en-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tll/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tll/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tll/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tll | 33.6 | 0.556 |

**Helsinki-NLP/opus-mt-en-tn** · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 24 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768
### opus-mt-en-tn
* source languages: en
* target languages: tn
* OPUS readme: [en-tn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tn/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tn/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tn/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tn | 45.5 | 0.636 |

**Helsinki-NLP/opus-mt-en-to** · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 22 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768
### opus-mt-en-to
* source languages: en
* target languages: to
* OPUS readme: [en-to](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-to/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-to/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-to/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-to/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.to | 56.3 | 0.689 |

**Helsinki-NLP/opus-mt-en-toi** · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776
### opus-mt-en-toi
* source languages: en
* target languages: toi
* OPUS readme: [en-toi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-toi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-toi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-toi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-toi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.toi | 32.8 | 0.598 |

**Helsinki-NLP/opus-mt-en-tpi** · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 19 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776
### opus-mt-en-tpi
* source languages: en
* target languages: tpi
* OPUS readme: [en-tpi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tpi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tpi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tpi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tpi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tpi | 38.7 | 0.568 |

**Helsinki-NLP/opus-mt-en-trk** · author: Helsinki-NLP · model_type: marian · files_per_repo: 11 · downloads_30d: 425 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: ['en', 'tt', 'cv', 'tk', 'tr', 'ba', 'trk'] · datasets: null · co2: null · PRs: 1 (open 1, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 3,537
### eng-trk
* source group: English
* target group: Turkic languages
* OPUS readme: [eng-trk](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-trk/README.md)
* model: transformer
* source language(s): eng
* target language(s): aze_Latn bak chv crh crh_Latn kaz_Cyrl kaz_Latn kir_Cyrl kjh kum ota_Arab ota_Latn sah tat tat_Arab tat_Latn tuk tuk_Latn tur tyv uig_Arab uig_Cyrl uzb_Cyrl uzb_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required, in the form `>>id<<` (id = a valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-entr-engtur.eng.tur | 10.1 | 0.437 |
| newstest2016-entr-engtur.eng.tur | 9.2 | 0.410 |
| newstest2017-entr-engtur.eng.tur | 9.0 | 0.410 |
| newstest2018-entr-engtur.eng.tur | 9.2 | 0.413 |
| Tatoeba-test.eng-aze.eng.aze | 26.8 | 0.577 |
| Tatoeba-test.eng-bak.eng.bak | 7.6 | 0.308 |
| Tatoeba-test.eng-chv.eng.chv | 4.3 | 0.270 |
| Tatoeba-test.eng-crh.eng.crh | 8.1 | 0.330 |
| Tatoeba-test.eng-kaz.eng.kaz | 11.1 | 0.359 |
| Tatoeba-test.eng-kir.eng.kir | 28.6 | 0.524 |
| Tatoeba-test.eng-kjh.eng.kjh | 1.0 | 0.041 |
| Tatoeba-test.eng-kum.eng.kum | 2.2 | 0.075 |
| Tatoeba-test.eng.multi | 19.9 | 0.455 |
| Tatoeba-test.eng-ota.eng.ota | 0.5 | 0.065 |
| Tatoeba-test.eng-sah.eng.sah | 0.7 | 0.030 |
| Tatoeba-test.eng-tat.eng.tat | 9.7 | 0.316 |
| Tatoeba-test.eng-tuk.eng.tuk | 5.9 | 0.317 |
| Tatoeba-test.eng-tur.eng.tur | 34.6 | 0.623 |
| Tatoeba-test.eng-tyv.eng.tyv | 5.4 | 0.210 |
| Tatoeba-test.eng-uig.eng.uig | 0.1 | 0.155 |
| Tatoeba-test.eng-uzb.eng.uzb | 3.4 | 0.275 |
### System Info:
- hf_name: eng-trk
- source_languages: eng
- target_languages: trk
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-trk/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'tt', 'cv', 'tk', 'tr', 'ba', 'trk']
- src_constituents: {'eng'}
- tgt_constituents: {'kir_Cyrl', 'tat_Latn', 'tat', 'chv', 'uzb_Cyrl', 'kaz_Latn', 'aze_Latn', 'crh', 'kjh', 'uzb_Latn', 'ota_Arab', 'tuk_Latn', 'tuk', 'tat_Arab', 'sah', 'tyv', 'tur', 'uig_Arab', 'crh_Latn', 'kaz_Cyrl', 'uig_Cyrl', 'kum', 'ota_Latn', 'bak'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: trk
- short_pair: en-trk
- chrF2_score: 0.455
- bleu: 19.9
- brevity_penalty: 1.0
- ref_len: 57072.0
- src_name: English
- tgt_name: Turkic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: trk
- prefer_old: False
- long_pair: eng-trk
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
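For multilingual targets such as eng-trk above, the card requires each source sentence to begin with a `>>id<<` token naming the target language. A small helper (the function name is my own, for illustration) that builds such inputs before tokenization:

```python
# Prefix a sentence with the >>id<< target-language token that multilingual
# OPUS-MT models (such as eng-trk above) expect at the start of the input.

def add_target_token(text: str, lang_id: str) -> str:
    """Return `text` prefixed with a sentence-initial >>lang_id<< token."""
    return f">>{lang_id}<< {text}"

# Route the same English sentence to Turkish or to Kazakh in Cyrillic script,
# using target-language IDs listed in the card above.
turkish = add_target_token("Good morning.", "tur")       # ">>tur<< Good morning."
kazakh = add_target_token("Good morning.", "kaz_Cyrl")   # ">>kaz_Cyrl<< Good morning."
```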

**Helsinki-NLP/opus-mt-en-ts** · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 18 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768
### opus-mt-en-ts
* source languages: en
* target languages: ts
* OPUS readme: [en-ts](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ts/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ts/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ts/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ts/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ts | 43.4 | 0.639 |

**Helsinki-NLP/opus-mt-en-tut** · author: Helsinki-NLP · model_type: marian · files_per_repo: 11 · downloads_30d: 8 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: ['en', 'tut'] · datasets: null · co2: null · PRs: 1 (open 1, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 3,435
### eng-tut
* source group: English
* target group: Altaic languages
* OPUS readme: [eng-tut](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tut/README.md)
* model: transformer
* source language(s): eng
* target language(s): aze_Latn bak chv crh crh_Latn kaz_Cyrl kaz_Latn kir_Cyrl kjh kum mon nog ota_Arab ota_Latn sah tat tat_Arab tat_Latn tuk tuk_Latn tur tyv uig_Arab uig_Cyrl uzb_Cyrl uzb_Latn xal
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required, in the form `>>id<<` (id = a valid target language ID)
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-entr-engtur.eng.tur | 10.4 | 0.438 |
| newstest2016-entr-engtur.eng.tur | 9.1 | 0.414 |
| newstest2017-entr-engtur.eng.tur | 9.5 | 0.414 |
| newstest2018-entr-engtur.eng.tur | 9.5 | 0.415 |
| Tatoeba-test.eng-aze.eng.aze | 27.2 | 0.580 |
| Tatoeba-test.eng-bak.eng.bak | 5.8 | 0.298 |
| Tatoeba-test.eng-chv.eng.chv | 4.6 | 0.301 |
| Tatoeba-test.eng-crh.eng.crh | 6.5 | 0.342 |
| Tatoeba-test.eng-kaz.eng.kaz | 11.8 | 0.360 |
| Tatoeba-test.eng-kir.eng.kir | 24.6 | 0.499 |
| Tatoeba-test.eng-kjh.eng.kjh | 2.2 | 0.052 |
| Tatoeba-test.eng-kum.eng.kum | 8.0 | 0.229 |
| Tatoeba-test.eng-mon.eng.mon | 10.3 | 0.362 |
| Tatoeba-test.eng.multi | 19.5 | 0.451 |
| Tatoeba-test.eng-nog.eng.nog | 1.5 | 0.117 |
| Tatoeba-test.eng-ota.eng.ota | 0.2 | 0.035 |
| Tatoeba-test.eng-sah.eng.sah | 0.7 | 0.080 |
| Tatoeba-test.eng-tat.eng.tat | 10.8 | 0.320 |
| Tatoeba-test.eng-tuk.eng.tuk | 5.6 | 0.323 |
| Tatoeba-test.eng-tur.eng.tur | 34.2 | 0.623 |
| Tatoeba-test.eng-tyv.eng.tyv | 8.1 | 0.192 |
| Tatoeba-test.eng-uig.eng.uig | 0.1 | 0.158 |
| Tatoeba-test.eng-uzb.eng.uzb | 4.2 | 0.298 |
| Tatoeba-test.eng-xal.eng.xal | 0.1 | 0.061 |
### System Info:
- hf_name: eng-tut
- source_languages: eng
- target_languages: tut
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tut/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'tut']
- src_constituents: {'eng'}
- tgt_constituents: set()
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: tut
- short_pair: en-tut
- chrF2_score: 0.451
- bleu: 19.5
- brevity_penalty: 1.0
- ref_len: 57472.0
- src_name: English
- tgt_name: Altaic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: tut
- prefer_old: False
- long_pair: eng-tut
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
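The `### System Info` blocks in these cards are flat `- key: value` lists, so they can be read into a dict for programmatic filtering (for example, by `tgt_multilingual`). A sketch that splits only on the first colon so URL values survive intact:

```python
# Parse a '- key: value' block (the "System Info" layout used in these cards)
# into a plain dict of string values.

def parse_system_info(block: str) -> dict:
    """Map each '- key: value' line of a System Info block to key -> value."""
    info = {}
    for line in block.splitlines():
        line = line.strip()
        if line.startswith("- ") and ":" in line:
            # partition() splits on the FIRST colon, so 'https://...' values keep theirs
            key, _, value = line[2:].partition(":")
            info[key.strip()] = value.strip()
    return info

sample = """- hf_name: eng-tut
- tgt_multilingual: True
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.zip"""
print(parse_system_info(sample)["hf_name"])  # eng-tut
```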

**Helsinki-NLP/opus-mt-en-tvl** · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 8 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776
### opus-mt-en-tvl
* source languages: en
* target languages: tvl
* OPUS readme: [en-tvl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tvl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tvl | 46.9 | 0.625 |

**Helsinki-NLP/opus-mt-en-tw** · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 21 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768
### opus-mt-en-tw
* source languages: en
* target languages: tw
* OPUS readme: [en-tw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tw/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tw | 38.2 | 0.577 |

**Helsinki-NLP/opus-mt-en-ty** · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 21 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768
### opus-mt-en-ty
* source languages: en
* target languages: ty
* OPUS readme: [en-ty](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ty/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ty/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ty/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ty/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ty | 46.8 | 0.619 |

**Helsinki-NLP/opus-mt-en-uk** · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 2,009 · library: transformers · likes: 2 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 770
### opus-mt-en-uk
* source languages: en
* target languages: uk
* OPUS readme: [en-uk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-uk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.uk | 50.2 | 0.674 |
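All the benchmark tables in these cards share one layout (`testset | BLEU | chr-F`), so they can be extracted mechanically. A sketch that turns such a markdown table into a dict, skipping the header and separator rows:

```python
# Parse a markdown benchmark table of the form used in these model cards:
# | testset | BLEU | chr-F |
# |---------|------|-------|
# | Tatoeba.en.uk | 50.2 | 0.674 |

def parse_benchmarks(markdown: str) -> dict:
    """Map testset name -> (BLEU, chr-F) for every data row of the table."""
    scores = {}
    for line in markdown.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # skip anything that is not a three-column data row
        if len(cells) != 3 or cells[0] in ("testset", "") or set(cells[1]) <= {"-"}:
            continue
        try:
            scores[cells[0]] = (float(cells[1]), float(cells[2]))
        except ValueError:
            continue
    return scores

table = """| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.uk | 50.2 | 0.674 |"""
print(parse_benchmarks(table))  # {'Tatoeba.en.uk': (50.2, 0.674)}
```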

**Helsinki-NLP/opus-mt-en-umb** · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 8 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776
### opus-mt-en-umb
* source languages: en
* target languages: umb
* OPUS readme: [en-umb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-umb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-umb/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-umb/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-umb/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.umb | 28.6 | 0.510 |

**Helsinki-NLP/opus-mt-en-ur** · author: Helsinki-NLP · model_type: marian · files_per_repo: 11 · downloads_30d: 117 · library: transformers · likes: 1 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: ['en', 'ur'] · datasets: null · co2: null · PRs: 1 (open 1, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 1,977
### eng-urd
* source group: English
* target group: Urdu
* OPUS readme: [eng-urd](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urd/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): urd
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.urd | 12.1 | 0.390 |
### System Info:
- hf_name: eng-urd
- source_languages: eng
- target_languages: urd
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urd/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ur']
- src_constituents: {'eng'}
- tgt_constituents: {'urd'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.test.txt
- src_alpha3: eng
- tgt_alpha3: urd
- short_pair: en-ur
- chrF2_score: 0.39
- bleu: 12.1
- brevity_penalty: 1.0
- ref_len: 12155.0
- src_name: English
- tgt_name: Urdu
- train_date: 2020-06-17
- src_alpha2: en
- tgt_alpha2: ur
- prefer_old: False
- long_pair: eng-urd
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41

**Helsinki-NLP/opus-mt-en-urj** · author: Helsinki-NLP · model_type: marian · files_per_repo: 11 · downloads_30d: 13 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: ['en', 'se', 'fi', 'hu', 'et', 'urj'] · datasets: null · co2: null · PRs: 1 (open 1, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 3,651
### eng-urj
* source group: English
* target group: Uralic languages
* OPUS readme: [eng-urj](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urj/README.md)
* model: transformer
* source language(s): eng
* target language(s): est fin fkv_Latn hun izh kpv krl liv_Latn mdf mhr myv sma sme udm vro
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required, in the form `>>id<<` (id = a valid target language ID)
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urj/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urj/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urj/opus2m-2020-08-02.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2015-enfi-engfin.eng.fin | 18.3 | 0.519 |
| newsdev2018-enet-engest.eng.est | 19.3 | 0.520 |
| newssyscomb2009-enghun.eng.hun | 15.4 | 0.471 |
| newstest2009-enghun.eng.hun | 15.7 | 0.468 |
| newstest2015-enfi-engfin.eng.fin | 20.2 | 0.534 |
| newstest2016-enfi-engfin.eng.fin | 20.7 | 0.541 |
| newstest2017-enfi-engfin.eng.fin | 23.6 | 0.566 |
| newstest2018-enet-engest.eng.est | 20.8 | 0.535 |
| newstest2018-enfi-engfin.eng.fin | 15.8 | 0.499 |
| newstest2019-enfi-engfin.eng.fin | 19.9 | 0.518 |
| newstestB2016-enfi-engfin.eng.fin | 16.6 | 0.509 |
| newstestB2017-enfi-engfin.eng.fin | 19.4 | 0.529 |
| Tatoeba-test.eng-chm.eng.chm | 1.3 | 0.127 |
| Tatoeba-test.eng-est.eng.est | 51.0 | 0.692 |
| Tatoeba-test.eng-fin.eng.fin | 34.6 | 0.597 |
| Tatoeba-test.eng-fkv.eng.fkv | 2.2 | 0.302 |
| Tatoeba-test.eng-hun.eng.hun | 35.6 | 0.591 |
| Tatoeba-test.eng-izh.eng.izh | 5.7 | 0.211 |
| Tatoeba-test.eng-kom.eng.kom | 3.0 | 0.012 |
| Tatoeba-test.eng-krl.eng.krl | 8.5 | 0.230 |
| Tatoeba-test.eng-liv.eng.liv | 2.7 | 0.077 |
| Tatoeba-test.eng-mdf.eng.mdf | 2.8 | 0.007 |
| Tatoeba-test.eng.multi | 35.1 | 0.588 |
| Tatoeba-test.eng-myv.eng.myv | 1.3 | 0.014 |
| Tatoeba-test.eng-sma.eng.sma | 1.8 | 0.095 |
| Tatoeba-test.eng-sme.eng.sme | 6.8 | 0.204 |
| Tatoeba-test.eng-udm.eng.udm | 1.1 | 0.121 |
### System Info:
- hf_name: eng-urj
- source_languages: eng
- target_languages: urj
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urj/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'se', 'fi', 'hu', 'et', 'urj']
- src_constituents: {'eng'}
- tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urj/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urj/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: urj
- short_pair: en-urj
- chrF2_score: 0.588
- bleu: 35.1
- brevity_penalty: 0.943
- ref_len: 59664.0
- src_name: English
- tgt_name: Uralic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: urj
- prefer_old: False
- long_pair: eng-urj
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41

**Helsinki-NLP/opus-mt-en-vi** · author: Helsinki-NLP · model_type: marian · files_per_repo: 11 · downloads_30d: 38,292 · library: transformers · likes: 2 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: ['en', 'vi'] · datasets: null · co2: null · PRs: 1 (open 1, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 2,117
### eng-vie
* source group: English
* target group: Vietnamese
* OPUS readme: [eng-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-vie/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): vie vie_Hani
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required, in the form `>>id<<` (id = a valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.vie | 37.2 | 0.542 |
### System Info:
- hf_name: eng-vie
- source_languages: eng
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'vi']
- src_constituents: {'eng'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.test.txt
- src_alpha3: eng
- tgt_alpha3: vie
- short_pair: en-vi
- chrF2_score: 0.542
- bleu: 37.2
- brevity_penalty: 0.973
- ref_len: 24427.0
- src_name: English
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: en
- tgt_alpha2: vi
- prefer_old: False
- long_pair: eng-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
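The System Info blocks report `bleu`, `brevity_penalty`, and `ref_len` together. Under standard BLEU, the brevity penalty is `exp(1 - ref_len/hyp_len)` when the hypothesis is shorter than the reference, and 1.0 otherwise, so the reported value ties the candidate length to the listed reference length. A sketch of the formula:

```python
import math

def brevity_penalty(ref_len: float, hyp_len: float) -> float:
    """Standard BLEU brevity penalty: penalize hypotheses shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# A BP of 0.973 (as reported for eng-vie above) implies hypotheses slightly
# shorter than the reference, since ref/hyp = 1 - ln(BP) ≈ 1.027 here.
print(round(brevity_penalty(10, 10), 3))  # 1.0
print(round(brevity_penalty(20, 10), 3))  # 0.368
```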

**Helsinki-NLP/opus-mt-en-xh** · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 21 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs: 0 (open 0, merged 0, closed 0) · discussions: 0 (open 0, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768
### opus-mt-en-xh
* source languages: en
* target languages: xh
* OPUS readme: [en-xh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-xh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-xh/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-xh/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-xh/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.xh | 37.9 | 0.652 |

**Helsinki-NLP/opus-mt-en-zh** · author: Helsinki-NLP · model_type: marian · files_per_repo: 13 · downloads_30d: 40,070 · library: transformers · likes: 54 · pipeline: translation · pytorch: true · tensorflow: true · jax: true · license: apache-2.0 · languages: ['en', 'zh'] · datasets: null · co2: null · PRs: 3 (open 2, merged 1, closed 0) · discussions: 2 (open 2, closed 0) · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 2,602
### eng-zho
* source group: English
* target group: Chinese
* OPUS readme: [eng-zho](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho/README.md)
* model: transformer
* source language(s): eng
* target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required, in the form `>>id<<` (id = a valid target language ID)
* download original weights: [opus-2020-07-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.zip)
* test set translations: [opus-2020-07-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.test.txt)
* test set scores: [opus-2020-07-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.zho | 31.4 | 0.268 |
### System Info:
- hf_name: eng-zho
- source_languages: eng
- target_languages: zho
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'zh']
- src_constituents: {'eng'}
- tgt_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.test.txt
- src_alpha3: eng
- tgt_alpha3: zho
- short_pair: en-zh
- chrF2_score: 0.268
- bleu: 31.4
- brevity_penalty: 0.896
- ref_len: 110468.0
- src_name: English
- tgt_name: Chinese
- train_date: 2020-07-17
- src_alpha2: en
- tgt_alpha2: zh
- prefer_old: False
- long_pair: eng-zho
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
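The `brevity_penalty` and `ref_len` fields above come from BLEU scoring. As a reminder of how that factor arises (a sketch of the standard BLEU formula, not the exact script that produced this card):

```python
import math

def brevity_penalty(candidate_len: int, reference_len: int) -> float:
    """Standard BLEU brevity penalty: 1.0 when the candidate corpus is at
    least as long as the reference, exp(1 - r/c) when it is shorter."""
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)

print(brevity_penalty(110_468, 110_468))  # 1.0 (no penalty at equal length)
# A penalty near 0.9 against ref_len 110468 indicates output roughly 10% shorter:
print(round(brevity_penalty(99_500, 110_468), 3))
```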
|
Helsinki-NLP/opus-mt-en-zle
|
Helsinki-NLP
|
marian
| 11 | 13 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['en', 'be', 'ru', 'uk', 'zle']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,847 |
### eng-zle
* source group: English
* target group: East Slavic languages
* OPUS readme: [eng-zle](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zle/README.md)
* model: transformer
* source language(s): eng
* target language(s): bel bel_Latn orv_Cyrl rue rus ukr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012-engrus.eng.rus | 27.4 | 0.550 |
| newstest2013-engrus.eng.rus | 21.4 | 0.493 |
| newstest2015-enru-engrus.eng.rus | 24.2 | 0.534 |
| newstest2016-enru-engrus.eng.rus | 23.3 | 0.518 |
| newstest2017-enru-engrus.eng.rus | 25.3 | 0.541 |
| newstest2018-enru-engrus.eng.rus | 22.4 | 0.527 |
| newstest2019-enru-engrus.eng.rus | 24.1 | 0.505 |
| Tatoeba-test.eng-bel.eng.bel | 20.8 | 0.471 |
| Tatoeba-test.eng.multi | 37.2 | 0.580 |
| Tatoeba-test.eng-orv.eng.orv | 0.6 | 0.130 |
| Tatoeba-test.eng-rue.eng.rue | 1.4 | 0.168 |
| Tatoeba-test.eng-rus.eng.rus | 41.3 | 0.616 |
| Tatoeba-test.eng-ukr.eng.ukr | 38.7 | 0.596 |
### System Info:
- hf_name: eng-zle
- source_languages: eng
- target_languages: zle
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zle/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'be', 'ru', 'uk', 'zle']
- src_constituents: {'eng'}
- tgt_constituents: {'bel', 'orv_Cyrl', 'bel_Latn', 'rus', 'ukr', 'rue'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: zle
- short_pair: en-zle
- chrF2_score: 0.58
- bleu: 37.2
- brevity_penalty: 0.989
- ref_len: 63493.0
- src_name: English
- tgt_name: East Slavic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: zle
- prefer_old: False
- long_pair: eng-zle
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-zls
|
Helsinki-NLP
|
marian
| 11 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['en', 'hr', 'mk', 'bg', 'sl', 'zls']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,482 |
### eng-zls
* source group: English
* target group: South Slavic languages
* OPUS readme: [eng-zls](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zls/README.md)
* model: transformer
* source language(s): eng
* target language(s): bos_Latn bul bul_Latn hrv mkd slv srp_Cyrl srp_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-bul.eng.bul | 47.6 | 0.657 |
| Tatoeba-test.eng-hbs.eng.hbs | 40.7 | 0.619 |
| Tatoeba-test.eng-mkd.eng.mkd | 45.2 | 0.642 |
| Tatoeba-test.eng.multi | 42.7 | 0.622 |
| Tatoeba-test.eng-slv.eng.slv | 17.9 | 0.351 |
### System Info:
- hf_name: eng-zls
- source_languages: eng
- target_languages: zls
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zls/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'hr', 'mk', 'bg', 'sl', 'zls']
- src_constituents: {'eng'}
- tgt_constituents: {'hrv', 'mkd', 'srp_Latn', 'srp_Cyrl', 'bul_Latn', 'bul', 'bos_Latn', 'slv'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: zls
- short_pair: en-zls
- chrF2_score: 0.622
- bleu: 42.7
- brevity_penalty: 0.969
- ref_len: 64788.0
- src_name: English
- tgt_name: South Slavic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: zls
- prefer_old: False
- long_pair: eng-zls
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en-zlw
|
Helsinki-NLP
|
marian
| 11 | 10 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['en', 'pl', 'cs', 'zlw']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 3,056 |
### eng-zlw
* source group: English
* target group: West Slavic languages
* OPUS readme: [eng-zlw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zlw/README.md)
* model: transformer
* source language(s): eng
* target language(s): ces csb_Latn dsb hsb pol
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-engces.eng.ces | 20.6 | 0.488 |
| news-test2008-engces.eng.ces | 18.3 | 0.466 |
| newstest2009-engces.eng.ces | 19.8 | 0.483 |
| newstest2010-engces.eng.ces | 19.8 | 0.486 |
| newstest2011-engces.eng.ces | 20.6 | 0.489 |
| newstest2012-engces.eng.ces | 18.6 | 0.464 |
| newstest2013-engces.eng.ces | 22.3 | 0.495 |
| newstest2015-encs-engces.eng.ces | 21.7 | 0.502 |
| newstest2016-encs-engces.eng.ces | 24.5 | 0.521 |
| newstest2017-encs-engces.eng.ces | 20.1 | 0.480 |
| newstest2018-encs-engces.eng.ces | 19.9 | 0.483 |
| newstest2019-encs-engces.eng.ces | 21.2 | 0.490 |
| Tatoeba-test.eng-ces.eng.ces | 43.7 | 0.632 |
| Tatoeba-test.eng-csb.eng.csb | 1.2 | 0.188 |
| Tatoeba-test.eng-dsb.eng.dsb | 1.5 | 0.167 |
| Tatoeba-test.eng-hsb.eng.hsb | 5.7 | 0.199 |
| Tatoeba-test.eng.multi | 42.8 | 0.632 |
| Tatoeba-test.eng-pol.eng.pol | 43.2 | 0.641 |
### System Info:
- hf_name: eng-zlw
- source_languages: eng
- target_languages: zlw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zlw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'pl', 'cs', 'zlw']
- src_constituents: {'eng'}
- tgt_constituents: {'csb_Latn', 'dsb', 'hsb', 'pol', 'ces'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: zlw
- short_pair: en-zlw
- chrF2_score: 0.632
- bleu: 42.8
- brevity_penalty: 0.973
- ref_len: 65397.0
- src_name: English
- tgt_name: West Slavic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: zlw
- prefer_old: False
- long_pair: eng-zlw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-en_el_es_fi-en_el_es_fi
|
Helsinki-NLP
|
marian
| 9 | 7 |
transformers
| 1 |
translation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,845 |
### opus-mt-en_el_es_fi-en_el_es_fi
* source languages: en,el,es,fi
* target languages: en,el,es,fi
* OPUS readme: [en+el+es+fi-en+el+es+fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en+el+es+fi-en+el+es+fi/README.md)
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-03-02.zip](https://object.pouta.csc.fi/OPUS-MT-models/en+el+es+fi-en+el+es+fi/opus-2020-03-02.zip)
* test set translations: [opus-2020-03-02.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en+el+es+fi-en+el+es+fi/opus-2020-03-02.test.txt)
* test set scores: [opus-2020-03-02.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en+el+es+fi-en+el+es+fi/opus-2020-03-02.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2015-enfi.en.fi | 16.0 | 0.498 |
| newssyscomb2009.en.es | 29.9 | 0.570 |
| newssyscomb2009.es.en | 29.7 | 0.569 |
| news-test2008.en.es | 27.3 | 0.549 |
| news-test2008.es.en | 27.3 | 0.548 |
| newstest2009.en.es | 28.4 | 0.564 |
| newstest2009.es.en | 28.4 | 0.564 |
| newstest2010.en.es | 34.0 | 0.599 |
| newstest2010.es.en | 34.0 | 0.599 |
| newstest2011.en.es | 35.1 | 0.600 |
| newstest2012.en.es | 35.4 | 0.602 |
| newstest2013.en.es | 31.9 | 0.576 |
| newstest2015-enfi.en.fi | 17.8 | 0.509 |
| newstest2016-enfi.en.fi | 19.0 | 0.521 |
| newstest2017-enfi.en.fi | 21.2 | 0.539 |
| newstest2018-enfi.en.fi | 13.9 | 0.478 |
| newstest2019-enfi.en.fi | 18.8 | 0.503 |
| newstestB2016-enfi.en.fi | 14.9 | 0.491 |
| newstestB2017-enfi.en.fi | 16.9 | 0.503 |
| simplification.en.en | 63.0 | 0.798 |
| Tatoeba.en.fi | 56.7 | 0.719 |
|
Helsinki-NLP/opus-mt-eo-af
|
Helsinki-NLP
|
marian
| 11 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'af']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,002 |
### epo-afr
* source group: Esperanto
* target group: Afrikaans
* OPUS readme: [epo-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-afr/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): afr
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.afr | 19.5 | 0.369 |
### System Info:
- hf_name: epo-afr
- source_languages: epo
- target_languages: afr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-afr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'af']
- src_constituents: {'epo'}
- tgt_constituents: {'afr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: afr
- short_pair: eo-af
- chrF2_score: 0.369
- bleu: 19.5
- brevity_penalty: 0.957
- ref_len: 8432.0
- src_name: Esperanto
- tgt_name: Afrikaans
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: af
- prefer_old: False
- long_pair: epo-afr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-bg
|
Helsinki-NLP
|
marian
| 11 | 15 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'bg']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,014 |
### epo-bul
* source group: Esperanto
* target group: Bulgarian
* OPUS readme: [epo-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-bul/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): bul
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.bul | 19.0 | 0.395 |
### System Info:
- hf_name: epo-bul
- source_languages: epo
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'bg']
- src_constituents: {'epo'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: bul
- short_pair: eo-bg
- chrF2_score: 0.395
- bleu: 19.0
- brevity_penalty: 0.891
- ref_len: 3961.0
- src_name: Esperanto
- tgt_name: Bulgarian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: bg
- prefer_old: False
- long_pair: epo-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-cs
|
Helsinki-NLP
|
marian
| 11 | 21 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'cs']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,982 |
### epo-ces
* source group: Esperanto
* target group: Czech
* OPUS readme: [epo-ces](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ces/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): ces
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.ces | 17.5 | 0.376 |
### System Info:
- hf_name: epo-ces
- source_languages: epo
- target_languages: ces
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ces/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'cs']
- src_constituents: {'epo'}
- tgt_constituents: {'ces'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: ces
- short_pair: eo-cs
- chrF2_score: 0.376
- bleu: 17.5
- brevity_penalty: 0.922
- ref_len: 22148.0
- src_name: Esperanto
- tgt_name: Czech
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: cs
- prefer_old: False
- long_pair: epo-ces
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-da
|
Helsinki-NLP
|
marian
| 11 | 19 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'da']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,011 |
### epo-dan
* source group: Esperanto
* target group: Danish
* OPUS readme: [epo-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-dan/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): dan
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.dan | 21.6 | 0.407 |
### System Info:
- hf_name: epo-dan
- source_languages: epo
- target_languages: dan
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-dan/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'da']
- src_constituents: {'epo'}
- tgt_constituents: {'dan'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: dan
- short_pair: eo-da
- chrF2_score: 0.407
- bleu: 21.6
- brevity_penalty: 0.936
- ref_len: 72349.0
- src_name: Esperanto
- tgt_name: Danish
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: da
- prefer_old: False
- long_pair: epo-dan
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-de
|
Helsinki-NLP
|
marian
| 10 | 23 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-eo-de
* source languages: eo
* target languages: de
* OPUS readme: [eo-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eo.de | 45.5 | 0.644 |
|
Helsinki-NLP/opus-mt-eo-el
|
Helsinki-NLP
|
marian
| 11 | 10 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'el']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,038 |
### epo-ell
* source group: Esperanto
* target group: Modern Greek (1453-)
* OPUS readme: [epo-ell](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ell/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): ell
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.ell | 23.2 | 0.438 |
### System Info:
- hf_name: epo-ell
- source_languages: epo
- target_languages: ell
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ell/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'el']
- src_constituents: {'epo'}
- tgt_constituents: {'ell'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: ell
- short_pair: eo-el
- chrF2_score: 0.438
- bleu: 23.2
- brevity_penalty: 0.916
- ref_len: 3892.0
- src_name: Esperanto
- tgt_name: Modern Greek (1453-)
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: el
- prefer_old: False
- long_pair: epo-ell
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-en
|
Helsinki-NLP
|
marian
| 10 | 687 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-eo-en
* source languages: eo
* target languages: en
* OPUS readme: [eo-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eo.en | 54.8 | 0.694 |
|
Helsinki-NLP/opus-mt-eo-es
|
Helsinki-NLP
|
marian
| 10 | 102 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-eo-es
* source languages: eo
* target languages: es
* OPUS readme: [eo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eo.es | 44.2 | 0.631 |
|
Helsinki-NLP/opus-mt-eo-fi
|
Helsinki-NLP
|
marian
| 11 | 15 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'fi']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,986 |
### epo-fin
* source group: Esperanto
* target group: Finnish
* OPUS readme: [epo-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-fin/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): fin
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-fin/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-fin/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-fin/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.fin | 15.9 | 0.371 |
### System Info:
- hf_name: epo-fin
- source_languages: epo
- target_languages: fin
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-fin/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'fi']
- src_constituents: {'epo'}
- tgt_constituents: {'fin'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-fin/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-fin/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: fin
- short_pair: eo-fi
- chrF2_score: 0.371
- bleu: 15.9
- brevity_penalty: 0.894
- ref_len: 15881.0
- src_name: Esperanto
- tgt_name: Finnish
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: fi
- prefer_old: False
- long_pair: epo-fin
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-fr
|
Helsinki-NLP
|
marian
| 10 | 37 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-eo-fr
* source languages: eo
* target languages: fr
* OPUS readme: [eo-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eo.fr | 50.9 | 0.675 |
|
Helsinki-NLP/opus-mt-eo-he
|
Helsinki-NLP
|
marian
| 11 | 16 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'he']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,984 |
### epo-heb
* source group: Esperanto
* target group: Hebrew
* OPUS readme: [epo-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-heb/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): heb
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-heb/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-heb/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-heb/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.heb | 11.5 | 0.306 |
### System Info:
- hf_name: epo-heb
- source_languages: epo
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'he']
- src_constituents: {'epo'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-heb/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-heb/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: heb
- short_pair: eo-he
- chrF2_score: 0.306
- bleu: 11.5
- brevity_penalty: 0.943
- ref_len: 65645.0
- src_name: Esperanto
- tgt_name: Hebrew
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: he
- prefer_old: False
- long_pair: epo-heb
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-hu
|
Helsinki-NLP
|
marian
| 11 | 17 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'hu']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,004 |
### epo-hun
* source group: Esperanto
* target group: Hungarian
* OPUS readme: [epo-hun](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-hun/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): hun
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hun/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hun/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hun/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.hun | 12.8 | 0.333 |
### System Info:
- hf_name: epo-hun
- source_languages: epo
- target_languages: hun
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-hun/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'hu']
- src_constituents: {'epo'}
- tgt_constituents: {'hun'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hun/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hun/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: hun
- short_pair: eo-hu
- chrF2_score: 0.333
- bleu: 12.8
- brevity_penalty: 0.914
- ref_len: 65704.0
- src_name: Esperanto
- tgt_name: Hungarian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: hu
- prefer_old: False
- long_pair: epo-hun
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
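The `brevity_penalty` and `ref_len` fields above follow the standard BLEU definition: when the hypothesis corpus is shorter than the reference, the score is scaled by exp(1 - ref_len/hyp_len), otherwise the penalty is 1. A minimal sketch of that formula (the hypothesis length below is made up for illustration, not taken from this card):

```python
import math

def brevity_penalty(ref_len: float, hyp_len: float) -> float:
    """Standard BLEU brevity penalty: 1.0 if the hypothesis is at least
    as long as the reference, otherwise exp(1 - ref_len / hyp_len)."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# Hypothetical hypothesis length, paired with this card's ref_len:
print(round(brevity_penalty(ref_len=65704, hyp_len=60000), 3))
```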
|
Helsinki-NLP/opus-mt-eo-it
|
Helsinki-NLP
|
marian
| 11 | 20 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'it']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,999 |
### epo-ita
* source group: Esperanto
* target group: Italian
* OPUS readme: [epo-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ita/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): ita
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.ita | 23.8 | 0.465 |
### System Info:
- hf_name: epo-ita
- source_languages: epo
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'it']
- src_constituents: {'epo'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: ita
- short_pair: eo-it
- chrF2_score: 0.465
- bleu: 23.8
- brevity_penalty: 0.942
- ref_len: 67118.0
- src_name: Esperanto
- tgt_name: Italian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: it
- prefer_old: False
- long_pair: epo-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-nl
|
Helsinki-NLP
|
marian
| 11 | 16 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'nl']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,995 |
### epo-nld
* source group: Esperanto
* target group: Dutch
* OPUS readme: [epo-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-nld/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): nld
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.nld | 15.3 | 0.337 |
### System Info:
- hf_name: epo-nld
- source_languages: epo
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'nl']
- src_constituents: {'epo'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: nld
- short_pair: eo-nl
- chrF2_score: 0.337
- bleu: 15.3
- brevity_penalty: 0.864
- ref_len: 78770.0
- src_name: Esperanto
- tgt_name: Dutch
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: nl
- prefer_old: False
- long_pair: epo-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-pl
|
Helsinki-NLP
|
marian
| 11 | 44 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'pl']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,984 |
### epo-pol
* source group: Esperanto
* target group: Polish
* OPUS readme: [epo-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-pol/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): pol
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-pol/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-pol/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-pol/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.pol | 17.2 | 0.392 |
### System Info:
- hf_name: epo-pol
- source_languages: epo
- target_languages: pol
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-pol/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'pl']
- src_constituents: {'epo'}
- tgt_constituents: {'pol'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-pol/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-pol/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: pol
- short_pair: eo-pl
- chrF2_score: 0.392
- bleu: 17.2
- brevity_penalty: 0.893
- ref_len: 15343.0
- src_name: Esperanto
- tgt_name: Polish
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: pl
- prefer_old: False
- long_pair: epo-pol
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-pt
|
Helsinki-NLP
|
marian
| 11 | 15 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'pt']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,006 |
### epo-por
* source group: Esperanto
* target group: Portuguese
* OPUS readme: [epo-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-por/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): por
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.por | 20.2 | 0.438 |
### System Info:
- hf_name: epo-por
- source_languages: epo
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'pt']
- src_constituents: {'epo'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: por
- short_pair: eo-pt
- chrF2_score: 0.438
- bleu: 20.2
- brevity_penalty: 0.895
- ref_len: 89991.0
- src_name: Esperanto
- tgt_name: Portuguese
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: pt
- prefer_old: False
- long_pair: epo-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-ro
|
Helsinki-NLP
|
marian
| 11 | 15 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'ro']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,000 |
### epo-ron
* source group: Esperanto
* target group: Romanian
* OPUS readme: [epo-ron](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ron/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): ron
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ron/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ron/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ron/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.ron | 19.4 | 0.420 |
### System Info:
- hf_name: epo-ron
- source_languages: epo
- target_languages: ron
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ron/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'ro']
- src_constituents: {'epo'}
- tgt_constituents: {'ron'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ron/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ron/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: ron
- short_pair: eo-ro
- chrF2_score: 0.42
- bleu: 19.4
- brevity_penalty: 0.918
- ref_len: 25619.0
- src_name: Esperanto
- tgt_name: Romanian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: ro
- prefer_old: False
- long_pair: epo-ron
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-ru
|
Helsinki-NLP
|
marian
| 11 | 250 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'ru']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,999 |
### epo-rus
* source group: Esperanto
* target group: Russian
* OPUS readme: [epo-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-rus/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.rus | 17.7 | 0.379 |
### System Info:
- hf_name: epo-rus
- source_languages: epo
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'ru']
- src_constituents: {'epo'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: rus
- short_pair: eo-ru
- chrF2_score: 0.379
- bleu: 17.7
- brevity_penalty: 0.918
- ref_len: 71288.0
- src_name: Esperanto
- tgt_name: Russian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: ru
- prefer_old: False
- long_pair: epo-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-sh
|
Helsinki-NLP
|
marian
| 11 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'sh']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,181 |
### epo-hbs
* source group: Esperanto
* target group: Serbo-Croatian
* OPUS readme: [epo-hbs](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-hbs/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): bos_Latn hrv srp_Cyrl srp_Latn
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hbs/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hbs/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hbs/opus-2020-06-16.eval.txt)
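As the list above notes, this multi-target model requires each source sentence to begin with a `>>id<<` token naming the target variant (one of `bos_Latn`, `hrv`, `srp_Cyrl`, `srp_Latn`). A minimal sketch of preparing inputs that way; the helper name is ours, not part of the model or library:

```python
# Target-language IDs accepted by this epo-hbs model (from the card above).
VALID_TARGETS = {"bos_Latn", "hrv", "srp_Cyrl", "srp_Latn"}

def add_target_token(sentence: str, target_id: str) -> str:
    """Prepend the sentence-initial >>id<< token required by
    multi-target OPUS-MT models."""
    if target_id not in VALID_TARGETS:
        raise ValueError(f"unknown target language ID: {target_id}")
    return f">>{target_id}<< {sentence}"

print(add_target_token("Saluton, mondo!", "hrv"))
# → >>hrv<< Saluton, mondo!
```

The tagged string is what gets passed to the tokenizer; without the token, the model has no way to know which Serbo-Croatian variant to produce.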
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.hbs | 13.6 | 0.351 |
### System Info:
- hf_name: epo-hbs
- source_languages: epo
- target_languages: hbs
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-hbs/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'sh']
- src_constituents: {'epo'}
- tgt_constituents: {'hrv', 'srp_Cyrl', 'bos_Latn', 'srp_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hbs/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hbs/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: hbs
- short_pair: eo-sh
- chrF2_score: 0.351
- bleu: 13.6
- brevity_penalty: 0.888
- ref_len: 17999.0
- src_name: Esperanto
- tgt_name: Serbo-Croatian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: sh
- prefer_old: False
- long_pair: epo-hbs
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-eo-sv
|
Helsinki-NLP
|
marian
| 11 | 15 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'sv']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,013 |
### epo-swe
* source group: Esperanto
* target group: Swedish
* OPUS readme: [epo-swe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-swe/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): swe
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.swe | 29.5 | 0.463 |
### System Info:
- hf_name: epo-swe
- source_languages: epo
- target_languages: swe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-swe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'sv']
- src_constituents: {'epo'}
- tgt_constituents: {'swe'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: swe
- short_pair: eo-sv
- chrF2_score: 0.463
- bleu: 29.5
- brevity_penalty: 0.964
- ref_len: 10977.0
- src_name: Esperanto
- tgt_name: Swedish
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: sv
- prefer_old: False
- long_pair: epo-swe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-es-NORWAY
|
Helsinki-NLP
|
marian
| 10 | 11 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,044 |
### opus-mt-es-NORWAY
* source languages: es
* target languages: nb_NO,nb,nn_NO,nn,nog,no_nb,no
* OPUS readme: [es-nb_NO+nb+nn_NO+nn+nog+no_nb+no](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.no | 31.6 | 0.523 |
|
Helsinki-NLP/opus-mt-es-aed
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-es-aed
* source languages: es
* target languages: aed
* OPUS readme: [es-aed](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-aed/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-aed/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-aed/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-aed/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.aed | 89.2 | 0.915 |
|
Helsinki-NLP/opus-mt-es-af
|
Helsinki-NLP
|
marian
| 11 | 93 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['es', 'af']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,002 |
### spa-afr
* source group: Spanish
* target group: Afrikaans
* OPUS readme: [spa-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-afr/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): afr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-afr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-afr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-afr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.afr | 55.0 | 0.718 |
### System Info:
- hf_name: spa-afr
- source_languages: spa
- target_languages: afr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-afr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'af']
- src_constituents: {'spa'}
- tgt_constituents: {'afr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-afr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-afr/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: afr
- short_pair: es-af
- chrF2_score: 0.718
- bleu: 55.0
- brevity_penalty: 0.974
- ref_len: 3044.0
- src_name: Spanish
- tgt_name: Afrikaans
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: af
- prefer_old: False
- long_pair: spa-afr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-es-ar
|
Helsinki-NLP
|
marian
| 11 | 530 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['es', 'ar']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,169 |
### spa-ara
* source group: Spanish
* target group: Arabic
* OPUS readme: [spa-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-ara/README.md)
* model: transformer
* source language(s): spa
* target language(s): apc apc_Latn ara arq
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.ara | 20.0 | 0.517 |
### System Info:
- hf_name: spa-ara
- source_languages: spa
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'ar']
- src_constituents: {'spa'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.test.txt
- src_alpha3: spa
- tgt_alpha3: ara
- short_pair: es-ar
- chrF2_score: 0.517
- bleu: 20.0
- brevity_penalty: 0.939
- ref_len: 7547.0
- src_name: Spanish
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: es
- tgt_alpha2: ar
- prefer_old: False
- long_pair: spa-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-es-ase
|
Helsinki-NLP
|
marian
| 10 | 11 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-es-ase
* source languages: es
* target languages: ase
* OPUS readme: [es-ase](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ase/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ase/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ase/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ase/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ase | 31.5 | 0.488 |
|
Helsinki-NLP/opus-mt-es-bcl
|
Helsinki-NLP
|
marian
| 10 | 25 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-es-bcl
* source languages: es
* target languages: bcl
* OPUS readme: [es-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-bcl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-bcl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bcl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bcl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.bcl | 37.1 | 0.586 |
|
Helsinki-NLP/opus-mt-es-ber
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 778 |
### opus-mt-es-ber
* source languages: es
* target languages: ber
* OPUS readme: [es-ber](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ber/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.ber | 21.8 | 0.444 |
|
Helsinki-NLP/opus-mt-es-bg
|
Helsinki-NLP
|
marian
| 11 | 42 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['es', 'bg']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,989 |
### spa-bul
* source group: Spanish
* target group: Bulgarian
* OPUS readme: [spa-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-bul/README.md)
* model: transformer
* source language(s): spa
* target language(s): bul
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-bul/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.bul | 50.9 | 0.674 |
### System Info:
- hf_name: spa-bul
- source_languages: spa
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'bg']
- src_constituents: {'spa'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-bul/opus-2020-07-03.test.txt
- src_alpha3: spa
- tgt_alpha3: bul
- short_pair: es-bg
- chrF2_score: 0.674
- bleu: 50.9
- brevity_penalty: 0.955
- ref_len: 1707.0
- src_name: Spanish
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: es
- tgt_alpha2: bg
- prefer_old: False
- long_pair: spa-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-es-bi
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 |
### opus-mt-es-bi
* source languages: es
* target languages: bi
* OPUS readme: [es-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-bi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-bi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.bi | 28.0 | 0.473 |
|
Helsinki-NLP/opus-mt-es-bzs
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-es-bzs
* source languages: es
* target languages: bzs
* OPUS readme: [es-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-bzs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-bzs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bzs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bzs/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.bzs | 26.4 | 0.451 |
|
Helsinki-NLP/opus-mt-es-ca
|
Helsinki-NLP
|
marian
| 11 | 58,699 |
transformers
| 1 |
translation
| true | true | false |
apache-2.0
|
['es', 'ca']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,997 |
### spa-cat
* source group: Spanish
* target group: Catalan
* OPUS readme: [spa-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-cat/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): cat
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.cat | 68.9 | 0.832 |
### System Info:
- hf_name: spa-cat
- source_languages: spa
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'ca']
- src_constituents: {'spa'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: cat
- short_pair: es-ca
- chrF2_score: 0.8320000000000001
- bleu: 68.9
- brevity_penalty: 1.0
- ref_len: 12343.0
- src_name: Spanish
- tgt_name: Catalan
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: ca
- prefer_old: False
- long_pair: spa-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-es-ceb
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-es-ceb
* source languages: es
* target languages: ceb
* OPUS readme: [es-ceb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ceb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ceb/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ceb/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ceb/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ceb | 33.9 | 0.564 |
|
Helsinki-NLP/opus-mt-es-crs
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-es-crs
* source languages: es
* target languages: crs
* OPUS readme: [es-crs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-crs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-crs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-crs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-crs/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.crs | 26.4 | 0.453 |
|
Helsinki-NLP/opus-mt-es-cs
|
Helsinki-NLP
|
marian
| 10 | 73 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-es-cs
* source languages: es
* target languages: cs
* OPUS readme: [es-cs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-cs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-cs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-cs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-cs/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.cs | 46.4 | 0.655 |
|
Helsinki-NLP/opus-mt-es-csg
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-es-csg
* source languages: es
* target languages: csg
* OPUS readme: [es-csg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-csg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-csg/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-csg/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-csg/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.csg | 91.2 | 0.937 |
|
Helsinki-NLP/opus-mt-es-csn
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-es-csn
* source languages: es
* target languages: csn
* OPUS readme: [es-csn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-csn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-csn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-csn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-csn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.csn | 87.8 | 0.901 |
|
Helsinki-NLP/opus-mt-es-da
|
Helsinki-NLP
|
marian
| 10 | 67 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-es-da
* source languages: es
* target languages: da
* OPUS readme: [es-da](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-da/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-da/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-da/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-da/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.da | 55.7 | 0.712 |
|
Helsinki-NLP/opus-mt-es-de
|
Helsinki-NLP
|
marian
| 10 | 2,345 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-es-de
* source languages: es
* target languages: de
* OPUS readme: [es-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-de/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-de/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-de/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.de | 50.0 | 0.683 |
|
Helsinki-NLP/opus-mt-es-ee
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 |
### opus-mt-es-ee
* source languages: es
* target languages: ee
* OPUS readme: [es-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ee/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ee/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ee/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ee/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ee | 25.6 | 0.470 |
|
Helsinki-NLP/opus-mt-es-efi
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-es-efi
* source languages: es
* target languages: efi
* OPUS readme: [es-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-efi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-efi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-efi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-efi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.efi | 24.6 | 0.452 |
|
Helsinki-NLP/opus-mt-es-el
|
Helsinki-NLP
|
marian
| 10 | 44 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-es-el
* source languages: es
* target languages: el
* OPUS readme: [es-el](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-el/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-29.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-el/opus-2020-01-29.zip)
* test set translations: [opus-2020-01-29.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-el/opus-2020-01-29.test.txt)
* test set scores: [opus-2020-01-29.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-el/opus-2020-01-29.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.el | 48.6 | 0.661 |
|
Helsinki-NLP/opus-mt-es-en
|
Helsinki-NLP
|
marian
| 11 | 405,427 |
transformers
| 14 |
translation
| true | true | false |
apache-2.0
|
['es', 'en']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,356 |
### spa-eng
* source group: Spanish
* target group: English
* OPUS readme: [spa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eng/README.md)
* model: transformer
* source language(s): spa
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.zip)
* test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.test.txt)
* test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-spaeng.spa.eng | 30.6 | 0.570 |
| news-test2008-spaeng.spa.eng | 27.9 | 0.553 |
| newstest2009-spaeng.spa.eng | 30.4 | 0.572 |
| newstest2010-spaeng.spa.eng | 36.1 | 0.614 |
| newstest2011-spaeng.spa.eng | 34.2 | 0.599 |
| newstest2012-spaeng.spa.eng | 37.9 | 0.624 |
| newstest2013-spaeng.spa.eng | 35.3 | 0.609 |
| Tatoeba-test.spa.eng | 59.6 | 0.739 |
### System Info:
- hf_name: spa-eng
- source_languages: spa
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'en']
- src_constituents: {'spa'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.test.txt
- src_alpha3: spa
- tgt_alpha3: eng
- short_pair: es-en
- chrF2_score: 0.7390000000000001
- bleu: 59.6
- brevity_penalty: 0.9740000000000001
- ref_len: 79376.0
- src_name: Spanish
- tgt_name: English
- train_date: 2020-08-18 00:00:00
- src_alpha2: es
- tgt_alpha2: en
- prefer_old: False
- long_pair: spa-eng
- helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82
- transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9
- port_machine: brutasse
- port_time: 2020-08-24-18:20
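Every entry above is published on the Hub under a repo id of the form `Helsinki-NLP/opus-mt-{short_pair}`. A minimal sketch of loading one of these pairs with the `transformers` MarianMT classes; the `from_pretrained` calls download weights, so they are deferred into the function and guarded behind `__main__` (inference here is illustrative, not part of the cards):

```python
def repo_id(short_pair: str) -> str:
    """Build the Hub repo id from a short pair code such as 'es-en'."""
    return f"Helsinki-NLP/opus-mt-{short_pair}"

def translate(texts, short_pair="es-en"):
    """Translate a batch of sentences with the named OPUS-MT pair."""
    # Imported lazily: requires the transformers and sentencepiece packages,
    # and triggers a model download on first use.
    from transformers import MarianMTModel, MarianTokenizer

    name = repo_id(short_pair)
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(t, skip_special_tokens=True) for t in generated]

if __name__ == "__main__":
    print(translate(["Hola, ¿cómo estás?"]))
```

The same two-line pattern works for any pair listed in this file, e.g. `translate(texts, "es-fr")`.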
|
Helsinki-NLP/opus-mt-es-eo
|
Helsinki-NLP
|
marian
| 10 | 194 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-es-eo
* source languages: es
* target languages: eo
* OPUS readme: [es-eo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-eo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-eo/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-eo/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-eo/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.eo | 44.7 | 0.657 |
|
Helsinki-NLP/opus-mt-es-es
|
Helsinki-NLP
|
marian
| 10 | 36 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-es-es
* source languages: es
* target languages: es
* OPUS readme: [es-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-es/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-es/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-es/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.es | 51.7 | 0.688 |
|
Helsinki-NLP/opus-mt-es-et
|
Helsinki-NLP
|
marian
| 10 | 121 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 |
### opus-mt-es-et
* source languages: es
* target languages: et
* OPUS readme: [es-et](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-et/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-et/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-et/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-et/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.et | 20.7 | 0.466 |
|
Helsinki-NLP/opus-mt-es-eu
|
Helsinki-NLP
|
marian
| 11 | 173 |
transformers
| 1 |
translation
| true | true | false |
apache-2.0
|
['es', 'eu']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,984 |
### spa-eus
* source group: Spanish
* target group: Basque
* OPUS readme: [spa-eus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eus/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): eus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.eus | 37.0 | 0.638 |
### System Info:
- hf_name: spa-eus
- source_languages: spa
- target_languages: eus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'eu']
- src_constituents: {'spa'}
- tgt_constituents: {'eus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eus/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: eus
- short_pair: es-eu
- chrF2_score: 0.638
- bleu: 37.0
- brevity_penalty: 0.983
- ref_len: 10945.0
- src_name: Spanish
- tgt_name: Basque
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: eu
- prefer_old: False
- long_pair: spa-eus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-es-fi
|
Helsinki-NLP
|
marian
| 10 | 279 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-es-fi
* source languages: es
* target languages: fi
* OPUS readme: [es-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-04-12.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-fi/opus-2020-04-12.zip)
* test set translations: [opus-2020-04-12.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fi/opus-2020-04-12.test.txt)
* test set scores: [opus-2020-04-12.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fi/opus-2020-04-12.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.fi | 44.4 | 0.672 |
|
Helsinki-NLP/opus-mt-es-fj
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 |
### opus-mt-es-fj
* source languages: es
* target languages: fj
* OPUS readme: [es-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-fj/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fj/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fj/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.fj | 24.8 | 0.472 |
|
Helsinki-NLP/opus-mt-es-fr
|
Helsinki-NLP
|
marian
| 10 | 11,967 |
transformers
| 1 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,054 |
### opus-mt-es-fr
* source languages: es
* target languages: fr
* OPUS readme: [es-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.es.fr | 33.6 | 0.610 |
| news-test2008.es.fr | 32.0 | 0.585 |
| newstest2009.es.fr | 32.5 | 0.590 |
| newstest2010.es.fr | 35.0 | 0.615 |
| newstest2011.es.fr | 33.9 | 0.607 |
| newstest2012.es.fr | 32.4 | 0.602 |
| newstest2013.es.fr | 32.1 | 0.593 |
| Tatoeba.es.fr | 58.4 | 0.731 |
|
Helsinki-NLP/opus-mt-es-gaa
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-es-gaa
* source languages: es
* target languages: gaa
* OPUS readme: [es-gaa](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-gaa/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-gaa/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-gaa/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-gaa/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.gaa | 27.8 | 0.479 |
|
Helsinki-NLP/opus-mt-es-gil
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-es-gil
* source languages: es
* target languages: gil
* OPUS readme: [es-gil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-gil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-gil/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-gil/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-gil/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.gil | 23.8 | 0.470 |
|
Helsinki-NLP/opus-mt-es-gl
|
Helsinki-NLP
|
marian
| 11 | 51 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['es', 'gl']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,997 |
### spa-glg
* source group: Spanish
* target group: Galician
* OPUS readme: [spa-glg](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-glg/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): glg
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.glg | 67.6 | 0.808 |
### System Info:
- hf_name: spa-glg
- source_languages: spa
- target_languages: glg
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-glg/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'gl']
- src_constituents: {'spa'}
- tgt_constituents: {'glg'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-glg/opus-2020-06-16.test.txt
- src_alpha3: spa
- tgt_alpha3: glg
- short_pair: es-gl
- chrF2_score: 0.8079999999999999
- bleu: 67.6
- brevity_penalty: 0.993
- ref_len: 16581.0
- src_name: Spanish
- tgt_name: Galician
- train_date: 2020-06-16
- src_alpha2: es
- tgt_alpha2: gl
- prefer_old: False
- long_pair: spa-glg
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-es-guw
|
Helsinki-NLP
|
marian
| 10 | 10 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-es-guw
* source languages: es
* target languages: guw
* OPUS readme: [es-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-guw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-guw/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-guw/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-guw/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.guw | 28.6 | 0.480 |
|
Helsinki-NLP/opus-mt-es-ha
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 |
### opus-mt-es-ha
* source languages: es
* target languages: ha
* OPUS readme: [es-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ha/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ha/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ha/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ha/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ha | 20.6 | 0.421 |
|
Helsinki-NLP/opus-mt-es-he
|
Helsinki-NLP
|
marian
| 12 | 20 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['es', 'he']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,012 |
### es-he
* source group: Spanish
* target group: Hebrew
* OPUS readme: [spa-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-heb/README.md)
* model: transformer
* source language(s): spa
* target language(s): heb
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-heb/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-heb/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-heb/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.heb | 43.6 | 0.636 |
### System Info:
- hf_name: es-he
- source_languages: spa
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'he']
- src_constituents: ('Spanish', {'spa'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: spa-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-heb/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-heb/opus-2020-12-10.test.txt
- src_alpha3: spa
- tgt_alpha3: heb
- chrF2_score: 0.636
- bleu: 43.6
- brevity_penalty: 0.992
- ref_len: 12112.0
- src_name: Spanish
- tgt_name: Hebrew
- train_date: 2020-12-10 00:00:00
- src_alpha2: es
- tgt_alpha2: he
- prefer_old: False
- short_pair: es-he
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-11:41
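The `brevity_penalty` and `ref_len` fields in these System Info blocks follow the standard BLEU brevity penalty, BP = min(1, exp(1 - ref_len/hyp_len)). A small sketch of the formula and its inverse; the formula is the standard BLEU one, and the assumption is that the cards' fields use exactly these semantics:

```python
import math

def brevity_penalty(hyp_len: float, ref_len: float) -> float:
    """Standard BLEU brevity penalty: 1.0 unless the hypothesis is shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

def implied_hyp_len(bp: float, ref_len: float) -> float:
    """Invert the penalty to recover the hypothesis length a reported BP implies."""
    if bp >= 1.0:
        return ref_len  # only a lower bound: any hyp_len >= ref_len gives BP = 1
    # bp = exp(1 - ref/hyp)  =>  hyp = ref / (1 - ln bp)
    return ref_len / (1.0 - math.log(bp))
```

For example, the es-he card above reports `brevity_penalty: 0.992` with `ref_len: 12112.0`, from which `implied_hyp_len` recovers the approximate total hypothesis length.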
|
Helsinki-NLP/opus-mt-es-hil
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-es-hil
* source languages: es
* target languages: hil
* OPUS readme: [es-hil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-hil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-hil/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-hil/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-hil/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.hil | 35.8 | 0.584 |
| Helsinki-NLP/opus-mt-es-ho | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-es-ho
* source languages: es
* target languages: ho
* OPUS readme: [es-ho](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ho/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ho/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ho/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ho/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ho | 22.8 | 0.463 |
| Helsinki-NLP/opus-mt-es-hr | Helsinki-NLP | marian | 10 | 15 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-es-hr
* source languages: es
* target languages: hr
* OPUS readme: [es-hr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-hr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-hr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-hr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-hr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.hr | 21.7 | 0.459 |
| Helsinki-NLP/opus-mt-es-ht | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-es-ht
* source languages: es
* target languages: ht
* OPUS readme: [es-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ht/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ht/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ht/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ht/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ht | 23.3 | 0.407 |
| Helsinki-NLP/opus-mt-es-id | Helsinki-NLP | marian | 10 | 24 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 775 |
### opus-mt-es-id
* source languages: es
* target languages: id
* OPUS readme: [es-id](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-id/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-id/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-id/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-id/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.es.id | 21.1 | 0.516 |
| Helsinki-NLP/opus-mt-es-ig | Helsinki-NLP | marian | 10 | 12 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-es-ig
* source languages: es
* target languages: ig
* OPUS readme: [es-ig](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ig/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ig/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ig/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ig/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ig | 27.0 | 0.434 |
| Helsinki-NLP/opus-mt-es-ilo | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-es-ilo
* source languages: es
* target languages: ilo
* OPUS readme: [es-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ilo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ilo/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ilo/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ilo/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ilo | 31.0 | 0.544 |
| Helsinki-NLP/opus-mt-es-is | Helsinki-NLP | marian | 11 | 15 | transformers | 0 | translation | true | true | false | apache-2.0 | ['es', 'is'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,987 |
### spa-isl
* source group: Spanish
* target group: Icelandic
* OPUS readme: [spa-isl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-isl/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): isl
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-isl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-isl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-isl/opus-2020-06-17.eval.txt)
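The `SentencePiece (spm32k,spm32k)` pre-processing above means source and target text are each segmented into subword pieces with a 32k-piece SentencePiece model before training, so rare words never fall out of vocabulary. As a rough illustration only — SentencePiece learns its vocabulary from data and scores segmentations with a unigram language model (or BPE), whereas this toy uses a hand-picked vocabulary and greedy longest-match:

```python
def segment(word, vocab):
    """Greedy longest-match subword segmentation over a fixed vocabulary.
    Unknown characters fall back to single-character pieces, which is what
    keeps the segmentation open-vocabulary."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):       # try the longest piece first
            piece = word[i:j]
            if piece in vocab or j == i + 1:    # single chars always allowed
                pieces.append(piece)
                i = j
                break
    return pieces

vocab = {"tra", "duc", "ción", "es", "pa", "ñol"}  # toy vocabulary
```

With this vocabulary, `segment("traducción", vocab)` yields `["tra", "duc", "ción"]`, while a word with no known pieces decomposes into single characters.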
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.isl | 27.1 | 0.528 |
### System Info:
- hf_name: spa-isl
- source_languages: spa
- target_languages: isl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-isl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'is']
- src_constituents: {'spa'}
- tgt_constituents: {'isl'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-isl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-isl/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: isl
- short_pair: es-is
- chrF2_score: 0.528
- bleu: 27.1
- brevity_penalty: 1.0
- ref_len: 1220.0
- src_name: Spanish
- tgt_name: Icelandic
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: is
- prefer_old: False
- long_pair: spa-isl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-es-iso | Helsinki-NLP | marian | 10 | 12 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-es-iso
* source languages: es
* target languages: iso
* OPUS readme: [es-iso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-iso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-iso/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-iso/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-iso/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.iso | 22.4 | 0.396 |
| Helsinki-NLP/opus-mt-es-it | Helsinki-NLP | marian | 10 | 1,523 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 770 |
### opus-mt-es-it
* source languages: es
* target languages: it
* OPUS readme: [es-it](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-it/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-29.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-it/opus-2020-01-29.zip)
* test set translations: [opus-2020-01-29.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-it/opus-2020-01-29.test.txt)
* test set scores: [opus-2020-01-29.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-it/opus-2020-01-29.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.it | 55.9 | 0.751 |
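All of these OPUS-MT cards follow the same URL scheme for the original weights, test translations, and evaluation scores. A small helper that reconstructs the three links from a language pair and a release tag (the release date varies per pair, so it is a required argument here; check each pair's README on OPUS-MT-train for the right one):

```python
def opus_mt_urls(src, tgt, release,
                 base="https://object.pouta.csc.fi/OPUS-MT-models"):
    """Build the weights/test-set/scores URLs used throughout these cards."""
    pair = f"{src}-{tgt}"
    return {
        "weights": f"{base}/{pair}/{release}.zip",
        "test_set": f"{base}/{pair}/{release}.test.txt",
        "scores": f"{base}/{pair}/{release}.eval.txt",
    }
```

For example, `opus_mt_urls("es", "it", "opus-2020-01-29")["weights"]` reproduces the download link listed in the es-it card above.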
| Helsinki-NLP/opus-mt-es-kg | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-es-kg
* source languages: es
* target languages: kg
* OPUS readme: [es-kg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-kg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-kg/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-kg/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-kg/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.kg | 25.6 | 0.488 |
| Helsinki-NLP/opus-mt-es-ln | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-es-ln
* source languages: es
* target languages: ln
* OPUS readme: [es-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ln/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ln/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ln/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ln | 27.1 | 0.508 |
| Helsinki-NLP/opus-mt-es-loz | Helsinki-NLP | marian | 10 | 28 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-es-loz
* source languages: es
* target languages: loz
* OPUS readme: [es-loz](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-loz/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-loz/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-loz/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-loz/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.loz | 28.6 | 0.493 |
| Helsinki-NLP/opus-mt-es-lt | Helsinki-NLP | marian | 11 | 17 | transformers | 0 | translation | true | true | false | apache-2.0 | ['es', 'lt'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,991 |
### spa-lit
* source group: Spanish
* target group: Lithuanian
* OPUS readme: [spa-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-lit/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): lit
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-lit/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-lit/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-lit/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.lit | 40.2 | 0.643 |
### System Info:
- hf_name: spa-lit
- source_languages: spa
- target_languages: lit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-lit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'lt']
- src_constituents: {'spa'}
- tgt_constituents: {'lit'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-lit/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-lit/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: lit
- short_pair: es-lt
- chrF2_score: 0.643
- bleu: 40.2
- brevity_penalty: 0.956
- ref_len: 2341.0
- src_name: Spanish
- tgt_name: Lithuanian
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: lt
- prefer_old: False
- long_pair: spa-lit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
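Note that the Hub repo name uses ISO 639-1 codes (`es-lt`) while the underlying Tatoeba-Challenge model uses ISO 639-3 (`spa-lit`), as the `short_pair`/`long_pair` fields show. A sketch of that mapping, limited to the languages appearing in this section:

```python
# ISO 639-1 (alpha-2) to ISO 639-3 (alpha-3) codes for the languages in
# this section; the Hub repo names use alpha-2 ("es-lt") while the
# Tatoeba-Challenge artifacts use alpha-3 ("spa-lit").
ALPHA2_TO_ALPHA3 = {"es": "spa", "he": "heb", "is": "isl",
                    "lt": "lit", "mk": "mkd"}

def long_pair(short_pair):
    """Translate a short pair like 'es-lt' into its long form 'spa-lit'."""
    src, tgt = short_pair.split("-")
    return f"{ALPHA2_TO_ALPHA3[src]}-{ALPHA2_TO_ALPHA3[tgt]}"
```

This reproduces the `short_pair`/`long_pair` correspondences in the System Info blocks, e.g. `es-he` → `spa-heb`.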
| Helsinki-NLP/opus-mt-es-lua | Helsinki-NLP | marian | 10 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-es-lua
* source languages: es
* target languages: lua
* OPUS readme: [es-lua](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-lua/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-lua/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-lua/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-lua/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.lua | 23.4 | 0.473 |
| Helsinki-NLP/opus-mt-es-lus | Helsinki-NLP | marian | 10 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-es-lus
* source languages: es
* target languages: lus
* OPUS readme: [es-lus](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-lus/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-lus/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-lus/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-lus/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.lus | 20.9 | 0.414 |
| Helsinki-NLP/opus-mt-es-mfs | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-es-mfs
* source languages: es
* target languages: mfs
* OPUS readme: [es-mfs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-mfs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-mfs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-mfs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-mfs/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.mfs | 88.6 | 0.907 |
| Helsinki-NLP/opus-mt-es-mk | Helsinki-NLP | marian | 11 | 38 | transformers | 0 | translation | true | true | false | apache-2.0 | ['es', 'mk'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,002 |
### spa-mkd
* source group: Spanish
* target group: Macedonian
* OPUS readme: [spa-mkd](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-mkd/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): mkd
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-mkd/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-mkd/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-mkd/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.mkd | 48.2 | 0.681 |
### System Info:
- hf_name: spa-mkd
- source_languages: spa
- target_languages: mkd
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-mkd/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'mk']
- src_constituents: {'spa'}
- tgt_constituents: {'mkd'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-mkd/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-mkd/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: mkd
- short_pair: es-mk
- chrF2_score: 0.681
- bleu: 48.2
- brevity_penalty: 1.0
- ref_len: 1073.0
- src_name: Spanish
- tgt_name: Macedonian
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: mk
- prefer_old: False
- long_pair: spa-mkd
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-es-mt | Helsinki-NLP | marian | 10 | 15 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-es-mt
* source languages: es
* target languages: mt
* OPUS readme: [es-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-mt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-mt/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-mt/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-mt/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.mt | 28.1 | 0.460 |
| Helsinki-NLP/opus-mt-es-niu | Helsinki-NLP | marian | 10 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-es-niu
* source languages: es
* target languages: niu
* OPUS readme: [es-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-niu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-niu/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-niu/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-niu/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.niu | 29.9 | 0.506 |
| Helsinki-NLP/opus-mt-es-nl | Helsinki-NLP | marian | 10 | 64 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 770 |
### opus-mt-es-nl
* source languages: es
* target languages: nl
* OPUS readme: [es-nl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-nl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-nl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nl/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.nl | 50.6 | 0.681 |
|