| repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | readme |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Helsinki-NLP/opus-mt-nl-fr | Helsinki-NLP | marian | 10 | 1,752 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 770 |
### opus-mt-nl-fr
* source languages: nl
* target languages: fr
* OPUS readme: [nl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.nl.fr | 51.3 | 0.674 |
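All of the Marian checkpoints listed here can be loaded with the Hugging Face `transformers` library. A minimal usage sketch for this nl-fr model (assuming `transformers`, `sentencepiece`, and a PyTorch backend are installed; the input sentence is only illustrative):

```python
# Minimal sketch: translating Dutch to French with Helsinki-NLP/opus-mt-nl-fr.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-nl-fr")
result = translator("Dit is een korte Nederlandse zin.")  # illustrative Dutch input
print(result[0]["translation_text"])                      # French translation
```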
| Helsinki-NLP/opus-mt-nl-no | Helsinki-NLP | marian | 11 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | ['nl', 'no'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,988 |
### nld-nor
* source group: Dutch
* target group: Norwegian
* OPUS readme: [nld-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-nor/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): nob
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.nor | 36.1 | 0.562 |
### System Info:
- hf_name: nld-nor
- source_languages: nld
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'no']
- src_constituents: {'nld'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.test.txt
- src_alpha3: nld
- tgt_alpha3: nor
- short_pair: nl-no
- chrF2_score: 0.562
- bleu: 36.1
- brevity_penalty: 0.966
- ref_len: 1459.0
- src_name: Dutch
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: nl
- tgt_alpha2: no
- prefer_old: False
- long_pair: nld-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-nl-sv | Helsinki-NLP | marian | 10 | 16 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 775 |
### opus-mt-nl-sv
* source languages: nl
* target languages: sv
* OPUS readme: [nl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.nl.sv | 25.0 | 0.518 |
| Helsinki-NLP/opus-mt-nl-uk | Helsinki-NLP | marian | 11 | 26 | transformers | 0 | translation | true | true | false | apache-2.0 | ['nl', 'uk'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,986 |
### nld-ukr
* source group: Dutch
* target group: Ukrainian
* OPUS readme: [nld-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-ukr/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): ukr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.ukr | 40.8 | 0.619 |
### System Info:
- hf_name: nld-ukr
- source_languages: nld
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'uk']
- src_constituents: {'nld'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.test.txt
- src_alpha3: nld
- tgt_alpha3: ukr
- short_pair: nl-uk
- chrF2_score: 0.619
- bleu: 40.8
- brevity_penalty: 0.992
- ref_len: 51674.0
- src_name: Dutch
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: nl
- tgt_alpha2: uk
- prefer_old: False
- long_pair: nld-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-no-da | Helsinki-NLP | marian | 11 | 42 | transformers | 1 | translation | true | true | false | apache-2.0 | ['no', 'da'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,998 |
### nor-dan
* source group: Norwegian
* target group: Danish
* OPUS readme: [nor-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-dan/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): dan
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.dan | 65.0 | 0.792 |
### System Info:
- hf_name: nor-dan
- source_languages: nor
- target_languages: dan
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-dan/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'da']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'dan'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: dan
- short_pair: no-da
- chrF2_score: 0.792
- bleu: 65.0
- brevity_penalty: 0.995
- ref_len: 9865.0
- src_name: Norwegian
- tgt_name: Danish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: da
- prefer_old: False
- long_pair: nor-dan
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-no-de | Helsinki-NLP | marian | 11 | 3,374 | transformers | 0 | translation | true | true | false | apache-2.0 | ['no', 'de'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,994 |
### nor-deu
* source group: Norwegian
* target group: German
* OPUS readme: [nor-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-deu/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): deu
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.deu | 29.6 | 0.541 |
### System Info:
- hf_name: nor-deu
- source_languages: nor
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'de']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: deu
- short_pair: no-de
- chrF2_score: 0.541
- bleu: 29.6
- brevity_penalty: 0.96
- ref_len: 34575.0
- src_name: Norwegian
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: de
- prefer_old: False
- long_pair: nor-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-no-es | Helsinki-NLP | marian | 11 | 68 | transformers | 0 | translation | true | true | false | apache-2.0 | ['no', 'es'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,000 |
### nor-spa
* source group: Norwegian
* target group: Spanish
* OPUS readme: [nor-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-spa/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.spa | 34.2 | 0.565 |
### System Info:
- hf_name: nor-spa
- source_languages: nor
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'es']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: spa
- short_pair: no-es
- chrF2_score: 0.565
- bleu: 34.2
- brevity_penalty: 0.997
- ref_len: 7311.0
- src_name: Norwegian
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: es
- prefer_old: False
- long_pair: nor-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-no-fi | Helsinki-NLP | marian | 11 | 102 | transformers | 0 | translation | true | true | false | apache-2.0 | ['no', 'fi'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,997 |
### nor-fin
* source group: Norwegian
* target group: Finnish
* OPUS readme: [nor-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fin/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): fin
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.fin | 14.1 | 0.374 |
### System Info:
- hf_name: nor-fin
- source_languages: nor
- target_languages: fin
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fin/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'fi']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'fin'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: fin
- short_pair: no-fi
- chrF2_score: 0.374
- bleu: 14.1
- brevity_penalty: 0.894
- ref_len: 13066.0
- src_name: Norwegian
- tgt_name: Finnish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: fi
- prefer_old: False
- long_pair: nor-fin
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-no-fr | Helsinki-NLP | marian | 11 | 36 | transformers | 0 | translation | true | true | false | apache-2.0 | ['no', 'fr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,994 |
### nor-fra
* source group: Norwegian
* target group: French
* OPUS readme: [nor-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fra/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): fra
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.fra | 39.1 | 0.578 |
### System Info:
- hf_name: nor-fra
- source_languages: nor
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'fr']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: fra
- short_pair: no-fr
- chrF2_score: 0.578
- bleu: 39.1
- brevity_penalty: 0.987
- ref_len: 3205.0
- src_name: Norwegian
- tgt_name: French
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: fr
- prefer_old: False
- long_pair: nor-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-no-nl | Helsinki-NLP | marian | 11 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | ['no', 'nl'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,001 |
### nor-nld
* source group: Norwegian
* target group: Dutch
* OPUS readme: [nor-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nld/README.md)
* model: transformer-align
* source language(s): nob
* target language(s): nld
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.nld | 40.2 | 0.596 |
### System Info:
- hf_name: nor-nld
- source_languages: nor
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'nl']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: nld
- short_pair: no-nl
- chrF2_score: 0.596
- bleu: 40.2
- brevity_penalty: 0.959
- ref_len: 1535.0
- src_name: Norwegian
- tgt_name: Dutch
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: nl
- prefer_old: False
- long_pair: nor-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-no-no | Helsinki-NLP | marian | 11 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | ['no'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,109 |
### nor-nor
* source group: Norwegian
* target group: Norwegian
* OPUS readme: [nor-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nor/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): nno nob
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.eval.txt)
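Because this checkpoint covers two target varieties (`nob`, `nno`), each source sentence needs the `>>id<<` token prepended, as noted above. A minimal sketch of how that might look with `transformers` (the Bokmål input is only illustrative):

```python
# Sketch: selecting the target variety with a >>id<< token for Helsinki-NLP/opus-mt-no-no.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-no-no"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# ">>nno<<" requests Nynorsk output; ">>nob<<" would request Bokmål.
src = [">>nno<< Jeg bor i Oslo og jobber som lærer."]  # illustrative Bokmål input
batch = tokenizer(src, return_tensors="pt", padding=True)
out = model.generate(**batch)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```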
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.nor | 58.4 | 0.784 |
### System Info:
- hf_name: nor-nor
- source_languages: nor
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: nor
- short_pair: no-no
- chrF2_score: 0.784
- bleu: 58.4
- brevity_penalty: 0.988
- ref_len: 6351.0
- src_name: Norwegian
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: no
- prefer_old: False
- long_pair: nor-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-no-pl | Helsinki-NLP | marian | 11 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | ['no', 'pl'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,990 |
### nor-pol
* source group: Norwegian
* target group: Polish
* OPUS readme: [nor-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-pol/README.md)
* model: transformer-align
* source language(s): nob
* target language(s): pol
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.pol | 20.9 | 0.455 |
### System Info:
- hf_name: nor-pol
- source_languages: nor
- target_languages: pol
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-pol/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'pl']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'pol'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: pol
- short_pair: no-pl
- chrF2_score: 0.455
- bleu: 20.9
- brevity_penalty: 0.941
- ref_len: 1828.0
- src_name: Norwegian
- tgt_name: Polish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: pl
- prefer_old: False
- long_pair: nor-pol
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-no-ru | Helsinki-NLP | marian | 11 | 66 | transformers | 0 | translation | true | true | false | apache-2.0 | ['no', 'ru'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,995 |
### nor-rus
* source group: Norwegian
* target group: Russian
* OPUS readme: [nor-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-rus/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.rus | 18.6 | 0.400 |
### System Info:
- hf_name: nor-rus
- source_languages: nor
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'ru']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: rus
- short_pair: no-ru
- chrF2_score: 0.4
- bleu: 18.6
- brevity_penalty: 0.958
- ref_len: 10671.0
- src_name: Norwegian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: ru
- prefer_old: False
- long_pair: nor-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-no-sv | Helsinki-NLP | marian | 11 | 39 | transformers | 0 | translation | true | true | false | apache-2.0 | ['no', 'sv'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,009 |
### nor-swe
* source group: Norwegian
* target group: Swedish
* OPUS readme: [nor-swe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-swe/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): swe
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.swe | 63.7 | 0.773 |
### System Info:
- hf_name: nor-swe
- source_languages: nor
- target_languages: swe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-swe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'sv']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'swe'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: swe
- short_pair: no-sv
- chrF2_score: 0.773
- bleu: 63.7
- brevity_penalty: 0.967
- ref_len: 3672.0
- src_name: Norwegian
- tgt_name: Swedish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: sv
- prefer_old: False
- long_pair: nor-swe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-no-uk | Helsinki-NLP | marian | 11 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | ['no', 'uk'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,994 |
### nor-ukr
* source group: Norwegian
* target group: Ukrainian
* OPUS readme: [nor-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-ukr/README.md)
* model: transformer-align
* source language(s): nob
* target language(s): ukr
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.ukr | 16.6 | 0.384 |
### System Info:
- hf_name: nor-ukr
- source_languages: nor
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'uk']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: ukr
- short_pair: no-uk
- chrF2_score: 0.384
- bleu: 16.6
- brevity_penalty: 1.0
- ref_len: 3982.0
- src_name: Norwegian
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: uk
- prefer_old: False
- long_pair: nor-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-nso-de | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-nso-de
* source languages: nso
* target languages: de
* OPUS readme: [nso-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.de | 24.7 | 0.461 |
| Helsinki-NLP/opus-mt-nso-en | Helsinki-NLP | marian | 10 | 36 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-nso-en
* source languages: nso
* target languages: en
* OPUS readme: [nso-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.en | 48.6 | 0.634 |
| Helsinki-NLP/opus-mt-nso-es | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-nso-es
* source languages: nso
* target languages: es
* OPUS readme: [nso-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.es | 29.5 | 0.485 |
| Helsinki-NLP/opus-mt-nso-fi | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-nso-fi
* source languages: nso
* target languages: fi
* OPUS readme: [nso-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.fi | 27.8 | 0.523 |
| Helsinki-NLP/opus-mt-nso-fr | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-nso-fr
* source languages: nso
* target languages: fr
* OPUS readme: [nso-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.fr | 30.7 | 0.488 |
| Helsinki-NLP/opus-mt-nso-sv | Helsinki-NLP | marian | 10 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-nso-sv
* source languages: nso
* target languages: sv
* OPUS readme: [nso-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.sv | 34.3 | 0.527 |
| Helsinki-NLP/opus-mt-ny-de | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-ny-de
* source languages: ny
* target languages: de
* OPUS readme: [ny-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ny-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/ny-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ny.de | 23.9 | 0.440 |
| Helsinki-NLP/opus-mt-ny-en | Helsinki-NLP | marian | 10 | 135 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 803 |
### opus-mt-ny-en
* source languages: ny
* target languages: en
* OPUS readme: [ny-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ny-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ny-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ny.en | 39.7 | 0.547 |
| Tatoeba.ny.en | 44.2 | 0.562 |
| Helsinki-NLP/opus-mt-ny-es | Helsinki-NLP | marian | 10 | 19 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-ny-es
* source languages: ny
* target languages: es
* OPUS readme: [ny-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ny-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ny-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ny.es | 27.9 | 0.457 |
| Helsinki-NLP/opus-mt-nyk-en | Helsinki-NLP | marian | 10 | 11 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-nyk-en
* source languages: nyk
* target languages: en
* OPUS readme: [nyk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nyk-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nyk.en | 27.3 | 0.423 |
| Helsinki-NLP/opus-mt-om-en | Helsinki-NLP | marian | 10 | 61 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-om-en
* source languages: om
* target languages: en
* OPUS readme: [om-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/om-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.om.en | 27.3 | 0.448 |
| Helsinki-NLP/opus-mt-pa-en | Helsinki-NLP | marian | 10 | 389 | transformers | 1 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 803 |
### opus-mt-pa-en
* source languages: pa
* target languages: en
* OPUS readme: [pa-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pa-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pa-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pa-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pa-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pa.en | 20.6 | 0.320 |
| Tatoeba.pa.en | 29.3 | 0.464 |
| Helsinki-NLP/opus-mt-pag-de | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-pag-de
* source languages: pag
* target languages: de
* OPUS readme: [pag-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.de | 22.8 | 0.435 |
| Helsinki-NLP/opus-mt-pag-en | Helsinki-NLP | marian | 10 | 12 | transformers | 1 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-pag-en
* source languages: pag
* target languages: en
* OPUS readme: [pag-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.en | 42.4 | 0.580 |
| Helsinki-NLP/opus-mt-pag-es | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-pag-es
* source languages: pag
* target languages: es
* OPUS readme: [pag-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.es | 27.9 | 0.459 |
| Helsinki-NLP/opus-mt-pag-fi | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-pag-fi
* source languages: pag
* target languages: fi
* OPUS readme: [pag-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.fi | 26.7 | 0.496 |
| Helsinki-NLP/opus-mt-pag-sv | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-pag-sv
* source languages: pag
* target languages: sv
* OPUS readme: [pag-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.sv | 29.8 | 0.492 |
| Helsinki-NLP/opus-mt-pap-de | Helsinki-NLP | marian | 10 | 9 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-pap-de
* source languages: pap
* target languages: de
* OPUS readme: [pap-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.de | 25.0 | 0.466 |
| Helsinki-NLP/opus-mt-pap-en | Helsinki-NLP | marian | 10 | 12 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 812 |
### opus-mt-pap-en
* source languages: pap
* target languages: en
* OPUS readme: [pap-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.en | 47.3 | 0.634 |
| Tatoeba.pap.en | 63.2 | 0.684 |
| Helsinki-NLP/opus-mt-pap-es | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-pap-es
* source languages: pap
* target languages: es
* OPUS readme: [pap-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.es | 32.3 | 0.518 |
| Helsinki-NLP/opus-mt-pap-fi | Helsinki-NLP | marian | 10 | 27 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-pap-fi
* source languages: pap
* target languages: fi
* OPUS readme: [pap-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.fi | 27.7 | 0.520 |
| Helsinki-NLP/opus-mt-pap-fr | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-pap-fr
* source languages: pap
* target languages: fr
* OPUS readme: [pap-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.fr | 31.0 | 0.498 |
| Helsinki-NLP/opus-mt-phi-en | Helsinki-NLP | marian | 11 | 24 | transformers | 0 | translation | true | true | false | apache-2.0 | ['phi', 'en'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,382 |
### phi-eng
* source group: Philippine languages
* target group: English
* OPUS readme: [phi-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/phi-eng/README.md)
* model: transformer
* source language(s): akl_Latn ceb hil ilo pag war
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/phi-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/phi-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/phi-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.akl-eng.akl.eng | 11.6 | 0.321 |
| Tatoeba-test.ceb-eng.ceb.eng | 21.7 | 0.393 |
| Tatoeba-test.hil-eng.hil.eng | 17.6 | 0.371 |
| Tatoeba-test.ilo-eng.ilo.eng | 36.6 | 0.560 |
| Tatoeba-test.multi.eng | 21.5 | 0.391 |
| Tatoeba-test.pag-eng.pag.eng | 27.5 | 0.494 |
| Tatoeba-test.war-eng.war.eng | 17.3 | 0.380 |
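Since this is a multi-source checkpoint with a single English target, the card lists no required language token, so inputs from different Philippine languages can be batched together as plain text. A minimal sketch using the Marian classes directly (the placeholder strings stand in for real source sentences):

```python
# Sketch: batched translation with the multi-source Helsinki-NLP/opus-mt-phi-en checkpoint.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-phi-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = ["<Cebuano sentence>", "<Ilocano sentence>"]  # placeholders, not real text
batch = tokenizer(sentences, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```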
### System Info:
- hf_name: phi-eng
- source_languages: phi
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/phi-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['phi', 'en']
- src_constituents: {'ilo', 'akl_Latn', 'war', 'hil', 'pag', 'ceb'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/phi-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/phi-eng/opus2m-2020-08-01.test.txt
- src_alpha3: phi
- tgt_alpha3: eng
- short_pair: phi-en
- chrF2_score: 0.391
- bleu: 21.5
- brevity_penalty: 1.0
- ref_len: 2380.0
- src_name: Philippine languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: phi
- tgt_alpha2: en
- prefer_old: False
- long_pair: phi-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| Helsinki-NLP/opus-mt-pis-en | Helsinki-NLP | marian | 10 | 13 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-pis-en
* source languages: pis
* target languages: en
* OPUS readme: [pis-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.en | 33.3 | 0.493 |
| Helsinki-NLP/opus-mt-pis-es | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-pis-es
* source languages: pis
* target languages: es
* OPUS readme: [pis-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.es | 24.1 | 0.421 |
|
Helsinki-NLP/opus-mt-pis-fi
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-pis-fi
* source languages: pis
* target languages: fi
* OPUS readme: [pis-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.fi | 21.8 | 0.439 |
|
Helsinki-NLP/opus-mt-pis-fr
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-pis-fr
* source languages: pis
* target languages: fr
* OPUS readme: [pis-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.fr | 24.9 | 0.421 |
|
Helsinki-NLP/opus-mt-pis-sv
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-pis-sv
* source languages: pis
* target languages: sv
* OPUS readme: [pis-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.sv | 25.9 | 0.442 |
|
Helsinki-NLP/opus-mt-pl-ar
|
Helsinki-NLP
|
marian
| 11 | 12 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['pl', 'ar']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,154 |
### pol-ara
* source group: Polish
* target group: Arabic
* OPUS readme: [pol-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-ara/README.md)
* model: transformer
* source language(s): pol
* target language(s): ara arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ara/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.pol.ara | 20.4 | 0.491 |
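
Because this checkpoint serves more than one Arabic variety, the sentence-initial `>>id<<` token mentioned above has to be supplied by the caller. Below is a minimal sketch using the `transformers` MarianMT classes; the Polish example sentence and the choice of `>>ara<<` versus `>>arz<<` are illustrative, and the accepted tokens should be verified against the model's vocabulary.

```python
# Minimal sketch: prepend the target-language token, then translate.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pl-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# ">>ara<<" selects Standard Arabic; ">>arz<<" would select Egyptian Arabic.
src = ">>ara<< Dzień dobry, jak się masz?"

batch = tokenizer([src], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```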
### System Info:
- hf_name: pol-ara
- source_languages: pol
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pl', 'ar']
- src_constituents: {'pol'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ara/opus-2020-07-03.test.txt
- src_alpha3: pol
- tgt_alpha3: ara
- short_pair: pl-ar
- chrF2_score: 0.491
- bleu: 20.4
- brevity_penalty: 0.959
- ref_len: 1028.0
- src_name: Polish
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: pl
- tgt_alpha2: ar
- prefer_old: False
- long_pair: pol-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-pl-de
|
Helsinki-NLP
|
marian
| 10 | 346 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-pl-de
* source languages: pl
* target languages: de
* OPUS readme: [pl-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.pl.de | 47.8 | 0.665 |
|
Helsinki-NLP/opus-mt-pl-en
|
Helsinki-NLP
|
marian
| 10 | 63,258 |
transformers
| 2 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-pl-en
* source languages: pl
* target languages: en
* OPUS readme: [pl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.pl.en | 54.9 | 0.701 |
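
For quick experiments, the checkpoint can be loaded through the `transformers` MarianMT classes. The sketch below is a minimal example; the input sentence is illustrative and generation arguments are left at their defaults.

```python
# Minimal sketch: translate a Polish sentence to English with this checkpoint.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pl-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_texts = ["Wczoraj kupiłem nową książkę o historii Polski."]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```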
|
Helsinki-NLP/opus-mt-pl-eo
|
Helsinki-NLP
|
marian
| 11 | 13 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['pl', 'eo']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,997 |
### pol-epo
* source group: Polish
* target group: Esperanto
* OPUS readme: [pol-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-epo/README.md)
* model: transformer-align
* source language(s): pol
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.pol.epo | 24.8 | 0.451 |
### System Info:
- hf_name: pol-epo
- source_languages: pol
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pl', 'eo']
- src_constituents: {'pol'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-epo/opus-2020-06-16.test.txt
- src_alpha3: pol
- tgt_alpha3: epo
- short_pair: pl-eo
- chrF2_score: 0.451
- bleu: 24.8
- brevity_penalty: 0.967
- ref_len: 17191.0
- src_name: Polish
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: pl
- tgt_alpha2: eo
- prefer_old: False
- long_pair: pol-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-pl-es
|
Helsinki-NLP
|
marian
| 10 | 187 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-pl-es
* source languages: pl
* target languages: es
* OPUS readme: [pl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-es/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-es/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-es/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.pl.es | 46.9 | 0.654 |
|
Helsinki-NLP/opus-mt-pl-fr
|
Helsinki-NLP
|
marian
| 10 | 1,023 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-pl-fr
* source languages: pl
* target languages: fr
* OPUS readme: [pl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.pl.fr | 49.0 | 0.659 |
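
As an alternative to loading the model and tokenizer separately, the high-level `pipeline` API can wrap this checkpoint. A minimal sketch is shown below; the example sentence is illustrative, and depending on the installed `transformers` version the task name may need to be spelled `translation_pl_to_fr`.

```python
# Minimal sketch: wrap the checkpoint in a translation pipeline.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-pl-fr")
print(translator("To jest bardzo dobry pomysł.", max_length=64))
```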
|
Helsinki-NLP/opus-mt-pl-lt
|
Helsinki-NLP
|
marian
| 11 | 15 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['pl', 'lt']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,990 |
### pol-lit
* source group: Polish
* target group: Lithuanian
* OPUS readme: [pol-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-lit/README.md)
* model: transformer-align
* source language(s): pol
* target language(s): lit
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-lit/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-lit/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-lit/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.pol.lit | 43.7 | 0.688 |
### System Info:
- hf_name: pol-lit
- source_languages: pol
- target_languages: lit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-lit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pl', 'lt']
- src_constituents: {'pol'}
- tgt_constituents: {'lit'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-lit/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-lit/opus-2020-06-17.test.txt
- src_alpha3: pol
- tgt_alpha3: lit
- short_pair: pl-lt
- chrF2_score: 0.688
- bleu: 43.7
- brevity_penalty: 0.981
- ref_len: 10084.0
- src_name: Polish
- tgt_name: Lithuanian
- train_date: 2020-06-17
- src_alpha2: pl
- tgt_alpha2: lt
- prefer_old: False
- long_pair: pol-lit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-pl-no
|
Helsinki-NLP
|
marian
| 11 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['pl', 'no']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,003 |
### pol-nor
* source group: Polish
* target group: Norwegian
* OPUS readme: [pol-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-nor/README.md)
* model: transformer-align
* source language(s): pol
* target language(s): nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.pol.nor | 27.5 | 0.479 |
### System Info:
- hf_name: pol-nor
- source_languages: pol
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pl', 'no']
- src_constituents: {'pol'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.test.txt
- src_alpha3: pol
- tgt_alpha3: nor
- short_pair: pl-no
- chrF2_score: 0.479
- bleu: 27.5
- brevity_penalty: 0.969
- ref_len: 2045.0
- src_name: Polish
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: pl
- tgt_alpha2: no
- prefer_old: False
- long_pair: pol-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-pl-sv
|
Helsinki-NLP
|
marian
| 10 | 16 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-pl-sv
* source languages: pl
* target languages: sv
* OPUS readme: [pl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-sv/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-sv/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-sv/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.pl.sv | 58.9 | 0.717 |
|
Helsinki-NLP/opus-mt-pl-uk
|
Helsinki-NLP
|
marian
| 11 | 58 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['pl', 'uk']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,988 |
### pol-ukr
* source group: Polish
* target group: Ukrainian
* OPUS readme: [pol-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-ukr/README.md)
* model: transformer-align
* source language(s): pol
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.pol.ukr | 47.1 | 0.665 |
### System Info:
- hf_name: pol-ukr
- source_languages: pol
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pl', 'uk']
- src_constituents: {'pol'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ukr/opus-2020-06-17.test.txt
- src_alpha3: pol
- tgt_alpha3: ukr
- short_pair: pl-uk
- chrF2_score: 0.665
- bleu: 47.1
- brevity_penalty: 0.992
- ref_len: 13434.0
- src_name: Polish
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: pl
- tgt_alpha2: uk
- prefer_old: False
- long_pair: pol-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-pon-en
|
Helsinki-NLP
|
marian
| 10 | 11 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-pon-en
* source languages: pon
* target languages: en
* OPUS readme: [pon-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pon.en | 34.1 | 0.489 |
|
Helsinki-NLP/opus-mt-pon-es
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-pon-es
* source languages: pon
* target languages: es
* OPUS readme: [pon-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pon.es | 22.4 | 0.402 |
|
Helsinki-NLP/opus-mt-pon-fi
|
Helsinki-NLP
|
marian
| 10 | 26 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-pon-fi
* source languages: pon
* target languages: fi
* OPUS readme: [pon-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pon.fi | 22.2 | 0.434 |
|
Helsinki-NLP/opus-mt-pon-fr
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-pon-fr
* source languages: pon
* target languages: fr
* OPUS readme: [pon-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pon.fr | 24.4 | 0.410 |
|
Helsinki-NLP/opus-mt-pon-sv
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-pon-sv
* source languages: pon
* target languages: sv
* OPUS readme: [pon-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pon.sv | 26.4 | 0.436 |
|
Helsinki-NLP/opus-mt-pqe-en
|
Helsinki-NLP
|
marian
| 11 | 11 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['fj', 'mi', 'ty', 'to', 'na', 'sm', 'mh', 'pqe', 'en']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,791 |
### pqe-eng
* source group: Eastern Malayo-Polynesian languages
* target group: English
* OPUS readme: [pqe-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pqe-eng/README.md)
* model: transformer
* source language(s): fij gil haw mah mri nau niu rap smo tah ton tvl
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.zip)
* test set translations: [opus-2020-06-28.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.test.txt)
* test set scores: [opus-2020-06-28.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fij-eng.fij.eng | 26.9 | 0.361 |
| Tatoeba-test.gil-eng.gil.eng | 49.0 | 0.618 |
| Tatoeba-test.haw-eng.haw.eng | 1.6 | 0.126 |
| Tatoeba-test.mah-eng.mah.eng | 13.7 | 0.257 |
| Tatoeba-test.mri-eng.mri.eng | 7.4 | 0.250 |
| Tatoeba-test.multi.eng | 12.6 | 0.268 |
| Tatoeba-test.nau-eng.nau.eng | 2.3 | 0.125 |
| Tatoeba-test.niu-eng.niu.eng | 34.4 | 0.471 |
| Tatoeba-test.rap-eng.rap.eng | 10.3 | 0.215 |
| Tatoeba-test.smo-eng.smo.eng | 28.5 | 0.413 |
| Tatoeba-test.tah-eng.tah.eng | 12.1 | 0.199 |
| Tatoeba-test.ton-eng.ton.eng | 41.8 | 0.517 |
| Tatoeba-test.tvl-eng.tvl.eng | 42.9 | 0.540 |
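
The BLEU and chr-F columns above can be reproduced with standard MT scoring tools; sacrebleu is one common choice, with chr-F reported here on a 0–1 scale. Below is a minimal sketch of scoring a hypothesis list against references; the two sentences are placeholders, not taken from the test sets linked above.

```python
# Minimal sketch: compute BLEU and chrF with sacrebleu, the metrics
# reported in the benchmark table above. Inputs are placeholders.
import sacrebleu

hypotheses = ["The weather is nice today."]
references = [["The weather is very nice today."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score / 100:.3f}")
```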
### System Info:
- hf_name: pqe-eng
- source_languages: pqe
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pqe-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fj', 'mi', 'ty', 'to', 'na', 'sm', 'mh', 'pqe', 'en']
- src_constituents: {'haw', 'gil', 'rap', 'fij', 'tvl', 'mri', 'tah', 'niu', 'ton', 'nau', 'smo', 'mah'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.test.txt
- src_alpha3: pqe
- tgt_alpha3: eng
- short_pair: pqe-en
- chrF2_score: 0.268
- bleu: 12.6
- brevity_penalty: 1.0
- ref_len: 4568.0
- src_name: Eastern Malayo-Polynesian languages
- tgt_name: English
- train_date: 2020-06-28
- src_alpha2: pqe
- tgt_alpha2: en
- prefer_old: False
- long_pair: pqe-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-prl-es
|
Helsinki-NLP
|
marian
| 10 | 10 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-prl-es
* source languages: prl
* target languages: es
* OPUS readme: [prl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/prl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/prl-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/prl-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/prl-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.prl.es | 93.3 | 0.955 |
|
Helsinki-NLP/opus-mt-pt-ca
|
Helsinki-NLP
|
marian
| 11 | 1,213 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['pt', 'ca']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,991 |
### por-cat
* source group: Portuguese
* target group: Catalan
* OPUS readme: [por-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-cat/README.md)
* model: transformer-align
* source language(s): por
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-cat/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-cat/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-cat/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.cat | 45.7 | 0.672 |
### System Info:
- hf_name: por-cat
- source_languages: por
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'ca']
- src_constituents: {'por'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-cat/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-cat/opus-2020-06-17.test.txt
- src_alpha3: por
- tgt_alpha3: cat
- short_pair: pt-ca
- chrF2_score: 0.672
- bleu: 45.7
- brevity_penalty: 0.972
- ref_len: 5878.0
- src_name: Portuguese
- tgt_name: Catalan
- train_date: 2020-06-17
- src_alpha2: pt
- tgt_alpha2: ca
- prefer_old: False
- long_pair: por-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-pt-eo
|
Helsinki-NLP
|
marian
| 11 | 126 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['pt', 'eo']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,006 |
### por-epo
* source group: Portuguese
* target group: Esperanto
* OPUS readme: [por-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-epo/README.md)
* model: transformer-align
* source language(s): por
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.epo | 26.8 | 0.497 |
### System Info:
- hf_name: por-epo
- source_languages: por
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'eo']
- src_constituents: {'por'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-epo/opus-2020-06-16.test.txt
- src_alpha3: por
- tgt_alpha3: epo
- short_pair: pt-eo
- chrF2_score: 0.497
- bleu: 26.8
- brevity_penalty: 0.948
- ref_len: 87408.0
- src_name: Portuguese
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: pt
- tgt_alpha2: eo
- prefer_old: False
- long_pair: por-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-pt-gl
|
Helsinki-NLP
|
marian
| 11 | 53 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['pt', 'gl']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,989 |
### por-glg
* source group: Portuguese
* target group: Galician
* OPUS readme: [por-glg](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-glg/README.md)
* model: transformer-align
* source language(s): por
* target language(s): glg
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-glg/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-glg/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-glg/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.glg | 55.8 | 0.737 |
### System Info:
- hf_name: por-glg
- source_languages: por
- target_languages: glg
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-glg/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'gl']
- src_constituents: {'por'}
- tgt_constituents: {'glg'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-glg/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-glg/opus-2020-06-16.test.txt
- src_alpha3: por
- tgt_alpha3: glg
- short_pair: pt-gl
- chrF2_score: 0.737
- bleu: 55.8
- brevity_penalty: 0.996
- ref_len: 2989.0
- src_name: Portuguese
- tgt_name: Galician
- train_date: 2020-06-16
- src_alpha2: pt
- tgt_alpha2: gl
- prefer_old: False
- long_pair: por-glg
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-pt-tl
|
Helsinki-NLP
|
marian
| 11 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['pt', 'tl']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,000 |
### por-tgl
* source group: Portuguese
* target group: Tagalog
* OPUS readme: [por-tgl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-tgl/README.md)
* model: transformer-align
* source language(s): por
* target language(s): tgl_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.tgl | 28.4 | 0.565 |
### System Info:
- hf_name: por-tgl
- source_languages: por
- target_languages: tgl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-tgl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'tl']
- src_constituents: {'por'}
- tgt_constituents: {'tgl_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.test.txt
- src_alpha3: por
- tgt_alpha3: tgl
- short_pair: pt-tl
- chrF2_score: 0.565
- bleu: 28.4
- brevity_penalty: 1.0
- ref_len: 13620.0
- src_name: Portuguese
- tgt_name: Tagalog
- train_date: 2020-06-17
- src_alpha2: pt
- tgt_alpha2: tl
- prefer_old: False
- long_pair: por-tgl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-pt-uk
|
Helsinki-NLP
|
marian
| 11 | 121 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['pt', 'uk']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,009 |
### por-ukr
* source group: Portuguese
* target group: Ukrainian
* OPUS readme: [por-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-ukr/README.md)
* model: transformer-align
* source language(s): por
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.ukr | 39.8 | 0.616 |
### System Info:
- hf_name: por-ukr
- source_languages: por
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'uk']
- src_constituents: {'por'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.test.txt
- src_alpha3: por
- tgt_alpha3: ukr
- short_pair: pt-uk
- chrF2_score: 0.616
- bleu: 39.8
- brevity_penalty: 0.999
- ref_len: 18933.0
- src_name: Portuguese
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: pt
- tgt_alpha2: uk
- prefer_old: False
- long_pair: por-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-rn-de
|
Helsinki-NLP
|
marian
| 11 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['rn', 'de']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,976 |
### run-deu
* source group: Rundi
* target group: German
* OPUS readme: [run-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-deu/README.md)
* model: transformer-align
* source language(s): run
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-deu/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-deu/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-deu/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.deu | 17.1 | 0.344 |
### System Info:
- hf_name: run-deu
- source_languages: run
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'de']
- src_constituents: {'run'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-deu/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-deu/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: deu
- short_pair: rn-de
- chrF2_score: 0.344
- bleu: 17.1
- brevity_penalty: 0.961
- ref_len: 10562.0
- src_name: Rundi
- tgt_name: German
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: de
- prefer_old: False
- long_pair: run-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-rn-en
|
Helsinki-NLP
|
marian
| 11 | 23 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['rn', 'en']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,977 |
### run-eng
* source group: Rundi
* target group: English
* OPUS readme: [run-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-eng/README.md)
* model: transformer-align
* source language(s): run
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-eng/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.eng | 26.7 | 0.428 |
### System Info:
- hf_name: run-eng
- source_languages: run
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'en']
- src_constituents: {'run'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-eng/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: eng
- short_pair: rn-en
- chrF2_score: 0.428
- bleu: 26.7
- brevity_penalty: 0.99
- ref_len: 10041.0
- src_name: Rundi
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: en
- prefer_old: False
- long_pair: run-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-rn-es
|
Helsinki-NLP
|
marian
| 11 | 33 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['rn', 'es']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,975 |
### run-spa
* source group: Rundi
* target group: Spanish
* OPUS readme: [run-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-spa/README.md)
* model: transformer-align
* source language(s): run
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.spa | 14.4 | 0.376 |
### System Info:
- hf_name: run-spa
- source_languages: run
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'es']
- src_constituents: {'run'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-spa/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: spa
- short_pair: rn-es
- chrF2_score: 0.376
- bleu: 14.4
- brevity_penalty: 1.0
- ref_len: 5167.0
- src_name: Rundi
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: es
- prefer_old: False
- long_pair: run-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-rn-fr
|
Helsinki-NLP
|
marian
| 11 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['rn', 'fr']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,973 |
### run-fra
* source group: Rundi
* target group: French
* OPUS readme: [run-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-fra/README.md)
* model: transformer-align
* source language(s): run
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.fra | 18.2 | 0.397 |
### System Info:
- hf_name: run-fra
- source_languages: run
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'fr']
- src_constituents: {'run'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: fra
- short_pair: rn-fr
- chrF2_score: 0.397
- bleu: 18.2
- brevity_penalty: 1.0
- ref_len: 7496.0
- src_name: Rundi
- tgt_name: French
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: fr
- prefer_old: False
- long_pair: run-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-rn-ru
|
Helsinki-NLP
|
marian
| 11 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['rn', 'ru']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,975 |
### run-rus
* source group: Rundi
* target group: Russian
* OPUS readme: [run-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-rus/README.md)
* model: transformer-align
* source language(s): run
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-rus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-rus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-rus/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.rus | 17.1 | 0.321 |
### System Info:
- hf_name: run-rus
- source_languages: run
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'ru']
- src_constituents: {'run'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-rus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-rus/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: rus
- short_pair: rn-ru
- chrF2_score: 0.321
- bleu: 17.1
- brevity_penalty: 1.0
- ref_len: 6635.0
- src_name: Rundi
- tgt_name: Russian
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: ru
- prefer_old: False
- long_pair: run-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-rnd-en
|
Helsinki-NLP
|
marian
| 10 | 11 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-rnd-en
* source languages: rnd
* target languages: en
* OPUS readme: [rnd-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rnd-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rnd-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rnd.en | 37.8 | 0.531 |
|
Helsinki-NLP/opus-mt-rnd-fr
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-rnd-fr
* source languages: rnd
* target languages: fr
* OPUS readme: [rnd-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rnd-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rnd-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rnd.fr | 22.1 | 0.392 |
|
Helsinki-NLP/opus-mt-rnd-sv
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-rnd-sv
* source languages: rnd
* target languages: sv
* OPUS readme: [rnd-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rnd-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rnd-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rnd.sv | 21.2 | 0.387 |
|
Helsinki-NLP/opus-mt-ro-eo
|
Helsinki-NLP
|
marian
| 11 | 50 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ro', 'eo']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,988 |
### ron-epo
* source group: Romanian
* target group: Esperanto
* OPUS readme: [ron-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ron-epo/README.md)
* model: transformer-align
* source language(s): ron
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ron-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ron-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ron-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ron.epo | 27.8 | 0.495 |
### System Info:
- hf_name: ron-epo
- source_languages: ron
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ron-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ro', 'eo']
- src_constituents: {'ron'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ron-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ron-epo/opus-2020-06-16.test.txt
- src_alpha3: ron
- tgt_alpha3: epo
- short_pair: ro-eo
- chrF2_score: 0.495
- bleu: 27.8
- brevity_penalty: 0.955
- ref_len: 25751.0
- src_name: Romanian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: ro
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ron-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ro-fi
|
Helsinki-NLP
|
marian
| 10 | 32 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 |
### opus-mt-ro-fi
* source languages: ro
* target languages: fi
* OPUS readme: [ro-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ro-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ro.fi | 25.2 | 0.521 |
|
Helsinki-NLP/opus-mt-ro-fr
|
Helsinki-NLP
|
marian
| 10 | 50 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-ro-fr
* source languages: ro
* target languages: fr
* OPUS readme: [ro-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ro-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ro-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ro.fr | 54.5 | 0.697 |
|
Helsinki-NLP/opus-mt-ro-sv
|
Helsinki-NLP
|
marian
| 10 | 13 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 |
### opus-mt-ro-sv
* source languages: ro
* target languages: sv
* OPUS readme: [ro-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ro-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ro-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ro.sv | 31.2 | 0.529 |
|
Helsinki-NLP/opus-mt-roa-en
|
Helsinki-NLP
|
marian
| 12 | 39,114 |
transformers
| 2 |
translation
| true | true | false |
apache-2.0
|
['it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa', 'en']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 5,103 |
### roa-eng
* source group: Romance languages
* target group: English
* OPUS readme: [roa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/roa-eng/README.md)
* model: transformer
* source language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/roa-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/roa-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/roa-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-enro-roneng.ron.eng | 37.1 | 0.631 |
| newsdiscussdev2015-enfr-fraeng.fra.eng | 31.6 | 0.564 |
| newsdiscusstest2015-enfr-fraeng.fra.eng | 36.1 | 0.592 |
| newssyscomb2009-fraeng.fra.eng | 29.3 | 0.563 |
| newssyscomb2009-itaeng.ita.eng | 33.1 | 0.589 |
| newssyscomb2009-spaeng.spa.eng | 29.2 | 0.562 |
| news-test2008-fraeng.fra.eng | 25.2 | 0.533 |
| news-test2008-spaeng.spa.eng | 26.6 | 0.542 |
| newstest2009-fraeng.fra.eng | 28.6 | 0.557 |
| newstest2009-itaeng.ita.eng | 32.0 | 0.580 |
| newstest2009-spaeng.spa.eng | 28.9 | 0.559 |
| newstest2010-fraeng.fra.eng | 29.9 | 0.573 |
| newstest2010-spaeng.spa.eng | 33.3 | 0.596 |
| newstest2011-fraeng.fra.eng | 31.2 | 0.585 |
| newstest2011-spaeng.spa.eng | 32.3 | 0.584 |
| newstest2012-fraeng.fra.eng | 31.3 | 0.580 |
| newstest2012-spaeng.spa.eng | 35.3 | 0.606 |
| newstest2013-fraeng.fra.eng | 31.9 | 0.575 |
| newstest2013-spaeng.spa.eng | 32.8 | 0.592 |
| newstest2014-fren-fraeng.fra.eng | 34.6 | 0.611 |
| newstest2016-enro-roneng.ron.eng | 35.8 | 0.614 |
| Tatoeba-test.arg-eng.arg.eng | 38.7 | 0.512 |
| Tatoeba-test.ast-eng.ast.eng | 35.2 | 0.520 |
| Tatoeba-test.cat-eng.cat.eng | 54.9 | 0.703 |
| Tatoeba-test.cos-eng.cos.eng | 68.1 | 0.666 |
| Tatoeba-test.egl-eng.egl.eng | 6.7 | 0.209 |
| Tatoeba-test.ext-eng.ext.eng | 24.2 | 0.427 |
| Tatoeba-test.fra-eng.fra.eng | 53.9 | 0.691 |
| Tatoeba-test.frm-eng.frm.eng | 25.7 | 0.423 |
| Tatoeba-test.gcf-eng.gcf.eng | 14.8 | 0.288 |
| Tatoeba-test.glg-eng.glg.eng | 54.6 | 0.703 |
| Tatoeba-test.hat-eng.hat.eng | 37.0 | 0.540 |
| Tatoeba-test.ita-eng.ita.eng | 64.8 | 0.768 |
| Tatoeba-test.lad-eng.lad.eng | 21.7 | 0.452 |
| Tatoeba-test.lij-eng.lij.eng | 11.2 | 0.299 |
| Tatoeba-test.lld-eng.lld.eng | 10.8 | 0.273 |
| Tatoeba-test.lmo-eng.lmo.eng | 5.8 | 0.260 |
| Tatoeba-test.mfe-eng.mfe.eng | 63.1 | 0.819 |
| Tatoeba-test.msa-eng.msa.eng | 40.9 | 0.592 |
| Tatoeba-test.multi.eng | 54.9 | 0.697 |
| Tatoeba-test.mwl-eng.mwl.eng | 44.6 | 0.674 |
| Tatoeba-test.oci-eng.oci.eng | 20.5 | 0.404 |
| Tatoeba-test.pap-eng.pap.eng | 56.2 | 0.669 |
| Tatoeba-test.pms-eng.pms.eng | 10.3 | 0.324 |
| Tatoeba-test.por-eng.por.eng | 59.7 | 0.738 |
| Tatoeba-test.roh-eng.roh.eng | 14.8 | 0.378 |
| Tatoeba-test.ron-eng.ron.eng | 55.2 | 0.703 |
| Tatoeba-test.scn-eng.scn.eng | 10.2 | 0.259 |
| Tatoeba-test.spa-eng.spa.eng | 56.2 | 0.714 |
| Tatoeba-test.vec-eng.vec.eng | 13.8 | 0.317 |
| Tatoeba-test.wln-eng.wln.eng | 17.3 | 0.323 |
### System Info:
- hf_name: roa-eng
- source_languages: roa
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/roa-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa', 'en']
- src_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'lmo', 'mwl', 'lij', 'lad_Latn', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/roa-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/roa-eng/opus2m-2020-08-01.test.txt
- src_alpha3: roa
- tgt_alpha3: eng
- short_pair: roa-en
- chrF2_score: 0.6970000000000001
- bleu: 54.9
- brevity_penalty: 0.9790000000000001
- ref_len: 74762.0
- src_name: Romance languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: roa
- tgt_alpha2: en
- prefer_old: False
- long_pair: roa-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-af
|
Helsinki-NLP
|
marian
| 11 | 11 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ru', 'af']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,987 |
### rus-afr
* source group: Russian
* target group: Afrikaans
* OPUS readme: [rus-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-afr/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): afr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-afr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-afr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-afr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.afr | 48.1 | 0.669 |
### System Info:
- hf_name: rus-afr
- source_languages: rus
- target_languages: afr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-afr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'af']
- src_constituents: {'rus'}
- tgt_constituents: {'afr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-afr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-afr/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: afr
- short_pair: ru-af
- chrF2_score: 0.669
- bleu: 48.1
- brevity_penalty: 1.0
- ref_len: 1390.0
- src_name: Russian
- tgt_name: Afrikaans
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: af
- prefer_old: False
- long_pair: rus-afr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-ar
|
Helsinki-NLP
|
marian
| 11 | 28 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ru', 'ar']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,161 |
### rus-ara
* source group: Russian
* target group: Arabic
* OPUS readme: [rus-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-ara/README.md)
* model: transformer
* source language(s): rus
* target language(s): apc ara arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.eval.txt)
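A minimal usage sketch (not part of the original card) showing how the sentence-initial `>>id<<` token can be supplied through the `transformers` Marian classes; the input sentence is illustrative and `>>ara<<` is taken from the target language IDs listed above (apc, ara, arz).
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ru-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target-language token (>>ara<<) to the Russian source sentence
src_text = [">>ara<< Я люблю читать книги."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```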
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.ara | 16.6 | 0.486 |
### System Info:
- hf_name: rus-ara
- source_languages: rus
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'ar']
- src_constituents: {'rus'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.test.txt
- src_alpha3: rus
- tgt_alpha3: ara
- short_pair: ru-ar
- chrF2_score: 0.486
- bleu: 16.6
- brevity_penalty: 0.9690000000000001
- ref_len: 18878.0
- src_name: Russian
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: ru
- tgt_alpha2: ar
- prefer_old: False
- long_pair: rus-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-bg
|
Helsinki-NLP
|
marian
| 11 | 16 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ru', 'bg']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,115 |
### rus-bul
* source group: Russian
* target group: Bulgarian
* OPUS readme: [rus-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-bul/README.md)
* model: transformer
* source language(s): rus
* target language(s): bul bul_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-bul/opus-2020-07-03.eval.txt)
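A minimal usage sketch (not part of the original card) showing how the sentence-initial `>>id<<` token can be supplied through the `transformers` Marian classes; the input sentence is illustrative and `>>bul<<` is taken from the target language IDs listed above (bul, bul_Latn).
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ru-bg"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target-language token (>>bul<<) to the Russian source sentence
src_text = [">>bul<< Доброе утро!"]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```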
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.bul | 52.3 | 0.704 |
### System Info:
- hf_name: rus-bul
- source_languages: rus
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'bg']
- src_constituents: {'rus'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-bul/opus-2020-07-03.test.txt
- src_alpha3: rus
- tgt_alpha3: bul
- short_pair: ru-bg
- chrF2_score: 0.7040000000000001
- bleu: 52.3
- brevity_penalty: 0.919
- ref_len: 8272.0
- src_name: Russian
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: ru
- tgt_alpha2: bg
- prefer_old: False
- long_pair: rus-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-da
|
Helsinki-NLP
|
marian
| 11 | 46 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ru', 'da']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,997 |
### rus-dan
* source group: Russian
* target group: Danish
* OPUS readme: [rus-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-dan/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): dan
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.dan | 56.6 | 0.714 |
### System Info:
- hf_name: rus-dan
- source_languages: rus
- target_languages: dan
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-dan/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'da']
- src_constituents: {'rus'}
- tgt_constituents: {'dan'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: dan
- short_pair: ru-da
- chrF2_score: 0.7140000000000001
- bleu: 56.6
- brevity_penalty: 0.977
- ref_len: 11746.0
- src_name: Russian
- tgt_name: Danish
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: da
- prefer_old: False
- long_pair: rus-dan
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-en
|
Helsinki-NLP
|
marian
| 11 | 150,675 |
transformers
| 15 |
translation
| true | true | false |
cc-by-4.0
| null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 3,110 |
### opus-mt-ru-en
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Transformer-align
- **Language(s):**
- Source Language: Russian
- Target Language: English
- **License:** CC-BY-4.0
- **Resources for more information:**
- [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Uses
#### Direct Use
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
Further details about the dataset for this model can be found in the OPUS readme: [ru-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ru-en/README.md)
## Training
#### Training Data
##### Preprocessing
* Pre-processing: Normalization + SentencePiece
* Dataset: [opus](https://github.com/Helsinki-NLP/Opus-MT)
* Download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/ru-en/opus-2020-02-26.zip)
* Test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-en/opus-2020-02-26.test.txt)
## Evaluation
#### Results
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-en/opus-2020-02-26.eval.txt)
#### Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012.ru.en | 34.8 | 0.603 |
| newstest2013.ru.en | 27.9 | 0.545 |
| newstest2014-ruen.ru.en | 31.9 | 0.591 |
| newstest2015-enru.ru.en | 30.4 | 0.568 |
| newstest2016-enru.ru.en | 30.1 | 0.565 |
| newstest2017-enru.ru.en | 33.4 | 0.593 |
| newstest2018-enru.ru.en | 29.6 | 0.565 |
| newstest2019-ruen.ru.en | 31.4 | 0.576 |
| Tatoeba.ru.en | 61.1 | 0.736 |
## Citation Information
```bibtex
@InProceedings{TiedemannThottingal:EAMT2020,
author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld},
booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
year = {2020},
address = {Lisbon, Portugal}
}
```
## How to Get Started With the Model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the pretrained Russian-to-English tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-ru-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-ru-en")
```
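A short, illustrative continuation (not part of the original card): translating a single Russian sentence with the tokenizer and model loaded above. The input sentence is an arbitrary example.
```python
batch = tokenizer(["Как дела?"], return_tensors="pt")  # illustrative Russian input
generated_ids = model.generate(**batch)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```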
|
Helsinki-NLP/opus-mt-ru-eo
|
Helsinki-NLP
|
marian
| 11 | 16 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ru', 'eo']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,986 |
### rus-epo
* source group: Russian
* target group: Esperanto
* OPUS readme: [rus-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-epo/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.epo | 24.2 | 0.436 |
### System Info:
- hf_name: rus-epo
- source_languages: rus
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'eo']
- src_constituents: {'rus'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.test.txt
- src_alpha3: rus
- tgt_alpha3: epo
- short_pair: ru-eo
- chrF2_score: 0.436
- bleu: 24.2
- brevity_penalty: 0.925
- ref_len: 77197.0
- src_name: Russian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: ru
- tgt_alpha2: eo
- prefer_old: False
- long_pair: rus-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-es
|
Helsinki-NLP
|
marian
| 10 | 522 |
transformers
| 1 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 850 |
### opus-mt-ru-es
* source languages: ru
* target languages: es
* OPUS readme: [ru-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ru-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/ru-es/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-es/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-es/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012.ru.es | 26.1 | 0.527 |
| newstest2013.ru.es | 28.2 | 0.538 |
| Tatoeba.ru.es | 49.4 | 0.675 |
|
Helsinki-NLP/opus-mt-ru-et
|
Helsinki-NLP
|
marian
| 11 | 14 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ru', 'et']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,000 |
### rus-est
* source group: Russian
* target group: Estonian
* OPUS readme: [rus-est](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-est/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): est
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-est/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-est/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-est/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.est | 57.5 | 0.749 |
### System Info:
- hf_name: rus-est
- source_languages: rus
- target_languages: est
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-est/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'et']
- src_constituents: {'rus'}
- tgt_constituents: {'est'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-est/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-est/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: est
- short_pair: ru-et
- chrF2_score: 0.7490000000000001
- bleu: 57.5
- brevity_penalty: 0.975
- ref_len: 3572.0
- src_name: Russian
- tgt_name: Estonian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: et
- prefer_old: False
- long_pair: rus-est
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-eu
|
Helsinki-NLP
|
marian
| 10 | 15 |
transformers
| 0 |
translation
| true | false | false |
apache-2.0
|
['ru', 'eu']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,992 |
### rus-eus
* source group: Russian
* target group: Basque
* OPUS readme: [rus-eus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-eus/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): eus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-eus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-eus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-eus/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.eus | 29.7 | 0.539 |
### System Info:
- hf_name: rus-eus
- source_languages: rus
- target_languages: eus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-eus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'eu']
- src_constituents: {'rus'}
- tgt_constituents: {'eus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-eus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-eus/opus-2020-06-16.test.txt
- src_alpha3: rus
- tgt_alpha3: eus
- short_pair: ru-eu
- chrF2_score: 0.539
- bleu: 29.7
- brevity_penalty: 0.9440000000000001
- ref_len: 2373.0
- src_name: Russian
- tgt_name: Basque
- train_date: 2020-06-16
- src_alpha2: ru
- tgt_alpha2: eu
- prefer_old: False
- long_pair: rus-eus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-fi
|
Helsinki-NLP
|
marian
| 10 | 119 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 |
### opus-mt-ru-fi
* source languages: ru
* target languages: fi
* OPUS readme: [ru-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ru-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-04-12.zip](https://object.pouta.csc.fi/OPUS-MT-models/ru-fi/opus-2020-04-12.zip)
* test set translations: [opus-2020-04-12.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-fi/opus-2020-04-12.test.txt)
* test set scores: [opus-2020-04-12.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-fi/opus-2020-04-12.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ru.fi | 40.1 | 0.646 |
|
Helsinki-NLP/opus-mt-ru-fr
|
Helsinki-NLP
|
marian
| 11 | 211 |
transformers
| 0 |
translation
| true | true | true |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 850 |
### opus-mt-ru-fr
* source languages: ru
* target languages: fr
* OPUS readme: [ru-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ru-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/ru-fr/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-fr/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-fr/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012.ru.fr | 18.3 | 0.497 |
| newstest2013.ru.fr | 21.6 | 0.516 |
| Tatoeba.ru.fr | 51.5 | 0.670 |
|
Helsinki-NLP/opus-mt-ru-he
|
Helsinki-NLP
|
marian
| 12 | 17 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ru', 'he']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,026 |
### ru-he
* source group: Russian
* target group: Hebrew
* OPUS readme: [rus-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-heb/README.md)
* model: transformer
* source language(s): rus
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-10-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.zip)
* test set translations: [opus-2020-10-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.test.txt)
* test set scores: [opus-2020-10-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.heb | 36.1 | 0.569 |
### System Info:
- hf_name: ru-he
- source_languages: rus
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'he']
- src_constituents: ('Russian', {'rus'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: rus-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.test.txt
- src_alpha3: rus
- tgt_alpha3: heb
- chrF2_score: 0.569
- bleu: 36.1
- brevity_penalty: 0.9990000000000001
- ref_len: 15028.0
- src_name: Russian
- tgt_name: Hebrew
- train_date: 2020-10-04 00:00:00
- src_alpha2: ru
- tgt_alpha2: he
- prefer_old: False
- short_pair: ru-he
- helsinki_git_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561
- transformers_git_sha: b0a907615aca0d728a9bc90f16caef0848f6a435
- port_machine: LM0-400-22516.local
- port_time: 2020-10-26-16:16
|
Helsinki-NLP/opus-mt-ru-hy
|
Helsinki-NLP
|
marian
| 11 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ru', 'hy']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,106 |
### rus-hye
* source group: Russian
* target group: Armenian
* OPUS readme: [rus-hye](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-hye/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): hye hye_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-hye/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-hye/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-hye/opus-2020-06-16.eval.txt)
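A minimal usage sketch (not part of the original card) showing how the sentence-initial `>>id<<` token can be supplied through the `transformers` Marian classes; the input sentence is illustrative and `>>hye<<` is taken from the target language IDs listed above (hye, hye_Latn).
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ru-hy"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target-language token (>>hye<<) to the Russian source sentence
src_text = [">>hye<< Я люблю читать книги."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```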
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.hye | 21.7 | 0.494 |
### System Info:
- hf_name: rus-hye
- source_languages: rus
- target_languages: hye
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-hye/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'hy']
- src_constituents: {'rus'}
- tgt_constituents: {'hye', 'hye_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-hye/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-hye/opus-2020-06-16.test.txt
- src_alpha3: rus
- tgt_alpha3: hye
- short_pair: ru-hy
- chrF2_score: 0.494
- bleu: 21.7
- brevity_penalty: 1.0
- ref_len: 1602.0
- src_name: Russian
- tgt_name: Armenian
- train_date: 2020-06-16
- src_alpha2: ru
- tgt_alpha2: hy
- prefer_old: False
- long_pair: rus-hye
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-lt
|
Helsinki-NLP
|
marian
| 11 | 18 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ru', 'lt']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,992 |
### rus-lit
* source group: Russian
* target group: Lithuanian
* OPUS readme: [rus-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lit/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): lit
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.lit | 43.5 | 0.675 |
### System Info:
- hf_name: rus-lit
- source_languages: rus
- target_languages: lit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'lt']
- src_constituents: {'rus'}
- tgt_constituents: {'lit'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: lit
- short_pair: ru-lt
- chrF2_score: 0.675
- bleu: 43.5
- brevity_penalty: 0.937
- ref_len: 14406.0
- src_name: Russian
- tgt_name: Lithuanian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: lt
- prefer_old: False
- long_pair: rus-lit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-lv
|
Helsinki-NLP
|
marian
| 11 | 13 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ru', 'lv']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,985 |
### rus-lav
* source group: Russian
* target group: Latvian
* OPUS readme: [rus-lav](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lav/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): lav
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lav/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lav/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lav/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.lav | 50.0 | 0.696 |
### System Info:
- hf_name: rus-lav
- source_languages: rus
- target_languages: lav
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lav/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'lv']
- src_constituents: {'rus'}
- tgt_constituents: {'lav'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lav/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lav/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: lav
- short_pair: ru-lv
- chrF2_score: 0.696
- bleu: 50.0
- brevity_penalty: 0.968
- ref_len: 1518.0
- src_name: Russian
- tgt_name: Latvian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: lv
- prefer_old: False
- long_pair: rus-lav
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-no
|
Helsinki-NLP
|
marian
| 11 | 25 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ru', 'no']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,101 |
### rus-nor
* source group: Russian
* target group: Norwegian
* OPUS readme: [rus-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-nor/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): nno nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.eval.txt)
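A minimal usage sketch (not part of the original card) showing how the sentence-initial `>>id<<` token can be supplied through the `transformers` Marian classes; the input sentence is illustrative and `>>nob<<` (Bokmål) is one of the two target IDs listed above (nno, nob).
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ru-no"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target-language token (>>nob<<) to the Russian source sentence
src_text = [">>nob<< Я люблю читать книги."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```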
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.nor | 20.3 | 0.418 |
### System Info:
- hf_name: rus-nor
- source_languages: rus
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'no']
- src_constituents: {'rus'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: nor
- short_pair: ru-no
- chrF2_score: 0.418
- bleu: 20.3
- brevity_penalty: 0.946
- ref_len: 11686.0
- src_name: Russian
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: no
- prefer_old: False
- long_pair: rus-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-sl
|
Helsinki-NLP
|
marian
| 11 | 13 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ru', 'sl']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,003 |
### rus-slv
* source group: Russian
* target group: Slovenian
* OPUS readme: [rus-slv](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-slv/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): slv
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.slv | 32.3 | 0.492 |
### System Info:
- hf_name: rus-slv
- source_languages: rus
- target_languages: slv
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-slv/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'sl']
- src_constituents: {'rus'}
- tgt_constituents: {'slv'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: slv
- short_pair: ru-sl
- chrF2_score: 0.49200000000000005
- bleu: 32.3
- brevity_penalty: 0.992
- ref_len: 2135.0
- src_name: Russian
- tgt_name: Slovenian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: sl
- prefer_old: False
- long_pair: rus-slv
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-sv
|
Helsinki-NLP
|
marian
| 11 | 35 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ru', 'sv']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,985 |
### rus-swe
* source group: Russian
* target group: Swedish
* OPUS readme: [rus-swe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-swe/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): swe
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-swe/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-swe/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-swe/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.swe | 51.9 | 0.677 |
### System Info:
- hf_name: rus-swe
- source_languages: rus
- target_languages: swe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-swe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'sv']
- src_constituents: {'rus'}
- tgt_constituents: {'swe'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-swe/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-swe/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: swe
- short_pair: ru-sv
- chrF2_score: 0.677
- bleu: 51.9
- brevity_penalty: 0.968
- ref_len: 8449.0
- src_name: Russian
- tgt_name: Swedish
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: sv
- prefer_old: False
- long_pair: rus-swe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-uk
|
Helsinki-NLP
|
marian
| 11 | 942 |
transformers
| 3 |
translation
| true | true | false |
apache-2.0
|
['ru', 'uk']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,002 |
### rus-ukr
* source group: Russian
* target group: Ukrainian
* OPUS readme: [rus-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-ukr/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.ukr | 64.0 | 0.793 |
### System Info:
- hf_name: rus-ukr
- source_languages: rus
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'uk']
- src_constituents: {'rus'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ukr/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: ukr
- short_pair: ru-uk
- chrF2_score: 0.7929999999999999
- bleu: 64.0
- brevity_penalty: 0.99
- ref_len: 60212.0
- src_name: Russian
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: uk
- prefer_old: False
- long_pair: rus-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-ru-vi
|
Helsinki-NLP
|
marian
| 11 | 16 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ru', 'vi']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,015 |
### rus-vie
* source group: Russian
* target group: Vietnamese
* OPUS readme: [rus-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-vie/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.vie | 16.9 | 0.346 |
### System Info:
- hf_name: rus-vie
- source_languages: rus
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'vi']
- src_constituents: {'rus'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-vie/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: vie
- short_pair: ru-vi
- chrF2_score: 0.34600000000000003
- bleu: 16.9
- brevity_penalty: 1.0
- ref_len: 2566.0
- src_name: Russian
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: vi
- prefer_old: False
- long_pair: rus-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-run-en
|
Helsinki-NLP
|
marian
| 10 | 10 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-run-en
* source languages: run
* target languages: en
* OPUS readme: [run-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/run-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/run-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.run.en | 42.7 | 0.583 |
|
Helsinki-NLP/opus-mt-run-es
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-run-es
* source languages: run
* target languages: es
* OPUS readme: [run-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/run-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/run-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.run.es | 26.9 | 0.452 |
|