| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths 0–18.3M) | metadata (stringlengths 2–1.07B) | id (stringlengths 5–122) | last_modified (null) | tags (listlengths 1–1.84k) | sha (null) | created_at (stringlengths 25) |
|---|---|---|---|---|---|---|---|---|
translation | transformers |
### opus-mt-lue-es
* source languages: lue
* target languages: es
* OPUS readme: [lue-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lue.es | 22.8 | 0.399 |
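The cards in this dump link only the original Marian weights; the converted checkpoints can also be loaded through `transformers`. A minimal sketch, assuming the `Helsinki-NLP/opus-mt-{src}-{tgt}` repo-id pattern visible in the ids below (the `translate` helper and its defaults are illustrative, not part of any card):

```python
from typing import List

def opus_mt_model_id(src: str, tgt: str) -> str:
    """Hub repo-id pattern used by these converted OPUS-MT checkpoints."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

def translate(texts: List[str], src: str = "lue", tgt: str = "es") -> List[str]:
    """Translate a batch of sentences with the converted Marian checkpoint.

    Downloads the model from the Hub, so a network connection is required.
    """
    # Imported lazily so the id helper above works without transformers installed.
    from transformers import MarianMTModel, MarianTokenizer

    name = opus_mt_model_id(src, tgt)
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)
```

The same helper covers every card in this dump, since all of them share the `opus-mt-{src}-{tgt}` naming scheme.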
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lue-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lue",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-lue-fi
* source languages: lue
* target languages: fi
* OPUS readme: [lue-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lue.fi | 22.1 | 0.427 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lue-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lue",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-lue-fr
* source languages: lue
* target languages: fr
* OPUS readme: [lue-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lue.fr | 24.1 | 0.407 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lue-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lue",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-lue-sv
* source languages: lue
* target languages: sv
* OPUS readme: [lue-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lue.sv | 23.7 | 0.412 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lue-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lue",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-lun-en
* source languages: lun
* target languages: en
* OPUS readme: [lun-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lun-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lun-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lun-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lun-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lun.en | 30.6 | 0.466 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lun-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lun",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-luo-en
* source languages: luo
* target languages: en
* OPUS readme: [luo-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/luo-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/luo-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/luo-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/luo-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.luo.en | 29.1 | 0.452 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-luo-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"luo",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-lus-en
* source languages: lus
* target languages: en
* OPUS readme: [lus-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lus.en | 37.0 | 0.534 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lus-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lus",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-lus-es
* source languages: lus
* target languages: es
* OPUS readme: [lus-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lus.es | 21.6 | 0.389 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lus-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lus",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-lus-fi
* source languages: lus
* target languages: fi
* OPUS readme: [lus-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lus.fi | 22.6 | 0.441 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lus-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lus",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-lus-fr
* source languages: lus
* target languages: fr
* OPUS readme: [lus-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lus.fr | 25.5 | 0.423 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lus-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lus",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-lus-sv
* source languages: lus
* target languages: sv
* OPUS readme: [lus-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lus.sv | 25.5 | 0.439 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lus-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lus",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-lv-en
* source languages: lv
* target languages: en
* OPUS readme: [lv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2017-enlv.lv.en | 29.9 | 0.587 |
| newstest2017-enlv.lv.en | 22.1 | 0.526 |
| Tatoeba.lv.en | 53.3 | 0.707 |
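The chr-F column in these benchmark tables is a character n-gram F-score. A simplified re-implementation of the metric's core idea, for intuition only — the official sacrebleu chrF differs in details such as whitespace treatment and effective n-gram order:

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Character n-gram counts, ignoring spaces."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Average character n-gram precision/recall over n = 1..max_n, combined
    as an F-beta score (beta = 2 weights recall higher, as in chrF2)."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        hyp_total, ref_total = sum(hyp.values()), sum(ref.values())
        if hyp_total == 0 or ref_total == 0:
            continue  # strings too short for this order
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / hyp_total)
        recalls.append(overlap / ref_total)
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)
```

An identical hypothesis and reference score 1.0; the 0.0–1.0 values here correspond to the 0.xxx chr-F figures reported in the tables.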
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lv-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lv",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-lv-es
* source languages: lv
* target languages: es
* OPUS readme: [lv-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lv.es | 21.7 | 0.433 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lv-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lv",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-lv-fi
* source languages: lv
* target languages: fi
* OPUS readme: [lv-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lv.fi | 20.6 | 0.469 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lv-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lv",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-lv-fr
* source languages: lv
* target languages: fr
* OPUS readme: [lv-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lv.fr | 22.1 | 0.437 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lv-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lv",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### lav-rus
* source group: Latvian
* target group: Russian
* OPUS readme: [lav-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lav-rus/README.md)
* model: transformer-align
* source language(s): lav
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.lav.rus | 53.3 | 0.702 |
### System Info:
- hf_name: lav-rus
- source_languages: lav
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lav-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['lv', 'ru']
- src_constituents: {'lav'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.test.txt
- src_alpha3: lav
- tgt_alpha3: rus
- short_pair: lv-ru
- chrF2_score: 0.702
- bleu: 53.3
- brevity_penalty: 0.984
- ref_len: 1541.0
- src_name: Latvian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: lv
- tgt_alpha2: ru
- prefer_old: False
- long_pair: lav-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["lv", "ru"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lv-ru | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lv",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-lv-sv
* source languages: lv
* target languages: sv
* OPUS readme: [lv-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lv.sv | 22.0 | 0.444 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-lv-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lv",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mfe-en
* source languages: mfe
* target languages: en
* OPUS readme: [mfe-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mfe-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/mfe-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfe-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfe-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mfe.en | 39.9 | 0.552 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mfe-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mfe",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mfe-es
* source languages: mfe
* target languages: es
* OPUS readme: [mfe-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mfe-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mfe-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfe-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfe-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mfe.es | 24.0 | 0.418 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mfe-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mfe",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mfs-es
* source languages: mfs
* target languages: es
* OPUS readme: [mfs-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mfs-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mfs-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfs-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfs-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mfs.es | 88.9 | 0.910 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mfs-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mfs",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mg-en
* source languages: mg
* target languages: en
* OPUS readme: [mg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mg-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/mg-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mg-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mg-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.mg.en | 27.6 | 0.522 |
| Tatoeba.mg.en | 50.2 | 0.607 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mg-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mg",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mg-es
* source languages: mg
* target languages: es
* OPUS readme: [mg-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mg-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mg-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mg-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mg-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.mg.es | 23.1 | 0.480 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mg-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mg",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mh-en
* source languages: mh
* target languages: en
* OPUS readme: [mh-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mh-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/mh-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mh.en | 36.5 | 0.505 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mh-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mh-es
* source languages: mh
* target languages: es
* OPUS readme: [mh-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mh-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mh-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mh.es | 23.6 | 0.407 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mh-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mh",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mh-fi
* source languages: mh
* target languages: fi
* OPUS readme: [mh-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mh-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/mh-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mh.fi | 23.3 | 0.442 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mh-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mh",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mk-en
* source languages: mk
* target languages: en
* OPUS readme: [mk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mk-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.mk.en | 59.8 | 0.720 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mk-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mk",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### mkd-spa
* source group: Macedonian
* target group: Spanish
* OPUS readme: [mkd-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkd-spa/README.md)
* model: transformer-align
* source language(s): mkd
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.mkd.spa | 56.5 | 0.717 |
### System Info:
- hf_name: mkd-spa
- source_languages: mkd
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkd-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['mk', 'es']
- src_constituents: {'mkd'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.test.txt
- src_alpha3: mkd
- tgt_alpha3: spa
- short_pair: mk-es
- chrF2_score: 0.717
- bleu: 56.5
- brevity_penalty: 0.997
- ref_len: 1121.0
- src_name: Macedonian
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: mk
- tgt_alpha2: es
- prefer_old: False
- long_pair: mkd-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["mk", "es"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mk-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mk",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mk-fi
* source languages: mk
* target languages: fi
* OPUS readme: [mk-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mk-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/mk-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mk.fi | 25.9 | 0.498 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mk-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mk",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mk-fr
* source languages: mk
* target languages: fr
* OPUS readme: [mk-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mk-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mk-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.mk.fr | 22.3 | 0.492 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mk-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mk",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### mkh-eng
* source group: Mon-Khmer languages
* target group: English
* OPUS readme: [mkh-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkh-eng/README.md)
* model: transformer
* source language(s): kha khm khm_Latn mnw vie vie_Hani
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kha-eng.kha.eng | 0.5 | 0.108 |
| Tatoeba-test.khm-eng.khm.eng | 8.5 | 0.206 |
| Tatoeba-test.mnw-eng.mnw.eng | 0.7 | 0.110 |
| Tatoeba-test.multi.eng | 24.5 | 0.407 |
| Tatoeba-test.vie-eng.vie.eng | 34.4 | 0.529 |
### System Info:
- hf_name: mkh-eng
- source_languages: mkh
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkh-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'km', 'mkh', 'en']
- src_constituents: {'vie_Hani', 'mnw', 'vie', 'kha', 'khm_Latn', 'khm'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.test.txt
- src_alpha3: mkh
- tgt_alpha3: eng
- short_pair: mkh-en
- chrF2_score: 0.407
- bleu: 24.5
- brevity_penalty: 1.0
- ref_len: 33985.0
- src_name: Mon-Khmer languages
- tgt_name: English
- train_date: 2020-07-27
- src_alpha2: mkh
- tgt_alpha2: en
- prefer_old: False
- long_pair: mkh-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["vi", "km", "mkh", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mkh-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"vi",
"km",
"mkh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ml-en
* source languages: ml
* target languages: en
* OPUS readme: [ml-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ml-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-04-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ml-en/opus-2020-04-20.zip)
* test set translations: [opus-2020-04-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ml-en/opus-2020-04-20.test.txt)
* test set scores: [opus-2020-04-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ml-en/opus-2020-04-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ml.en | 42.7 | 0.605 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ml-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ml",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mos-en
* source languages: mos
* target languages: en
* OPUS readme: [mos-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mos-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/mos-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mos-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mos-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mos.en | 26.1 | 0.408 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mos-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mos",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mr-en
* source languages: mr
* target languages: en
* OPUS readme: [mr-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mr-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/mr-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mr-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mr-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.mr.en | 38.2 | 0.515 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mr-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mr",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### msa-deu
* source group: Malay (macrolanguage)
* target group: German
* OPUS readme: [msa-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-deu/README.md)
* model: transformer-align
* source language(s): ind zsm_Latn
* target language(s): deu
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.msa.deu | 36.5 | 0.584 |
### System Info:
- hf_name: msa-deu
- source_languages: msa
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ms', 'de']
- src_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-deu/opus-2020-06-17.test.txt
- src_alpha3: msa
- tgt_alpha3: deu
- short_pair: ms-de
- chrF2_score: 0.584
- bleu: 36.5
- brevity_penalty: 0.966
- ref_len: 4198.0
- src_name: Malay (macrolanguage)
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: ms
- tgt_alpha2: de
- prefer_old: False
- long_pair: msa-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ms", "de"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ms-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ms",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### msa-fra
* source group: Malay (macrolanguage)
* target group: French
* OPUS readme: [msa-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-fra/README.md)
* model: transformer-align
* source language(s): ind zsm_Latn
* target language(s): fra
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-fra/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-fra/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-fra/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.msa.fra | 43.7 | 0.609 |
### System Info:
- hf_name: msa-fra
- source_languages: msa
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ms', 'fr']
- src_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-fra/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-fra/opus-2020-06-17.test.txt
- src_alpha3: msa
- tgt_alpha3: fra
- short_pair: ms-fr
- chrF2_score: 0.609
- bleu: 43.7
- brevity_penalty: 0.974
- ref_len: 7808.0
- src_name: Malay (macrolanguage)
- tgt_name: French
- train_date: 2020-06-17
- src_alpha2: ms
- tgt_alpha2: fr
- prefer_old: False
- long_pair: msa-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ms", "fr"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ms-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ms",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### msa-ita
* source group: Malay (macrolanguage)
* target group: Italian
* OPUS readme: [msa-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-ita/README.md)
* model: transformer-align
* source language(s): ind zsm_Latn
* target language(s): ita
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-ita/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-ita/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-ita/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.msa.ita | 37.8 | 0.613 |
### System Info:
- hf_name: msa-ita
- source_languages: msa
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ms', 'it']
- src_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-ita/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-ita/opus-2020-06-17.test.txt
- src_alpha3: msa
- tgt_alpha3: ita
- short_pair: ms-it
- chrF2_score: 0.613
- bleu: 37.8
- brevity_penalty: 0.995
- ref_len: 2758.0
- src_name: Malay (macrolanguage)
- tgt_name: Italian
- train_date: 2020-06-17
- src_alpha2: ms
- tgt_alpha2: it
- prefer_old: False
- long_pair: msa-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ms", "it"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ms-it | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ms",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### msa-msa
* source group: Malay (macrolanguage)
* target group: Malay (macrolanguage)
* OPUS readme: [msa-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-msa/README.md)
* model: transformer-align
* source language(s): ind max_Latn min zlm_Latn zsm_Latn
* target language(s): ind max_Latn min zlm_Latn zsm_Latn
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-msa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.msa.msa | 18.6 | 0.418 |
### System Info:
- hf_name: msa-msa
- source_languages: msa
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ms']
- src_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-msa/opus-2020-06-17.test.txt
- src_alpha3: msa
- tgt_alpha3: msa
- short_pair: ms-ms
- chrF2_score: 0.418
- bleu: 18.6
- brevity_penalty: 1.0
- ref_len: 6029.0
- src_name: Malay (macrolanguage)
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: ms
- tgt_alpha2: ms
- prefer_old: False
- long_pair: msa-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ms"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ms-ms | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ms",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mt-en
* source languages: mt
* target languages: en
* OPUS readme: [mt-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mt-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mt-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mt.en | 49.0 | 0.655 |
| Tatoeba.mt.en | 53.3 | 0.685 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mt-en | null | [
"transformers",
"pytorch",
"tf",
"jax",
"marian",
"text2text-generation",
"translation",
"mt",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mt-es
* source languages: mt
* target languages: es
* OPUS readme: [mt-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mt-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mt-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mt.es | 27.1 | 0.471 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mt-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mt",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mt-fi
* source languages: mt
* target languages: fi
* OPUS readme: [mt-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mt-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mt-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mt.fi | 24.9 | 0.509 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mt-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mt",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mt-fr
* source languages: mt
* target languages: fr
* OPUS readme: [mt-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mt-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mt-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mt.fr | 27.2 | 0.475 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mt-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mt",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-mt-sv
* source languages: mt
* target languages: sv
* OPUS readme: [mt-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mt-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mt-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mt.sv | 30.4 | 0.514 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mt-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mt",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### mul-eng
* source group: Multiple languages
* target group: English
* OPUS readme: [mul-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mul-eng/README.md)
* model: transformer
* source language(s): abk acm ady afb afh_Latn afr akl_Latn aln amh ang_Latn apc ara arg arq ary arz asm ast avk_Latn awa aze_Latn bak bam_Latn bel bel_Latn ben bho bod bos_Latn bre brx brx_Latn bul bul_Latn cat ceb ces cha che chr chv cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant cor cos crh crh_Latn csb_Latn cym dan deu dsb dtp dws_Latn egl ell enm_Latn epo est eus ewe ext fao fij fin fkv_Latn fra frm_Latn frr fry fuc fuv gan gcf_Latn gil gla gle glg glv gom gos got_Goth grc_Grek grn gsw guj hat hau_Latn haw heb hif_Latn hil hin hnj_Latn hoc hoc_Latn hrv hsb hun hye iba ibo ido ido_Latn ike_Latn ile_Latn ilo ina_Latn ind isl ita izh jav jav_Java jbo jbo_Cyrl jbo_Latn jdt_Cyrl jpn kab kal kan kat kaz_Cyrl kaz_Latn kek_Latn kha khm khm_Latn kin kir_Cyrl kjh kpv krl ksh kum kur_Arab kur_Latn lad lad_Latn lao lat_Latn lav ldn_Latn lfn_Cyrl lfn_Latn lij lin lit liv_Latn lkt lld_Latn lmo ltg ltz lug lzh lzh_Hans mad mah mai mal mar max_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob_Hebr nog non_Latn nov_Latn npi nya oci ori orv_Cyrl oss ota_Arab ota_Latn pag pan_Guru pap pau pdc pes pes_Latn pes_Thaa pms pnb pol por ppl_Latn prg_Latn pus quc qya qya_Latn rap rif_Latn roh rom ron rue run rus sag sah san_Deva scn sco sgs shs_Latn shy_Latn sin sjn_Latn slv sma sme smo sna snd_Arab som spa sqi srp_Cyrl srp_Latn stq sun swe swg swh tah tam tat tat_Arab tat_Latn tel tet tgk_Cyrl tha tir tlh_Latn tly_Latn tmw_Latn toi_Latn ton tpw_Latn tso tuk tuk_Latn tur tvl tyv tzl tzl_Latn udm uig_Arab uig_Cyrl ukr umb urd uzb_Cyrl uzb_Latn vec vie vie_Hani vol_Latn vro war wln wol wuu xal xho yid yor yue yue_Hans yue_Hant zho zho_Hans zho_Hant zlm_Latn zsm_Latn zul zza
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-hineng.hin.eng | 8.5 | 0.341 |
| newsdev2015-enfi-fineng.fin.eng | 16.8 | 0.441 |
| newsdev2016-enro-roneng.ron.eng | 31.3 | 0.580 |
| newsdev2016-entr-tureng.tur.eng | 16.4 | 0.422 |
| newsdev2017-enlv-laveng.lav.eng | 21.3 | 0.502 |
| newsdev2017-enzh-zhoeng.zho.eng | 12.7 | 0.409 |
| newsdev2018-enet-esteng.est.eng | 19.8 | 0.467 |
| newsdev2019-engu-gujeng.guj.eng | 13.3 | 0.385 |
| newsdev2019-enlt-liteng.lit.eng | 19.9 | 0.482 |
| newsdiscussdev2015-enfr-fraeng.fra.eng | 26.7 | 0.520 |
| newsdiscusstest2015-enfr-fraeng.fra.eng | 29.8 | 0.541 |
| newssyscomb2009-ceseng.ces.eng | 21.1 | 0.487 |
| newssyscomb2009-deueng.deu.eng | 22.6 | 0.499 |
| newssyscomb2009-fraeng.fra.eng | 25.8 | 0.530 |
| newssyscomb2009-huneng.hun.eng | 15.1 | 0.430 |
| newssyscomb2009-itaeng.ita.eng | 29.4 | 0.555 |
| newssyscomb2009-spaeng.spa.eng | 26.1 | 0.534 |
| news-test2008-deueng.deu.eng | 21.6 | 0.491 |
| news-test2008-fraeng.fra.eng | 22.3 | 0.502 |
| news-test2008-spaeng.spa.eng | 23.6 | 0.514 |
| newstest2009-ceseng.ces.eng | 19.8 | 0.480 |
| newstest2009-deueng.deu.eng | 20.9 | 0.487 |
| newstest2009-fraeng.fra.eng | 25.0 | 0.523 |
| newstest2009-huneng.hun.eng | 14.7 | 0.425 |
| newstest2009-itaeng.ita.eng | 27.6 | 0.542 |
| newstest2009-spaeng.spa.eng | 25.7 | 0.530 |
| newstest2010-ceseng.ces.eng | 20.6 | 0.491 |
| newstest2010-deueng.deu.eng | 23.4 | 0.517 |
| newstest2010-fraeng.fra.eng | 26.1 | 0.537 |
| newstest2010-spaeng.spa.eng | 29.1 | 0.561 |
| newstest2011-ceseng.ces.eng | 21.0 | 0.489 |
| newstest2011-deueng.deu.eng | 21.3 | 0.494 |
| newstest2011-fraeng.fra.eng | 26.8 | 0.546 |
| newstest2011-spaeng.spa.eng | 28.2 | 0.549 |
| newstest2012-ceseng.ces.eng | 20.5 | 0.485 |
| newstest2012-deueng.deu.eng | 22.3 | 0.503 |
| newstest2012-fraeng.fra.eng | 27.5 | 0.545 |
| newstest2012-ruseng.rus.eng | 26.6 | 0.532 |
| newstest2012-spaeng.spa.eng | 30.3 | 0.567 |
| newstest2013-ceseng.ces.eng | 22.5 | 0.498 |
| newstest2013-deueng.deu.eng | 25.0 | 0.518 |
| newstest2013-fraeng.fra.eng | 27.4 | 0.537 |
| newstest2013-ruseng.rus.eng | 21.6 | 0.484 |
| newstest2013-spaeng.spa.eng | 28.4 | 0.555 |
| newstest2014-csen-ceseng.ces.eng | 24.0 | 0.517 |
| newstest2014-deen-deueng.deu.eng | 24.1 | 0.511 |
| newstest2014-fren-fraeng.fra.eng | 29.1 | 0.563 |
| newstest2014-hien-hineng.hin.eng | 14.0 | 0.414 |
| newstest2014-ruen-ruseng.rus.eng | 24.0 | 0.521 |
| newstest2015-encs-ceseng.ces.eng | 21.9 | 0.481 |
| newstest2015-ende-deueng.deu.eng | 25.5 | 0.519 |
| newstest2015-enfi-fineng.fin.eng | 17.4 | 0.441 |
| newstest2015-enru-ruseng.rus.eng | 22.4 | 0.494 |
| newstest2016-encs-ceseng.ces.eng | 23.0 | 0.500 |
| newstest2016-ende-deueng.deu.eng | 30.1 | 0.560 |
| newstest2016-enfi-fineng.fin.eng | 18.5 | 0.461 |
| newstest2016-enro-roneng.ron.eng | 29.6 | 0.562 |
| newstest2016-enru-ruseng.rus.eng | 22.0 | 0.495 |
| newstest2016-entr-tureng.tur.eng | 14.8 | 0.415 |
| newstest2017-encs-ceseng.ces.eng | 20.2 | 0.475 |
| newstest2017-ende-deueng.deu.eng | 26.0 | 0.523 |
| newstest2017-enfi-fineng.fin.eng | 19.6 | 0.465 |
| newstest2017-enlv-laveng.lav.eng | 16.2 | 0.454 |
| newstest2017-enru-ruseng.rus.eng | 24.2 | 0.510 |
| newstest2017-entr-tureng.tur.eng | 15.0 | 0.412 |
| newstest2017-enzh-zhoeng.zho.eng | 13.7 | 0.412 |
| newstest2018-encs-ceseng.ces.eng | 21.2 | 0.486 |
| newstest2018-ende-deueng.deu.eng | 31.5 | 0.564 |
| newstest2018-enet-esteng.est.eng | 19.7 | 0.473 |
| newstest2018-enfi-fineng.fin.eng | 15.1 | 0.418 |
| newstest2018-enru-ruseng.rus.eng | 21.3 | 0.490 |
| newstest2018-entr-tureng.tur.eng | 15.4 | 0.421 |
| newstest2018-enzh-zhoeng.zho.eng | 12.9 | 0.408 |
| newstest2019-deen-deueng.deu.eng | 27.0 | 0.529 |
| newstest2019-fien-fineng.fin.eng | 17.2 | 0.438 |
| newstest2019-guen-gujeng.guj.eng | 9.0 | 0.342 |
| newstest2019-lten-liteng.lit.eng | 22.6 | 0.512 |
| newstest2019-ruen-ruseng.rus.eng | 24.1 | 0.503 |
| newstest2019-zhen-zhoeng.zho.eng | 13.9 | 0.427 |
| newstestB2016-enfi-fineng.fin.eng | 15.2 | 0.428 |
| newstestB2017-enfi-fineng.fin.eng | 16.8 | 0.442 |
| newstestB2017-fien-fineng.fin.eng | 16.8 | 0.442 |
| Tatoeba-test.abk-eng.abk.eng | 2.4 | 0.190 |
| Tatoeba-test.ady-eng.ady.eng | 1.1 | 0.111 |
| Tatoeba-test.afh-eng.afh.eng | 1.7 | 0.108 |
| Tatoeba-test.afr-eng.afr.eng | 53.0 | 0.672 |
| Tatoeba-test.akl-eng.akl.eng | 5.9 | 0.239 |
| Tatoeba-test.amh-eng.amh.eng | 25.6 | 0.464 |
| Tatoeba-test.ang-eng.ang.eng | 11.7 | 0.289 |
| Tatoeba-test.ara-eng.ara.eng | 26.4 | 0.443 |
| Tatoeba-test.arg-eng.arg.eng | 35.9 | 0.473 |
| Tatoeba-test.asm-eng.asm.eng | 19.8 | 0.365 |
| Tatoeba-test.ast-eng.ast.eng | 31.8 | 0.467 |
| Tatoeba-test.avk-eng.avk.eng | 0.4 | 0.119 |
| Tatoeba-test.awa-eng.awa.eng | 9.7 | 0.271 |
| Tatoeba-test.aze-eng.aze.eng | 37.0 | 0.542 |
| Tatoeba-test.bak-eng.bak.eng | 13.9 | 0.395 |
| Tatoeba-test.bam-eng.bam.eng | 2.2 | 0.094 |
| Tatoeba-test.bel-eng.bel.eng | 36.8 | 0.549 |
| Tatoeba-test.ben-eng.ben.eng | 39.7 | 0.546 |
| Tatoeba-test.bho-eng.bho.eng | 33.6 | 0.540 |
| Tatoeba-test.bod-eng.bod.eng | 1.1 | 0.147 |
| Tatoeba-test.bre-eng.bre.eng | 14.2 | 0.303 |
| Tatoeba-test.brx-eng.brx.eng | 1.7 | 0.130 |
| Tatoeba-test.bul-eng.bul.eng | 46.0 | 0.621 |
| Tatoeba-test.cat-eng.cat.eng | 46.6 | 0.636 |
| Tatoeba-test.ceb-eng.ceb.eng | 17.4 | 0.347 |
| Tatoeba-test.ces-eng.ces.eng | 41.3 | 0.586 |
| Tatoeba-test.cha-eng.cha.eng | 7.9 | 0.232 |
| Tatoeba-test.che-eng.che.eng | 0.7 | 0.104 |
| Tatoeba-test.chm-eng.chm.eng | 7.3 | 0.261 |
| Tatoeba-test.chr-eng.chr.eng | 8.8 | 0.244 |
| Tatoeba-test.chv-eng.chv.eng | 11.0 | 0.319 |
| Tatoeba-test.cor-eng.cor.eng | 5.4 | 0.204 |
| Tatoeba-test.cos-eng.cos.eng | 58.2 | 0.643 |
| Tatoeba-test.crh-eng.crh.eng | 26.3 | 0.399 |
| Tatoeba-test.csb-eng.csb.eng | 18.8 | 0.389 |
| Tatoeba-test.cym-eng.cym.eng | 23.4 | 0.407 |
| Tatoeba-test.dan-eng.dan.eng | 50.5 | 0.659 |
| Tatoeba-test.deu-eng.deu.eng | 39.6 | 0.579 |
| Tatoeba-test.dsb-eng.dsb.eng | 24.3 | 0.449 |
| Tatoeba-test.dtp-eng.dtp.eng | 1.0 | 0.149 |
| Tatoeba-test.dws-eng.dws.eng | 1.6 | 0.061 |
| Tatoeba-test.egl-eng.egl.eng | 7.6 | 0.236 |
| Tatoeba-test.ell-eng.ell.eng | 55.4 | 0.682 |
| Tatoeba-test.enm-eng.enm.eng | 28.0 | 0.489 |
| Tatoeba-test.epo-eng.epo.eng | 41.8 | 0.591 |
| Tatoeba-test.est-eng.est.eng | 41.5 | 0.581 |
| Tatoeba-test.eus-eng.eus.eng | 37.8 | 0.557 |
| Tatoeba-test.ewe-eng.ewe.eng | 10.7 | 0.262 |
| Tatoeba-test.ext-eng.ext.eng | 25.5 | 0.405 |
| Tatoeba-test.fao-eng.fao.eng | 28.7 | 0.469 |
| Tatoeba-test.fas-eng.fas.eng | 7.5 | 0.281 |
| Tatoeba-test.fij-eng.fij.eng | 24.2 | 0.320 |
| Tatoeba-test.fin-eng.fin.eng | 35.8 | 0.534 |
| Tatoeba-test.fkv-eng.fkv.eng | 15.5 | 0.434 |
| Tatoeba-test.fra-eng.fra.eng | 45.1 | 0.618 |
| Tatoeba-test.frm-eng.frm.eng | 29.6 | 0.427 |
| Tatoeba-test.frr-eng.frr.eng | 5.5 | 0.138 |
| Tatoeba-test.fry-eng.fry.eng | 25.3 | 0.455 |
| Tatoeba-test.ful-eng.ful.eng | 1.1 | 0.127 |
| Tatoeba-test.gcf-eng.gcf.eng | 16.0 | 0.315 |
| Tatoeba-test.gil-eng.gil.eng | 46.7 | 0.587 |
| Tatoeba-test.gla-eng.gla.eng | 20.2 | 0.358 |
| Tatoeba-test.gle-eng.gle.eng | 43.9 | 0.592 |
| Tatoeba-test.glg-eng.glg.eng | 45.1 | 0.623 |
| Tatoeba-test.glv-eng.glv.eng | 3.3 | 0.119 |
| Tatoeba-test.gos-eng.gos.eng | 20.1 | 0.364 |
| Tatoeba-test.got-eng.got.eng | 0.1 | 0.041 |
| Tatoeba-test.grc-eng.grc.eng | 2.1 | 0.137 |
| Tatoeba-test.grn-eng.grn.eng | 1.7 | 0.152 |
| Tatoeba-test.gsw-eng.gsw.eng | 18.2 | 0.334 |
| Tatoeba-test.guj-eng.guj.eng | 21.7 | 0.373 |
| Tatoeba-test.hat-eng.hat.eng | 34.5 | 0.502 |
| Tatoeba-test.hau-eng.hau.eng | 10.5 | 0.295 |
| Tatoeba-test.haw-eng.haw.eng | 2.8 | 0.160 |
| Tatoeba-test.hbs-eng.hbs.eng | 46.7 | 0.623 |
| Tatoeba-test.heb-eng.heb.eng | 33.0 | 0.492 |
| Tatoeba-test.hif-eng.hif.eng | 17.0 | 0.391 |
| Tatoeba-test.hil-eng.hil.eng | 16.0 | 0.339 |
| Tatoeba-test.hin-eng.hin.eng | 36.4 | 0.533 |
| Tatoeba-test.hmn-eng.hmn.eng | 0.4 | 0.131 |
| Tatoeba-test.hoc-eng.hoc.eng | 0.7 | 0.132 |
| Tatoeba-test.hsb-eng.hsb.eng | 41.9 | 0.551 |
| Tatoeba-test.hun-eng.hun.eng | 33.2 | 0.510 |
| Tatoeba-test.hye-eng.hye.eng | 32.2 | 0.487 |
| Tatoeba-test.iba-eng.iba.eng | 9.4 | 0.278 |
| Tatoeba-test.ibo-eng.ibo.eng | 5.8 | 0.200 |
| Tatoeba-test.ido-eng.ido.eng | 31.7 | 0.503 |
| Tatoeba-test.iku-eng.iku.eng | 9.1 | 0.164 |
| Tatoeba-test.ile-eng.ile.eng | 42.2 | 0.595 |
| Tatoeba-test.ilo-eng.ilo.eng | 29.7 | 0.485 |
| Tatoeba-test.ina-eng.ina.eng | 42.1 | 0.607 |
| Tatoeba-test.isl-eng.isl.eng | 35.7 | 0.527 |
| Tatoeba-test.ita-eng.ita.eng | 54.8 | 0.686 |
| Tatoeba-test.izh-eng.izh.eng | 28.3 | 0.526 |
| Tatoeba-test.jav-eng.jav.eng | 10.0 | 0.282 |
| Tatoeba-test.jbo-eng.jbo.eng | 0.3 | 0.115 |
| Tatoeba-test.jdt-eng.jdt.eng | 5.3 | 0.140 |
| Tatoeba-test.jpn-eng.jpn.eng | 18.8 | 0.387 |
| Tatoeba-test.kab-eng.kab.eng | 3.9 | 0.205 |
| Tatoeba-test.kal-eng.kal.eng | 16.9 | 0.329 |
| Tatoeba-test.kan-eng.kan.eng | 16.2 | 0.374 |
| Tatoeba-test.kat-eng.kat.eng | 31.1 | 0.493 |
| Tatoeba-test.kaz-eng.kaz.eng | 24.5 | 0.437 |
| Tatoeba-test.kek-eng.kek.eng | 7.4 | 0.192 |
| Tatoeba-test.kha-eng.kha.eng | 1.0 | 0.154 |
| Tatoeba-test.khm-eng.khm.eng | 12.2 | 0.290 |
| Tatoeba-test.kin-eng.kin.eng | 22.5 | 0.355 |
| Tatoeba-test.kir-eng.kir.eng | 27.2 | 0.470 |
| Tatoeba-test.kjh-eng.kjh.eng | 2.1 | 0.129 |
| Tatoeba-test.kok-eng.kok.eng | 4.5 | 0.259 |
| Tatoeba-test.kom-eng.kom.eng | 1.4 | 0.099 |
| Tatoeba-test.krl-eng.krl.eng | 26.1 | 0.387 |
| Tatoeba-test.ksh-eng.ksh.eng | 5.5 | 0.256 |
| Tatoeba-test.kum-eng.kum.eng | 9.3 | 0.288 |
| Tatoeba-test.kur-eng.kur.eng | 9.6 | 0.208 |
| Tatoeba-test.lad-eng.lad.eng | 30.1 | 0.475 |
| Tatoeba-test.lah-eng.lah.eng | 11.6 | 0.284 |
| Tatoeba-test.lao-eng.lao.eng | 4.5 | 0.214 |
| Tatoeba-test.lat-eng.lat.eng | 21.5 | 0.402 |
| Tatoeba-test.lav-eng.lav.eng | 40.2 | 0.577 |
| Tatoeba-test.ldn-eng.ldn.eng | 0.8 | 0.115 |
| Tatoeba-test.lfn-eng.lfn.eng | 23.0 | 0.433 |
| Tatoeba-test.lij-eng.lij.eng | 9.3 | 0.287 |
| Tatoeba-test.lin-eng.lin.eng | 2.4 | 0.196 |
| Tatoeba-test.lit-eng.lit.eng | 44.0 | 0.597 |
| Tatoeba-test.liv-eng.liv.eng | 1.6 | 0.115 |
| Tatoeba-test.lkt-eng.lkt.eng | 2.0 | 0.113 |
| Tatoeba-test.lld-eng.lld.eng | 18.3 | 0.312 |
| Tatoeba-test.lmo-eng.lmo.eng | 25.4 | 0.395 |
| Tatoeba-test.ltz-eng.ltz.eng | 35.9 | 0.509 |
| Tatoeba-test.lug-eng.lug.eng | 5.1 | 0.357 |
| Tatoeba-test.mad-eng.mad.eng | 2.8 | 0.123 |
| Tatoeba-test.mah-eng.mah.eng | 5.7 | 0.175 |
| Tatoeba-test.mai-eng.mai.eng | 56.3 | 0.703 |
| Tatoeba-test.mal-eng.mal.eng | 37.5 | 0.534 |
| Tatoeba-test.mar-eng.mar.eng | 22.8 | 0.470 |
| Tatoeba-test.mdf-eng.mdf.eng | 2.0 | 0.110 |
| Tatoeba-test.mfe-eng.mfe.eng | 59.2 | 0.764 |
| Tatoeba-test.mic-eng.mic.eng | 9.0 | 0.199 |
| Tatoeba-test.mkd-eng.mkd.eng | 44.3 | 0.593 |
| Tatoeba-test.mlg-eng.mlg.eng | 31.9 | 0.424 |
| Tatoeba-test.mlt-eng.mlt.eng | 38.6 | 0.540 |
| Tatoeba-test.mnw-eng.mnw.eng | 2.5 | 0.101 |
| Tatoeba-test.moh-eng.moh.eng | 0.3 | 0.110 |
| Tatoeba-test.mon-eng.mon.eng | 13.5 | 0.334 |
| Tatoeba-test.mri-eng.mri.eng | 8.5 | 0.260 |
| Tatoeba-test.msa-eng.msa.eng | 33.9 | 0.520 |
| Tatoeba-test.multi.eng | 34.7 | 0.518 |
| Tatoeba-test.mwl-eng.mwl.eng | 37.4 | 0.630 |
| Tatoeba-test.mya-eng.mya.eng | 15.5 | 0.335 |
| Tatoeba-test.myv-eng.myv.eng | 0.8 | 0.118 |
| Tatoeba-test.nau-eng.nau.eng | 9.0 | 0.186 |
| Tatoeba-test.nav-eng.nav.eng | 1.3 | 0.144 |
| Tatoeba-test.nds-eng.nds.eng | 30.7 | 0.495 |
| Tatoeba-test.nep-eng.nep.eng | 3.5 | 0.168 |
| Tatoeba-test.niu-eng.niu.eng | 42.7 | 0.492 |
| Tatoeba-test.nld-eng.nld.eng | 47.9 | 0.640 |
| Tatoeba-test.nog-eng.nog.eng | 12.7 | 0.284 |
| Tatoeba-test.non-eng.non.eng | 43.8 | 0.586 |
| Tatoeba-test.nor-eng.nor.eng | 45.5 | 0.619 |
| Tatoeba-test.nov-eng.nov.eng | 26.9 | 0.472 |
| Tatoeba-test.nya-eng.nya.eng | 33.2 | 0.456 |
| Tatoeba-test.oci-eng.oci.eng | 17.9 | 0.370 |
| Tatoeba-test.ori-eng.ori.eng | 14.6 | 0.305 |
| Tatoeba-test.orv-eng.orv.eng | 11.0 | 0.283 |
| Tatoeba-test.oss-eng.oss.eng | 4.1 | 0.211 |
| Tatoeba-test.ota-eng.ota.eng | 4.1 | 0.216 |
| Tatoeba-test.pag-eng.pag.eng | 24.3 | 0.468 |
| Tatoeba-test.pan-eng.pan.eng | 16.4 | 0.358 |
| Tatoeba-test.pap-eng.pap.eng | 53.2 | 0.628 |
| Tatoeba-test.pau-eng.pau.eng | 3.7 | 0.173 |
| Tatoeba-test.pdc-eng.pdc.eng | 45.3 | 0.569 |
| Tatoeba-test.pms-eng.pms.eng | 14.0 | 0.345 |
| Tatoeba-test.pol-eng.pol.eng | 41.7 | 0.588 |
| Tatoeba-test.por-eng.por.eng | 51.4 | 0.669 |
| Tatoeba-test.ppl-eng.ppl.eng | 0.4 | 0.134 |
| Tatoeba-test.prg-eng.prg.eng | 4.1 | 0.198 |
| Tatoeba-test.pus-eng.pus.eng | 6.7 | 0.233 |
| Tatoeba-test.quc-eng.quc.eng | 3.5 | 0.091 |
| Tatoeba-test.qya-eng.qya.eng | 0.2 | 0.090 |
| Tatoeba-test.rap-eng.rap.eng | 17.5 | 0.230 |
| Tatoeba-test.rif-eng.rif.eng | 4.2 | 0.164 |
| Tatoeba-test.roh-eng.roh.eng | 24.6 | 0.464 |
| Tatoeba-test.rom-eng.rom.eng | 3.4 | 0.212 |
| Tatoeba-test.ron-eng.ron.eng | 45.2 | 0.620 |
| Tatoeba-test.rue-eng.rue.eng | 21.4 | 0.390 |
| Tatoeba-test.run-eng.run.eng | 24.5 | 0.392 |
| Tatoeba-test.rus-eng.rus.eng | 42.7 | 0.591 |
| Tatoeba-test.sag-eng.sag.eng | 3.4 | 0.187 |
| Tatoeba-test.sah-eng.sah.eng | 5.0 | 0.177 |
| Tatoeba-test.san-eng.san.eng | 2.0 | 0.172 |
| Tatoeba-test.scn-eng.scn.eng | 35.8 | 0.410 |
| Tatoeba-test.sco-eng.sco.eng | 34.6 | 0.520 |
| Tatoeba-test.sgs-eng.sgs.eng | 21.8 | 0.299 |
| Tatoeba-test.shs-eng.shs.eng | 1.8 | 0.122 |
| Tatoeba-test.shy-eng.shy.eng | 1.4 | 0.104 |
| Tatoeba-test.sin-eng.sin.eng | 20.6 | 0.429 |
| Tatoeba-test.sjn-eng.sjn.eng | 1.2 | 0.095 |
| Tatoeba-test.slv-eng.slv.eng | 37.0 | 0.545 |
| Tatoeba-test.sma-eng.sma.eng | 4.4 | 0.147 |
| Tatoeba-test.sme-eng.sme.eng | 8.9 | 0.229 |
| Tatoeba-test.smo-eng.smo.eng | 37.7 | 0.483 |
| Tatoeba-test.sna-eng.sna.eng | 18.0 | 0.359 |
| Tatoeba-test.snd-eng.snd.eng | 28.1 | 0.444 |
| Tatoeba-test.som-eng.som.eng | 23.6 | 0.472 |
| Tatoeba-test.spa-eng.spa.eng | 47.9 | 0.645 |
| Tatoeba-test.sqi-eng.sqi.eng | 46.9 | 0.634 |
| Tatoeba-test.stq-eng.stq.eng | 8.1 | 0.379 |
| Tatoeba-test.sun-eng.sun.eng | 23.8 | 0.369 |
| Tatoeba-test.swa-eng.swa.eng | 6.5 | 0.193 |
| Tatoeba-test.swe-eng.swe.eng | 51.4 | 0.655 |
| Tatoeba-test.swg-eng.swg.eng | 18.5 | 0.342 |
| Tatoeba-test.tah-eng.tah.eng | 25.6 | 0.249 |
| Tatoeba-test.tam-eng.tam.eng | 29.1 | 0.437 |
| Tatoeba-test.tat-eng.tat.eng | 12.9 | 0.327 |
| Tatoeba-test.tel-eng.tel.eng | 21.2 | 0.386 |
| Tatoeba-test.tet-eng.tet.eng | 9.2 | 0.215 |
| Tatoeba-test.tgk-eng.tgk.eng | 12.7 | 0.374 |
| Tatoeba-test.tha-eng.tha.eng | 36.3 | 0.531 |
| Tatoeba-test.tir-eng.tir.eng | 9.1 | 0.267 |
| Tatoeba-test.tlh-eng.tlh.eng | 0.2 | 0.084 |
| Tatoeba-test.tly-eng.tly.eng | 2.1 | 0.128 |
| Tatoeba-test.toi-eng.toi.eng | 5.3 | 0.150 |
| Tatoeba-test.ton-eng.ton.eng | 39.5 | 0.473 |
| Tatoeba-test.tpw-eng.tpw.eng | 1.5 | 0.160 |
| Tatoeba-test.tso-eng.tso.eng | 44.7 | 0.526 |
| Tatoeba-test.tuk-eng.tuk.eng | 18.6 | 0.401 |
| Tatoeba-test.tur-eng.tur.eng | 40.5 | 0.573 |
| Tatoeba-test.tvl-eng.tvl.eng | 55.0 | 0.593 |
| Tatoeba-test.tyv-eng.tyv.eng | 19.1 | 0.477 |
| Tatoeba-test.tzl-eng.tzl.eng | 17.7 | 0.333 |
| Tatoeba-test.udm-eng.udm.eng | 3.4 | 0.217 |
| Tatoeba-test.uig-eng.uig.eng | 11.4 | 0.289 |
| Tatoeba-test.ukr-eng.ukr.eng | 43.1 | 0.595 |
| Tatoeba-test.umb-eng.umb.eng | 9.2 | 0.260 |
| Tatoeba-test.urd-eng.urd.eng | 23.2 | 0.426 |
| Tatoeba-test.uzb-eng.uzb.eng | 19.0 | 0.342 |
| Tatoeba-test.vec-eng.vec.eng | 41.1 | 0.409 |
| Tatoeba-test.vie-eng.vie.eng | 30.6 | 0.481 |
| Tatoeba-test.vol-eng.vol.eng | 1.8 | 0.143 |
| Tatoeba-test.war-eng.war.eng | 15.9 | 0.352 |
| Tatoeba-test.wln-eng.wln.eng | 12.6 | 0.291 |
| Tatoeba-test.wol-eng.wol.eng | 4.4 | 0.138 |
| Tatoeba-test.xal-eng.xal.eng | 0.9 | 0.153 |
| Tatoeba-test.xho-eng.xho.eng | 35.4 | 0.513 |
| Tatoeba-test.yid-eng.yid.eng | 19.4 | 0.387 |
| Tatoeba-test.yor-eng.yor.eng | 19.3 | 0.327 |
| Tatoeba-test.zho-eng.zho.eng | 25.8 | 0.448 |
| Tatoeba-test.zul-eng.zul.eng | 40.9 | 0.567 |
| Tatoeba-test.zza-eng.zza.eng | 1.6 | 0.125 |
### System Info:
- hf_name: mul-eng
- source_languages: mul
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mul-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul', 'en']
- src_constituents: {'sjn_Latn', 'cat', 'nan', 'spa', 'ile_Latn', 'pap', 'mwl', 'uzb_Latn', 'mww', 'hil', 'lij', 'avk_Latn', 'lad_Latn', 'lat_Latn', 'bos_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi_Latn', 'awa', 'swg', 'zsm_Latn', 'zho_Hant', 'gcf_Latn', 'uzb_Cyrl', 'isl', 'lfn_Latn', 'shs_Latn', 'nov_Latn', 'bho', 'ltz', 'lzh', 'kur_Latn', 'sun', 'arg', 'pes_Thaa', 'sqi', 'uig_Arab', 'csb_Latn', 'fra', 'hat', 'liv_Latn', 'non_Latn', 'sco', 'cmn_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul_Latn', 'amh', 'lfn_Cyrl', 'eus', 'fkv_Latn', 'tur', 'pus', 'afr', 'brx_Latn', 'nya', 'acm', 'ota_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho_Hans', 'tmw_Latn', 'kjh', 'ota_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh_Hans', 'ara', 'tly_Latn', 'lug', 'brx', 'bul', 'bel', 'vol_Latn', 'kat', 'gan', 'got_Goth', 'vro', 'ext', 'afh_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif_Latn', 'cjy_Hant', 'bre', 'ceb', 'mah', 'nob_Hebr', 'crh_Latn', 'prg_Latn', 'khm', 'ang_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie_Hani', 'arz', 'yue', 'kha', 'san_Deva', 'jbo_Latn', 'gos', 'hau_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig_Cyrl', 'fao', 'mnw', 'zho', 'orv_Cyrl', 'iba', 'bel_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc_Grek', 'tpw_Latn', 'oci', 'mfe', 'sna', 'kir_Cyrl', 'tat_Latn', 'gom', 'ido_Latn', 'sgs', 'pau', 'tgk_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp_Latn', 'wuu', 'dtp', 'jbo_Cyrl', 'tet', 'bod', 'yue_Hans', 'zlm_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif_Latn', 'vie', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina_Latn', 'cjy_Hans', 'jdt_Cyrl', 'gsw', 'glv', 'khm_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd_Arab', 'arq', 'mri', 'kur_Arab', 'por', 'hin', 'shy_Latn', 'sme', 'rap', 
'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue_Hant', 'kpv', 'tam', 'est', 'frm_Latn', 'hoc_Latn', 'bam_Latn', 'kek_Latn', 'ksh', 'tlh_Latn', 'ltg', 'pan_Guru', 'hnj_Latn', 'cor', 'gle', 'swe', 'lin', 'qya_Latn', 'kum', 'mad', 'cmn_Hant', 'fuv', 'nau', 'mon', 'akl_Latn', 'guj', 'kaz_Latn', 'wln', 'tuk_Latn', 'jav_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws_Latn', 'urd', 'stq', 'tat_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld_Latn', 'tzl_Latn', 'mdf', 'ike_Latn', 'ces', 'ldn_Latn', 'egl', 'heb', 'vec', 'zul', 'max_Latn', 'pes_Latn', 'yid', 'mal', 'nds'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.test.txt
- src_alpha3: mul
- tgt_alpha3: eng
- short_pair: mul-en
- chrF2_score: 0.518
- bleu: 34.7
- brevity_penalty: 1.0
- ref_len: 72346.0
- src_name: Multiple languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: mul
- tgt_alpha2: en
- prefer_old: False
- long_pair: mul-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ca", "es", "os", "eo", "ro", "fy", "cy", "is", "lb", "su", "an", "sq", "fr", "ht", "rm", "cv", "ig", "am", "eu", "tr", "ps", "af", "ny", "ch", "uk", "sl", "lt", "tk", "sg", "ar", "lg", "bg", "be", "ka", "gd", "ja", "si", "br", "mh", "km", "th", "ty", "rw", "te", "mk", "or", "wo", "kl", "mr", "ru", "yo", "hu", "fo", "zh", "ti", "co", "ee", "oc", "sn", "mt", "ts", "pl", "gl", "nb", "bn", "tt", "bo", "lo", "id", "gn", "nv", "hy", "kn", "to", "io", "so", "vi", "da", "fj", "gv", "sm", "nl", "mi", "pt", "hi", "se", "as", "ta", "et", "kw", "ga", "sv", "ln", "na", "mn", "gu", "wa", "lv", "jv", "el", "my", "ba", "it", "hr", "ur", "ce", "nn", "fi", "mg", "rn", "xh", "ab", "de", "cs", "he", "zu", "yi", "ml", "mul", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-mul-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ca",
"es",
"os",
"eo",
"ro",
"fy",
"cy",
"is",
"lb",
"su",
"an",
"sq",
"fr",
"ht",
"rm",
"cv",
"ig",
"am",
"eu",
"tr",
"ps",
"af",
"ny",
"ch",
"uk",
"sl",
"lt",
"tk",
"sg",
"ar",
"lg",
"bg",
"be",
"ka",
"gd",
"ja",
"si",
"br",
"mh",
"km",
"th",
"ty",
"rw",
"te",
"mk",
"or",
"wo",
"kl",
"mr",
"ru",
"yo",
"hu",
"fo",
"zh",
"ti",
"co",
"ee",
"oc",
"sn",
"mt",
"ts",
"pl",
"gl",
"nb",
"bn",
"tt",
"bo",
"lo",
"id",
"gn",
"nv",
"hy",
"kn",
"to",
"io",
"so",
"vi",
"da",
"fj",
"gv",
"sm",
"nl",
"mi",
"pt",
"hi",
"se",
"as",
"ta",
"et",
"kw",
"ga",
"sv",
"ln",
"na",
"mn",
"gu",
"wa",
"lv",
"jv",
"el",
"my",
"ba",
"it",
"hr",
"ur",
"ce",
"nn",
"fi",
"mg",
"rn",
"xh",
"ab",
"de",
"cs",
"he",
"zu",
"yi",
"ml",
"mul",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ng-en
* source languages: ng
* target languages: en
* OPUS readme: [ng-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ng-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ng-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ng-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ng-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ng.en | 27.3 | 0.443 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ng-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ng",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nic-eng
* source group: Niger-Kordofanian languages
* target group: English
* OPUS readme: [nic-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nic-eng/README.md)
* model: transformer
* source language(s): bam_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.eval.txt)
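The three download links above follow one fixed naming scheme. A small helper (hypothetical, not part of the OPUS-MT tooling) makes the pattern explicit; note that older single-pair releases use the `OPUS-MT-models` bucket instead of `Tatoeba-MT-models`:

```python
def opus_mt_urls(pair: str, release: str) -> dict:
    """Build the weights/test-set/scores URLs for a Tatoeba-MT release."""
    base = "https://object.pouta.csc.fi/Tatoeba-MT-models"
    return {
        "weights": f"{base}/{pair}/{release}.zip",
        "test_set": f"{base}/{pair}/{release}.test.txt",
        "scores": f"{base}/{pair}/{release}.eval.txt",
    }

print(opus_mt_urls("nic-eng", "opus2m-2020-08-01")["weights"])
```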
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bam-eng.bam.eng | 2.4 | 0.090 |
| Tatoeba-test.ewe-eng.ewe.eng | 10.3 | 0.384 |
| Tatoeba-test.ful-eng.ful.eng | 1.2 | 0.114 |
| Tatoeba-test.ibo-eng.ibo.eng | 7.5 | 0.197 |
| Tatoeba-test.kin-eng.kin.eng | 30.7 | 0.481 |
| Tatoeba-test.lin-eng.lin.eng | 3.1 | 0.185 |
| Tatoeba-test.lug-eng.lug.eng | 3.1 | 0.261 |
| Tatoeba-test.multi.eng | 21.3 | 0.377 |
| Tatoeba-test.nya-eng.nya.eng | 31.6 | 0.502 |
| Tatoeba-test.run-eng.run.eng | 24.9 | 0.420 |
| Tatoeba-test.sag-eng.sag.eng | 5.2 | 0.231 |
| Tatoeba-test.sna-eng.sna.eng | 20.1 | 0.374 |
| Tatoeba-test.swa-eng.swa.eng | 4.6 | 0.191 |
| Tatoeba-test.toi-eng.toi.eng | 4.8 | 0.122 |
| Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 |
| Tatoeba-test.umb-eng.umb.eng | 9.0 | 0.246 |
| Tatoeba-test.wol-eng.wol.eng | 14.0 | 0.212 |
| Tatoeba-test.xho-eng.xho.eng | 38.2 | 0.558 |
| Tatoeba-test.yor-eng.yor.eng | 21.2 | 0.364 |
| Tatoeba-test.zul-eng.zul.eng | 42.3 | 0.589 |
### System Info:
- hf_name: nic-eng
- source_languages: nic
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nic-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic', 'en']
- src_constituents: {'bam_Latn', 'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.test.txt
- src_alpha3: nic
- tgt_alpha3: eng
- short_pair: nic-en
- chrF2_score: 0.377
- bleu: 21.3
- brevity_penalty: 1.0
- ref_len: 15228.0
- src_name: Niger-Kordofanian languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: nic
- tgt_alpha2: en
- prefer_old: False
- long_pair: nic-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["sn", "rw", "wo", "ig", "sg", "ee", "zu", "lg", "ts", "ln", "ny", "yo", "rn", "xh", "nic", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nic-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sn",
"rw",
"wo",
"ig",
"sg",
"ee",
"zu",
"lg",
"ts",
"ln",
"ny",
"yo",
"rn",
"xh",
"nic",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-niu-de
* source languages: niu
* target languages: de
* OPUS readme: [niu-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.de | 20.2 | 0.395 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-niu-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"niu",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-niu-en
* source languages: niu
* target languages: en
* OPUS readme: [niu-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.en | 46.1 | 0.604 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-niu-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"niu",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-niu-es
* source languages: niu
* target languages: es
* OPUS readme: [niu-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.es | 24.2 | 0.419 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-niu-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"niu",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-niu-fi
* source languages: niu
* target languages: fi
* OPUS readme: [niu-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.fi | 24.8 | 0.474 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-niu-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"niu",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-niu-fr
* source languages: niu
* target languages: fr
* OPUS readme: [niu-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.fr | 28.1 | 0.452 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-niu-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"niu",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-niu-sv
* source languages: niu
* target languages: sv
* OPUS readme: [niu-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.sv | 29.2 | 0.478 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-niu-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"niu",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nld-afr
* source group: Dutch
* target group: Afrikaans
* OPUS readme: [nld-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-afr/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): afr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.afr | 57.8 | 0.749 |
### System Info:
- hf_name: nld-afr
- source_languages: nld
- target_languages: afr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-afr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'af']
- src_constituents: {'nld'}
- tgt_constituents: {'afr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.test.txt
- src_alpha3: nld
- tgt_alpha3: afr
- short_pair: nl-af
- chrF2_score: 0.749
- bleu: 57.8
- brevity_penalty: 1.0
- ref_len: 6823.0
- src_name: Dutch
- tgt_name: Afrikaans
- train_date: 2020-06-17
- src_alpha2: nl
- tgt_alpha2: af
- prefer_old: False
- long_pair: nld-afr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["nl", "af"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nl-af | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"af",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nld-cat
* source group: Dutch
* target group: Catalan
* OPUS readme: [nld-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-cat/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): cat
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.cat | 42.1 | 0.624 |
### System Info:
- hf_name: nld-cat
- source_languages: nld
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'ca']
- src_constituents: {'nld'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.test.txt
- src_alpha3: nld
- tgt_alpha3: cat
- short_pair: nl-ca
- chrF2_score: 0.624
- bleu: 42.1
- brevity_penalty: 0.988
- ref_len: 3942.0
- src_name: Dutch
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: nl
- tgt_alpha2: ca
- prefer_old: False
- long_pair: nld-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["nl", "ca"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nl-ca | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"ca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-nl-en
* source languages: nl
* target languages: en
* OPUS readme: [nl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-05.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.zip)
* test set translations: [opus-2019-12-05.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.test.txt)
* test set scores: [opus-2019-12-05.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.nl.en | 60.9 | 0.749 |
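These OPUS-MT checkpoints load through the MarianMT classes in the 🤗 transformers library. A minimal sketch (the helper names are illustrative, not part of the release; the model weights download on first use):

```python
def model_name(src: str, tgt: str) -> str:
    """Build the Hub id for an OPUS-MT language pair,
    e.g. model_name("nl", "en") -> "Helsinki-NLP/opus-mt-nl-en"."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

def translate(texts, src: str = "nl", tgt: str = "en"):
    """Translate a list of sentences with the given OPUS-MT pair."""
    # Imported lazily so model_name() works without transformers installed.
    from transformers import MarianMTModel, MarianTokenizer
    name = model_name(src, tgt)
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(g, skip_special_tokens=True) for g in generated]

# translate(["Dit is een zin in het Nederlands."])  # downloads ~300 MB on first call
```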
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nl-en | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"nl",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nld-epo
* source group: Dutch
* target group: Esperanto
* OPUS readme: [nld-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-epo/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): epo
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.epo | 16.1 | 0.355 |
### System Info:
- hf_name: nld-epo
- source_languages: nld
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'eo']
- src_constituents: {'nld'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.test.txt
- src_alpha3: nld
- tgt_alpha3: epo
- short_pair: nl-eo
- chrF2_score: 0.355
- bleu: 16.1
- brevity_penalty: 0.936
- ref_len: 72293.0
- src_name: Dutch
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: nl
- tgt_alpha2: eo
- prefer_old: False
- long_pair: nld-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["nl", "eo"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nl-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-nl-es
* source languages: nl
* target languages: es
* OPUS readme: [nl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.nl.es | 51.6 | 0.698 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nl-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-nl-fi
* source languages: nl
* target languages: fi
* OPUS readme: [nl-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-fi/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fi/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fi/opus-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nl.fi | 28.6 | 0.569 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nl-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-nl-fr
* source languages: nl
* target languages: fr
* OPUS readme: [nl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.nl.fr | 51.3 | 0.674 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nl-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nld-nor
* source group: Dutch
* target group: Norwegian
* OPUS readme: [nld-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-nor/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): nob
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.nor | 36.1 | 0.562 |
### System Info:
- hf_name: nld-nor
- source_languages: nld
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'no']
- src_constituents: {'nld'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.test.txt
- src_alpha3: nld
- tgt_alpha3: nor
- short_pair: nl-no
- chrF2_score: 0.562
- bleu: 36.1
- brevity_penalty: 0.966
- ref_len: 1459.0
- src_name: Dutch
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: nl
- tgt_alpha2: no
- prefer_old: False
- long_pair: nld-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["nl", false], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nl-no | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"no",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-nl-sv
* source languages: nl
* target languages: sv
* OPUS readme: [nl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.nl.sv | 25.0 | 0.518 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nl-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nld-ukr
* source group: Dutch
* target group: Ukrainian
* OPUS readme: [nld-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-ukr/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): ukr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.ukr | 40.8 | 0.619 |
### System Info:
- hf_name: nld-ukr
- source_languages: nld
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'uk']
- src_constituents: {'nld'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.test.txt
- src_alpha3: nld
- tgt_alpha3: ukr
- short_pair: nl-uk
- chrF2_score: 0.619
- bleu: 40.8
- brevity_penalty: 0.992
- ref_len: 51674.0
- src_name: Dutch
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: nl
- tgt_alpha2: uk
- prefer_old: False
- long_pair: nld-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["nl", "uk"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nl-uk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nor-dan
* source group: Norwegian
* target group: Danish
* OPUS readme: [nor-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-dan/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): dan
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.dan | 65.0 | 0.792 |
### System Info:
- hf_name: nor-dan
- source_languages: nor
- target_languages: dan
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-dan/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'da']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'dan'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: dan
- short_pair: no-da
- chrF2_score: 0.792
- bleu: 65.0
- brevity_penalty: 0.995
- ref_len: 9865.0
- src_name: Norwegian
- tgt_name: Danish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: da
- prefer_old: False
- long_pair: nor-dan
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": [false, "da"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-no-da | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"no",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nor-deu
* source group: Norwegian
* target group: German
* OPUS readme: [nor-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-deu/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): deu
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.deu | 29.6 | 0.541 |
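The System Info block below reports the brevity_penalty alongside BLEU and ref_len. BLEU's brevity penalty can be sketched as follows (standard definition from Papineni et al.; the function name is illustrative):

```python
import math

def brevity_penalty(candidate_len: int, reference_len: int) -> float:
    """BLEU brevity penalty: 1.0 when the system output is at least as long
    as the reference, otherwise exp(1 - r/c), which penalises short output."""
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)

# An output ~4% shorter than the reference is mildly penalised:
# brevity_penalty(960, 1000) ≈ 0.959
```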
### System Info:
- hf_name: nor-deu
- source_languages: nor
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'de']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: deu
- short_pair: no-de
- chrF2_score: 0.541
- bleu: 29.6
- brevity_penalty: 0.96
- ref_len: 34575.0
- src_name: Norwegian
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: de
- prefer_old: False
- long_pair: nor-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": [false, "de"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-no-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"no",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nor-spa
* source group: Norwegian
* target group: Spanish
* OPUS readme: [nor-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-spa/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.spa | 34.2 | 0.565 |
### System Info:
- hf_name: nor-spa
- source_languages: nor
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'es']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: spa
- short_pair: no-es
- chrF2_score: 0.565
- bleu: 34.2
- brevity_penalty: 0.997
- ref_len: 7311.0
- src_name: Norwegian
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: es
- prefer_old: False
- long_pair: nor-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": [false, "es"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-no-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"no",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nor-fin
* source group: Norwegian
* target group: Finnish
* OPUS readme: [nor-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fin/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): fin
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.fin | 14.1 | 0.374 |
### System Info:
- hf_name: nor-fin
- source_languages: nor
- target_languages: fin
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fin/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'fi']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'fin'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: fin
- short_pair: no-fi
- chrF2_score: 0.374
- bleu: 14.1
- brevity_penalty: 0.894
- ref_len: 13066.0
- src_name: Norwegian
- tgt_name: Finnish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: fi
- prefer_old: False
- long_pair: nor-fin
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": [false, "fi"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-no-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"no",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nor-fra
* source group: Norwegian
* target group: French
* OPUS readme: [nor-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fra/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): fra
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.fra | 39.1 | 0.578 |
### System Info:
- hf_name: nor-fra
- source_languages: nor
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'fr']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: fra
- short_pair: no-fr
- chrF2_score: 0.578
- bleu: 39.1
- brevity_penalty: 0.987
- ref_len: 3205.0
- src_name: Norwegian
- tgt_name: French
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: fr
- prefer_old: False
- long_pair: nor-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": [false, "fr"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-no-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"no",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nor-nld
* source group: Norwegian
* target group: Dutch
* OPUS readme: [nor-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nld/README.md)
* model: transformer-align
* source language(s): nob
* target language(s): nld
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.nld | 40.2 | 0.596 |
### System Info:
- hf_name: nor-nld
- source_languages: nor
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'nl']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: nld
- short_pair: no-nl
- chrF2_score: 0.596
- bleu: 40.2
- brevity_penalty: 0.959
- ref_len: 1535.0
- src_name: Norwegian
- tgt_name: Dutch
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: nl
- prefer_old: False
- long_pair: nor-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": [false, "nl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-no-nl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"no",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nor-nor
* source group: Norwegian
* target group: Norwegian
* OPUS readme: [nor-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nor/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): nno nob
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.nor | 58.4 | 0.784 |
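As noted above, this multilingual model requires a sentence-initial `>>id<<` token naming the target variant (here `nob` or `nno`). A minimal helper to prepend it (the function name is illustrative):

```python
def with_target_token(text: str, target_id: str) -> str:
    """Prepend the >>id<< target-language token that multilingual
    Tatoeba-Challenge models expect, e.g. >>nno<< or >>nob<<."""
    return f">>{target_id}<< {text}"

# with_target_token("Hvordan har du det?", "nno")
# -> ">>nno<< Hvordan har du det?"
```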
### System Info:
- hf_name: nor-nor
- source_languages: nor
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: nor
- short_pair: no-no
- chrF2_score: 0.784
- bleu: 58.4
- brevity_penalty: 0.988
- ref_len: 6351.0
- src_name: Norwegian
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: no
- prefer_old: False
- long_pair: nor-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": [false], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-no-no | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"no",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nor-pol
* source group: Norwegian
* target group: Polish
* OPUS readme: [nor-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-pol/README.md)
* model: transformer-align
* source language(s): nob
* target language(s): pol
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.pol | 20.9 | 0.455 |
### System Info:
- hf_name: nor-pol
- source_languages: nor
- target_languages: pol
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-pol/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'pl']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'pol'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: pol
- short_pair: no-pl
- chrF2_score: 0.455
- bleu: 20.9
- brevity_penalty: 0.941
- ref_len: 1828.0
- src_name: Norwegian
- tgt_name: Polish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: pl
- prefer_old: False
- long_pair: nor-pol
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": [false, "pl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-no-pl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"no",
"pl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nor-rus
* source group: Norwegian
* target group: Russian
* OPUS readme: [nor-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-rus/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.rus | 18.6 | 0.400 |
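The System Info block below reports `bleu` alongside `brevity_penalty` and `ref_len`. The brevity penalty is the standard BLEU one: 1.0 when the candidate is at least as long as the reference, exp(1 - r/c) otherwise. A minimal sketch of that definition:

```python
import math

def brevity_penalty(candidate_len: int, reference_len: int) -> float:
    # Standard BLEU brevity penalty: no penalty when the candidate is at
    # least as long as the reference, exp(1 - r/c) when it is shorter.
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)
```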
### System Info:
- hf_name: nor-rus
- source_languages: nor
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'ru']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: rus
- short_pair: no-ru
- chrF2_score: 0.4
- bleu: 18.6
- brevity_penalty: 0.958
- ref_len: 10671.0
- src_name: Norwegian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: ru
- prefer_old: False
- long_pair: nor-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["no", "ru"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-no-ru | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"no",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nor-swe
* source group: Norwegian
* target group: Swedish
* OPUS readme: [nor-swe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-swe/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): swe
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.swe | 63.7 | 0.773 |
### System Info:
- hf_name: nor-swe
- source_languages: nor
- target_languages: swe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-swe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'sv']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'swe'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: swe
- short_pair: no-sv
- chrF2_score: 0.773
- bleu: 63.7
- brevity_penalty: 0.967
- ref_len: 3672.0
- src_name: Norwegian
- tgt_name: Swedish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: sv
- prefer_old: False
- long_pair: nor-swe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["no", "sv"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-no-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"no",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### nor-ukr
* source group: Norwegian
* target group: Ukrainian
* OPUS readme: [nor-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-ukr/README.md)
* model: transformer-align
* source language(s): nob
* target language(s): ukr
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.ukr | 16.6 | 0.384 |
### System Info:
- hf_name: nor-ukr
- source_languages: nor
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'uk']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: ukr
- short_pair: no-uk
- chrF2_score: 0.384
- bleu: 16.6
- brevity_penalty: 1.0
- ref_len: 3982.0
- src_name: Norwegian
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: uk
- prefer_old: False
- long_pair: nor-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["no", "uk"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-no-uk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"no",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-nso-de
* source languages: nso
* target languages: de
* OPUS readme: [nso-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.de | 24.7 | 0.461 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nso-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nso",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-nso-en
* source languages: nso
* target languages: en
* OPUS readme: [nso-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.en | 48.6 | 0.634 |
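The chr-F column above is a character n-gram F-score; the Tatoeba cards in this family record it as `chrF2_score`, i.e. with recall weighted by β=2. A minimal illustrative implementation follows — real evaluations use sacreBLEU's chrF, which additionally handles whitespace and word n-grams, so treat this as a sketch of the idea only:

```python
from collections import Counter

def chrf2(hyp: str, ref: str, max_n: int = 6, beta: float = 2.0) -> float:
    # Character n-gram F-score with recall weighted beta^2 (chrF2).
    # Precision/recall are averaged over n-gram orders 1..max_n.
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(hyp[i:i + n] for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(ref[i:i + n] for i in range(len(ref) - n + 1))
        if not hyp_ngrams and not ref_ngrams:
            continue  # neither string has n-grams of this order
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        precisions.append(overlap / max(sum(hyp_ngrams.values()), 1))
        recalls.append(overlap / max(sum(ref_ngrams.values()), 1))
    if not precisions:
        return 0.0
    precision = sum(precisions) / len(precisions)
    recall = sum(recalls) / len(recalls)
    if precision + recall == 0.0:
        return 0.0
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
```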
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nso-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nso",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-nso-es
* source languages: nso
* target languages: es
* OPUS readme: [nso-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.es | 29.5 | 0.485 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nso-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nso",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-nso-fi
* source languages: nso
* target languages: fi
* OPUS readme: [nso-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.fi | 27.8 | 0.523 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nso-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nso",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-nso-fr
* source languages: nso
* target languages: fr
* OPUS readme: [nso-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.fr | 30.7 | 0.488 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nso-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nso",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-nso-sv
* source languages: nso
* target languages: sv
* OPUS readme: [nso-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.sv | 34.3 | 0.527 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nso-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nso",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ny-de
* source languages: ny
* target languages: de
* OPUS readme: [ny-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ny-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/ny-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ny.de | 23.9 | 0.440 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ny-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ny",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ny-en
* source languages: ny
* target languages: en
* OPUS readme: [ny-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ny-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ny-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ny.en | 39.7 | 0.547 |
| Tatoeba.ny.en | 44.2 | 0.562 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ny-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ny",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ny-es
* source languages: ny
* target languages: es
* OPUS readme: [ny-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ny-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ny-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ny.es | 27.9 | 0.457 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ny-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ny",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-nyk-en
* source languages: nyk
* target languages: en
* OPUS readme: [nyk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nyk-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nyk.en | 27.3 | 0.423 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-nyk-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nyk",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-om-en
* source languages: om
* target languages: en
* OPUS readme: [om-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/om-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.om.en | 27.3 | 0.448 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-om-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"om",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pa-en
* source languages: pa
* target languages: en
* OPUS readme: [pa-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pa-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pa-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pa-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pa-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pa.en | 20.6 | 0.320 |
| Tatoeba.pa.en | 29.3 | 0.464 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pa-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pa",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pag-de
* source languages: pag
* target languages: de
* OPUS readme: [pag-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.de | 22.8 | 0.435 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pag-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pag",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pag-en
* source languages: pag
* target languages: en
* OPUS readme: [pag-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.en | 42.4 | 0.580 |
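All the benchmark tables in these cards share the same three-column layout (testset, BLEU, chr-F). A small parser — the function name is ours — can pull the score rows into Python tuples, skipping the header and separator rows:

```python
import re

def parse_benchmarks(markdown: str) -> list:
    # Extract (testset, BLEU, chr-F) rows from the three-column markdown
    # tables used throughout these model cards; header/separator rows
    # are skipped because their second cell does not start with a digit.
    rows = []
    for line in markdown.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) == 3 and re.match(r"^\d", cells[1]):
            rows.append((cells[0], float(cells[1]), float(cells[2])))
    return rows

table = """| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.en | 42.4 | 0.580 |"""
```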
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pag-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pag",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pag-es
* source languages: pag
* target languages: es
* OPUS readme: [pag-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.es | 27.9 | 0.459 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pag-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pag",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pag-fi
* source languages: pag
* target languages: fi
* OPUS readme: [pag-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.fi | 26.7 | 0.496 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pag-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pag",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pag-sv
* source languages: pag
* target languages: sv
* OPUS readme: [pag-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.sv | 29.8 | 0.492 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pag-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pag",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pap-de
* source languages: pap
* target languages: de
* OPUS readme: [pap-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.de | 25.0 | 0.466 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pap-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pap",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pap-en
* source languages: pap
* target languages: en
* OPUS readme: [pap-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.en | 47.3 | 0.634 |
| Tatoeba.pap.en | 63.2 | 0.684 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pap-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pap",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pap-es
* source languages: pap
* target languages: es
* OPUS readme: [pap-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.es | 32.3 | 0.518 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pap-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pap",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pap-fi
* source languages: pap
* target languages: fi
* OPUS readme: [pap-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.fi | 27.7 | 0.520 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pap-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pap",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pap-fr
* source languages: pap
* target languages: fr
* OPUS readme: [pap-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.fr | 31.0 | 0.498 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pap-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pap",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### phi-eng
* source group: Philippine languages
* target group: English
* OPUS readme: [phi-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/phi-eng/README.md)
* model: transformer
* source language(s): akl_Latn ceb hil ilo pag war
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/phi-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/phi-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/phi-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.akl-eng.akl.eng | 11.6 | 0.321 |
| Tatoeba-test.ceb-eng.ceb.eng | 21.7 | 0.393 |
| Tatoeba-test.hil-eng.hil.eng | 17.6 | 0.371 |
| Tatoeba-test.ilo-eng.ilo.eng | 36.6 | 0.560 |
| Tatoeba-test.multi.eng | 21.5 | 0.391 |
| Tatoeba-test.pag-eng.pag.eng | 27.5 | 0.494 |
| Tatoeba-test.war-eng.war.eng | 17.3 | 0.380 |
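The card itself ships no usage snippet, so here is a minimal sketch (an assumption, not part of the original card) of loading this checkpoint with the Transformers Marian classes. It assumes `transformers` and `sentencepiece` are installed and the Hugging Face Hub is reachable; the example sentence is illustrative only.

```python
# Sketch (not from the card): translating with an OPUS-MT checkpoint via the
# Transformers Marian classes. Assumes `transformers` and `sentencepiece`.

def hub_id(src: str, tgt: str) -> str:
    """Build the Hub id the OPUS-MT checkpoints use, e.g. opus-mt-phi-en."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

def translate(texts, model_name):
    # Imported lazily so hub_id() stays usable without transformers installed.
    from transformers import MarianMTModel, MarianTokenizer
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(g, skip_special_tokens=True) for g in generated]

# Example (requires network; input sentence is a placeholder):
#   translate(["Maayong buntag."], hub_id("phi", "en"))
```

Note that although the source side is multilingual (akl/ceb/hil/ilo/pag/war), no target-language token needs to be prepended, because the target side of this model is English only.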
### System Info:
- hf_name: phi-eng
- source_languages: phi
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/phi-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['phi', 'en']
- src_constituents: {'ilo', 'akl_Latn', 'war', 'hil', 'pag', 'ceb'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/phi-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/phi-eng/opus2m-2020-08-01.test.txt
- src_alpha3: phi
- tgt_alpha3: eng
- short_pair: phi-en
- chrF2_score: 0.391
- bleu: 21.5
- brevity_penalty: 1.0
- ref_len: 2380.0
- src_name: Philippine languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: phi
- tgt_alpha2: en
- prefer_old: False
- long_pair: phi-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["phi", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-phi-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"phi",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pis-en
* source languages: pis
* target languages: en
* OPUS readme: [pis-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.en | 33.3 | 0.493 |
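As a sketch (an assumption, not part of the original card), the same checkpoint can also be driven through the high-level `pipeline` API, with simple batching so long inputs are translated piecewise. Assumes `transformers` and `sentencepiece` are installed; the helper names are illustrative.

```python
# Sketch (not from the card): using an OPUS-MT checkpoint through the
# high-level pipeline API, with simple batching.

def chunked(items, size):
    """Yield fixed-size batches so long inputs are translated piecewise."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def translate_batches(sentences, model_name="Helsinki-NLP/opus-mt-pis-en", size=8):
    # Imported lazily so chunked() stays usable without transformers installed.
    from transformers import pipeline
    translator = pipeline("translation", model=model_name)
    out = []
    for batch in chunked(sentences, size):
        out.extend(r["translation_text"] for r in translator(batch))
    return out

# Example (requires network; supply real Pijin input):
#   translate_batches(["<pis sentence here>"])
```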
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pis-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pis",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pis-es
* source languages: pis
* target languages: es
* OPUS readme: [pis-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.es | 24.1 | 0.421 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pis-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pis",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pis-fi
* source languages: pis
* target languages: fi
* OPUS readme: [pis-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.fi | 21.8 | 0.439 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pis-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pis",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pis-fr
* source languages: pis
* target languages: fr
* OPUS readme: [pis-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.fr | 24.9 | 0.421 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pis-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pis",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-pis-sv
* source languages: pis
* target languages: sv
* OPUS readme: [pis-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.sv | 25.9 | 0.442 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pis-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pis",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |