repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | readme
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Helsinki-NLP/opus-mt-fr-niu | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-niu
* source languages: fr
* target languages: niu
* OPUS readme: [fr-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-niu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-niu/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-niu/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-niu/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.niu | 34.5 | 0.537 |
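A minimal usage sketch for this family of checkpoints, assuming the Hugging Face `transformers` library (with `sentencepiece`) is installed; the example sentence is an arbitrary illustration:

```python
# Minimal sketch: loading this checkpoint with the Marian classes
# from transformers and translating one French sentence to Niuean.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-niu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Bonjour le monde"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

The same pattern applies to every `Helsinki-NLP/opus-mt-*` card below; only the model name changes.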
Helsinki-NLP/opus-mt-fr-no | Helsinki-NLP | marian | 11 | 38 | transformers | 0 | translation | true | true | false | apache-2.0 | ['fr', 'no'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,098 |
### fra-nor
* source group: French
* target group: Norwegian
* OPUS readme: [fra-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-nor/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): nno nob
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the sketch after this list
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.eval.txt)
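Since this checkpoint has two target variants (`nno`, `nob`), each source sentence needs the sentence-initial token described above. A minimal sketch, assuming the checkpoint is used through `transformers`:

```python
# Minimal sketch: selecting Bokmål (nob) vs. Nynorsk (nno) output
# via the sentence-initial >>id<< token.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-no"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sources = [">>nob<< Bonjour le monde", ">>nno<< Bonjour le monde"]
batch = tokenizer(sources, return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```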
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.nor | 36.1 | 0.555 |
### System Info:
- hf_name: fra-nor
- source_languages: fra
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'no']
- src_constituents: {'fra'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: nor
- short_pair: fr-no
- chrF2_score: 0.555
- bleu: 36.1
- brevity_penalty: 0.981
- ref_len: 3089.0
- src_name: French
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: no
- prefer_old: False
- long_pair: fra-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-fr-nso | Helsinki-NLP | marian | 10 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-nso
* source languages: fr
* target languages: nso
* OPUS readme: [fr-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-nso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-nso/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-nso/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-nso/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.nso | 33.3 | 0.527 |
Helsinki-NLP/opus-mt-fr-ny | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-ny
* source languages: fr
* target languages: ny
* OPUS readme: [fr-ny](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ny/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ny/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ny/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ny/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ny | 23.2 | 0.481 |
Helsinki-NLP/opus-mt-fr-pag | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-pag
* source languages: fr
* target languages: pag
* OPUS readme: [fr-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pag/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pag/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pag/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pag/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.pag | 27.0 | 0.486 |
Helsinki-NLP/opus-mt-fr-pap | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-pap
* source languages: fr
* target languages: pap
* OPUS readme: [fr-pap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pap/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pap/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pap/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.pap | 27.8 | 0.464 |
Helsinki-NLP/opus-mt-fr-pis | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-pis
* source languages: fr
* target languages: pis
* OPUS readme: [fr-pis](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pis/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pis/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pis/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pis/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.pis | 29.0 | 0.486 |
Helsinki-NLP/opus-mt-fr-pl | Helsinki-NLP | marian | 10 | 54 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 770 |
### opus-mt-fr-pl
* source languages: fr
* target languages: pl
* OPUS readme: [fr-pl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.pl | 40.7 | 0.625 |
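The BLEU and chr-F numbers in these tables come from the released test-set files linked in each card. A hedged sketch of re-scoring with `sacrebleu`; `hypotheses.txt` and `references.txt` are hypothetical line-aligned files extracted from the `.test.txt` release:

```python
# Hedged sketch: recomputing BLEU and chr-F with sacrebleu.
# The two input files are assumptions -- extract the hypothesis and
# reference columns from the released .test.txt file first.
import sacrebleu

with open("hypotheses.txt", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("references.txt", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

print(sacrebleu.corpus_bleu(hyps, [refs]))  # BLEU, as in the table above
print(sacrebleu.corpus_chrf(hyps, [refs]))  # chr-F
```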
Helsinki-NLP/opus-mt-fr-pon | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-pon
* source languages: fr
* target languages: pon
* OPUS readme: [fr-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pon/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pon/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pon/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.pon | 23.9 | 0.458 |
Helsinki-NLP/opus-mt-fr-rnd | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-rnd
* source languages: fr
* target languages: rnd
* OPUS readme: [fr-rnd](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-rnd/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-rnd/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-rnd/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-rnd/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.rnd | 21.8 | 0.431 |
Helsinki-NLP/opus-mt-fr-ro | Helsinki-NLP | marian | 10 | 73 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 770 |
### opus-mt-fr-ro
* source languages: fr
* target languages: ro
* OPUS readme: [fr-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ro/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ro/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ro/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ro/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.ro | 42.1 | 0.640 |
Helsinki-NLP/opus-mt-fr-ru | Helsinki-NLP | marian | 10 | 587 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 770 |
### opus-mt-fr-ru
* source languages: fr
* target languages: ru
* OPUS readme: [fr-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.ru | 37.9 | 0.585 |
Helsinki-NLP/opus-mt-fr-run | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-run
* source languages: fr
* target languages: run
* OPUS readme: [fr-run](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-run/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-run/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-run/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-run/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.run | 23.8 | 0.482 |
Helsinki-NLP/opus-mt-fr-rw | Helsinki-NLP | marian | 10 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-rw
* source languages: fr
* target languages: rw
* OPUS readme: [fr-rw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-rw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-rw/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-rw/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-rw/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.rw | 25.5 | 0.483 |
Helsinki-NLP/opus-mt-fr-sg | Helsinki-NLP | marian | 10 | 9 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-sg
* source languages: fr
* target languages: sg
* OPUS readme: [fr-sg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sg/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sg/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sg/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sg | 29.7 | 0.473 |
Helsinki-NLP/opus-mt-fr-sk | Helsinki-NLP | marian | 10 | 18 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-sk
* source languages: fr
* target languages: sk
* OPUS readme: [fr-sk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sk/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sk/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sk/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sk | 24.9 | 0.456 |
Helsinki-NLP/opus-mt-fr-sl | Helsinki-NLP | marian | 10 | 16 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-sl
* source languages: fr
* target languages: sl
* OPUS readme: [fr-sl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sl | 20.1 | 0.433 |
Helsinki-NLP/opus-mt-fr-sm | Helsinki-NLP | marian | 10 | 11 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-sm
* source languages: fr
* target languages: sm
* OPUS readme: [fr-sm](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sm/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sm/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sm/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sm/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sm | 28.8 | 0.474 |
Helsinki-NLP/opus-mt-fr-sn | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-sn
* source languages: fr
* target languages: sn
* OPUS readme: [fr-sn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sn | 23.4 | 0.507 |
Helsinki-NLP/opus-mt-fr-srn | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-srn
* source languages: fr
* target languages: srn
* OPUS readme: [fr-srn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-srn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-srn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-srn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-srn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.srn | 27.4 | 0.459 |
Helsinki-NLP/opus-mt-fr-st | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-st
* source languages: fr
* target languages: st
* OPUS readme: [fr-st](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-st/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-st/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-st/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-st/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.st | 34.6 | 0.540 |
Helsinki-NLP/opus-mt-fr-sv | Helsinki-NLP | marian | 10 | 51 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 770 |
### opus-mt-fr-sv
* source languages: fr
* target languages: sv
* OPUS readme: [fr-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sv/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sv/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sv/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.sv | 60.1 | 0.744 |
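Each card lists "pre-processing: normalization + SentencePiece". For readers curious what that step looks like, a small sketch of SentencePiece segmentation; the file name `source.spm` is an assumption about the unpacked original-weights archive:

```python
# Hedged sketch of the SentencePiece half of the pre-processing step:
# segmenting a source sentence into subword pieces.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="source.spm")  # hypothetical path
print(sp.encode("Bonjour le monde", out_type=str))
```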
Helsinki-NLP/opus-mt-fr-swc | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-swc
* source languages: fr
* target languages: swc
* OPUS readme: [fr-swc](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-swc/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-swc/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-swc/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-swc/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.swc | 28.2 | 0.499 |
Helsinki-NLP/opus-mt-fr-tiv | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-tiv
* source languages: fr
* target languages: tiv
* OPUS readme: [fr-tiv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tiv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tiv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tiv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tiv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tiv | 23.5 | 0.406 |
Helsinki-NLP/opus-mt-fr-tl | Helsinki-NLP | marian | 11 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | ['fr', 'tl'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,991 |
### fra-tgl
* source group: French
* target group: Tagalog
* OPUS readme: [fra-tgl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-tgl/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): tgl_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.tgl | 24.1 | 0.536 |
### System Info:
- hf_name: fra-tgl
- source_languages: fra
- target_languages: tgl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-tgl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'tl']
- src_constituents: {'fra'}
- tgt_constituents: {'tgl_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: tgl
- short_pair: fr-tl
- chrF2_score: 0.536
- bleu: 24.1
- brevity_penalty: 1.0
- ref_len: 5778.0
- src_name: French
- tgt_name: Tagalog
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: tl
- prefer_old: False
- long_pair: fra-tgl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-fr-tll | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-tll
* source languages: fr
* target languages: tll
* OPUS readme: [fr-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tll | 24.6 | 0.467 |
Helsinki-NLP/opus-mt-fr-tn | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-tn
* source languages: fr
* target languages: tn
* OPUS readme: [fr-tn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tn | 33.1 | 0.525 |
Helsinki-NLP/opus-mt-fr-to | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-to
* source languages: fr
* target languages: to
* OPUS readme: [fr-to](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-to/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-to/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-to/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-to/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.to | 37.0 | 0.518 |
Helsinki-NLP/opus-mt-fr-tpi | Helsinki-NLP | marian | 10 | 9 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-tpi
* source languages: fr
* target languages: tpi
* OPUS readme: [fr-tpi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tpi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tpi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tpi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tpi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tpi | 30.0 | 0.487 |
Helsinki-NLP/opus-mt-fr-ts | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-ts
* source languages: fr
* target languages: ts
* OPUS readme: [fr-ts](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ts/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ts/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ts/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ts/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ts | 31.4 | 0.525 |
Helsinki-NLP/opus-mt-fr-tum | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-tum
* source languages: fr
* target languages: tum
* OPUS readme: [fr-tum](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tum/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tum/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tum/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tum/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tum | 23.0 | 0.458 |
Helsinki-NLP/opus-mt-fr-tvl | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-tvl
* source languages: fr
* target languages: tvl
* OPUS readme: [fr-tvl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tvl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tvl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tvl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tvl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tvl | 32.6 | 0.497 |
Helsinki-NLP/opus-mt-fr-tw | Helsinki-NLP | marian | 10 | 13 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-tw
* source languages: fr
* target languages: tw
* OPUS readme: [fr-tw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tw/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tw/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tw/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tw | 27.9 | 0.469 |
Helsinki-NLP/opus-mt-fr-ty | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-ty
* source languages: fr
* target languages: ty
* OPUS readme: [fr-ty](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ty/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ty/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ty/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ty/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ty | 39.6 | 0.561 |
Helsinki-NLP/opus-mt-fr-uk | Helsinki-NLP | marian | 10 | 62 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 770 |
### opus-mt-fr-uk
* source languages: fr
* target languages: uk
* OPUS readme: [fr-uk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-uk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-uk/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-uk/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-uk/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.uk | 39.4 | 0.581 |
Helsinki-NLP/opus-mt-fr-ve | Helsinki-NLP | marian | 10 | 21 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-ve
* source languages: fr
* target languages: ve
* OPUS readme: [fr-ve](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ve/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ve/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ve/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ve/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ve | 26.3 | 0.481 |
Helsinki-NLP/opus-mt-fr-vi | Helsinki-NLP | marian | 11 | 80 | transformers | 0 | translation | true | true | false | apache-2.0 | ['fr', 'vi'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,002 |
### fra-vie
* source group: French
* target group: Vietnamese
* OPUS readme: [fra-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-vie/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): vie
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.vie | 31.1 | 0.486 |
### System Info:
- hf_name: fra-vie
- source_languages: fra
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'vi']
- src_constituents: {'fra'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: vie
- short_pair: fr-vi
- chrF2_score: 0.486
- bleu: 31.1
- brevity_penalty: 0.985
- ref_len: 13219.0
- src_name: French
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: vi
- prefer_old: False
- long_pair: fra-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-fr-war | Helsinki-NLP | marian | 10 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-war
* source languages: fr
* target languages: war
* OPUS readme: [fr-war](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-war/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-war/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-war/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-war/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.war | 33.7 | 0.538 |
Helsinki-NLP/opus-mt-fr-wls | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-wls
* source languages: fr
* target languages: wls
* OPUS readme: [fr-wls](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-wls/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-wls/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-wls/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-wls/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.wls | 27.5 | 0.478 |
Helsinki-NLP/opus-mt-fr-xh | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-xh
* source languages: fr
* target languages: xh
* OPUS readme: [fr-xh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-xh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-xh/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-xh/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-xh/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.xh | 25.1 | 0.523 |
Helsinki-NLP/opus-mt-fr-yap | Helsinki-NLP | marian | 10 | 12 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-yap
* source languages: fr
* target languages: yap
* OPUS readme: [fr-yap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-yap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-yap/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yap/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yap/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.yap | 25.8 | 0.434 |
Helsinki-NLP/opus-mt-fr-yo | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |
### opus-mt-fr-yo
* source languages: fr
* target languages: yo
* OPUS readme: [fr-yo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-yo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-yo/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yo/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yo/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.yo | 25.9 | 0.415 |
Helsinki-NLP/opus-mt-fr-zne | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fr-zne
* source languages: fr
* target languages: zne
* OPUS readme: [fr-zne](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-zne/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-zne/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-zne/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-zne/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.zne | 24.1 | 0.460 |
Helsinki-NLP/opus-mt-fse-fi | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-fse-fi
* source languages: fse
* target languages: fi
* OPUS readme: [fse-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fse-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fse.fi | 90.2 | 0.943 |
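For working with the original Marian weights rather than the converted checkpoints, a hedged sketch of fetching and inspecting the linked archive (the file layout inside the zip is an assumption; inspect before use):

```python
# Hedged sketch: downloading the original-weights zip linked above
# and listing its contents.
import io
import urllib.request
import zipfile

url = "https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.zip"
with urllib.request.urlopen(url) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))
print(archive.namelist())
```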
Helsinki-NLP/opus-mt-ga-en | Helsinki-NLP | marian | 11 | 726 | transformers | 0 | translation | true | true | false | apache-2.0 | ['ga', 'en'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,980 |
### gle-eng
* source group: Irish
* target group: English
* OPUS readme: [gle-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md)
* model: transformer-align
* source language(s): gle
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.gle.eng | 51.6 | 0.672 |
### System Info:
- hf_name: gle-eng
- source_languages: gle
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ga', 'en']
- src_constituents: {'gle'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt
- src_alpha3: gle
- tgt_alpha3: eng
- short_pair: ga-en
- chrF2_score: 0.672
- bleu: 51.6
- brevity_penalty: 1.0
- ref_len: 11247.0
- src_name: Irish
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: ga
- tgt_alpha2: en
- prefer_old: False
- long_pair: gle-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-gaa-de | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-gaa-de
* source languages: gaa
* target languages: de
* OPUS readme: [gaa-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.de | 23.3 | 0.438 |
Helsinki-NLP/opus-mt-gaa-en | Helsinki-NLP | marian | 10 | 26 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-gaa-en
* source languages: gaa
* target languages: en
* OPUS readme: [gaa-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.en | 41.0 | 0.567 |
Helsinki-NLP/opus-mt-gaa-es | Helsinki-NLP | marian | 10 | 27 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-gaa-es
* source languages: gaa
* target languages: es
* OPUS readme: [gaa-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.es | 28.6 | 0.463 |
Helsinki-NLP/opus-mt-gaa-fi | Helsinki-NLP | marian | 10 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-gaa-fi
* source languages: gaa
* target languages: fi
* OPUS readme: [gaa-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.fi | 26.4 | 0.498 |
Helsinki-NLP/opus-mt-gaa-fr | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-gaa-fr
* source languages: gaa
* target languages: fr
* OPUS readme: [gaa-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.fr | 27.8 | 0.455 |
Helsinki-NLP/opus-mt-gaa-sv | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |
### opus-mt-gaa-sv
* source languages: gaa
* target languages: sv
* OPUS readme: [gaa-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.sv | 30.1 | 0.489 |
Helsinki-NLP/opus-mt-gem-en | Helsinki-NLP | marian | 11 | 11,040 | transformers | 1 | translation | true | true | false | apache-2.0 | ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 4,289 |
### gem-eng
* source group: Germanic languages
* target group: English
* OPUS readme: [gem-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-eng/README.md)
* model: transformer
* source language(s): afr ang_Latn dan deu enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 27.2 | 0.542 |
| news-test2008-deueng.deu.eng | 26.3 | 0.536 |
| newstest2009-deueng.deu.eng | 25.1 | 0.531 |
| newstest2010-deueng.deu.eng | 28.3 | 0.569 |
| newstest2011-deueng.deu.eng | 26.0 | 0.543 |
| newstest2012-deueng.deu.eng | 26.8 | 0.550 |
| newstest2013-deueng.deu.eng | 30.2 | 0.570 |
| newstest2014-deen-deueng.deu.eng | 30.7 | 0.574 |
| newstest2015-ende-deueng.deu.eng | 32.1 | 0.581 |
| newstest2016-ende-deueng.deu.eng | 36.9 | 0.624 |
| newstest2017-ende-deueng.deu.eng | 32.8 | 0.588 |
| newstest2018-ende-deueng.deu.eng | 40.2 | 0.640 |
| newstest2019-deen-deueng.deu.eng | 36.8 | 0.614 |
| Tatoeba-test.afr-eng.afr.eng | 62.8 | 0.758 |
| Tatoeba-test.ang-eng.ang.eng | 10.5 | 0.262 |
| Tatoeba-test.dan-eng.dan.eng | 61.6 | 0.754 |
| Tatoeba-test.deu-eng.deu.eng | 49.7 | 0.665 |
| Tatoeba-test.enm-eng.enm.eng | 23.9 | 0.491 |
| Tatoeba-test.fao-eng.fao.eng | 23.4 | 0.446 |
| Tatoeba-test.frr-eng.frr.eng | 10.2 | 0.184 |
| Tatoeba-test.fry-eng.fry.eng | 29.6 | 0.486 |
| Tatoeba-test.gos-eng.gos.eng | 17.8 | 0.352 |
| Tatoeba-test.got-eng.got.eng | 0.1 | 0.058 |
| Tatoeba-test.gsw-eng.gsw.eng | 15.3 | 0.333 |
| Tatoeba-test.isl-eng.isl.eng | 51.0 | 0.669 |
| Tatoeba-test.ksh-eng.ksh.eng | 6.7 | 0.266 |
| Tatoeba-test.ltz-eng.ltz.eng | 33.0 | 0.505 |
| Tatoeba-test.multi.eng | 54.0 | 0.687 |
| Tatoeba-test.nds-eng.nds.eng | 33.6 | 0.529 |
| Tatoeba-test.nld-eng.nld.eng | 58.9 | 0.733 |
| Tatoeba-test.non-eng.non.eng | 37.3 | 0.546 |
| Tatoeba-test.nor-eng.nor.eng | 54.9 | 0.696 |
| Tatoeba-test.pdc-eng.pdc.eng | 29.6 | 0.446 |
| Tatoeba-test.sco-eng.sco.eng | 40.5 | 0.581 |
| Tatoeba-test.stq-eng.stq.eng | 14.5 | 0.361 |
| Tatoeba-test.swe-eng.swe.eng | 62.0 | 0.745 |
| Tatoeba-test.swg-eng.swg.eng | 17.1 | 0.334 |
| Tatoeba-test.yid-eng.yid.eng | 19.4 | 0.400 |
### System Info:
- hf_name: gem-eng
- source_languages: gem
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']
- src_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.test.txt
- src_alpha3: gem
- tgt_alpha3: eng
- short_pair: gem-en
- chrF2_score: 0.687
- bleu: 54.0
- brevity_penalty: 0.993
- ref_len: 72120.0
- src_name: Germanic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: gem
- tgt_alpha2: en
- prefer_old: False
- long_pair: gem-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
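Because the source side is multilingual while the target is English only, no `>>id<<` token is needed; a minimal usage sketch (the sample sentences are illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-gem-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# A single batch may mix Germanic source languages; no source tag is required.
src_text = [
    "Ich habe das Buch gelesen.",  # German
    "Ik heb het boek gelezen.",    # Dutch
]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```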
|
Helsinki-NLP/opus-mt-gem-gem
|
Helsinki-NLP
|
marian
| 11 | 3,053 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 17,101 |
### gem-gem
* source group: Germanic languages
* target group: Germanic languages
* OPUS readme: [gem-gem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-gem/README.md)
* model: transformer
* source language(s): afr ang_Latn dan deu eng enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid
* target language(s): afr ang_Latn dan deu eng enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.eval.txt)
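As noted above, decoding into a specific target language requires the sentence-initial token. A minimal sketch, assuming the converted checkpoint `Helsinki-NLP/opus-mt-gem-gem` (the Danish sample is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-gem-gem"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target-language token; >>swe<< selects Swedish output.
src_text = [">>swe<< Jeg har læst bogen."]  # Danish in, Swedish out
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```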
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 24.5 | 0.519 |
| newssyscomb2009-engdeu.eng.deu | 18.7 | 0.495 |
| news-test2008-deueng.deu.eng | 22.8 | 0.509 |
| news-test2008-engdeu.eng.deu | 18.6 | 0.485 |
| newstest2009-deueng.deu.eng | 22.2 | 0.507 |
| newstest2009-engdeu.eng.deu | 18.3 | 0.491 |
| newstest2010-deueng.deu.eng | 24.8 | 0.537 |
| newstest2010-engdeu.eng.deu | 19.7 | 0.499 |
| newstest2011-deueng.deu.eng | 22.9 | 0.516 |
| newstest2011-engdeu.eng.deu | 18.3 | 0.485 |
| newstest2012-deueng.deu.eng | 23.9 | 0.524 |
| newstest2012-engdeu.eng.deu | 18.5 | 0.484 |
| newstest2013-deueng.deu.eng | 26.3 | 0.537 |
| newstest2013-engdeu.eng.deu | 21.5 | 0.506 |
| newstest2014-deen-deueng.deu.eng | 25.7 | 0.535 |
| newstest2015-ende-deueng.deu.eng | 27.3 | 0.542 |
| newstest2015-ende-engdeu.eng.deu | 24.2 | 0.534 |
| newstest2016-ende-deueng.deu.eng | 31.8 | 0.584 |
| newstest2016-ende-engdeu.eng.deu | 28.4 | 0.564 |
| newstest2017-ende-deueng.deu.eng | 27.6 | 0.545 |
| newstest2017-ende-engdeu.eng.deu | 22.8 | 0.527 |
| newstest2018-ende-deueng.deu.eng | 34.1 | 0.593 |
| newstest2018-ende-engdeu.eng.deu | 32.7 | 0.595 |
| newstest2019-deen-deueng.deu.eng | 30.6 | 0.565 |
| newstest2019-ende-engdeu.eng.deu | 29.5 | 0.567 |
| Tatoeba-test.afr-ang.afr.ang | 0.0 | 0.053 |
| Tatoeba-test.afr-dan.afr.dan | 57.8 | 0.907 |
| Tatoeba-test.afr-deu.afr.deu | 46.4 | 0.663 |
| Tatoeba-test.afr-eng.afr.eng | 57.4 | 0.717 |
| Tatoeba-test.afr-enm.afr.enm | 11.3 | 0.285 |
| Tatoeba-test.afr-fry.afr.fry | 0.0 | 0.167 |
| Tatoeba-test.afr-gos.afr.gos | 1.5 | 0.178 |
| Tatoeba-test.afr-isl.afr.isl | 29.0 | 0.760 |
| Tatoeba-test.afr-ltz.afr.ltz | 11.2 | 0.246 |
| Tatoeba-test.afr-nld.afr.nld | 53.3 | 0.708 |
| Tatoeba-test.afr-nor.afr.nor | 66.0 | 0.752 |
| Tatoeba-test.afr-swe.afr.swe | 88.0 | 0.955 |
| Tatoeba-test.afr-yid.afr.yid | 59.5 | 0.443 |
| Tatoeba-test.ang-afr.ang.afr | 10.7 | 0.043 |
| Tatoeba-test.ang-dan.ang.dan | 6.3 | 0.190 |
| Tatoeba-test.ang-deu.ang.deu | 1.4 | 0.212 |
| Tatoeba-test.ang-eng.ang.eng | 8.1 | 0.247 |
| Tatoeba-test.ang-enm.ang.enm | 1.7 | 0.196 |
| Tatoeba-test.ang-fao.ang.fao | 10.7 | 0.105 |
| Tatoeba-test.ang-gos.ang.gos | 10.7 | 0.128 |
| Tatoeba-test.ang-isl.ang.isl | 16.0 | 0.135 |
| Tatoeba-test.ang-ltz.ang.ltz | 16.0 | 0.121 |
| Tatoeba-test.ang-yid.ang.yid | 1.5 | 0.136 |
| Tatoeba-test.dan-afr.dan.afr | 22.7 | 0.655 |
| Tatoeba-test.dan-ang.dan.ang | 3.1 | 0.110 |
| Tatoeba-test.dan-deu.dan.deu | 47.4 | 0.676 |
| Tatoeba-test.dan-eng.dan.eng | 54.7 | 0.704 |
| Tatoeba-test.dan-enm.dan.enm | 4.8 | 0.291 |
| Tatoeba-test.dan-fao.dan.fao | 9.7 | 0.120 |
| Tatoeba-test.dan-gos.dan.gos | 3.8 | 0.240 |
| Tatoeba-test.dan-isl.dan.isl | 66.1 | 0.678 |
| Tatoeba-test.dan-ltz.dan.ltz | 78.3 | 0.563 |
| Tatoeba-test.dan-nds.dan.nds | 6.2 | 0.335 |
| Tatoeba-test.dan-nld.dan.nld | 60.0 | 0.748 |
| Tatoeba-test.dan-nor.dan.nor | 68.1 | 0.812 |
| Tatoeba-test.dan-swe.dan.swe | 65.0 | 0.785 |
| Tatoeba-test.dan-swg.dan.swg | 2.6 | 0.182 |
| Tatoeba-test.dan-yid.dan.yid | 9.3 | 0.226 |
| Tatoeba-test.deu-afr.deu.afr | 50.3 | 0.682 |
| Tatoeba-test.deu-ang.deu.ang | 0.5 | 0.118 |
| Tatoeba-test.deu-dan.deu.dan | 49.6 | 0.679 |
| Tatoeba-test.deu-eng.deu.eng | 43.4 | 0.618 |
| Tatoeba-test.deu-enm.deu.enm | 2.2 | 0.159 |
| Tatoeba-test.deu-frr.deu.frr | 0.4 | 0.156 |
| Tatoeba-test.deu-fry.deu.fry | 10.7 | 0.355 |
| Tatoeba-test.deu-gos.deu.gos | 0.7 | 0.183 |
| Tatoeba-test.deu-got.deu.got | 0.3 | 0.010 |
| Tatoeba-test.deu-gsw.deu.gsw | 1.1 | 0.130 |
| Tatoeba-test.deu-isl.deu.isl | 24.3 | 0.504 |
| Tatoeba-test.deu-ksh.deu.ksh | 0.9 | 0.173 |
| Tatoeba-test.deu-ltz.deu.ltz | 15.6 | 0.304 |
| Tatoeba-test.deu-nds.deu.nds | 21.2 | 0.469 |
| Tatoeba-test.deu-nld.deu.nld | 47.1 | 0.657 |
| Tatoeba-test.deu-nor.deu.nor | 43.9 | 0.646 |
| Tatoeba-test.deu-pdc.deu.pdc | 3.0 | 0.133 |
| Tatoeba-test.deu-sco.deu.sco | 12.0 | 0.296 |
| Tatoeba-test.deu-stq.deu.stq | 0.6 | 0.137 |
| Tatoeba-test.deu-swe.deu.swe | 50.6 | 0.668 |
| Tatoeba-test.deu-swg.deu.swg | 0.2 | 0.137 |
| Tatoeba-test.deu-yid.deu.yid | 3.9 | 0.229 |
| Tatoeba-test.eng-afr.eng.afr | 55.2 | 0.721 |
| Tatoeba-test.eng-ang.eng.ang | 4.9 | 0.118 |
| Tatoeba-test.eng-dan.eng.dan | 52.6 | 0.684 |
| Tatoeba-test.eng-deu.eng.deu | 35.4 | 0.573 |
| Tatoeba-test.eng-enm.eng.enm | 1.8 | 0.223 |
| Tatoeba-test.eng-fao.eng.fao | 7.0 | 0.312 |
| Tatoeba-test.eng-frr.eng.frr | 1.2 | 0.050 |
| Tatoeba-test.eng-fry.eng.fry | 15.8 | 0.381 |
| Tatoeba-test.eng-gos.eng.gos | 0.7 | 0.170 |
| Tatoeba-test.eng-got.eng.got | 0.3 | 0.011 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.5 | 0.126 |
| Tatoeba-test.eng-isl.eng.isl | 20.9 | 0.463 |
| Tatoeba-test.eng-ksh.eng.ksh | 1.0 | 0.141 |
| Tatoeba-test.eng-ltz.eng.ltz | 12.8 | 0.292 |
| Tatoeba-test.eng-nds.eng.nds | 18.3 | 0.428 |
| Tatoeba-test.eng-nld.eng.nld | 47.3 | 0.657 |
| Tatoeba-test.eng-non.eng.non | 0.3 | 0.145 |
| Tatoeba-test.eng-nor.eng.nor | 47.2 | 0.650 |
| Tatoeba-test.eng-pdc.eng.pdc | 4.8 | 0.177 |
| Tatoeba-test.eng-sco.eng.sco | 38.1 | 0.597 |
| Tatoeba-test.eng-stq.eng.stq | 2.4 | 0.288 |
| Tatoeba-test.eng-swe.eng.swe | 52.7 | 0.677 |
| Tatoeba-test.eng-swg.eng.swg | 1.1 | 0.163 |
| Tatoeba-test.eng-yid.eng.yid | 4.5 | 0.223 |
| Tatoeba-test.enm-afr.enm.afr | 22.8 | 0.401 |
| Tatoeba-test.enm-ang.enm.ang | 0.4 | 0.062 |
| Tatoeba-test.enm-dan.enm.dan | 51.4 | 0.782 |
| Tatoeba-test.enm-deu.enm.deu | 33.8 | 0.473 |
| Tatoeba-test.enm-eng.enm.eng | 22.4 | 0.495 |
| Tatoeba-test.enm-fry.enm.fry | 16.0 | 0.173 |
| Tatoeba-test.enm-gos.enm.gos | 6.1 | 0.222 |
| Tatoeba-test.enm-isl.enm.isl | 59.5 | 0.651 |
| Tatoeba-test.enm-ksh.enm.ksh | 10.5 | 0.130 |
| Tatoeba-test.enm-nds.enm.nds | 18.1 | 0.327 |
| Tatoeba-test.enm-nld.enm.nld | 38.3 | 0.546 |
| Tatoeba-test.enm-nor.enm.nor | 15.6 | 0.290 |
| Tatoeba-test.enm-yid.enm.yid | 2.3 | 0.215 |
| Tatoeba-test.fao-ang.fao.ang | 2.1 | 0.035 |
| Tatoeba-test.fao-dan.fao.dan | 53.7 | 0.625 |
| Tatoeba-test.fao-eng.fao.eng | 24.7 | 0.435 |
| Tatoeba-test.fao-gos.fao.gos | 12.7 | 0.116 |
| Tatoeba-test.fao-isl.fao.isl | 26.3 | 0.341 |
| Tatoeba-test.fao-nor.fao.nor | 41.9 | 0.586 |
| Tatoeba-test.fao-swe.fao.swe | 0.0 | 1.000 |
| Tatoeba-test.frr-deu.frr.deu | 7.4 | 0.263 |
| Tatoeba-test.frr-eng.frr.eng | 7.0 | 0.157 |
| Tatoeba-test.frr-fry.frr.fry | 4.0 | 0.112 |
| Tatoeba-test.frr-gos.frr.gos | 1.0 | 0.135 |
| Tatoeba-test.frr-nds.frr.nds | 12.4 | 0.207 |
| Tatoeba-test.frr-nld.frr.nld | 10.6 | 0.227 |
| Tatoeba-test.frr-stq.frr.stq | 1.0 | 0.058 |
| Tatoeba-test.fry-afr.fry.afr | 12.7 | 0.333 |
| Tatoeba-test.fry-deu.fry.deu | 30.8 | 0.555 |
| Tatoeba-test.fry-eng.fry.eng | 31.2 | 0.506 |
| Tatoeba-test.fry-enm.fry.enm | 0.0 | 0.175 |
| Tatoeba-test.fry-frr.fry.frr | 1.6 | 0.091 |
| Tatoeba-test.fry-gos.fry.gos | 1.1 | 0.254 |
| Tatoeba-test.fry-ltz.fry.ltz | 30.4 | 0.526 |
| Tatoeba-test.fry-nds.fry.nds | 12.4 | 0.116 |
| Tatoeba-test.fry-nld.fry.nld | 43.4 | 0.637 |
| Tatoeba-test.fry-nor.fry.nor | 47.1 | 0.607 |
| Tatoeba-test.fry-stq.fry.stq | 0.6 | 0.181 |
| Tatoeba-test.fry-swe.fry.swe | 30.2 | 0.587 |
| Tatoeba-test.fry-yid.fry.yid | 3.1 | 0.173 |
| Tatoeba-test.gos-afr.gos.afr | 1.8 | 0.215 |
| Tatoeba-test.gos-ang.gos.ang | 0.0 | 0.045 |
| Tatoeba-test.gos-dan.gos.dan | 4.1 | 0.236 |
| Tatoeba-test.gos-deu.gos.deu | 19.6 | 0.406 |
| Tatoeba-test.gos-eng.gos.eng | 15.1 | 0.329 |
| Tatoeba-test.gos-enm.gos.enm | 5.8 | 0.271 |
| Tatoeba-test.gos-fao.gos.fao | 19.0 | 0.136 |
| Tatoeba-test.gos-frr.gos.frr | 1.3 | 0.119 |
| Tatoeba-test.gos-fry.gos.fry | 17.1 | 0.388 |
| Tatoeba-test.gos-isl.gos.isl | 16.8 | 0.356 |
| Tatoeba-test.gos-ltz.gos.ltz | 3.6 | 0.174 |
| Tatoeba-test.gos-nds.gos.nds | 4.7 | 0.225 |
| Tatoeba-test.gos-nld.gos.nld | 16.3 | 0.406 |
| Tatoeba-test.gos-stq.gos.stq | 0.7 | 0.154 |
| Tatoeba-test.gos-swe.gos.swe | 8.6 | 0.319 |
| Tatoeba-test.gos-yid.gos.yid | 4.4 | 0.165 |
| Tatoeba-test.got-deu.got.deu | 0.2 | 0.041 |
| Tatoeba-test.got-eng.got.eng | 0.2 | 0.068 |
| Tatoeba-test.got-nor.got.nor | 0.6 | 0.000 |
| Tatoeba-test.gsw-deu.gsw.deu | 15.9 | 0.373 |
| Tatoeba-test.gsw-eng.gsw.eng | 14.7 | 0.320 |
| Tatoeba-test.isl-afr.isl.afr | 38.0 | 0.641 |
| Tatoeba-test.isl-ang.isl.ang | 0.0 | 0.037 |
| Tatoeba-test.isl-dan.isl.dan | 67.7 | 0.836 |
| Tatoeba-test.isl-deu.isl.deu | 42.6 | 0.614 |
| Tatoeba-test.isl-eng.isl.eng | 43.5 | 0.610 |
| Tatoeba-test.isl-enm.isl.enm | 12.4 | 0.123 |
| Tatoeba-test.isl-fao.isl.fao | 15.6 | 0.176 |
| Tatoeba-test.isl-gos.isl.gos | 7.1 | 0.257 |
| Tatoeba-test.isl-nor.isl.nor | 53.5 | 0.690 |
| Tatoeba-test.isl-stq.isl.stq | 10.7 | 0.176 |
| Tatoeba-test.isl-swe.isl.swe | 67.7 | 0.818 |
| Tatoeba-test.ksh-deu.ksh.deu | 11.8 | 0.393 |
| Tatoeba-test.ksh-eng.ksh.eng | 4.0 | 0.239 |
| Tatoeba-test.ksh-enm.ksh.enm | 9.5 | 0.085 |
| Tatoeba-test.ltz-afr.ltz.afr | 36.5 | 0.529 |
| Tatoeba-test.ltz-ang.ltz.ang | 0.0 | 0.043 |
| Tatoeba-test.ltz-dan.ltz.dan | 80.6 | 0.722 |
| Tatoeba-test.ltz-deu.ltz.deu | 40.1 | 0.581 |
| Tatoeba-test.ltz-eng.ltz.eng | 36.1 | 0.511 |
| Tatoeba-test.ltz-fry.ltz.fry | 16.5 | 0.524 |
| Tatoeba-test.ltz-gos.ltz.gos | 0.7 | 0.118 |
| Tatoeba-test.ltz-nld.ltz.nld | 40.4 | 0.535 |
| Tatoeba-test.ltz-nor.ltz.nor | 19.1 | 0.582 |
| Tatoeba-test.ltz-stq.ltz.stq | 2.4 | 0.093 |
| Tatoeba-test.ltz-swe.ltz.swe | 25.9 | 0.430 |
| Tatoeba-test.ltz-yid.ltz.yid | 1.5 | 0.160 |
| Tatoeba-test.multi.multi | 42.7 | 0.614 |
| Tatoeba-test.nds-dan.nds.dan | 23.0 | 0.465 |
| Tatoeba-test.nds-deu.nds.deu | 39.8 | 0.610 |
| Tatoeba-test.nds-eng.nds.eng | 32.0 | 0.520 |
| Tatoeba-test.nds-enm.nds.enm | 3.9 | 0.156 |
| Tatoeba-test.nds-frr.nds.frr | 10.7 | 0.127 |
| Tatoeba-test.nds-fry.nds.fry | 10.7 | 0.231 |
| Tatoeba-test.nds-gos.nds.gos | 0.8 | 0.157 |
| Tatoeba-test.nds-nld.nds.nld | 44.1 | 0.634 |
| Tatoeba-test.nds-nor.nds.nor | 47.1 | 0.665 |
| Tatoeba-test.nds-swg.nds.swg | 0.5 | 0.166 |
| Tatoeba-test.nds-yid.nds.yid | 12.7 | 0.337 |
| Tatoeba-test.nld-afr.nld.afr | 58.4 | 0.748 |
| Tatoeba-test.nld-dan.nld.dan | 61.3 | 0.753 |
| Tatoeba-test.nld-deu.nld.deu | 48.2 | 0.670 |
| Tatoeba-test.nld-eng.nld.eng | 52.8 | 0.690 |
| Tatoeba-test.nld-enm.nld.enm | 5.7 | 0.178 |
| Tatoeba-test.nld-frr.nld.frr | 0.9 | 0.159 |
| Tatoeba-test.nld-fry.nld.fry | 23.0 | 0.467 |
| Tatoeba-test.nld-gos.nld.gos | 1.0 | 0.165 |
| Tatoeba-test.nld-ltz.nld.ltz | 14.4 | 0.310 |
| Tatoeba-test.nld-nds.nld.nds | 24.1 | 0.485 |
| Tatoeba-test.nld-nor.nld.nor | 53.6 | 0.705 |
| Tatoeba-test.nld-sco.nld.sco | 15.0 | 0.415 |
| Tatoeba-test.nld-stq.nld.stq | 0.5 | 0.183 |
| Tatoeba-test.nld-swe.nld.swe | 73.6 | 0.842 |
| Tatoeba-test.nld-swg.nld.swg | 4.2 | 0.191 |
| Tatoeba-test.nld-yid.nld.yid | 9.4 | 0.299 |
| Tatoeba-test.non-eng.non.eng | 27.7 | 0.501 |
| Tatoeba-test.nor-afr.nor.afr | 48.2 | 0.687 |
| Tatoeba-test.nor-dan.nor.dan | 69.5 | 0.820 |
| Tatoeba-test.nor-deu.nor.deu | 41.1 | 0.634 |
| Tatoeba-test.nor-eng.nor.eng | 49.4 | 0.660 |
| Tatoeba-test.nor-enm.nor.enm | 6.8 | 0.230 |
| Tatoeba-test.nor-fao.nor.fao | 6.9 | 0.395 |
| Tatoeba-test.nor-fry.nor.fry | 9.2 | 0.323 |
| Tatoeba-test.nor-got.nor.got | 1.5 | 0.000 |
| Tatoeba-test.nor-isl.nor.isl | 34.5 | 0.555 |
| Tatoeba-test.nor-ltz.nor.ltz | 22.1 | 0.447 |
| Tatoeba-test.nor-nds.nor.nds | 34.3 | 0.565 |
| Tatoeba-test.nor-nld.nor.nld | 50.5 | 0.676 |
| Tatoeba-test.nor-nor.nor.nor | 57.6 | 0.764 |
| Tatoeba-test.nor-swe.nor.swe | 68.9 | 0.813 |
| Tatoeba-test.nor-yid.nor.yid | 65.0 | 0.627 |
| Tatoeba-test.pdc-deu.pdc.deu | 43.5 | 0.559 |
| Tatoeba-test.pdc-eng.pdc.eng | 26.1 | 0.471 |
| Tatoeba-test.sco-deu.sco.deu | 7.1 | 0.295 |
| Tatoeba-test.sco-eng.sco.eng | 34.4 | 0.551 |
| Tatoeba-test.sco-nld.sco.nld | 9.9 | 0.438 |
| Tatoeba-test.stq-deu.stq.deu | 8.6 | 0.385 |
| Tatoeba-test.stq-eng.stq.eng | 21.8 | 0.431 |
| Tatoeba-test.stq-frr.stq.frr | 2.1 | 0.111 |
| Tatoeba-test.stq-fry.stq.fry | 7.6 | 0.267 |
| Tatoeba-test.stq-gos.stq.gos | 0.7 | 0.198 |
| Tatoeba-test.stq-isl.stq.isl | 16.0 | 0.121 |
| Tatoeba-test.stq-ltz.stq.ltz | 3.8 | 0.150 |
| Tatoeba-test.stq-nld.stq.nld | 14.6 | 0.375 |
| Tatoeba-test.stq-yid.stq.yid | 2.4 | 0.096 |
| Tatoeba-test.swe-afr.swe.afr | 51.8 | 0.802 |
| Tatoeba-test.swe-dan.swe.dan | 64.9 | 0.784 |
| Tatoeba-test.swe-deu.swe.deu | 47.0 | 0.657 |
| Tatoeba-test.swe-eng.swe.eng | 55.8 | 0.700 |
| Tatoeba-test.swe-fao.swe.fao | 0.0 | 0.060 |
| Tatoeba-test.swe-fry.swe.fry | 14.1 | 0.449 |
| Tatoeba-test.swe-gos.swe.gos | 7.5 | 0.291 |
| Tatoeba-test.swe-isl.swe.isl | 70.7 | 0.812 |
| Tatoeba-test.swe-ltz.swe.ltz | 15.9 | 0.553 |
| Tatoeba-test.swe-nld.swe.nld | 78.7 | 0.854 |
| Tatoeba-test.swe-nor.swe.nor | 67.1 | 0.799 |
| Tatoeba-test.swe-yid.swe.yid | 14.7 | 0.156 |
| Tatoeba-test.swg-dan.swg.dan | 7.7 | 0.341 |
| Tatoeba-test.swg-deu.swg.deu | 8.0 | 0.334 |
| Tatoeba-test.swg-eng.swg.eng | 12.4 | 0.305 |
| Tatoeba-test.swg-nds.swg.nds | 1.1 | 0.209 |
| Tatoeba-test.swg-nld.swg.nld | 4.9 | 0.244 |
| Tatoeba-test.swg-yid.swg.yid | 3.4 | 0.194 |
| Tatoeba-test.yid-afr.yid.afr | 23.6 | 0.552 |
| Tatoeba-test.yid-ang.yid.ang | 0.1 | 0.066 |
| Tatoeba-test.yid-dan.yid.dan | 17.5 | 0.392 |
| Tatoeba-test.yid-deu.yid.deu | 21.0 | 0.423 |
| Tatoeba-test.yid-eng.yid.eng | 17.4 | 0.368 |
| Tatoeba-test.yid-enm.yid.enm | 0.6 | 0.143 |
| Tatoeba-test.yid-fry.yid.fry | 5.3 | 0.169 |
| Tatoeba-test.yid-gos.yid.gos | 1.2 | 0.149 |
| Tatoeba-test.yid-ltz.yid.ltz | 3.5 | 0.256 |
| Tatoeba-test.yid-nds.yid.nds | 14.4 | 0.487 |
| Tatoeba-test.yid-nld.yid.nld | 26.1 | 0.423 |
| Tatoeba-test.yid-nor.yid.nor | 47.1 | 0.583 |
| Tatoeba-test.yid-stq.yid.stq | 1.5 | 0.092 |
| Tatoeba-test.yid-swe.yid.swe | 35.9 | 0.518 |
| Tatoeba-test.yid-swg.yid.swg | 1.0 | 0.124 |
### System Info:
- hf_name: gem-gem
- source_languages: gem
- target_languages: gem
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-gem/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']
- src_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'}
- tgt_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.test.txt
- src_alpha3: gem
- tgt_alpha3: gem
- short_pair: gem-gem
- chrF2_score: 0.614
- bleu: 42.7
- brevity_penalty: 0.993
- ref_len: 73459.0
- src_name: Germanic languages
- tgt_name: Germanic languages
- train_date: 2020-07-27
- src_alpha2: gem
- tgt_alpha2: gem
- prefer_old: False
- long_pair: gem-gem
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-gil-en
|
Helsinki-NLP
|
marian
| 10 | 33 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-gil-en
* source languages: gil
* target languages: en
* OPUS readme: [gil-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.en | 36.0 | 0.522 |
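If you need the original Marian weights rather than the converted checkpoint, the archive linked above can be fetched directly; a minimal sketch using only the standard library:

```python
import urllib.request

# URL taken from the "download original weights" link in this card.
url = "https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.zip"
urllib.request.urlretrieve(url, "opus-mt-gil-en-2020-01-20.zip")
```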
|
Helsinki-NLP/opus-mt-gil-es
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-gil-es
* source languages: gil
* target languages: es
* OPUS readme: [gil-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.es | 21.8 | 0.398 |
|
Helsinki-NLP/opus-mt-gil-fi
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-gil-fi
* source languages: gil
* target languages: fi
* OPUS readme: [gil-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.fi | 23.1 | 0.447 |
|
Helsinki-NLP/opus-mt-gil-fr
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-gil-fr
* source languages: gil
* target languages: fr
* OPUS readme: [gil-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.fr | 24.9 | 0.424 |
|
Helsinki-NLP/opus-mt-gil-sv
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-gil-sv
* source languages: gil
* target languages: sv
* OPUS readme: [gil-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.sv | 25.8 | 0.441 |
|
Helsinki-NLP/opus-mt-gl-en
|
Helsinki-NLP
|
marian
| 11 | 1,015 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['gl', 'en']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,987 |
### glg-eng
* source group: Galician
* target group: English
* OPUS readme: [glg-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-eng/README.md)
* model: transformer-align
* source language(s): glg
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.glg.eng | 44.4 | 0.628 |
### System Info:
- hf_name: glg-eng
- source_languages: glg
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gl', 'en']
- src_constituents: {'glg'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.test.txt
- src_alpha3: glg
- tgt_alpha3: eng
- short_pair: gl-en
- chrF2_score: 0.628
- bleu: 44.4
- brevity_penalty: 0.975
- ref_len: 8365.0
- src_name: Galician
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: gl
- tgt_alpha2: en
- prefer_old: False
- long_pair: glg-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
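The checkpoint also works through the high-level `pipeline` API; a minimal sketch (the Galician sample is illustrative):

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-gl-en")
print(translator("Os modelos de tradución automática son útiles.")[0]["translation_text"])
```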
|
Helsinki-NLP/opus-mt-gl-es
|
Helsinki-NLP
|
marian
| 11 | 50 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['gl', 'es']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,984 |
### glg-spa
* source group: Galician
* target group: Spanish
* OPUS readme: [glg-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-spa/README.md)
* model: transformer-align
* source language(s): glg
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.glg.spa | 72.2 | 0.836 |
### System Info:
- hf_name: glg-spa
- source_languages: glg
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gl', 'es']
- src_constituents: {'glg'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.test.txt
- src_alpha3: glg
- tgt_alpha3: spa
- short_pair: gl-es
- chrF2_score: 0.836
- bleu: 72.2
- brevity_penalty: 0.982
- ref_len: 17443.0
- src_name: Galician
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: gl
- tgt_alpha2: es
- prefer_old: False
- long_pair: glg-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-gl-pt
|
Helsinki-NLP
|
marian
| 11 | 18 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['gl', 'pt']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,989 |
### glg-por
* source group: Galician
* target group: Portuguese
* OPUS readme: [glg-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-por/README.md)
* model: transformer-align
* source language(s): glg
* target language(s): por
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.glg.por | 57.9 | 0.758 |
### System Info:
- hf_name: glg-por
- source_languages: glg
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gl', 'pt']
- src_constituents: {'glg'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.test.txt
- src_alpha3: glg
- tgt_alpha3: por
- short_pair: gl-pt
- chrF2_score: 0.758
- bleu: 57.9
- brevity_penalty: 0.977
- ref_len: 3078.0
- src_name: Galician
- tgt_name: Portuguese
- train_date: 2020-06-16
- src_alpha2: gl
- tgt_alpha2: pt
- prefer_old: False
- long_pair: glg-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-gmq-en
|
Helsinki-NLP
|
marian
| 11 | 7,110 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq', 'en']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,160 |
### gmq-eng
* source group: North Germanic languages
* target group: English
* OPUS readme: [gmq-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-eng/README.md)
* model: transformer
* source language(s): dan fao isl nno nob nob_Hebr non_Latn swe
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.zip)
* test set translations: [opus2m-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.test.txt)
* test set scores: [opus2m-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.multi.eng | 58.1 | 0.720 |
### System Info:
- hf_name: gmq-eng
- source_languages: gmq
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq', 'en']
- src_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.test.txt
- src_alpha3: gmq
- tgt_alpha3: eng
- short_pair: gmq-en
- chrF2_score: 0.72
- bleu: 58.1
- brevity_penalty: 0.982
- ref_len: 72641.0
- src_name: North Germanic languages
- tgt_name: English
- train_date: 2020-07-26
- src_alpha2: gmq
- tgt_alpha2: en
- prefer_old: False
- long_pair: gmq-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-gmq-gmq
|
Helsinki-NLP
|
marian
| 11 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 3,401 |
### gmq-gmq
* source group: North Germanic languages
* target group: North Germanic languages
* OPUS readme: [gmq-gmq](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-gmq/README.md)
* model: transformer
* source language(s): dan fao isl nno nob swe
* target language(s): dan fao isl nno nob swe
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the token-listing sketch after this list
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.eval.txt)
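To see which `>>id<<` tokens a checkpoint accepts, you can scan the tokenizer vocabulary; a minimal sketch:

```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-gmq-gmq")
# Target-language selectors are stored in the vocabulary as >>id<< entries.
lang_tokens = [t for t in tokenizer.get_vocab() if t.startswith(">>") and t.endswith("<<")]
print(sorted(lang_tokens))  # should cover dan, fao, isl, nno, nob, swe per the language list above
```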
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.dan-fao.dan.fao | 8.1 | 0.173 |
| Tatoeba-test.dan-isl.dan.isl | 52.5 | 0.827 |
| Tatoeba-test.dan-nor.dan.nor | 62.8 | 0.772 |
| Tatoeba-test.dan-swe.dan.swe | 67.6 | 0.802 |
| Tatoeba-test.fao-dan.fao.dan | 11.3 | 0.306 |
| Tatoeba-test.fao-isl.fao.isl | 26.3 | 0.359 |
| Tatoeba-test.fao-nor.fao.nor | 36.8 | 0.531 |
| Tatoeba-test.fao-swe.fao.swe | 0.0 | 0.632 |
| Tatoeba-test.isl-dan.isl.dan | 67.0 | 0.739 |
| Tatoeba-test.isl-fao.isl.fao | 14.5 | 0.243 |
| Tatoeba-test.isl-nor.isl.nor | 51.8 | 0.674 |
| Tatoeba-test.isl-swe.isl.swe | 100.0 | 1.000 |
| Tatoeba-test.multi.multi | 64.7 | 0.782 |
| Tatoeba-test.nor-dan.nor.dan | 65.6 | 0.797 |
| Tatoeba-test.nor-fao.nor.fao | 9.4 | 0.362 |
| Tatoeba-test.nor-isl.nor.isl | 38.8 | 0.587 |
| Tatoeba-test.nor-nor.nor.nor | 51.9 | 0.721 |
| Tatoeba-test.nor-swe.nor.swe | 66.5 | 0.789 |
| Tatoeba-test.swe-dan.swe.dan | 67.6 | 0.802 |
| Tatoeba-test.swe-fao.swe.fao | 0.0 | 0.268 |
| Tatoeba-test.swe-isl.swe.isl | 65.8 | 0.914 |
| Tatoeba-test.swe-nor.swe.nor | 60.6 | 0.755 |
### System Info:
- hf_name: gmq-gmq
- source_languages: gmq
- target_languages: gmq
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-gmq/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq']
- src_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'}
- tgt_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.test.txt
- src_alpha3: gmq
- tgt_alpha3: gmq
- short_pair: gmq-gmq
- chrF2_score: 0.782
- bleu: 64.7
- brevity_penalty: 0.994
- ref_len: 49385.0
- src_name: North Germanic languages
- tgt_name: North Germanic languages
- train_date: 2020-07-27
- src_alpha2: gmq
- tgt_alpha2: gmq
- prefer_old: False
- long_pair: gmq-gmq
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-gmw-en
|
Helsinki-NLP
|
marian
| 11 | 13 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 3,796 |
### gmw-eng
* source group: West Germanic languages
* target group: English
* OPUS readme: [gmw-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-eng/README.md)
* model: transformer
* source language(s): afr ang_Latn deu enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 27.2 | 0.538 |
| news-test2008-deueng.deu.eng | 25.7 | 0.534 |
| newstest2009-deueng.deu.eng | 25.1 | 0.530 |
| newstest2010-deueng.deu.eng | 27.9 | 0.565 |
| newstest2011-deueng.deu.eng | 25.3 | 0.539 |
| newstest2012-deueng.deu.eng | 26.6 | 0.548 |
| newstest2013-deueng.deu.eng | 29.6 | 0.565 |
| newstest2014-deen-deueng.deu.eng | 30.2 | 0.571 |
| newstest2015-ende-deueng.deu.eng | 31.5 | 0.577 |
| newstest2016-ende-deueng.deu.eng | 36.7 | 0.622 |
| newstest2017-ende-deueng.deu.eng | 32.3 | 0.585 |
| newstest2018-ende-deueng.deu.eng | 39.9 | 0.638 |
| newstest2019-deen-deueng.deu.eng | 35.9 | 0.611 |
| Tatoeba-test.afr-eng.afr.eng | 61.8 | 0.750 |
| Tatoeba-test.ang-eng.ang.eng | 7.3 | 0.220 |
| Tatoeba-test.deu-eng.deu.eng | 48.3 | 0.657 |
| Tatoeba-test.enm-eng.enm.eng | 16.1 | 0.423 |
| Tatoeba-test.frr-eng.frr.eng | 7.0 | 0.168 |
| Tatoeba-test.fry-eng.fry.eng | 28.6 | 0.488 |
| Tatoeba-test.gos-eng.gos.eng | 15.5 | 0.326 |
| Tatoeba-test.gsw-eng.gsw.eng | 12.7 | 0.308 |
| Tatoeba-test.ksh-eng.ksh.eng | 8.4 | 0.254 |
| Tatoeba-test.ltz-eng.ltz.eng | 28.7 | 0.453 |
| Tatoeba-test.multi.eng | 48.5 | 0.646 |
| Tatoeba-test.nds-eng.nds.eng | 31.4 | 0.509 |
| Tatoeba-test.nld-eng.nld.eng | 58.1 | 0.728 |
| Tatoeba-test.pdc-eng.pdc.eng | 25.1 | 0.406 |
| Tatoeba-test.sco-eng.sco.eng | 40.8 | 0.570 |
| Tatoeba-test.stq-eng.stq.eng | 20.3 | 0.380 |
| Tatoeba-test.swg-eng.swg.eng | 20.5 | 0.315 |
| Tatoeba-test.yid-eng.yid.eng | 16.0 | 0.366 |
### System Info:
- hf_name: gmw-eng
- source_languages: gmw
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
- src_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.test.txt
- src_alpha3: gmw
- tgt_alpha3: eng
- short_pair: gmw-en
- chrF2_score: 0.646
- bleu: 48.5
- brevity_penalty: 0.997
- ref_len: 72584.0
- src_name: West Germanic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: gmw
- tgt_alpha2: en
- prefer_old: False
- long_pair: gmw-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-gmw-gmw
|
Helsinki-NLP
|
marian
| 11 | 3,462 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 11,598 |
### gmw-gmw
* source group: West Germanic languages
* target group: West Germanic languages
* OPUS readme: [gmw-gmw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-gmw/README.md)
* model: transformer
* source language(s): afr ang_Latn deu eng enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* target language(s): afr ang_Latn deu eng enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the mixed-target sketch after this list
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.eval.txt)
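Because each sentence carries its own target token, one batch can mix target languages; a minimal sketch (sentences are illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-gmw-gmw"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Each source sentence selects its own target language via the >>id<< token.
src_text = [
    ">>nld<< This is a test.",  # English -> Dutch
    ">>afr<< This is a test.",  # English -> Afrikaans
]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```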
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 25.3 | 0.527 |
| newssyscomb2009-engdeu.eng.deu | 19.0 | 0.502 |
| news-test2008-deueng.deu.eng | 23.7 | 0.515 |
| news-test2008-engdeu.eng.deu | 19.2 | 0.491 |
| newstest2009-deueng.deu.eng | 23.1 | 0.514 |
| newstest2009-engdeu.eng.deu | 18.6 | 0.495 |
| newstest2010-deueng.deu.eng | 25.8 | 0.545 |
| newstest2010-engdeu.eng.deu | 20.3 | 0.505 |
| newstest2011-deueng.deu.eng | 23.7 | 0.523 |
| newstest2011-engdeu.eng.deu | 18.9 | 0.490 |
| newstest2012-deueng.deu.eng | 24.4 | 0.529 |
| newstest2012-engdeu.eng.deu | 19.2 | 0.489 |
| newstest2013-deueng.deu.eng | 27.2 | 0.545 |
| newstest2013-engdeu.eng.deu | 22.4 | 0.514 |
| newstest2014-deen-deueng.deu.eng | 27.0 | 0.546 |
| newstest2015-ende-deueng.deu.eng | 28.4 | 0.552 |
| newstest2015-ende-engdeu.eng.deu | 25.3 | 0.541 |
| newstest2016-ende-deueng.deu.eng | 33.2 | 0.595 |
| newstest2016-ende-engdeu.eng.deu | 29.8 | 0.578 |
| newstest2017-ende-deueng.deu.eng | 29.0 | 0.557 |
| newstest2017-ende-engdeu.eng.deu | 23.9 | 0.534 |
| newstest2018-ende-deueng.deu.eng | 35.9 | 0.607 |
| newstest2018-ende-engdeu.eng.deu | 34.8 | 0.609 |
| newstest2019-deen-deueng.deu.eng | 32.1 | 0.579 |
| newstest2019-ende-engdeu.eng.deu | 31.0 | 0.579 |
| Tatoeba-test.afr-ang.afr.ang | 0.0 | 0.065 |
| Tatoeba-test.afr-deu.afr.deu | 46.8 | 0.668 |
| Tatoeba-test.afr-eng.afr.eng | 58.5 | 0.728 |
| Tatoeba-test.afr-enm.afr.enm | 13.4 | 0.357 |
| Tatoeba-test.afr-fry.afr.fry | 5.3 | 0.026 |
| Tatoeba-test.afr-gos.afr.gos | 3.5 | 0.228 |
| Tatoeba-test.afr-ltz.afr.ltz | 1.6 | 0.131 |
| Tatoeba-test.afr-nld.afr.nld | 55.4 | 0.715 |
| Tatoeba-test.afr-yid.afr.yid | 3.4 | 0.008 |
| Tatoeba-test.ang-afr.ang.afr | 3.1 | 0.096 |
| Tatoeba-test.ang-deu.ang.deu | 2.6 | 0.188 |
| Tatoeba-test.ang-eng.ang.eng | 5.4 | 0.211 |
| Tatoeba-test.ang-enm.ang.enm | 1.7 | 0.197 |
| Tatoeba-test.ang-gos.ang.gos | 6.6 | 0.186 |
| Tatoeba-test.ang-ltz.ang.ltz | 5.3 | 0.072 |
| Tatoeba-test.ang-yid.ang.yid | 0.9 | 0.131 |
| Tatoeba-test.deu-afr.deu.afr | 52.7 | 0.699 |
| Tatoeba-test.deu-ang.deu.ang | 0.8 | 0.133 |
| Tatoeba-test.deu-eng.deu.eng | 43.5 | 0.621 |
| Tatoeba-test.deu-enm.deu.enm | 6.9 | 0.245 |
| Tatoeba-test.deu-frr.deu.frr | 0.8 | 0.200 |
| Tatoeba-test.deu-fry.deu.fry | 15.1 | 0.367 |
| Tatoeba-test.deu-gos.deu.gos | 2.2 | 0.279 |
| Tatoeba-test.deu-gsw.deu.gsw | 1.0 | 0.176 |
| Tatoeba-test.deu-ksh.deu.ksh | 0.6 | 0.208 |
| Tatoeba-test.deu-ltz.deu.ltz | 12.1 | 0.274 |
| Tatoeba-test.deu-nds.deu.nds | 18.8 | 0.446 |
| Tatoeba-test.deu-nld.deu.nld | 48.6 | 0.669 |
| Tatoeba-test.deu-pdc.deu.pdc | 4.6 | 0.198 |
| Tatoeba-test.deu-sco.deu.sco | 12.0 | 0.340 |
| Tatoeba-test.deu-stq.deu.stq | 3.2 | 0.240 |
| Tatoeba-test.deu-swg.deu.swg | 0.5 | 0.179 |
| Tatoeba-test.deu-yid.deu.yid | 1.7 | 0.160 |
| Tatoeba-test.eng-afr.eng.afr | 55.8 | 0.730 |
| Tatoeba-test.eng-ang.eng.ang | 5.7 | 0.157 |
| Tatoeba-test.eng-deu.eng.deu | 36.7 | 0.584 |
| Tatoeba-test.eng-enm.eng.enm | 2.0 | 0.272 |
| Tatoeba-test.eng-frr.eng.frr | 6.1 | 0.246 |
| Tatoeba-test.eng-fry.eng.fry | 15.3 | 0.378 |
| Tatoeba-test.eng-gos.eng.gos | 1.2 | 0.242 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.164 |
| Tatoeba-test.eng-ksh.eng.ksh | 0.9 | 0.170 |
| Tatoeba-test.eng-ltz.eng.ltz | 13.7 | 0.263 |
| Tatoeba-test.eng-nds.eng.nds | 17.1 | 0.410 |
| Tatoeba-test.eng-nld.eng.nld | 49.6 | 0.673 |
| Tatoeba-test.eng-pdc.eng.pdc | 5.1 | 0.218 |
| Tatoeba-test.eng-sco.eng.sco | 34.8 | 0.587 |
| Tatoeba-test.eng-stq.eng.stq | 2.1 | 0.322 |
| Tatoeba-test.eng-swg.eng.swg | 1.7 | 0.192 |
| Tatoeba-test.eng-yid.eng.yid | 1.7 | 0.173 |
| Tatoeba-test.enm-afr.enm.afr | 13.4 | 0.397 |
| Tatoeba-test.enm-ang.enm.ang | 0.7 | 0.063 |
| Tatoeba-test.enm-deu.enm.deu | 41.5 | 0.514 |
| Tatoeba-test.enm-eng.enm.eng | 21.3 | 0.483 |
| Tatoeba-test.enm-fry.enm.fry | 0.0 | 0.058 |
| Tatoeba-test.enm-gos.enm.gos | 10.7 | 0.354 |
| Tatoeba-test.enm-ksh.enm.ksh | 7.0 | 0.161 |
| Tatoeba-test.enm-nds.enm.nds | 18.6 | 0.316 |
| Tatoeba-test.enm-nld.enm.nld | 38.3 | 0.524 |
| Tatoeba-test.enm-yid.enm.yid | 0.7 | 0.128 |
| Tatoeba-test.frr-deu.frr.deu | 4.1 | 0.219 |
| Tatoeba-test.frr-eng.frr.eng | 14.1 | 0.186 |
| Tatoeba-test.frr-fry.frr.fry | 3.1 | 0.129 |
| Tatoeba-test.frr-gos.frr.gos | 3.6 | 0.226 |
| Tatoeba-test.frr-nds.frr.nds | 12.4 | 0.145 |
| Tatoeba-test.frr-nld.frr.nld | 9.8 | 0.209 |
| Tatoeba-test.frr-stq.frr.stq | 2.8 | 0.142 |
| Tatoeba-test.fry-afr.fry.afr | 0.0 | 1.000 |
| Tatoeba-test.fry-deu.fry.deu | 30.1 | 0.535 |
| Tatoeba-test.fry-eng.fry.eng | 28.0 | 0.486 |
| Tatoeba-test.fry-enm.fry.enm | 16.0 | 0.262 |
| Tatoeba-test.fry-frr.fry.frr | 5.5 | 0.160 |
| Tatoeba-test.fry-gos.fry.gos | 1.6 | 0.307 |
| Tatoeba-test.fry-ltz.fry.ltz | 30.4 | 0.438 |
| Tatoeba-test.fry-nds.fry.nds | 8.1 | 0.083 |
| Tatoeba-test.fry-nld.fry.nld | 41.4 | 0.616 |
| Tatoeba-test.fry-stq.fry.stq | 1.6 | 0.217 |
| Tatoeba-test.fry-yid.fry.yid | 1.6 | 0.159 |
| Tatoeba-test.gos-afr.gos.afr | 6.3 | 0.318 |
| Tatoeba-test.gos-ang.gos.ang | 6.2 | 0.058 |
| Tatoeba-test.gos-deu.gos.deu | 11.7 | 0.363 |
| Tatoeba-test.gos-eng.gos.eng | 14.9 | 0.322 |
| Tatoeba-test.gos-enm.gos.enm | 9.1 | 0.398 |
| Tatoeba-test.gos-frr.gos.frr | 3.3 | 0.117 |
| Tatoeba-test.gos-fry.gos.fry | 13.1 | 0.387 |
| Tatoeba-test.gos-ltz.gos.ltz | 3.1 | 0.154 |
| Tatoeba-test.gos-nds.gos.nds | 2.4 | 0.206 |
| Tatoeba-test.gos-nld.gos.nld | 13.9 | 0.395 |
| Tatoeba-test.gos-stq.gos.stq | 2.1 | 0.209 |
| Tatoeba-test.gos-yid.gos.yid | 1.7 | 0.147 |
| Tatoeba-test.gsw-deu.gsw.deu | 10.5 | 0.350 |
| Tatoeba-test.gsw-eng.gsw.eng | 10.7 | 0.299 |
| Tatoeba-test.ksh-deu.ksh.deu | 12.0 | 0.373 |
| Tatoeba-test.ksh-eng.ksh.eng | 3.2 | 0.225 |
| Tatoeba-test.ksh-enm.ksh.enm | 13.4 | 0.308 |
| Tatoeba-test.ltz-afr.ltz.afr | 37.4 | 0.525 |
| Tatoeba-test.ltz-ang.ltz.ang | 2.8 | 0.036 |
| Tatoeba-test.ltz-deu.ltz.deu | 40.3 | 0.596 |
| Tatoeba-test.ltz-eng.ltz.eng | 31.7 | 0.490 |
| Tatoeba-test.ltz-fry.ltz.fry | 36.3 | 0.658 |
| Tatoeba-test.ltz-gos.ltz.gos | 2.9 | 0.209 |
| Tatoeba-test.ltz-nld.ltz.nld | 38.8 | 0.530 |
| Tatoeba-test.ltz-stq.ltz.stq | 5.8 | 0.165 |
| Tatoeba-test.ltz-yid.ltz.yid | 1.0 | 0.159 |
| Tatoeba-test.multi.multi | 36.4 | 0.568 |
| Tatoeba-test.nds-deu.nds.deu | 35.0 | 0.573 |
| Tatoeba-test.nds-eng.nds.eng | 29.6 | 0.495 |
| Tatoeba-test.nds-enm.nds.enm | 3.7 | 0.194 |
| Tatoeba-test.nds-frr.nds.frr | 6.6 | 0.133 |
| Tatoeba-test.nds-fry.nds.fry | 4.2 | 0.087 |
| Tatoeba-test.nds-gos.nds.gos | 2.0 | 0.243 |
| Tatoeba-test.nds-nld.nds.nld | 41.4 | 0.618 |
| Tatoeba-test.nds-swg.nds.swg | 0.6 | 0.178 |
| Tatoeba-test.nds-yid.nds.yid | 8.3 | 0.238 |
| Tatoeba-test.nld-afr.nld.afr | 59.4 | 0.759 |
| Tatoeba-test.nld-deu.nld.deu | 49.9 | 0.685 |
| Tatoeba-test.nld-eng.nld.eng | 54.1 | 0.699 |
| Tatoeba-test.nld-enm.nld.enm | 5.0 | 0.250 |
| Tatoeba-test.nld-frr.nld.frr | 2.4 | 0.224 |
| Tatoeba-test.nld-fry.nld.fry | 19.4 | 0.446 |
| Tatoeba-test.nld-gos.nld.gos | 2.5 | 0.273 |
| Tatoeba-test.nld-ltz.nld.ltz | 13.8 | 0.292 |
| Tatoeba-test.nld-nds.nld.nds | 21.3 | 0.457 |
| Tatoeba-test.nld-sco.nld.sco | 14.7 | 0.423 |
| Tatoeba-test.nld-stq.nld.stq | 1.9 | 0.257 |
| Tatoeba-test.nld-swg.nld.swg | 4.2 | 0.162 |
| Tatoeba-test.nld-yid.nld.yid | 2.6 | 0.186 |
| Tatoeba-test.pdc-deu.pdc.deu | 39.7 | 0.529 |
| Tatoeba-test.pdc-eng.pdc.eng | 25.0 | 0.427 |
| Tatoeba-test.sco-deu.sco.deu | 28.4 | 0.428 |
| Tatoeba-test.sco-eng.sco.eng | 41.8 | 0.595 |
| Tatoeba-test.sco-nld.sco.nld | 36.4 | 0.565 |
| Tatoeba-test.stq-deu.stq.deu | 7.7 | 0.328 |
| Tatoeba-test.stq-eng.stq.eng | 21.1 | 0.428 |
| Tatoeba-test.stq-frr.stq.frr | 2.0 | 0.118 |
| Tatoeba-test.stq-fry.stq.fry | 6.3 | 0.255 |
| Tatoeba-test.stq-gos.stq.gos | 1.4 | 0.244 |
| Tatoeba-test.stq-ltz.stq.ltz | 4.4 | 0.204 |
| Tatoeba-test.stq-nld.stq.nld | 10.7 | 0.371 |
| Tatoeba-test.stq-yid.stq.yid | 1.4 | 0.105 |
| Tatoeba-test.swg-deu.swg.deu | 9.5 | 0.343 |
| Tatoeba-test.swg-eng.swg.eng | 15.1 | 0.306 |
| Tatoeba-test.swg-nds.swg.nds | 0.7 | 0.196 |
| Tatoeba-test.swg-nld.swg.nld | 11.6 | 0.308 |
| Tatoeba-test.swg-yid.swg.yid | 0.9 | 0.186 |
| Tatoeba-test.yid-afr.yid.afr | 100.0 | 1.000 |
| Tatoeba-test.yid-ang.yid.ang | 0.6 | 0.079 |
| Tatoeba-test.yid-deu.yid.deu | 16.7 | 0.372 |
| Tatoeba-test.yid-eng.yid.eng | 15.8 | 0.344 |
| Tatoeba-test.yid-enm.yid.enm | 1.3 | 0.166 |
| Tatoeba-test.yid-fry.yid.fry | 5.6 | 0.157 |
| Tatoeba-test.yid-gos.yid.gos | 2.2 | 0.160 |
| Tatoeba-test.yid-ltz.yid.ltz | 2.1 | 0.238 |
| Tatoeba-test.yid-nds.yid.nds | 14.4 | 0.365 |
| Tatoeba-test.yid-nld.yid.nld | 20.9 | 0.397 |
| Tatoeba-test.yid-stq.yid.stq | 3.7 | 0.165 |
| Tatoeba-test.yid-swg.yid.swg | 1.8 | 0.156 |
### System Info:
- hf_name: gmw-gmw
- source_languages: gmw
- target_languages: gmw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-gmw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
- src_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
- tgt_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.test.txt
- src_alpha3: gmw
- tgt_alpha3: gmw
- short_pair: gmw-gmw
- chrF2_score: 0.568
- bleu: 36.4
- brevity_penalty: 1.0
- ref_len: 72534.0
- src_name: West Germanic languages
- tgt_name: West Germanic languages
- train_date: 2020-07-27
- src_alpha2: gmw
- tgt_alpha2: gmw
- prefer_old: False
- long_pair: gmw-gmw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-grk-en
|
Helsinki-NLP
|
marian
| 11 | 628 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['el', 'grk', 'en']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,136 |
### grk-eng
* source group: Greek languages
* target group: English
* OPUS readme: [grk-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/grk-eng/README.md)
* model: transformer
* source language(s): ell grc_Grek
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/grk-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/grk-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/grk-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ell-eng.ell.eng | 65.9 | 0.779 |
| Tatoeba-test.grc-eng.grc.eng | 4.1 | 0.187 |
| Tatoeba-test.multi.eng | 60.9 | 0.733 |
### System Info:
- hf_name: grk-eng
- source_languages: grk
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/grk-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['el', 'grk', 'en']
- src_constituents: {'grc_Grek', 'ell'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/grk-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/grk-eng/opus2m-2020-08-01.test.txt
- src_alpha3: grk
- tgt_alpha3: eng
- short_pair: grk-en
- chrF2_score: 0.733
- bleu: 60.9
- brevity_penalty: 0.973
- ref_len: 62205.0
- src_name: Greek languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: grk
- tgt_alpha2: en
- prefer_old: False
- long_pair: grk-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
Helsinki-NLP/opus-mt-guw-de
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-guw-de
* source languages: guw
* target languages: de
* OPUS readme: [guw-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.de | 22.7 | 0.434 |
|
Helsinki-NLP/opus-mt-guw-en
|
Helsinki-NLP
|
marian
| 10 | 13 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 |
### opus-mt-guw-en
* source languages: guw
* target languages: en
* OPUS readme: [guw-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.en | 44.8 | 0.601 |

| Helsinki-NLP/opus-mt-guw-es | Helsinki-NLP | marian | 10 | 28 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |

### opus-mt-guw-es
* source languages: guw
* target languages: es
* OPUS readme: [guw-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.es | 27.2 | 0.457 |

| Helsinki-NLP/opus-mt-guw-fi | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |

### opus-mt-guw-fi
* source languages: guw
* target languages: fi
* OPUS readme: [guw-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.fi | 27.7 | 0.512 |

| Helsinki-NLP/opus-mt-guw-fr | Helsinki-NLP | marian | 10 | 11 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |

### opus-mt-guw-fr
* source languages: guw
* target languages: fr
* OPUS readme: [guw-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.fr | 29.7 | 0.479 |

| Helsinki-NLP/opus-mt-guw-sv | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |

### opus-mt-guw-sv
* source languages: guw
* target languages: sv
* OPUS readme: [guw-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.sv | 31.2 | 0.498 |

| Helsinki-NLP/opus-mt-gv-en | Helsinki-NLP | marian | 10 | 56 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 774 |

### opus-mt-gv-en
* source languages: gv
* target languages: en
* OPUS readme: [gv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gv-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gv-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gv-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gv-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.gv.en | 38.9 | 0.668 |

| Helsinki-NLP/opus-mt-ha-en | Helsinki-NLP | marian | 10 | 103 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 803 |

### opus-mt-ha-en
* source languages: ha
* target languages: en
* OPUS readme: [ha-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ha-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ha-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ha.en | 35.0 | 0.506 |
| Tatoeba.ha.en | 39.0 | 0.497 |
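
Scores of this kind are conventionally computed with `sacrebleu`; the sketch below runs on toy stand-in data and is not necessarily the exact evaluation pipeline used for this card:

```python
# pip install sacrebleu
import sacrebleu

# Toy stand-ins for the model's detokenized hypotheses and the references
# from the test set linked above.
hypotheses = ["Good morning, everyone."]
references = ["Good morning, everybody."]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
# sacrebleu reports chrF on a 0-100 scale; the tables in this card use 0-1.
print(f"BLEU = {bleu.score:.1f}, chr-F = {chrf.score / 100:.3f}")
```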

| Helsinki-NLP/opus-mt-ha-es | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |

### opus-mt-ha-es
* source languages: ha
* target languages: es
* OPUS readme: [ha-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ha-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ha-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ha.es | 21.8 | 0.394 |

| Helsinki-NLP/opus-mt-ha-fi | Helsinki-NLP | marian | 10 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |

### opus-mt-ha-fi
* source languages: ha
* target languages: fi
* OPUS readme: [ha-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ha-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/ha-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ha.fi | 21.9 | 0.435 |

| Helsinki-NLP/opus-mt-ha-fr | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |

### opus-mt-ha-fr
* source languages: ha
* target languages: fr
* OPUS readme: [ha-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ha-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ha-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ha.fr | 24.3 | 0.415 |

| Helsinki-NLP/opus-mt-ha-sv | Helsinki-NLP | marian | 10 | 10 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |

### opus-mt-ha-sv
* source languages: ha
* target languages: sv
* OPUS readme: [ha-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ha-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ha-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ha.sv | 25.8 | 0.438 |

| Helsinki-NLP/opus-mt-he-ar | Helsinki-NLP | marian | 11 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | ['he', 'ar'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,171 |

### heb-ara
* source group: Hebrew
* target group: Arabic
* OPUS readme: [heb-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ara/README.md)
* model: transformer
* source language(s): heb
* target language(s): apc apc_Latn ara arq arz
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ara/opus-2020-07-03.eval.txt)
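
Because this model has several target variants, the target must be selected with the sentence-initial token described above. A minimal sketch with the `transformers` Marian classes (the input sentence is illustrative):

```python
# pip install transformers sentencepiece
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-he-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The >>id<< prefix selects the target variant; valid ids are the target
# language(s) listed above (e.g. ara, apc, arz).
batch = tokenizer([">>ara<< שלום, מה שלומך?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```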
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.ara | 23.6 | 0.532 |
### System Info:
- hf_name: heb-ara
- source_languages: heb
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'ar']
- src_constituents: {'heb'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ara/opus-2020-07-03.test.txt
- src_alpha3: heb
- tgt_alpha3: ara
- short_pair: he-ar
- chrF2_score: 0.532
- bleu: 23.6
- brevity_penalty: 0.926
- ref_len: 6372.0
- src_name: Hebrew
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: he
- tgt_alpha2: ar
- prefer_old: False
- long_pair: heb-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41

| Helsinki-NLP/opus-mt-he-de | Helsinki-NLP | marian | 10 | 16 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 770 |

### opus-mt-he-de
* source languages: he
* target languages: de
* OPUS readme: [he-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/he-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/he-de/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-de/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-de/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.he.de | 45.5 | 0.647 |

| Helsinki-NLP/opus-mt-he-eo | Helsinki-NLP | marian | 11 | 17 | transformers | 0 | translation | true | true | false | apache-2.0 | ['he', 'eo'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,984 |

### heb-epo
* source group: Hebrew
* target group: Esperanto
* OPUS readme: [heb-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-epo/README.md)
* model: transformer-align
* source language(s): heb
* target language(s): epo
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.epo | 17.6 | 0.348 |
### System Info:
- hf_name: heb-epo
- source_languages: heb
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'eo']
- src_constituents: {'heb'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-epo/opus-2020-06-16.test.txt
- src_alpha3: heb
- tgt_alpha3: epo
- short_pair: he-eo
- chrF2_score: 0.348
- bleu: 17.6
- brevity_penalty: 0.899
- ref_len: 78217.0
- src_name: Hebrew
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: he
- tgt_alpha2: eo
- prefer_old: False
- long_pair: heb-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
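
For context on the fields above, `brevity_penalty` follows the standard BLEU definition, with candidate length $c$ and reference length $r$ (`ref_len`):

$$
BP = \begin{cases} 1 & \text{if } c > r \\ \exp(1 - r/c) & \text{if } c \le r \end{cases}
$$

Inverting this for the values above, $BP = 0.899$ and $r = 78217$ give $c = r/(1 - \ln 0.899) \approx 70{,}700$, i.e. the system's output ran noticeably shorter than the references.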

| Helsinki-NLP/opus-mt-he-es | Helsinki-NLP | marian | 12 | 55 | transformers | 0 | translation | true | true | false | apache-2.0 | ['he', 'es'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,024 |

### he-es
* source group: Hebrew
* target group: Spanish
* OPUS readme: [heb-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-spa/README.md)
* model: transformer
* source language(s): heb
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.spa | 51.3 | 0.689 |
### System Info:
- hf_name: he-es
- source_languages: heb
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'es']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('Spanish', {'spa'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-spa
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.test.txt
- src_alpha3: heb
- tgt_alpha3: spa
- chrF2_score: 0.689
- bleu: 51.3
- brevity_penalty: 0.97
- ref_len: 14213.0
- src_name: Hebrew
- tgt_name: Spanish
- train_date: 2020-12-10
- src_alpha2: he
- tgt_alpha2: es
- prefer_old: False
- short_pair: he-es
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-09:15

| Helsinki-NLP/opus-mt-he-fi | Helsinki-NLP | marian | 10 | 48 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |

### opus-mt-he-fi
* source languages: he
* target languages: fi
* OPUS readme: [he-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/he-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/he-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.he.fi | 23.3 | 0.492 |

| Helsinki-NLP/opus-mt-he-it | Helsinki-NLP | marian | 12 | 13 | transformers | 0 | translation | true | true | false | apache-2.0 | ['he', 'it'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,012 |

### he-it
* source group: Hebrew
* target group: Italian
* OPUS readme: [heb-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ita/README.md)
* model: transformer
* source language(s): heb
* target language(s): ita
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.ita | 41.1 | 0.643 |
### System Info:
- hf_name: he-it
- source_languages: heb
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'it']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('Italian', {'ita'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-ita
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.test.txt
- src_alpha3: heb
- tgt_alpha3: ita
- chrF2_score: 0.643
- bleu: 41.1
- brevity_penalty: 0.997
- ref_len: 11464.0
- src_name: Hebrew
- tgt_name: Italian
- train_date: 2020-12-10
- src_alpha2: he
- tgt_alpha2: it
- prefer_old: False
- short_pair: he-it
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-11:50

| Helsinki-NLP/opus-mt-he-ru | Helsinki-NLP | marian | 12 | 26 | transformers | 0 | translation | true | true | false | apache-2.0 | ['he', 'ru'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,013 |

### he-ru
* source group: Hebrew
* target group: Russian
* OPUS readme: [heb-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-rus/README.md)
* model: transformer
* source language(s): heb
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-10-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-rus/opus-2020-10-04.zip)
* test set translations: [opus-2020-10-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-rus/opus-2020-10-04.test.txt)
* test set scores: [opus-2020-10-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-rus/opus-2020-10-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.rus | 40.5 | 0.599 |
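
A minimal batched-inference sketch; the device placement and `num_beams=4` are illustrative choices, not settings taken from this card:

```python
# pip install transformers sentencepiece torch
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-he-ru"
tokenizer = MarianTokenizer.from_pretrained(model_name)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = MarianMTModel.from_pretrained(model_name).to(device)

sentences = ["שלום עולם.", "מה שלומך היום?"]  # illustrative inputs
batch = tokenizer(sentences, return_tensors="pt", padding=True).to(device)
translated = model.generate(**batch, num_beams=4)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```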
### System Info:
- hf_name: he-ru
- source_languages: heb
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'ru']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('Russian', {'rus'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-rus
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-rus/opus-2020-10-04.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-rus/opus-2020-10-04.test.txt
- src_alpha3: heb
- tgt_alpha3: rus
- chrF2_score: 0.599
- bleu: 40.5
- brevity_penalty: 0.963
- ref_len: 16583.0
- src_name: Hebrew
- tgt_name: Russian
- train_date: 2020-10-04
- src_alpha2: he
- tgt_alpha2: ru
- prefer_old: False
- short_pair: he-ru
- helsinki_git_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561
- transformers_git_sha: b0a907615aca0d728a9bc90f16caef0848f6a435
- port_machine: LM0-400-22516.local
- port_time: 2020-10-26-16:16

| Helsinki-NLP/opus-mt-he-sv | Helsinki-NLP | marian | 10 | 13 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |

### opus-mt-he-sv
* source languages: he
* target languages: sv
* OPUS readme: [he-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/he-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/he-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.he.sv | 28.9 | 0.493 |

| Helsinki-NLP/opus-mt-he-uk | Helsinki-NLP | marian | 11 | 20 | transformers | 0 | translation | true | true | false | apache-2.0 | ['he', 'uk'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,987 |

### heb-ukr
* source group: Hebrew
* target group: Ukrainian
* OPUS readme: [heb-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ukr/README.md)
* model: transformer-align
* source language(s): heb
* target language(s): ukr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.ukr | 35.4 | 0.552 |
### System Info:
- hf_name: heb-ukr
- source_languages: heb
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'uk']
- src_constituents: {'heb'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.test.txt
- src_alpha3: heb
- tgt_alpha3: ukr
- short_pair: he-uk
- chrF2_score: 0.552
- bleu: 35.4
- brevity_penalty: 0.971
- ref_len: 5163.0
- src_name: Hebrew
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: he
- tgt_alpha2: uk
- prefer_old: False
- long_pair: heb-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41

| Helsinki-NLP/opus-mt-hi-en | Helsinki-NLP | marian | 11 | 12,099 | transformers | 4 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 853 |

### opus-mt-hi-en
* source languages: hi
* target languages: en
* OPUS readme: [hi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/hi-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hi-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hi-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014.hi.en | 9.1 | 0.357 |
| newstest2014-hien.hi.en | 13.6 | 0.409 |
| Tatoeba.hi.en | 40.4 | 0.580 |
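
The checkpoint can also be driven through the high-level `pipeline` API; a short sketch with an illustrative input:

```python
# pip install transformers sentencepiece
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-hi-en")
print(translator("मुझे हिंदी से अंग्रेज़ी में अनुवाद चाहिए।")[0]["translation_text"])
```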

| Helsinki-NLP/opus-mt-hi-ur | Helsinki-NLP | marian | 11 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | ['hi', 'ur'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,983 |

### hin-urd
* source group: Hindi
* target group: Urdu
* OPUS readme: [hin-urd](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hin-urd/README.md)
* model: transformer-align
* source language(s): hin
* target language(s): urd
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.hin.urd | 12.4 | 0.393 |
### System Info:
- hf_name: hin-urd
- source_languages: hin
- target_languages: urd
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hin-urd/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['hi', 'ur']
- src_constituents: {'hin'}
- tgt_constituents: {'urd'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.test.txt
- src_alpha3: hin
- tgt_alpha3: urd
- short_pair: hi-ur
- chrF2_score: 0.393
- bleu: 12.4
- brevity_penalty: 1.0
- ref_len: 1618.0
- src_name: Hindi
- tgt_name: Urdu
- train_date: 2020-06-16
- src_alpha2: hi
- tgt_alpha2: ur
- prefer_old: False
- long_pair: hin-urd
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
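
For reference, `chrF2_score` above is the character n-gram F-score (Popović, 2015) with recall weighted by $\beta = 2$, where chrP and chrR are character n-gram precision and recall averaged over n-gram orders:

$$
\mathrm{chrF}\beta = (1 + \beta^2)\,\frac{\mathrm{chrP} \cdot \mathrm{chrR}}{\beta^2\,\mathrm{chrP} + \mathrm{chrR}}
$$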

| Helsinki-NLP/opus-mt-hil-de | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |

### opus-mt-hil-de
* source languages: hil
* target languages: de
* OPUS readme: [hil-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hil-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/hil-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hil.de | 26.4 | 0.479 |

| Helsinki-NLP/opus-mt-hil-en | Helsinki-NLP | marian | 10 | 14 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |

### opus-mt-hil-en
* source languages: hil
* target languages: en
* OPUS readme: [hil-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hil-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/hil-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hil.en | 49.2 | 0.638 |

| Helsinki-NLP/opus-mt-hil-fi | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 |

### opus-mt-hil-fi
* source languages: hil
* target languages: fi
* OPUS readme: [hil-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hil-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/hil-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hil.fi | 29.9 | 0.547 |

| Helsinki-NLP/opus-mt-ho-en | Helsinki-NLP | marian | 10 | 18 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |

### opus-mt-ho-en
* source languages: ho
* target languages: en
* OPUS readme: [ho-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ho-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ho-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ho-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ho-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ho.en | 26.8 | 0.428 |

| Helsinki-NLP/opus-mt-hr-es | Helsinki-NLP | marian | 10 | 26 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |

### opus-mt-hr-es
* source languages: hr
* target languages: es
* OPUS readme: [hr-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hr-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/hr-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hr.es | 27.9 | 0.498 |

| Helsinki-NLP/opus-mt-hr-fi | Helsinki-NLP | marian | 10 | 430 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |

### opus-mt-hr-fi
* source languages: hr
* target languages: fi
* OPUS readme: [hr-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hr-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/hr-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hr.fi | 25.0 | 0.519 |

| Helsinki-NLP/opus-mt-hr-fr | Helsinki-NLP | marian | 10 | 75 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |

### opus-mt-hr-fr
* source languages: hr
* target languages: fr
* OPUS readme: [hr-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hr-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/hr-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hr.fr | 26.1 | 0.482 |

| Helsinki-NLP/opus-mt-hr-sv | Helsinki-NLP | marian | 10 | 174 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |

### opus-mt-hr-sv
* source languages: hr
* target languages: sv
* OPUS readme: [hr-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hr-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/hr-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hr.sv | 30.5 | 0.526 |

| Helsinki-NLP/opus-mt-ht-en | Helsinki-NLP | marian | 10 | 326 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 803 |

### opus-mt-ht-en
* source languages: ht
* target languages: en
* OPUS readme: [ht-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ht-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ht-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ht.en | 37.5 | 0.542 |
| Tatoeba.ht.en | 57.0 | 0.689 |

| Helsinki-NLP/opus-mt-ht-es | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |

### opus-mt-ht-es
* source languages: ht
* target languages: es
* OPUS readme: [ht-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ht-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ht-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ht.es | 23.7 | 0.418 |

| Helsinki-NLP/opus-mt-ht-fi | Helsinki-NLP | marian | 10 | 11 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 |

### opus-mt-ht-fi
* source languages: ht
* target languages: fi
* OPUS readme: [ht-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ht-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ht-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ht.fi | 23.3 | 0.464 |