## Dataset schema

| column | dtype | observed range |
|---|---|---|
| repo_id | stringlengths | 4 – 122 |
| author | stringlengths | 2 – 38 |
| model_type | stringlengths | 2 – 33 |
| files_per_repo | int64 | 2 – 39k |
| downloads_30d | int64 | 0 – 33.7M |
| library | stringlengths | 2 – 37 |
| likes | int64 | 0 – 4.87k |
| pipeline | stringlengths | 5 – 30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | stringlengths | 2 – 33 |
| languages | stringlengths | 2 – 1.63k |
| datasets | stringlengths | 2 – 2.58k |
| co2 | stringlengths | 6 – 258 |
| prs_count | int64 | 0 – 125 |
| prs_open | int64 | 0 – 120 |
| prs_merged | int64 | 0 – 46 |
| prs_closed | int64 | 0 – 34 |
| discussions_count | int64 | 0 – 218 |
| discussions_open | int64 | 0 – 148 |
| discussions_closed | int64 | 0 – 70 |
| tags | stringlengths | 2 – 513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 2 classes |
| has_text | bool | 1 class |
| text_length | int64 | 201 – 598k |
| readme | stringlengths | 0 – 598k |
repo_id: Helsinki-NLP/opus-mt-sv-mt · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 12 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768

### opus-mt-sv-mt

* source languages: sv
* target languages: mt
* OPUS readme: [sv-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-mt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-mt/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-mt/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-mt/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.mt | 32.2 | 0.509 |
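Every card ends with a Benchmarks table of the same three-column shape, so the scores can be extracted mechanically. A minimal sketch (the regex and function name are our assumptions, not part of the cards):

```python
import re

# Matches one data row of a card's Benchmarks table, e.g.
# "| JW300.sv.mt | 32.2 | 0.509 |"
# The header and |---| separator rows fail the numeric groups and are skipped.
ROW = re.compile(r"\|\s*([\w.\-]+)\s*\|\s*([\d.]+)\s*\|\s*([\d.]+)\s*\|")

def parse_benchmarks(readme: str):
    """Return (testset, BLEU, chr-F) triples found in a card's readme."""
    return [(t, float(b), float(c)) for t, b, c in ROW.findall(readme)]
```

Applied to the opus-mt-sv-mt card above this yields a single triple for JW300.sv.mt.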
repo_id: Helsinki-NLP/opus-mt-sv-niu · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 10 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776

### opus-mt-sv-niu

* source languages: sv
* target languages: niu
* OPUS readme: [sv-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-niu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-niu/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-niu/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-niu/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.niu | 37.0 | 0.575 |
repo_id: Helsinki-NLP/opus-mt-sv-nl · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 12 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 775

### opus-mt-sv-nl

* source languages: sv
* target languages: nl
* OPUS readme: [sv-nl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-nl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-nl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-nl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-nl/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.sv.nl | 24.3 | 0.522 |
repo_id: Helsinki-NLP/opus-mt-sv-no · author: Helsinki-NLP · model_type: marian · files_per_repo: 11 · downloads_30d: 36 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: ['sv', 'no'] · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 2,113

### swe-nor

* source group: Swedish
* target group: Norwegian
* OPUS readme: [swe-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/swe-nor/README.md)
* model: transformer-align
* source language(s): swe
* target language(s): nno nob
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/swe-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/swe-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/swe-nor/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.swe.nor | 65.8 | 0.796 |

### System Info

- hf_name: swe-nor
- source_languages: swe
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/swe-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sv', 'no']
- src_constituents: {'swe'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/swe-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/swe-nor/opus-2020-06-17.test.txt
- src_alpha3: swe
- tgt_alpha3: nor
- short_pair: sv-no
- chrF2_score: 0.796
- bleu: 65.8
- brevity_penalty: 0.991
- ref_len: 3682.0
- src_name: Swedish
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: sv
- tgt_alpha2: no
- prefer_old: False
- long_pair: swe-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
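The swe-nor card is multi-target (nno and nob), so each source sentence must begin with a `>>id<<` token naming the target language. A minimal sketch of building such inputs (the helper name is ours; the actual MarianMT calls from transformers are shown only as comments):

```python
# Valid target IDs for this card, from its "target language(s)" field.
TARGETS = {"nno", "nob"}

def with_lang_token(text: str, target: str) -> str:
    """Prefix a source sentence with the >>id<< token the model expects."""
    if target not in TARGETS:
        raise ValueError(f"unknown target language ID: {target}")
    return f">>{target}<< {text}"

# The prefixed string is what would be fed to the tokenizer, e.g. (not run here):
#   from transformers import MarianMTModel, MarianTokenizer
#   tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-sv-no")
#   model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-sv-no")
#   batch = tok([with_lang_token("Jag bor i Stockholm.", "nob")], return_tensors="pt")
#   out = model.generate(**batch)
```

Single-target cards such as opus-mt-sv-mt need no such prefix; only multi-target Tatoeba-Challenge models document the `>>id<<` requirement.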
repo_id: Helsinki-NLP/opus-mt-sv-nso · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 11 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776

### opus-mt-sv-nso

* source languages: sv
* target languages: nso
* OPUS readme: [sv-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-nso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-nso/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-nso/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-nso/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.nso | 37.9 | 0.575 |
repo_id: Helsinki-NLP/opus-mt-sv-ny · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 8 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768

### opus-mt-sv-ny

* source languages: sv
* target languages: ny
* OPUS readme: [sv-ny](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ny/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ny/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ny/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ny/opus-2020-01-21.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ny | 25.9 | 0.523 |
repo_id: Helsinki-NLP/opus-mt-sv-pag · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776

### opus-mt-sv-pag

* source languages: sv
* target languages: pag
* OPUS readme: [sv-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-pag/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-pag/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-pag/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-pag/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.pag | 29.3 | 0.522 |
repo_id: Helsinki-NLP/opus-mt-sv-pap · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 8 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776

### opus-mt-sv-pap

* source languages: sv
* target languages: pap
* OPUS readme: [sv-pap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-pap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-pap/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-pap/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-pap/opus-2020-01-21.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.pap | 31.0 | 0.505 |
repo_id: Helsinki-NLP/opus-mt-sv-pis · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776

### opus-mt-sv-pis

* source languages: sv
* target languages: pis
* OPUS readme: [sv-pis](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-pis/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-pis/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-pis/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-pis/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.pis | 30.9 | 0.519 |
repo_id: Helsinki-NLP/opus-mt-sv-pon · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776

### opus-mt-sv-pon

* source languages: sv
* target languages: pon
* OPUS readme: [sv-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-pon/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-pon/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-pon/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.pon | 26.0 | 0.491 |
repo_id: Helsinki-NLP/opus-mt-sv-rnd · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 8 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776

### opus-mt-sv-rnd

* source languages: sv
* target languages: rnd
* OPUS readme: [sv-rnd](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-rnd/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-rnd/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-rnd/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-rnd/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.rnd | 20.3 | 0.433 |
repo_id: Helsinki-NLP/opus-mt-sv-ro · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 12 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768

### opus-mt-sv-ro

* source languages: sv
* target languages: ro
* OPUS readme: [sv-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ro/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ro/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ro/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ro/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ro | 29.5 | 0.510 |
repo_id: Helsinki-NLP/opus-mt-sv-ru · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 106 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 770

### opus-mt-sv-ru

* source languages: sv
* target languages: ru
* OPUS readme: [sv-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ru/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ru/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ru/opus-2020-01-24.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.sv.ru | 46.6 | 0.662 |
repo_id: Helsinki-NLP/opus-mt-sv-run · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776

### opus-mt-sv-run

* source languages: sv
* target languages: run
* OPUS readme: [sv-run](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-run/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-run/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-run/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-run/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.run | 24.4 | 0.502 |
repo_id: Helsinki-NLP/opus-mt-sv-rw · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 13 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768

### opus-mt-sv-rw

* source languages: sv
* target languages: rw
* OPUS readme: [sv-rw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-rw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-rw/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-rw/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-rw/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.rw | 26.7 | 0.514 |
repo_id: Helsinki-NLP/opus-mt-sv-sg · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 9 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768

### opus-mt-sv-sg

* source languages: sv
* target languages: sg
* OPUS readme: [sv-sg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-sg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-sg/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sg/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sg/opus-2020-01-21.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.sg | 30.0 | 0.487 |
repo_id: Helsinki-NLP/opus-mt-sv-sk · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 13 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768

### opus-mt-sv-sk

* source languages: sv
* target languages: sk
* OPUS readme: [sv-sk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-sk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-sk/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sk/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sk/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.sk | 30.7 | 0.516 |
repo_id: Helsinki-NLP/opus-mt-sv-sl · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 14 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768

### opus-mt-sv-sl

* source languages: sv
* target languages: sl
* OPUS readme: [sv-sl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-sl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-sl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sl/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.sl | 25.1 | 0.487 |
repo_id: Helsinki-NLP/opus-mt-sv-sm · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768

### opus-mt-sv-sm

* source languages: sv
* target languages: sm
* OPUS readme: [sv-sm](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-sm/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-sm/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sm/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sm/opus-2020-01-21.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.sm | 30.1 | 0.500 |
repo_id: Helsinki-NLP/opus-mt-sv-sn · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768

### opus-mt-sv-sn

* source languages: sv
* target languages: sn
* OPUS readme: [sv-sn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-sn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-sn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sn/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.sn | 27.4 | 0.557 |
repo_id: Helsinki-NLP/opus-mt-sv-sq · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 12 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768

### opus-mt-sv-sq

* source languages: sv
* target languages: sq
* OPUS readme: [sv-sq](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-sq/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-sq/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sq/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sq/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.sq | 34.4 | 0.553 |
repo_id: Helsinki-NLP/opus-mt-sv-srn · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776

### opus-mt-sv-srn

* source languages: sv
* target languages: srn
* OPUS readme: [sv-srn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-srn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-srn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-srn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-srn/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.srn | 31.3 | 0.506 |
repo_id: Helsinki-NLP/opus-mt-sv-st · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 9 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768

### opus-mt-sv-st

* source languages: sv
* target languages: st
* OPUS readme: [sv-st](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-st/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-st/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-st/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-st/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.st | 38.8 | 0.584 |
repo_id: Helsinki-NLP/opus-mt-sv-sv · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 8 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 770

### opus-mt-sv-sv

* source languages: sv
* target languages: sv
* OPUS readme: [sv-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-sv/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sv/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-sv/opus-2020-01-21.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.sv.sv | 49.2 | 0.741 |
repo_id: Helsinki-NLP/opus-mt-sv-swc · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776

### opus-mt-sv-swc

* source languages: sv
* target languages: swc
* OPUS readme: [sv-swc](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-swc/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-swc/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-swc/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-swc/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.swc | 30.1 | 0.536 |
repo_id: Helsinki-NLP/opus-mt-sv-th · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 18 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768

### opus-mt-sv-th

* source languages: sv
* target languages: th
* OPUS readme: [sv-th](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-th/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-th/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-th/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-th/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.th | 21.2 | 0.373 |
repo_id: Helsinki-NLP/opus-mt-sv-tiv · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776

### opus-mt-sv-tiv

* source languages: sv
* target languages: tiv
* OPUS readme: [sv-tiv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-tiv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-tiv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tiv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tiv/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.tiv | 25.2 | 0.439 |
repo_id: Helsinki-NLP/opus-mt-sv-tll · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 8 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776

### opus-mt-sv-tll

* source languages: sv
* target languages: tll
* OPUS readme: [sv-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-tll/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tll/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tll/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.tll | 24.9 | 0.484 |
repo_id: Helsinki-NLP/opus-mt-sv-tn · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 9 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768

### opus-mt-sv-tn

* source languages: sv
* target languages: tn
* OPUS readme: [sv-tn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-tn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-tn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tn/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.tn | 36.3 | 0.561 |
repo_id: Helsinki-NLP/opus-mt-sv-to · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768

### opus-mt-sv-to

* source languages: sv
* target languages: to
* OPUS readme: [sv-to](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-to/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-to/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-to/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-to/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.to | 41.8 | 0.564 |
repo_id: Helsinki-NLP/opus-mt-sv-toi · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776

### opus-mt-sv-toi

* source languages: sv
* target languages: toi
* OPUS readme: [sv-toi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-toi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-toi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-toi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-toi/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.toi | 23.2 | 0.512 |
repo_id: Helsinki-NLP/opus-mt-sv-tpi · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 8 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776

### opus-mt-sv-tpi

* source languages: sv
* target languages: tpi
* OPUS readme: [sv-tpi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-tpi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-tpi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tpi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tpi/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.tpi | 31.4 | 0.513 |
repo_id: Helsinki-NLP/opus-mt-sv-ts · author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 8 · library: transformers · likes: 0 · pipeline: translation · pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768

### opus-mt-sv-ts

* source languages: sv
* target languages: ts
* OPUS readme: [sv-ts](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ts/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ts/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ts/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ts/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ts | 34.4 | 0.567 |
Helsinki-NLP/opus-mt-sv-tum
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-sv-tum * source languages: sv * target languages: tum * OPUS readme: [sv-tum](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-tum/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-tum/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tum/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tum/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.tum | 22.0 | 0.475 |
Helsinki-NLP/opus-mt-sv-tvl
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-sv-tvl * source languages: sv * target languages: tvl * OPUS readme: [sv-tvl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-tvl/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-tvl/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tvl/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tvl/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.tvl | 34.4 | 0.521 |
Helsinki-NLP/opus-mt-sv-tw
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-sv-tw * source languages: sv * target languages: tw * OPUS readme: [sv-tw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-tw/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-tw/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tw/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-tw/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.tw | 30.7 | 0.509 |
Helsinki-NLP/opus-mt-sv-ty
Helsinki-NLP
marian
10
10
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-sv-ty * source languages: sv * target languages: ty * OPUS readme: [sv-ty](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ty/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ty/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ty/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ty/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.ty | 40.5 | 0.571 |
Helsinki-NLP/opus-mt-sv-uk
Helsinki-NLP
marian
10
13
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-sv-uk * source languages: sv * target languages: uk * OPUS readme: [sv-uk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-uk/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-uk/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-uk/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-uk/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.uk | 24.0 | 0.447 |
Helsinki-NLP/opus-mt-sv-umb
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-sv-umb * source languages: sv * target languages: umb * OPUS readme: [sv-umb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-umb/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-umb/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-umb/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-umb/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.umb | 20.4 | 0.431 |
Helsinki-NLP/opus-mt-sv-ve
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-sv-ve * source languages: sv * target languages: ve * OPUS readme: [sv-ve](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ve/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ve/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ve/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ve/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.ve | 26.4 | 0.496 |
Helsinki-NLP/opus-mt-sv-war
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-sv-war * source languages: sv * target languages: war * OPUS readme: [sv-war](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-war/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-war/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-war/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-war/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.war | 36.7 | 0.576 |
Helsinki-NLP/opus-mt-sv-wls
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-sv-wls * source languages: sv * target languages: wls * OPUS readme: [sv-wls](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-wls/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-wls/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-wls/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-wls/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.wls | 29.0 | 0.501 |
Helsinki-NLP/opus-mt-sv-xh
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-sv-xh * source languages: sv * target languages: xh * OPUS readme: [sv-xh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-xh/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-xh/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-xh/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-xh/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.xh | 26.7 | 0.561 |
Helsinki-NLP/opus-mt-sv-yap
Helsinki-NLP
marian
10
25
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-sv-yap * source languages: sv * target languages: yap * OPUS readme: [sv-yap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-yap/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-yap/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-yap/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-yap/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.yap | 27.3 | 0.461 |
Helsinki-NLP/opus-mt-sv-yo
Helsinki-NLP
marian
10
12
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-sv-yo * source languages: sv * target languages: yo * OPUS readme: [sv-yo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-yo/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-yo/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-yo/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-yo/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.yo | 26.4 | 0.432 |
Helsinki-NLP/opus-mt-sv-zne
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-sv-zne * source languages: sv * target languages: zne * OPUS readme: [sv-zne](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-zne/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-zne/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-zne/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-zne/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.zne | 23.8 | 0.474 |
Helsinki-NLP/opus-mt-swc-en
Helsinki-NLP
marian
10
466
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-swc-en * source languages: swc * target languages: en * OPUS readme: [swc-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/swc-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.swc.en | 41.1 | 0.569 |
Helsinki-NLP/opus-mt-swc-es
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-swc-es * source languages: swc * target languages: es * OPUS readme: [swc-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/swc-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/swc-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.swc.es | 27.4 | 0.458 |
Helsinki-NLP/opus-mt-swc-fi
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-swc-fi * source languages: swc * target languages: fi * OPUS readme: [swc-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/swc-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/swc-fi/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-fi/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-fi/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.swc.fi | 26.0 | 0.489 |
Helsinki-NLP/opus-mt-swc-fr
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-swc-fr * source languages: swc * target languages: fr * OPUS readme: [swc-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/swc-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/swc-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.swc.fr | 28.6 | 0.470 |
Helsinki-NLP/opus-mt-swc-sv
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-swc-sv * source languages: swc * target languages: sv * OPUS readme: [swc-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/swc-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/swc-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.swc.sv | 30.7 | 0.495 |
Helsinki-NLP/opus-mt-taw-en
Helsinki-NLP
marian
11
10
transformers
0
translation
true
true
false
apache-2.0
['lo', 'th', 'taw', 'en']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,102
### taw-eng * source group: Tai * target group: English * OPUS readme: [taw-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/taw-eng/README.md) * model: transformer * source language(s): lao tha * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/taw-eng/opus-2020-06-28.zip) * test set translations: [opus-2020-06-28.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/taw-eng/opus-2020-06-28.test.txt) * test set scores: [opus-2020-06-28.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/taw-eng/opus-2020-06-28.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.lao-eng.lao.eng | 1.1 | 0.133 | | Tatoeba-test.multi.eng | 38.9 | 0.572 | | Tatoeba-test.tha-eng.tha.eng | 40.6 | 0.588 | ### System Info: - hf_name: taw-eng - source_languages: taw - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/taw-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['lo', 'th', 'taw', 'en'] - src_constituents: {'lao', 'tha'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/taw-eng/opus-2020-06-28.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/taw-eng/opus-2020-06-28.test.txt - src_alpha3: taw - tgt_alpha3: eng - short_pair: taw-en - chrF2_score: 0.5720000000000001 - bleu: 38.9 - brevity_penalty: 1.0 - ref_len: 7630.0 - src_name: Tai - tgt_name: English - train_date: 2020-06-28 - src_alpha2: taw - tgt_alpha2: en - prefer_old: False - long_pair: taw-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
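The taw-eng card above reports a `brevity_penalty` of 1.0 alongside `ref_len` 7630.0. As a minimal sketch of the standard BLEU brevity penalty used in these evaluations (the function name is ours, not from the card):

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """Standard BLEU brevity penalty: 1.0 when the hypothesis is at
    least as long as the reference, exp(1 - ref_len/hyp_len) otherwise."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# A penalty of 1.0, as in the taw-eng card, means the system output
# was at least as long as the 7630-token reference.
print(brevity_penalty(7630, 7630))  # -> 1.0
```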
Helsinki-NLP/opus-mt-tc-base-gmw-gmw
Helsinki-NLP
marian
13
6
transformers
0
translation
true
true
false
cc-by-4.0
['af', 'de', 'en', 'fy', 'gmw', 'gos', 'hrx', 'lb', 'nds', 'nl', 'pdc', 'yi']
null
null
2
1
1
0
0
0
0
['translation', 'opus-mt-tc']
true
true
true
10,601
# opus-mt-tc-base-gmw-gmw Neural machine translation model for translating from West Germanic languages (gmw) to West Germanic languages (gmw). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2021-02-23 * source language(s): afr deu eng fry gos hrx ltz nds nld pdc yid * target language(s): afr deu eng fry nds nld * valid target language labels: >>afr<< >>ang_Latn<< >>deu<< >>eng<< >>fry<< >>ltz<< >>nds<< >>nld<< >>sco<< >>yid<< * model: transformer (base) * data: opus ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opus-2021-02-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2021-02-23.zip) * more information released models: [OPUS-MT gmw-gmw README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-gmw/README.md) * more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. 
`>>afr<<` ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>nld<< You need help.", ">>afr<< I love your son." ] model_name = "pytorch-models/opus-mt-tc-base-gmw-gmw" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Je hebt hulp nodig. # Ek is lief vir jou seun. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-gmw-gmw") print(pipe(">>nld<< You need help.")) # expected output: Je hebt hulp nodig. ``` ## Benchmarks * test set translations: [opus-2021-02-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2021-02-23.test.txt) * test set scores: [opus-2021-02-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2021-02-23.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | afr-deu | tatoeba-test-v2021-08-07 | 0.674 | 48.1 | 1583 | 9105 | | afr-eng | tatoeba-test-v2021-08-07 | 0.728 | 58.8 | 1374 | 9622 | | afr-nld | tatoeba-test-v2021-08-07 | 0.711 | 54.5 | 1056 | 6710 | | deu-afr | tatoeba-test-v2021-08-07 | 0.696 | 52.4 | 1583 | 9507 | | deu-eng | tatoeba-test-v2021-08-07 | 0.609 | 42.1 | 17565 | 149462 | | deu-nds | tatoeba-test-v2021-08-07 | 0.442 | 18.6 | 9999 | 76137 | | deu-nld | tatoeba-test-v2021-08-07 | 0.672 | 48.7 | 10218 | 75235 | | eng-afr | tatoeba-test-v2021-08-07 | 0.735 | 56.5 | 1374 | 10317 | | eng-deu | tatoeba-test-v2021-08-07 | 0.580 | 35.9 | 17565 | 151568 | | 
eng-nds | tatoeba-test-v2021-08-07 | 0.412 | 16.6 | 2500 | 18264 | | eng-nld | tatoeba-test-v2021-08-07 | 0.663 | 48.3 | 12696 | 91796 | | fry-eng | tatoeba-test-v2021-08-07 | 0.500 | 32.5 | 220 | 1573 | | fry-nld | tatoeba-test-v2021-08-07 | 0.633 | 43.1 | 260 | 1854 | | gos-nld | tatoeba-test-v2021-08-07 | 0.405 | 15.6 | 1852 | 9903 | | hrx-deu | tatoeba-test-v2021-08-07 | 0.484 | 24.7 | 471 | 2805 | | hrx-eng | tatoeba-test-v2021-08-07 | 0.362 | 20.4 | 221 | 1235 | | ltz-deu | tatoeba-test-v2021-08-07 | 0.556 | 37.2 | 347 | 2208 | | ltz-eng | tatoeba-test-v2021-08-07 | 0.485 | 32.4 | 293 | 1840 | | ltz-nld | tatoeba-test-v2021-08-07 | 0.534 | 39.3 | 292 | 1685 | | nds-deu | tatoeba-test-v2021-08-07 | 0.572 | 34.5 | 9999 | 74564 | | nds-eng | tatoeba-test-v2021-08-07 | 0.493 | 29.9 | 2500 | 17589 | | nds-nld | tatoeba-test-v2021-08-07 | 0.621 | 42.3 | 1657 | 11490 | | nld-afr | tatoeba-test-v2021-08-07 | 0.755 | 58.8 | 1056 | 6823 | | nld-deu | tatoeba-test-v2021-08-07 | 0.686 | 50.4 | 10218 | 74131 | | nld-eng | tatoeba-test-v2021-08-07 | 0.690 | 53.1 | 12696 | 89978 | | nld-fry | tatoeba-test-v2021-08-07 | 0.478 | 25.1 | 260 | 1857 | | nld-nds | tatoeba-test-v2021-08-07 | 0.462 | 21.4 | 1657 | 11711 | | afr-deu | flores101-devtest | 0.524 | 21.6 | 1012 | 25094 | | afr-eng | flores101-devtest | 0.693 | 46.8 | 1012 | 24721 | | afr-nld | flores101-devtest | 0.509 | 18.4 | 1012 | 25467 | | deu-afr | flores101-devtest | 0.534 | 21.4 | 1012 | 25740 | | deu-eng | flores101-devtest | 0.616 | 33.8 | 1012 | 24721 | | deu-nld | flores101-devtest | 0.516 | 19.2 | 1012 | 25467 | | eng-afr | flores101-devtest | 0.628 | 33.8 | 1012 | 25740 | | eng-deu | flores101-devtest | 0.581 | 29.1 | 1012 | 25094 | | eng-nld | flores101-devtest | 0.533 | 21.0 | 1012 | 25467 | | ltz-afr | flores101-devtest | 0.430 | 12.9 | 1012 | 25740 | | ltz-deu | flores101-devtest | 0.482 | 17.1 | 1012 | 25094 | | ltz-eng | flores101-devtest | 0.468 | 18.8 | 1012 | 24721 | | ltz-nld | flores101-devtest 
| 0.409 | 10.7 | 1012 | 25467 | | nld-afr | flores101-devtest | 0.494 | 16.8 | 1012 | 25740 | | nld-deu | flores101-devtest | 0.501 | 17.9 | 1012 | 25094 | | nld-eng | flores101-devtest | 0.551 | 25.6 | 1012 | 24721 | | deu-eng | multi30k_test_2016_flickr | 0.546 | 32.2 | 1000 | 12955 | | eng-deu | multi30k_test_2016_flickr | 0.582 | 28.8 | 1000 | 12106 | | deu-eng | multi30k_test_2017_flickr | 0.561 | 32.7 | 1000 | 11374 | | eng-deu | multi30k_test_2017_flickr | 0.573 | 27.6 | 1000 | 10755 | | deu-eng | multi30k_test_2017_mscoco | 0.499 | 25.5 | 461 | 5231 | | eng-deu | multi30k_test_2017_mscoco | 0.514 | 22.0 | 461 | 5158 | | deu-eng | multi30k_test_2018_flickr | 0.535 | 30.0 | 1071 | 14689 | | eng-deu | multi30k_test_2018_flickr | 0.547 | 25.3 | 1071 | 13703 | | deu-eng | newssyscomb2009 | 0.527 | 25.4 | 502 | 11818 | | eng-deu | newssyscomb2009 | 0.504 | 19.3 | 502 | 11271 | | deu-eng | news-test2008 | 0.518 | 23.8 | 2051 | 49380 | | eng-deu | news-test2008 | 0.492 | 19.3 | 2051 | 47447 | | deu-eng | newstest2009 | 0.516 | 23.4 | 2525 | 65399 | | eng-deu | newstest2009 | 0.498 | 18.8 | 2525 | 62816 | | deu-eng | newstest2010 | 0.546 | 25.8 | 2489 | 61711 | | eng-deu | newstest2010 | 0.508 | 20.7 | 2489 | 61503 | | deu-eng | newstest2011 | 0.524 | 23.7 | 3003 | 74681 | | eng-deu | newstest2011 | 0.493 | 19.2 | 3003 | 72981 | | deu-eng | newstest2012 | 0.532 | 24.8 | 3003 | 72812 | | eng-deu | newstest2012 | 0.493 | 19.5 | 3003 | 72886 | | deu-eng | newstest2013 | 0.548 | 27.7 | 3000 | 64505 | | eng-deu | newstest2013 | 0.517 | 22.5 | 3000 | 63737 | | deu-eng | newstest2014-deen | 0.548 | 27.3 | 3003 | 67337 | | eng-deu | newstest2014-deen | 0.532 | 22.0 | 3003 | 62688 | | deu-eng | newstest2015-deen | 0.553 | 28.6 | 2169 | 46443 | | eng-deu | newstest2015-ende | 0.544 | 25.7 | 2169 | 44260 | | deu-eng | newstest2016-deen | 0.596 | 33.3 | 2999 | 64119 | | eng-deu | newstest2016-ende | 0.580 | 30.0 | 2999 | 62669 | | deu-eng | newstest2017-deen | 0.561 | 29.5 | 
3004 | 64399 | | eng-deu | newstest2017-ende | 0.535 | 24.1 | 3004 | 61287 | | deu-eng | newstest2018-deen | 0.610 | 36.1 | 2998 | 67012 | | eng-deu | newstest2018-ende | 0.613 | 35.4 | 2998 | 64276 | | deu-eng | newstest2019-deen | 0.582 | 32.3 | 2000 | 39227 | | eng-deu | newstest2019-ende | 0.583 | 31.2 | 1997 | 48746 | | deu-eng | newstest2020-deen | 0.604 | 32.0 | 785 | 38220 | | eng-deu | newstest2020-ende | 0.542 | 23.9 | 1418 | 52383 | | deu-eng | newstestB2020-deen | 0.598 | 31.2 | 785 | 37696 | | eng-deu | newstestB2020-ende | 0.532 | 23.3 | 1418 | 53092 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.12.3 * OPUS-MT git hash: e56a06b * port time: Sun Feb 13 14:42:10 EET 2022 * port machine: LM0-400-22516.local
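The gmw-gmw card above requires a sentence-initial `>>id<<` token naming the target language, and lists the valid target labels. A minimal sketch of that preprocessing step (the helper name is ours; the label set is copied from the card):

```python
def add_target_token(sentences, tgt_lang):
    """Prepend the >>id<< target-language token that multilingual
    OPUS-MT models such as opus-mt-tc-base-gmw-gmw expect."""
    # Valid target labels listed on the gmw-gmw model card.
    valid = {"afr", "ang_Latn", "deu", "eng", "fry",
             "ltz", "nds", "nld", "sco", "yid"}
    if tgt_lang not in valid:
        raise ValueError(f"unknown target language ID: {tgt_lang}")
    return [f">>{tgt_lang}<< {s}" for s in sentences]

print(add_target_token(["You need help."], "nld"))
# -> ['>>nld<< You need help.']
```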
Helsinki-NLP/opus-mt-th-en
Helsinki-NLP
marian
11
2,223
transformers
1
translation
true
true
false
apache-2.0
['th', 'en']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
1,992
### tha-eng * source group: Thai * target group: English * OPUS readme: [tha-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tha-eng/README.md) * model: transformer-align * source language(s): tha * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.tha.eng | 48.1 | 0.644 | ### System Info: - hf_name: tha-eng - source_languages: tha - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tha-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['th', 'en'] - src_constituents: {'tha'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.test.txt - src_alpha3: tha - tgt_alpha3: eng - short_pair: th-en - chrF2_score: 0.644 - bleu: 48.1 - brevity_penalty: 0.9740000000000001 - ref_len: 7407.0 - src_name: Thai - tgt_name: English - train_date: 2020-06-17 - src_alpha2: th - tgt_alpha2: en - prefer_old: False - long_pair: tha-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-th-fr
Helsinki-NLP
marian
10
705
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-th-fr * source languages: th * target languages: fr * OPUS readme: [th-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/th-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/th-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/th-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/th-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.th.fr | 20.4 | 0.363 |
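The per-pair cards above all link their weights, test translations, and scores with one URL pattern under `object.pouta.csc.fi/OPUS-MT-models`. A small sketch that builds those three URLs from a language pair and release date (the function name is ours; the pattern is taken from the cards, e.g. th-fr):

```python
def opus_mt_urls(src: str, tgt: str, release: str = "2020-01-16"):
    """Build the weights/test/eval URLs linked from the OPUS-MT cards."""
    base = f"https://object.pouta.csc.fi/OPUS-MT-models/{src}-{tgt}/opus-{release}"
    return {
        "weights": base + ".zip",        # original Marian weights
        "translations": base + ".test.txt",  # test set translations
        "scores": base + ".eval.txt",    # BLEU / chr-F scores
    }

print(opus_mt_urls("th", "fr")["weights"])
# -> https://object.pouta.csc.fi/OPUS-MT-models/th-fr/opus-2020-01-16.zip
```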
Helsinki-NLP/opus-mt-ti-en
Helsinki-NLP
marian
10
19
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-ti-en

* source languages: ti
* target languages: en
* OPUS readme: [ti-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ti-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ti-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ti-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ti-en/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ti.en | 30.4 | 0.461 |
Helsinki-NLP/opus-mt-tiv-en
Helsinki-NLP
marian
10
10
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-tiv-en

* source languages: tiv
* target languages: en
* OPUS readme: [tiv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tiv-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tiv-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-en/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tiv.en | 31.5 | 0.473 |
Helsinki-NLP/opus-mt-tiv-fr
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-tiv-fr

* source languages: tiv
* target languages: fr
* OPUS readme: [tiv-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tiv-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tiv.fr | 22.3 | 0.389 |
Helsinki-NLP/opus-mt-tiv-sv
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-tiv-sv

* source languages: tiv
* target languages: sv
* OPUS readme: [tiv-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tiv-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tiv-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-sv/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tiv.sv | 23.7 | 0.416 |
Helsinki-NLP/opus-mt-tl-de
Helsinki-NLP
marian
11
7
transformers
0
translation
true
true
false
apache-2.0
['tl', 'de']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,006
### tgl-deu

* source group: Tagalog
* target group: German
* OPUS readme: [tgl-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-deu/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.deu | 22.7 | 0.473 |

### System Info:
- hf_name: tgl-deu
- source_languages: tgl
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'de']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: deu
- short_pair: tl-de
- chrF2_score: 0.473
- bleu: 22.7
- brevity_penalty: 0.969
- ref_len: 2453.0
- src_name: Tagalog
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: de
- prefer_old: False
- long_pair: tgl-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-tl-en
Helsinki-NLP
marian
11
1,126
transformers
0
translation
true
true
false
apache-2.0
['tl', 'en']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
1,996
### tgl-eng

* source group: Tagalog
* target group: English
* OPUS readme: [tgl-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-eng/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.eng | 35.0 | 0.542 |

### System Info:
- hf_name: tgl-eng
- source_languages: tgl
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'en']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: eng
- short_pair: tl-en
- chrF2_score: 0.542
- bleu: 35.0
- brevity_penalty: 0.975
- ref_len: 18168.0
- src_name: Tagalog
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: en
- prefer_old: False
- long_pair: tgl-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-tl-es
Helsinki-NLP
marian
11
26
transformers
0
translation
true
true
false
apache-2.0
['tl', 'es']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
1,995
### tgl-spa

* source group: Tagalog
* target group: Spanish
* OPUS readme: [tgl-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-spa/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.spa | 31.6 | 0.531 |

### System Info:
- hf_name: tgl-spa
- source_languages: tgl
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'es']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: spa
- short_pair: tl-es
- chrF2_score: 0.531
- bleu: 31.6
- brevity_penalty: 0.997
- ref_len: 4327.0
- src_name: Tagalog
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: es
- prefer_old: False
- long_pair: tgl-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-tl-pt
Helsinki-NLP
marian
11
7
transformers
0
translation
true
true
false
apache-2.0
['tl', 'pt']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,002
### tgl-por

* source group: Tagalog
* target group: Portuguese
* OPUS readme: [tgl-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-por/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): por
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.por | 28.8 | 0.522 |

### System Info:
- hf_name: tgl-por
- source_languages: tgl
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'pt']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: por
- short_pair: tl-pt
- chrF2_score: 0.522
- bleu: 28.8
- brevity_penalty: 0.981
- ref_len: 12826.0
- src_name: Tagalog
- tgt_name: Portuguese
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: pt
- prefer_old: False
- long_pair: tgl-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-tll-en
Helsinki-NLP
marian
10
10
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-tll-en

* source languages: tll
* target languages: en
* OPUS readme: [tll-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-en/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tll.en | 34.5 | 0.500 |
Helsinki-NLP/opus-mt-tll-es
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-tll-es

* source languages: tll
* target languages: es
* OPUS readme: [tll-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-es/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tll.es | 22.9 | 0.403 |
Helsinki-NLP/opus-mt-tll-fi
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-tll-fi

* source languages: tll
* target languages: fi
* OPUS readme: [tll-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fi/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tll.fi | 22.4 | 0.441 |
Helsinki-NLP/opus-mt-tll-fr
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-tll-fr

* source languages: tll
* target languages: fr
* OPUS readme: [tll-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fr/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tll.fr | 25.2 | 0.426 |
Helsinki-NLP/opus-mt-tll-sv
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-tll-sv

* source languages: tll
* target languages: sv
* OPUS readme: [tll-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-sv/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tll.sv | 25.6 | 0.436 |
Helsinki-NLP/opus-mt-tn-en
Helsinki-NLP
marian
10
23
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-tn-en

* source languages: tn
* target languages: en
* OPUS readme: [tn-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-en/opus-2020-01-21.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tn.en | 43.4 | 0.589 |
Helsinki-NLP/opus-mt-tn-es
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-tn-es

* source languages: tn
* target languages: es
* OPUS readme: [tn-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-es/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tn.es | 29.1 | 0.479 |
Helsinki-NLP/opus-mt-tn-fr
Helsinki-NLP
marian
10
21
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-tn-fr

* source languages: tn
* target languages: fr
* OPUS readme: [tn-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-fr/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tn.fr | 29.0 | 0.474 |
Helsinki-NLP/opus-mt-tn-sv
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-tn-sv

* source languages: tn
* target languages: sv
* OPUS readme: [tn-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-sv/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tn.sv | 32.0 | 0.508 |
Helsinki-NLP/opus-mt-to-en
Helsinki-NLP
marian
10
22
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-to-en

* source languages: to
* target languages: en
* OPUS readme: [to-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/to-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/to-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-en/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.to.en | 49.3 | 0.627 |
Helsinki-NLP/opus-mt-to-es
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-to-es

* source languages: to
* target languages: es
* OPUS readme: [to-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/to-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/to-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-es/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.to.es | 26.6 | 0.447 |
Helsinki-NLP/opus-mt-to-fr
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-to-fr

* source languages: to
* target languages: fr
* OPUS readme: [to-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/to-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/to-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-fr/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.to.fr | 27.9 | 0.456 |
Helsinki-NLP/opus-mt-to-sv
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-to-sv

* source languages: to
* target languages: sv
* OPUS readme: [to-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/to-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/to-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-sv/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.to.sv | 30.7 | 0.493 |
Helsinki-NLP/opus-mt-toi-en
Helsinki-NLP
marian
10
11
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-toi-en

* source languages: toi
* target languages: en
* OPUS readme: [toi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.toi.en | 39.0 | 0.539 |
Helsinki-NLP/opus-mt-toi-es
Helsinki-NLP
marian
10
9
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-toi-es

* source languages: toi
* target languages: es
* OPUS readme: [toi-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-es/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.toi.es | 24.6 | 0.416 |
Helsinki-NLP/opus-mt-toi-fi
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-toi-fi

* source languages: toi
* target languages: fi
* OPUS readme: [toi-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fi/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.toi.fi | 24.5 | 0.464 |
Helsinki-NLP/opus-mt-toi-fr
Helsinki-NLP
marian
10
10
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-toi-fr

* source languages: toi
* target languages: fr
* OPUS readme: [toi-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fr/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.toi.fr | 26.5 | 0.432 |
Helsinki-NLP/opus-mt-toi-sv
Helsinki-NLP
marian
10
9
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-toi-sv

* source languages: toi
* target languages: sv
* OPUS readme: [toi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-sv/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.toi.sv | 27.0 | 0.448 |
Helsinki-NLP/opus-mt-tpi-en
Helsinki-NLP
marian
10
18
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-tpi-en

* source languages: tpi
* target languages: en
* OPUS readme: [tpi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tpi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tpi-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-en/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tpi.en | 29.1 | 0.448 |
Helsinki-NLP/opus-mt-tpi-sv
Helsinki-NLP
marian
10
16
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-tpi-sv

* source languages: tpi
* target languages: sv
* OPUS readme: [tpi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tpi-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tpi-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-sv/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tpi.sv | 21.6 | 0.396 |
Helsinki-NLP/opus-mt-tr-ar
Helsinki-NLP
marian
11
19
transformers
0
translation
true
true
false
apache-2.0
['tr', 'ar']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,166
### tur-ara * source group: Turkish * target group: Arabic * OPUS readme: [tur-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ara/README.md) * model: transformer * source language(s): tur * target language(s): apc_Latn ara ara_Latn arq_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.tur.ara | 14.9 | 0.455 | ### System Info: - hf_name: tur-ara - source_languages: tur - target_languages: ara - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ara/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tr', 'ar'] - src_constituents: {'tur'} - tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.test.txt - src_alpha3: tur - tgt_alpha3: ara - short_pair: tr-ar - chrF2_score: 0.455 - bleu: 14.9 - brevity_penalty: 0.988 - ref_len: 6944.0 - src_name: Turkish - tgt_name: Arabic - train_date: 2020-07-03 - src_alpha2: tr - tgt_alpha2: ar - prefer_old: False - long_pair: tur-ara - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
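The tur-ara card above notes that multi-target Tatoeba-Challenge models require a sentence-initial language token of the form `>>id<<`. A minimal sketch of preparing input for such a model (the helper name is ours, not part of the card):

```python
def add_target_token(sentence: str, target_id: str) -> str:
    """Prepend the >>id<< target-language token required by
    multi-target OPUS-MT / Tatoeba-Challenge models."""
    return f">>{target_id}<< {sentence}"

# e.g. translating Turkish into Modern Standard Arabic ('ara')
prepared = add_target_token("Merhaba dünya", "ara")
print(prepared)  # -> >>ara<< Merhaba dünya
```

The resulting string is what gets tokenized and fed to the model; without the token, a multi-target model cannot know which target variant (here `apc_Latn`, `ara`, `ara_Latn`, or `arq_Latn`) to produce.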
Helsinki-NLP/opus-mt-tr-az
Helsinki-NLP
marian
11
28
transformers
1
translation
true
true
false
apache-2.0
['tr', 'az']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
1,997
### tur-aze * source group: Turkish * target group: Azerbaijani * OPUS readme: [tur-aze](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-aze/README.md) * model: transformer-align * source language(s): tur * target language(s): aze_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.tur.aze | 27.7 | 0.551 | ### System Info: - hf_name: tur-aze - source_languages: tur - target_languages: aze - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-aze/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tr', 'az'] - src_constituents: {'tur'} - tgt_constituents: {'aze_Latn'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.test.txt - src_alpha3: tur - tgt_alpha3: aze - short_pair: tr-az - chrF2_score: 0.551 - bleu: 27.7 - brevity_penalty: 1.0 - ref_len: 5436.0 - src_name: Turkish - tgt_name: Azerbaijani - train_date: 2020-06-16 - src_alpha2: tr - tgt_alpha2: az - prefer_old: False - long_pair: tur-aze - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-tr-en
Helsinki-NLP
marian
10
21,957
transformers
15
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
949
### opus-mt-tr-en * source languages: tr * target languages: en * OPUS readme: [tr-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2016-entr.tr.en | 27.6 | 0.548 | | newstest2016-entr.tr.en | 25.2 | 0.532 | | newstest2017-entr.tr.en | 24.7 | 0.530 | | newstest2018-entr.tr.en | 27.0 | 0.547 | | Tatoeba.tr.en | 63.5 | 0.760 |
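The download links in these OPUS-MT cards all follow one URL scheme: `<base>/<pair>/opus-<release>.{zip,test.txt,eval.txt}`. A small sketch of a helper (hypothetical, not part of the card) that reconstructs the three artifact URLs from the language pair and release date:

```python
BASE = "https://object.pouta.csc.fi/OPUS-MT-models"

def opus_mt_urls(pair: str, release: str) -> dict:
    """Build weights / test-translations / test-scores URLs for an
    OPUS-MT release, following the pattern used in the model cards."""
    prefix = f"{BASE}/{pair}/opus-{release}"
    return {
        "weights": f"{prefix}.zip",
        "translations": f"{prefix}.test.txt",
        "scores": f"{prefix}.eval.txt",
    }

urls = opus_mt_urls("tr-en", "2020-01-16")
print(urls["weights"])
# -> https://object.pouta.csc.fi/OPUS-MT-models/tr-en/opus-2020-01-16.zip
```

Note that Tatoeba-Challenge releases use a different base (`Tatoeba-MT-models`) and sometimes an `opus2m-` prefix, so this helper only covers the OPUS-MT-train cards.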
Helsinki-NLP/opus-mt-tr-eo
Helsinki-NLP
marian
11
8
transformers
0
translation
true
true
false
apache-2.0
['tr', 'eo']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
1,999
### tur-epo * source group: Turkish * target group: Esperanto * OPUS readme: [tur-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-epo/README.md) * model: transformer-align * source language(s): tur * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.tur.epo | 17.0 | 0.373 | ### System Info: - hf_name: tur-epo - source_languages: tur - target_languages: epo - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-epo/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tr', 'eo'] - src_constituents: {'tur'} - tgt_constituents: {'epo'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.test.txt - src_alpha3: tur - tgt_alpha3: epo - short_pair: tr-eo - chrF2_score: 0.373 - bleu: 17.0 - brevity_penalty: 0.8809999999999999 - ref_len: 33762.0 - src_name: Turkish - tgt_name: Esperanto - train_date: 2020-06-16 - src_alpha2: tr - tgt_alpha2: eo - prefer_old: False - long_pair: tur-epo - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
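The System Info blocks report a `brevity_penalty` alongside BLEU (e.g. 0.881 for tur-epo above, meaning the system's output was shorter than the references). As a reminder of what that number is, here is the standard BLEU brevity-penalty formula as a sketch; the hypothesis lengths are illustrative, not taken from the card:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """Standard BLEU brevity penalty: 1.0 when the hypothesis is at
    least as long as the reference, else exp(1 - ref_len / hyp_len)."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1 - ref_len / hyp_len)

print(brevity_penalty(10, 10))                 # no penalty
print(round(brevity_penalty(5, 10), 4))        # heavy penalty for short output
```

A penalty below 1.0 scales BLEU down multiplicatively, which is why two models with similar n-gram precision can report noticeably different BLEU when one under-generates.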
Helsinki-NLP/opus-mt-tr-es
Helsinki-NLP
marian
10
111
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
### opus-mt-tr-es * source languages: tr * target languages: es * OPUS readme: [tr-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-es/opus-2020-01-26.zip) * test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-es/opus-2020-01-26.test.txt) * test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-es/opus-2020-01-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.tr.es | 56.3 | 0.722 |
Helsinki-NLP/opus-mt-tr-fr
Helsinki-NLP
marian
10
51
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
### opus-mt-tr-fr * source languages: tr * target languages: fr * OPUS readme: [tr-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.tr.fr | 45.3 | 0.627 |
Helsinki-NLP/opus-mt-tr-lt
Helsinki-NLP
marian
10
7
transformers
0
translation
true
false
false
apache-2.0
['tr', 'lt']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,004
### tur-lit * source group: Turkish * target group: Lithuanian * OPUS readme: [tur-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-lit/README.md) * model: transformer-align * source language(s): tur * target language(s): lit * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.tur.lit | 35.6 | 0.631 | ### System Info: - hf_name: tur-lit - source_languages: tur - target_languages: lit - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-lit/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tr', 'lt'] - src_constituents: {'tur'} - tgt_constituents: {'lit'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.test.txt - src_alpha3: tur - tgt_alpha3: lit - short_pair: tr-lt - chrF2_score: 0.631 - bleu: 35.6 - brevity_penalty: 0.9490000000000001 - ref_len: 8285.0 - src_name: Turkish - tgt_name: Lithuanian - train_date: 2020-06-17 - src_alpha2: tr - tgt_alpha2: lt - prefer_old: False - long_pair: tur-lit - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-tr-sv
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-tr-sv * source languages: tr * target languages: sv * OPUS readme: [tr-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tr.sv | 26.3 | 0.478 |
Helsinki-NLP/opus-mt-tr-uk
Helsinki-NLP
marian
11
21
transformers
0
translation
true
true
false
apache-2.0
['tr', 'uk']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
1,990
### tur-ukr * source group: Turkish * target group: Ukrainian * OPUS readme: [tur-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ukr/README.md) * model: transformer-align * source language(s): tur * target language(s): ukr * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.tur.ukr | 42.5 | 0.624 | ### System Info: - hf_name: tur-ukr - source_languages: tur - target_languages: ukr - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ukr/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tr', 'uk'] - src_constituents: {'tur'} - tgt_constituents: {'ukr'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.test.txt - src_alpha3: tur - tgt_alpha3: ukr - short_pair: tr-uk - chrF2_score: 0.624 - bleu: 42.5 - brevity_penalty: 0.983 - ref_len: 12988.0 - src_name: Turkish - tgt_name: Ukrainian - train_date: 2020-06-17 - src_alpha2: tr - tgt_alpha2: uk - prefer_old: False - long_pair: tur-ukr - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-trk-en
Helsinki-NLP
marian
11
26
transformers
1
translation
true
true
false
apache-2.0
['tt', 'cv', 'tk', 'tr', 'ba', 'trk', 'en']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
3,433
### trk-eng * source group: Turkic languages * target group: English * OPUS readme: [trk-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/trk-eng/README.md) * model: transformer * source language(s): aze_Latn bak chv crh crh_Latn kaz_Cyrl kaz_Latn kir_Cyrl kjh kum ota_Arab ota_Latn sah tat tat_Arab tat_Latn tuk tuk_Latn tur tyv uig_Arab uig_Cyrl uzb_Cyrl uzb_Latn * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2016-entr-tureng.tur.eng | 5.0 | 0.242 | | newstest2016-entr-tureng.tur.eng | 3.7 | 0.231 | | newstest2017-entr-tureng.tur.eng | 3.7 | 0.229 | | newstest2018-entr-tureng.tur.eng | 4.1 | 0.230 | | Tatoeba-test.aze-eng.aze.eng | 15.1 | 0.330 | | Tatoeba-test.bak-eng.bak.eng | 3.3 | 0.185 | | Tatoeba-test.chv-eng.chv.eng | 1.3 | 0.161 | | Tatoeba-test.crh-eng.crh.eng | 10.8 | 0.325 | | Tatoeba-test.kaz-eng.kaz.eng | 9.6 | 0.264 | | Tatoeba-test.kir-eng.kir.eng | 15.3 | 0.328 | | Tatoeba-test.kjh-eng.kjh.eng | 1.8 | 0.121 | | Tatoeba-test.kum-eng.kum.eng | 16.1 | 0.277 | | Tatoeba-test.multi.eng | 12.0 | 0.304 | | Tatoeba-test.ota-eng.ota.eng | 2.0 | 0.149 | | Tatoeba-test.sah-eng.sah.eng | 0.7 | 0.140 | | Tatoeba-test.tat-eng.tat.eng | 4.0 | 0.215 | | Tatoeba-test.tuk-eng.tuk.eng | 5.5 | 0.243 | | Tatoeba-test.tur-eng.tur.eng | 26.8 | 0.443 | | Tatoeba-test.tyv-eng.tyv.eng | 1.3 | 0.111 | | Tatoeba-test.uig-eng.uig.eng | 0.2 | 0.111 | | Tatoeba-test.uzb-eng.uzb.eng | 4.6 | 0.195 | ### System Info: - hf_name: trk-eng - source_languages: trk - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/trk-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tt', 'cv', 'tk', 'tr', 'ba', 'trk', 'en'] - src_constituents: {'kir_Cyrl', 'tat_Latn', 'tat', 'chv', 'uzb_Cyrl', 'kaz_Latn', 'aze_Latn', 'crh', 'kjh', 'uzb_Latn', 'ota_Arab', 'tuk_Latn', 'tuk', 'tat_Arab', 'sah', 'tyv', 'tur', 'uig_Arab', 'crh_Latn', 'kaz_Cyrl', 'uig_Cyrl', 'kum', 'ota_Latn', 'bak'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.test.txt - src_alpha3: trk - tgt_alpha3: eng - short_pair: trk-en - chrF2_score: 0.304 - bleu: 12.0 - brevity_penalty: 1.0 - ref_len: 18733.0 - src_name: Turkic languages - tgt_name: English - train_date: 2020-08-01 - src_alpha2: trk - tgt_alpha2: en - prefer_old: False - long_pair: trk-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ts-en
Helsinki-NLP
marian
10
34
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-ts-en * source languages: ts * target languages: en * OPUS readme: [ts-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ts.en | 44.0 | 0.590 |
Helsinki-NLP/opus-mt-ts-es
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-ts-es * source languages: ts * target languages: es * OPUS readme: [ts-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ts.es | 28.1 | 0.468 |
Helsinki-NLP/opus-mt-ts-fi
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-ts-fi * source languages: ts * target languages: fi * OPUS readme: [ts-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-fi/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-fi/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-fi/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ts.fi | 27.7 | 0.509 |
Helsinki-NLP/opus-mt-ts-fr
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-ts-fr * source languages: ts * target languages: fr * OPUS readme: [ts-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ts.fr | 29.9 | 0.475 |
Helsinki-NLP/opus-mt-ts-sv
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-ts-sv * source languages: ts * target languages: sv * OPUS readme: [ts-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ts.sv | 32.6 | 0.510 |
Helsinki-NLP/opus-mt-tum-en
Helsinki-NLP
marian
10
10
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-tum-en * source languages: tum * target languages: en * OPUS readme: [tum-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tum-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/tum-en/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-en/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-en/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tum.en | 31.7 | 0.470 |
Helsinki-NLP/opus-mt-tum-es
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-tum-es * source languages: tum * target languages: es * OPUS readme: [tum-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tum-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tum-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tum.es | 22.6 | 0.390 |