| column | dtype | observed range |
|-----------------------|--------|------------------|
| repo_id | string | length 4–122 |
| author | string | length 2–38 |
| model_type | string | length 2–33 |
| files_per_repo | int64 | 2–39k |
| downloads_30d | int64 | 0–33.7M |
| library | string | length 2–37 |
| likes | int64 | 0–4.87k |
| pipeline | string | length 5–30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | length 2–33 |
| languages | string | length 2–1.63k |
| datasets | string | length 2–2.58k |
| co2 | string | length 6–258 |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–46 |
| prs_closed | int64 | 0–34 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | string | length 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 2 classes |
| has_text | bool | 1 class |
| text_length | int64 | 201–598k |
| readme | string | length 0–598k |
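Each row below is one model repository described by the columns above. A minimal sketch of slicing a dump with this schema, assuming it has been exported to a local file named `models.parquet` (a hypothetical filename, not part of the dataset itself):

```python
# Hypothetical sketch: filter the dump for the Marian translation models shown
# below and rank them by 30-day downloads. "models.parquet" is an assumed
# export filename, not something specified by the dataset.
import pandas as pd

df = pd.read_parquet("models.parquet")
marian = df[(df["model_type"] == "marian") & (df["pipeline"] == "translation")]
top = marian.sort_values("downloads_30d", ascending=False)
print(top[["repo_id", "downloads_30d", "likes"]].head())
```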
Helsinki-NLP/opus-mt-fi-mk
Helsinki-NLP
marian
10
16
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-mk * source languages: fi * target languages: mk * OPUS readme: [fi-mk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-mk/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-mk/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-mk/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-mk/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.mk | 28.9 | 0.501 |
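Every card in this dump is a Marian checkpoint served through the transformers library (`library = transformers`, `pipeline = translation`), so each can be loaded the same way. A minimal usage sketch for the first entry; the Finnish input sentence is only an illustrative example, not drawn from the dataset:

```python
# Minimal sketch: translate Finnish to Macedonian with the first card above.
# The sample sentence is an assumed illustration.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi-mk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Hyvää huomenta!"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```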
Helsinki-NLP/opus-mt-fi-mos
Helsinki-NLP
marian
10
9
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-mos * source languages: fi * target languages: mos * OPUS readme: [fi-mos](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-mos/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-mos/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-mos/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-mos/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.mos | 21.4 | 0.366 |
Helsinki-NLP/opus-mt-fi-mt
Helsinki-NLP
marian
10
15
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-mt * source languages: fi * target languages: mt * OPUS readme: [fi-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-mt/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-mt/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-mt/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-mt/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.mt | 29.9 | 0.490 |
Helsinki-NLP/opus-mt-fi-niu
Helsinki-NLP
marian
10
9
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-niu * source languages: fi * target languages: niu * OPUS readme: [fi-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-niu/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-niu/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-niu/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-niu/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.niu | 35.3 | 0.565 |
Helsinki-NLP/opus-mt-fi-nl
Helsinki-NLP
marian
10
19
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-nl * source languages: fi * target languages: nl * OPUS readme: [fi-nl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-nl/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-nl/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-nl/opus-2020-02-26.test.txt) * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-nl/opus-2020-02-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.nl | 30.5 | 0.557 |
Helsinki-NLP/opus-mt-fi-no
Helsinki-NLP
marian
11
38
transformers
0
translation
true
true
false
apache-2.0
['fi', 'no']
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
2,099
### fin-nor * source group: Finnish * target group: Norwegian * OPUS readme: [fin-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-nor/README.md) * model: transformer-align * source language(s): fin * target language(s): nno nob * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-nor/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-nor/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-nor/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.fin.nor | 23.5 | 0.426 | ### System Info: - hf_name: fin-nor - source_languages: fin - target_languages: nor - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-nor/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['fi', 'no'] - src_constituents: {'fin'} - tgt_constituents: {'nob', 'nno'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-nor/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-nor/opus-2020-06-17.test.txt - src_alpha3: fin - tgt_alpha3: nor - short_pair: fi-no - chrF2_score: 0.426 - bleu: 23.5 - brevity_penalty: 1.0 - ref_len: 14768.0 - src_name: Finnish - tgt_name: Norwegian - train_date: 2020-06-17 - src_alpha2: fi - tgt_alpha2: no - prefer_old: False - long_pair: fin-nor - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
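Unlike the single-target cards above, this fi-no model covers two target variants (`nno`, `nob`) and, per its card, needs a sentence-initial `>>id<<` token. A sketch of that, assuming Bokmål (`>>nob<<`) as the target and an illustrative input sentence:

```python
# Sketch for the multi-target model Helsinki-NLP/opus-mt-fi-no: the card above
# requires a sentence-initial >>id<< token to select the target variant.
# ">>nob<<" (Bokmål) is chosen for illustration; ">>nno<<" would select Nynorsk.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi-no"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer([">>nob<< Hyvää huomenta!"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```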
Helsinki-NLP/opus-mt-fi-nso
Helsinki-NLP
marian
10
9
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-nso * source languages: fi * target languages: nso * OPUS readme: [fi-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-nso/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-nso/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-nso/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-nso/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.nso | 35.8 | 0.564 |
Helsinki-NLP/opus-mt-fi-ny
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-ny * source languages: fi * target languages: ny * OPUS readme: [fi-ny](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ny/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-ny/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ny/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ny/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.ny | 22.6 | 0.503 |
Helsinki-NLP/opus-mt-fi-pag
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-pag * source languages: fi * target languages: pag * OPUS readme: [fi-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-pag/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-pag/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pag/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pag/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.pag | 28.0 | 0.510 |
Helsinki-NLP/opus-mt-fi-pap
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-pap * source languages: fi * target languages: pap * OPUS readme: [fi-pap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-pap/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-pap/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pap/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pap/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.pap | 27.3 | 0.478 |
Helsinki-NLP/opus-mt-fi-pis
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-pis * source languages: fi * target languages: pis * OPUS readme: [fi-pis](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-pis/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-pis/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pis/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pis/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.pis | 27.5 | 0.493 |
Helsinki-NLP/opus-mt-fi-pon
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-pon * source languages: fi * target languages: pon * OPUS readme: [fi-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-pon/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-pon/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pon/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-pon/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.pon | 23.7 | 0.475 |
Helsinki-NLP/opus-mt-fi-ro
Helsinki-NLP
marian
10
49
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-ro * source languages: fi * target languages: ro * OPUS readme: [fi-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ro/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-ro/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ro/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ro/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.ro | 27.0 | 0.490 |
Helsinki-NLP/opus-mt-fi-ru
Helsinki-NLP
marian
10
241
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
### opus-mt-fi-ru * source languages: fi * target languages: ru * OPUS readme: [fi-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ru/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-04-12.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-ru/opus-2020-04-12.zip) * test set translations: [opus-2020-04-12.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ru/opus-2020-04-12.test.txt) * test set scores: [opus-2020-04-12.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ru/opus-2020-04-12.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.fi.ru | 46.3 | 0.670 |
Helsinki-NLP/opus-mt-fi-run
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-run * source languages: fi * target languages: run * OPUS readme: [fi-run](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-run/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-run/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-run/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-run/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.run | 23.2 | 0.498 |
Helsinki-NLP/opus-mt-fi-rw
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-rw * source languages: fi * target languages: rw * OPUS readme: [fi-rw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-rw/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-rw/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-rw/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-rw/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.rw | 25.3 | 0.509 |
Helsinki-NLP/opus-mt-fi-sg
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-sg * source languages: fi * target languages: sg * OPUS readme: [fi-sg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-sg/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-sg/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sg/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sg/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.sg | 29.3 | 0.480 |
Helsinki-NLP/opus-mt-fi-sk
Helsinki-NLP
marian
10
15
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-sk * source languages: fi * target languages: sk * OPUS readme: [fi-sk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-sk/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-sk/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sk/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sk/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.sk | 28.1 | 0.501 |
Helsinki-NLP/opus-mt-fi-sl
Helsinki-NLP
marian
10
15
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-sl * source languages: fi * target languages: sl * OPUS readme: [fi-sl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-sl/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-sl/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sl/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sl/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.sl | 24.1 | 0.481 |
Helsinki-NLP/opus-mt-fi-sm
Helsinki-NLP
marian
10
25
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-sm * source languages: fi * target languages: sm * OPUS readme: [fi-sm](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-sm/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-sm/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sm/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sm/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.sm | 24.0 | 0.443 |
Helsinki-NLP/opus-mt-fi-sn
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-sn * source languages: fi * target languages: sn * OPUS readme: [fi-sn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-sn/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-sn/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sn/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sn/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.sn | 25.3 | 0.547 |
Helsinki-NLP/opus-mt-fi-sq
Helsinki-NLP
marian
10
15
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-sq * source languages: fi * target languages: sq * OPUS readme: [fi-sq](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-sq/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-sq/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sq/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sq/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.sq | 32.0 | 0.535 |
Helsinki-NLP/opus-mt-fi-srn
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-srn * source languages: fi * target languages: srn * OPUS readme: [fi-srn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-srn/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-srn/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-srn/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-srn/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.srn | 29.2 | 0.491 |
Helsinki-NLP/opus-mt-fi-st
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-st * source languages: fi * target languages: st * OPUS readme: [fi-st](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-st/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-st/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-st/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-st/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.st | 37.1 | 0.570 |
Helsinki-NLP/opus-mt-fi-sv
Helsinki-NLP
marian
10
2,724
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
833
### opus-mt-fi-sv * source languages: fi * target languages: sv * OPUS readme: [fi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-sv/README.md) * dataset: opus+bt * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus+bt-2020-04-11.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-sv/opus+bt-2020-04-11.zip) * test set translations: [opus+bt-2020-04-11.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sv/opus+bt-2020-04-11.test.txt) * test set scores: [opus+bt-2020-04-11.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sv/opus+bt-2020-04-11.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | fiskmo_testset.fi.sv | 27.4 | 0.605 | | Tatoeba.fi.sv | 54.7 | 0.709 |
Helsinki-NLP/opus-mt-fi-sw
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-sw * source languages: fi * target languages: sw * OPUS readme: [fi-sw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-sw/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-sw/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sw/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sw/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.sw | 29.9 | 0.548 |
Helsinki-NLP/opus-mt-fi-swc
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-swc * source languages: fi * target languages: swc * OPUS readme: [fi-swc](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-swc/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-swc/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-swc/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-swc/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.swc | 27.5 | 0.515 |
Helsinki-NLP/opus-mt-fi-tiv
Helsinki-NLP
marian
10
9
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-tiv * source languages: fi * target languages: tiv * OPUS readme: [fi-tiv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-tiv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-tiv/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tiv/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tiv/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.tiv | 23.6 | 0.425 |
Helsinki-NLP/opus-mt-fi-tll
Helsinki-NLP
marian
10
10
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-tll * source languages: fi * target languages: tll * OPUS readme: [fi-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-tll/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-tll/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tll/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tll/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.tll | 23.6 | 0.478 |
Helsinki-NLP/opus-mt-fi-tn
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-tn * source languages: fi * target languages: tn * OPUS readme: [fi-tn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-tn/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-tn/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tn/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tn/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.tn | 34.5 | 0.555 |
Helsinki-NLP/opus-mt-fi-to
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-to * source languages: fi * target languages: to * OPUS readme: [fi-to](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-to/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-to/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-to/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-to/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.to | 38.3 | 0.541 |
Helsinki-NLP/opus-mt-fi-toi
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-toi * source languages: fi * target languages: toi * OPUS readme: [fi-toi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-toi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-toi/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-toi/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-toi/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.toi | 22.0 | 0.509 |
Helsinki-NLP/opus-mt-fi-tpi
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-tpi * source languages: fi * target languages: tpi * OPUS readme: [fi-tpi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-tpi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-tpi/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tpi/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tpi/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.tpi | 30.5 | 0.504 |
Helsinki-NLP/opus-mt-fi-tr
Helsinki-NLP
marian
10
15
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
### opus-mt-fi-tr * source languages: fi * target languages: tr * OPUS readme: [fi-tr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-tr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-04-12.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-tr/opus-2020-04-12.zip) * test set translations: [opus-2020-04-12.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tr/opus-2020-04-12.test.txt) * test set scores: [opus-2020-04-12.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tr/opus-2020-04-12.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.fi.tr | 31.6 | 0.619 |
Helsinki-NLP/opus-mt-fi-ts
Helsinki-NLP
marian
10
9
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-ts * source languages: fi * target languages: ts * OPUS readme: [fi-ts](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ts/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-ts/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ts/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ts/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.ts | 33.6 | 0.563 |
Helsinki-NLP/opus-mt-fi-tvl
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-tvl * source languages: fi * target languages: tvl * OPUS readme: [fi-tvl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-tvl/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-tvl/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tvl/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tvl/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.tvl | 33.6 | 0.517 |
Helsinki-NLP/opus-mt-fi-tw
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-tw * source languages: fi * target languages: tw * OPUS readme: [fi-tw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-tw/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-tw/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tw/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tw/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.tw | 29.2 | 0.504 |
Helsinki-NLP/opus-mt-fi-ty
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-ty * source languages: fi * target languages: ty * OPUS readme: [fi-ty](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ty/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-ty/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ty/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ty/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.ty | 39.7 | 0.565 |
Helsinki-NLP/opus-mt-fi-uk
Helsinki-NLP
marian
10
18
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-uk * source languages: fi * target languages: uk * OPUS readme: [fi-uk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-uk/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-uk/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-uk/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-uk/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.uk | 23.3 | 0.445 |
Helsinki-NLP/opus-mt-fi-ve
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-ve * source languages: fi * target languages: ve * OPUS readme: [fi-ve](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ve/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-ve/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ve/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ve/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.ve | 26.0 | 0.495 |
Helsinki-NLP/opus-mt-fi-war
Helsinki-NLP
marian
10
9
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-war * source languages: fi * target languages: war * OPUS readme: [fi-war](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-war/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-war/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-war/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-war/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.war | 35.1 | 0.565 |
Helsinki-NLP/opus-mt-fi-wls
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-wls * source languages: fi * target languages: wls * OPUS readme: [fi-wls](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-wls/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-wls/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-wls/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-wls/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.wls | 24.7 | 0.466 |
Helsinki-NLP/opus-mt-fi-xh
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-xh * source languages: fi * target languages: xh * OPUS readme: [fi-xh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-xh/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-xh/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-xh/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-xh/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.xh | 25.3 | 0.554 |
Helsinki-NLP/opus-mt-fi-yap
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-yap * source languages: fi * target languages: yap * OPUS readme: [fi-yap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-yap/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-yap/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-yap/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-yap/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.yap | 25.4 | 0.445 |
Helsinki-NLP/opus-mt-fi-yo
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fi-yo * source languages: fi * target languages: yo * OPUS readme: [fi-yo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-yo/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-yo/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-yo/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-yo/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.yo | 25.8 | 0.427 |
Helsinki-NLP/opus-mt-fi-zne
Helsinki-NLP
marian
10
10
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fi-zne * source languages: fi * target languages: zne * OPUS readme: [fi-zne](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-zne/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-zne/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-zne/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-zne/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.zne | 22.7 | 0.464 |
Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI
Helsinki-NLP
marian
9
7
transformers
0
translation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
1,147
### opus-mt-fi_nb_no_nn_ru_sv_en-SAMI * source languages: fi,nb,no,nn,ru,sv,en * target languages: se,sma,smj,smn,sms * OPUS readme: [fi+nb+no+nn+ru+sv+en-se+sma+smj+smn+sms](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi+nb+no+nn+ru+sv+en-se+sma+smj+smn+sms/README.md) * dataset: opus+giella * model: transformer-align * pre-processing: normalization + SentencePiece * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus+giella-2020-04-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi+nb+no+nn+ru+sv+en-se+sma+smj+smn+sms/opus+giella-2020-04-18.zip) * test set translations: [opus+giella-2020-04-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi+nb+no+nn+ru+sv+en-se+sma+smj+smn+sms/opus+giella-2020-04-18.test.txt) * test set scores: [opus+giella-2020-04-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi+nb+no+nn+ru+sv+en-se+sma+smj+smn+sms/opus+giella-2020-04-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | giella.fi.sms | 58.4 | 0.776 |
Helsinki-NLP/opus-mt-fiu-en
Helsinki-NLP
marian
11
12
transformers
0
translation
true
true
false
apache-2.0
['se', 'fi', 'hu', 'et', 'fiu', 'en']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
3,617
### fiu-eng * source group: Finno-Ugrian languages * target group: English * OPUS readme: [fiu-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fiu-eng/README.md) * model: transformer * source language(s): est fin fkv_Latn hun izh kpv krl liv_Latn mdf mhr myv sma sme udm vro * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2015-enfi-fineng.fin.eng | 22.9 | 0.513 | | newsdev2018-enet-esteng.est.eng | 26.3 | 0.543 | | newssyscomb2009-huneng.hun.eng | 21.2 | 0.494 | | newstest2009-huneng.hun.eng | 19.8 | 0.486 | | newstest2015-enfi-fineng.fin.eng | 24.1 | 0.521 | | newstest2016-enfi-fineng.fin.eng | 25.6 | 0.541 | | newstest2017-enfi-fineng.fin.eng | 28.7 | 0.560 | | newstest2018-enet-esteng.est.eng | 26.5 | 0.549 | | newstest2018-enfi-fineng.fin.eng | 21.2 | 0.490 | | newstest2019-fien-fineng.fin.eng | 25.6 | 0.533 | | newstestB2016-enfi-fineng.fin.eng | 21.6 | 0.500 | | newstestB2017-enfi-fineng.fin.eng | 24.3 | 0.526 | | newstestB2017-fien-fineng.fin.eng | 24.3 | 0.526 | | Tatoeba-test.chm-eng.chm.eng | 1.2 | 0.163 | | Tatoeba-test.est-eng.est.eng | 55.3 | 0.706 | | Tatoeba-test.fin-eng.fin.eng | 48.7 | 0.660 | | Tatoeba-test.fkv-eng.fkv.eng | 11.5 | 0.384 | | Tatoeba-test.hun-eng.hun.eng | 46.7 | 0.638 | | Tatoeba-test.izh-eng.izh.eng | 48.3 | 0.678 | | Tatoeba-test.kom-eng.kom.eng | 0.7 | 0.113 | | Tatoeba-test.krl-eng.krl.eng | 36.1 | 0.485 | | Tatoeba-test.liv-eng.liv.eng | 2.1 | 0.086 | | Tatoeba-test.mdf-eng.mdf.eng | 0.9 | 0.120 | | Tatoeba-test.multi.eng | 47.8 | 0.648 | | Tatoeba-test.myv-eng.myv.eng | 0.7 | 0.121 | | Tatoeba-test.sma-eng.sma.eng | 1.7 | 0.101 | | Tatoeba-test.sme-eng.sme.eng | 7.8 | 0.229 | | Tatoeba-test.udm-eng.udm.eng | 0.9 | 0.166 | ### System Info: - hf_name: fiu-eng - source_languages: fiu - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fiu-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['se', 'fi', 'hu', 'et', 'fiu', 'en'] - src_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-eng/opus2m-2020-07-31.test.txt - src_alpha3: fiu - tgt_alpha3: eng - short_pair: fiu-en - chrF2_score: 0.648 - bleu: 47.8 - brevity_penalty: 0.988 - ref_len: 71020.0 - src_name: Finno-Ugrian languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: fiu - tgt_alpha2: en - prefer_old: False - long_pair: fiu-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
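This card is multilingual on the source side only (many Finno-Ugric languages in, English out), so no `>>id<<` token is needed. A sketch using the high-level pipeline API; the Estonian and Hungarian sentences are assumed examples:

```python
# Sketch for the multilingual-source model Helsinki-NLP/opus-mt-fiu-en.
# No target-language token is required since English is the only target.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fiu-en")
outputs = translator(["Tere hommikust!", "Jó reggelt!"])  # Estonian, Hungarian (assumed examples)
for out in outputs:
    print(out["translation_text"])
```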
Helsinki-NLP/opus-mt-fiu-fiu
Helsinki-NLP
marian
11
12
transformers
0
translation
true
true
false
apache-2.0
['se', 'fi', 'hu', 'et', 'fiu']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
3,612
### fiu-fiu * source group: Finno-Ugrian languages * target group: Finno-Ugrian languages * OPUS readme: [fiu-fiu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fiu-fiu/README.md) * model: transformer * source language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro * target language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.zip) * test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.test.txt) * test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.est-est.est.est | 2.0 | 0.252 | | Tatoeba-test.est-fin.est.fin | 51.0 | 0.704 | | Tatoeba-test.est-fkv.est.fkv | 1.1 | 0.211 | | Tatoeba-test.est-vep.est.vep | 3.1 | 0.272 | | Tatoeba-test.fin-est.fin.est | 55.2 | 0.722 | | Tatoeba-test.fin-fkv.fin.fkv | 1.6 | 0.207 | | Tatoeba-test.fin-hun.fin.hun | 42.4 | 0.663 | | Tatoeba-test.fin-izh.fin.izh | 12.9 | 0.509 | | Tatoeba-test.fin-krl.fin.krl | 4.6 | 0.292 | | Tatoeba-test.fkv-est.fkv.est | 2.4 | 0.148 | | Tatoeba-test.fkv-fin.fkv.fin | 15.1 | 0.427 | | Tatoeba-test.fkv-liv.fkv.liv | 1.2 | 0.261 | | Tatoeba-test.fkv-vep.fkv.vep | 1.2 | 0.233 | | Tatoeba-test.hun-fin.hun.fin | 47.8 | 0.681 | | Tatoeba-test.izh-fin.izh.fin | 24.0 | 0.615 | | Tatoeba-test.izh-krl.izh.krl | 1.8 | 0.114 | | Tatoeba-test.krl-fin.krl.fin | 13.6 | 0.407 | | Tatoeba-test.krl-izh.krl.izh | 2.7 | 0.096 | | Tatoeba-test.liv-fkv.liv.fkv | 1.2 | 0.164 | | Tatoeba-test.liv-vep.liv.vep | 3.4 | 0.181 | | Tatoeba-test.multi.multi | 36.7 | 0.581 | | Tatoeba-test.vep-est.vep.est | 3.4 | 0.251 | | Tatoeba-test.vep-fkv.vep.fkv | 1.2 | 0.215 | | Tatoeba-test.vep-liv.vep.liv | 3.4 | 0.179 | ### System Info: - hf_name: fiu-fiu - source_languages: fiu - target_languages: fiu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fiu-fiu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['se', 'fi', 'hu', 'et', 'fiu'] - src_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'} - tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'} - src_multilingual: True - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.test.txt - src_alpha3: fiu - tgt_alpha3: fiu - short_pair: fiu-fiu - chrF2_score: 0.581 - bleu: 36.7 - brevity_penalty: 0.981 - ref_len: 19444.0 - src_name: Finno-Ugrian languages - tgt_name: Finno-Ugrian languages - train_date: 2020-07-26 - src_alpha2: fiu - tgt_alpha2: fiu - prefer_old: False - long_pair: fiu-fiu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-fj-en
Helsinki-NLP
marian
10
75
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
803
### opus-mt-fj-en * source languages: fj * target languages: en * OPUS readme: [fj-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fj-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fj-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fj-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fj-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fj.en | 31.0 | 0.471 | | Tatoeba.fj.en | 79.7 | 0.835 |
Helsinki-NLP/opus-mt-fj-fr
Helsinki-NLP
marian
10
27
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fj-fr * source languages: fj * target languages: fr * OPUS readme: [fj-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fj-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fj-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fj-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fj-fr/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fj.fr | 24.0 | 0.407 |
Helsinki-NLP/opus-mt-fr-af
Helsinki-NLP
marian
10
182
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fr-af * source languages: fr * target languages: af * OPUS readme: [fr-af](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-af/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-af/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-af/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-af/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fr.af | 36.0 | 0.546 |
Helsinki-NLP/opus-mt-fr-ar
Helsinki-NLP
marian
11
162
transformers
0
translation
true
true
false
apache-2.0
['fr', 'ar']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,160
### fra-ara * source group: French * target group: Arabic * OPUS readme: [fra-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ara/README.md) * model: transformer * source language(s): fra * target language(s): apc ara arq arq_Latn ary arz * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.fra.ara | 14.4 | 0.439 | ### System Info: - hf_name: fra-ara - source_languages: fra - target_languages: ara - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ara/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['fr', 'ar'] - src_constituents: {'fra'} - tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.test.txt - src_alpha3: fra - tgt_alpha3: ara - short_pair: fr-ar - chrF2_score: 0.439 - bleu: 14.4 - brevity_penalty: 1.0 - ref_len: 7956.0 - src_name: French - tgt_name: Arabic - train_date: 2020-07-03 - src_alpha2: fr - tgt_alpha2: ar - prefer_old: False - long_pair: fra-ara - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-fr-ase
Helsinki-NLP
marian
10
28
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-ase
* source languages: fr
* target languages: ase
* OPUS readme: [fr-ase](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ase/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ase/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ase/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ase/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ase | 38.5 | 0.545 |
Helsinki-NLP/opus-mt-fr-bcl
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-bcl
* source languages: fr
* target languages: bcl
* OPUS readme: [fr-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-bcl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-bcl/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bcl/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bcl/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.bcl | 35.9 | 0.566 |
Helsinki-NLP/opus-mt-fr-bem
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-bem
* source languages: fr
* target languages: bem
* OPUS readme: [fr-bem](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-bem/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-bem/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bem/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bem/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.bem | 22.8 | 0.456 |
Helsinki-NLP/opus-mt-fr-ber
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
778
### opus-mt-fr-ber
* source languages: fr
* target languages: ber
* OPUS readme: [fr-ber](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ber/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ber/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ber/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ber/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.ber | 37.2 | 0.641 |
Helsinki-NLP/opus-mt-fr-bg
Helsinki-NLP
marian
11
26
transformers
0
translation
true
true
false
apache-2.0
['fr', 'bg']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
1,987
### fra-bul
* source group: French
* target group: Bulgarian
* OPUS readme: [fra-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-bul/README.md)
* model: transformer
* source language(s): fra
* target language(s): bul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.bul | 46.3 | 0.657 |

### System Info:
- hf_name: fra-bul
- source_languages: fra
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'bg']
- src_constituents: {'fra'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.test.txt
- src_alpha3: fra
- tgt_alpha3: bul
- short_pair: fr-bg
- chrF2_score: 0.657
- bleu: 46.3
- brevity_penalty: 0.953
- ref_len: 3286.0
- src_name: French
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: fr
- tgt_alpha2: bg
- prefer_old: False
- long_pair: fra-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-fr-bi
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fr-bi
* source languages: fr
* target languages: bi
* OPUS readme: [fr-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-bi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-bi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bi/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.bi | 28.4 | 0.464 |
Helsinki-NLP/opus-mt-fr-bzs
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-bzs
* source languages: fr
* target languages: bzs
* OPUS readme: [fr-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-bzs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-bzs/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bzs/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bzs/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.bzs | 30.2 | 0.477 |
Helsinki-NLP/opus-mt-fr-ca
Helsinki-NLP
marian
11
38
transformers
0
translation
true
true
false
apache-2.0
['fr', 'ca']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
1,983
### fra-cat
* source group: French
* target group: Catalan
* OPUS readme: [fra-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-cat/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-cat/opus-2020-06-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.cat | 43.4 | 0.645 |

### System Info:
- hf_name: fra-cat
- source_languages: fra
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'ca']
- src_constituents: {'fra'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-cat/opus-2020-06-16.test.txt
- src_alpha3: fra
- tgt_alpha3: cat
- short_pair: fr-ca
- chrF2_score: 0.645
- bleu: 43.4
- brevity_penalty: 0.982
- ref_len: 5214.0
- src_name: French
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: fr
- tgt_alpha2: ca
- prefer_old: False
- long_pair: fra-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-fr-ceb
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-ceb
* source languages: fr
* target languages: ceb
* OPUS readme: [fr-ceb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ceb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ceb/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ceb/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ceb/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ceb | 32.8 | 0.543 |
Helsinki-NLP/opus-mt-fr-crs
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-crs
* source languages: fr
* target languages: crs
* OPUS readme: [fr-crs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-crs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-crs/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-crs/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-crs/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.crs | 31.6 | 0.492 |
Helsinki-NLP/opus-mt-fr-de
Helsinki-NLP
marian
11
9,538
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
1,161
### opus-mt-fr-de
* source languages: fr
* target languages: de
* OPUS readme: [fr-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-de/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-de/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-de/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| euelections_dev2019.transformer-align.fr | 26.4 | 0.571 |
| newssyscomb2009.fr.de | 22.1 | 0.524 |
| news-test2008.fr.de | 22.1 | 0.524 |
| newstest2009.fr.de | 21.6 | 0.520 |
| newstest2010.fr.de | 22.6 | 0.527 |
| newstest2011.fr.de | 21.5 | 0.518 |
| newstest2012.fr.de | 22.4 | 0.516 |
| newstest2013.fr.de | 24.2 | 0.532 |
| newstest2019-frde.fr.de | 27.9 | 0.595 |
| Tatoeba.fr.de | 49.1 | 0.676 |
Helsinki-NLP/opus-mt-fr-ee
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fr-ee
* source languages: fr
* target languages: ee
* OPUS readme: [fr-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ee/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ee/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ee/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ee/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ee | 26.3 | 0.466 |
Helsinki-NLP/opus-mt-fr-efi
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-efi
* source languages: fr
* target languages: efi
* OPUS readme: [fr-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-efi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-efi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-efi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-efi/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.efi | 26.9 | 0.462 |
Helsinki-NLP/opus-mt-fr-el
Helsinki-NLP
marian
10
18
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
### opus-mt-fr-el
* source languages: fr
* target languages: el
* OPUS readme: [fr-el](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-el/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-el/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-el/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-el/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.el | 56.2 | 0.719 |
Helsinki-NLP/opus-mt-fr-en
Helsinki-NLP
marian
11
217,517
transformers
11
translation
true
true
true
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
1,202
### opus-mt-fr-en
* source languages: fr
* target languages: en
* OPUS readme: [fr-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdiscussdev2015-enfr.fr.en | 33.1 | 0.580 |
| newsdiscusstest2015-enfr.fr.en | 38.7 | 0.614 |
| newssyscomb2009.fr.en | 30.3 | 0.569 |
| news-test2008.fr.en | 26.2 | 0.542 |
| newstest2009.fr.en | 30.2 | 0.570 |
| newstest2010.fr.en | 32.2 | 0.590 |
| newstest2011.fr.en | 33.0 | 0.597 |
| newstest2012.fr.en | 32.8 | 0.591 |
| newstest2013.fr.en | 33.9 | 0.591 |
| newstest2014-fren.fr.en | 37.8 | 0.633 |
| Tatoeba.fr.en | 57.5 | 0.720 |
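
For reference, a minimal inference sketch for this model using the `transformers` translation pipeline; the example sentence is illustrative and the exact output text depends on the library version.

```python
# Minimal sketch: translate French to English with Helsinki-NLP/opus-mt-fr-en.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")
result = translator("Le chat est assis sur le tapis.")
print(result[0]["translation_text"])  # an English rendering of the input sentence
```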
Helsinki-NLP/opus-mt-fr-eo
Helsinki-NLP
marian
10
55
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
### opus-mt-fr-eo
* source languages: fr
* target languages: eo
* OPUS readme: [fr-eo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-eo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-eo/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-eo/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-eo/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.eo | 52.0 | 0.695 |
Helsinki-NLP/opus-mt-fr-es
Helsinki-NLP
marian
10
22,720
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
1,054
### opus-mt-fr-es
* source languages: fr
* target languages: es
* OPUS readme: [fr-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-es/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-es/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-es/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.fr.es | 34.3 | 0.601 |
| news-test2008.fr.es | 32.5 | 0.583 |
| newstest2009.fr.es | 31.6 | 0.586 |
| newstest2010.fr.es | 36.5 | 0.616 |
| newstest2011.fr.es | 38.3 | 0.622 |
| newstest2012.fr.es | 38.1 | 0.619 |
| newstest2013.fr.es | 34.0 | 0.587 |
| Tatoeba.fr.es | 53.2 | 0.709 |
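
A rough sketch of how BLEU / chr-F numbers like those in the table above could be recomputed with `sacrebleu`, given system outputs and reference translations. The example sentences are illustrative, and reading the released `*.test.txt` file is not shown because its exact column layout is an assumption here.

```python
# Sketch: corpus-level BLEU and chr-F with sacrebleu.
import sacrebleu

hypotheses = ["El gato está sentado en la alfombra."]       # system outputs (illustrative)
references = [["El gato está sentado sobre la alfombra."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
# sacrebleu reports chrF on a 0-100 scale; the card's tables use 0-1.
print(f"BLEU = {bleu.score:.1f}  chr-F = {chrf.score / 100:.3f}")
```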
Helsinki-NLP/opus-mt-fr-fj
Helsinki-NLP
marian
10
10
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fr-fj
* source languages: fr
* target languages: fj
* OPUS readme: [fr-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-fj/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-fj/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-fj/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.fj | 27.4 | 0.487 |
Helsinki-NLP/opus-mt-fr-gaa
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-gaa
* source languages: fr
* target languages: gaa
* OPUS readme: [fr-gaa](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-gaa/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-gaa/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gaa/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gaa/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.gaa | 27.8 | 0.473 |
Helsinki-NLP/opus-mt-fr-gil
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-gil
* source languages: fr
* target languages: gil
* OPUS readme: [fr-gil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-gil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-gil/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gil/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gil/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.gil | 27.9 | 0.499 |
Helsinki-NLP/opus-mt-fr-guw
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-guw
* source languages: fr
* target languages: guw
* OPUS readme: [fr-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-guw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-guw/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-guw/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-guw/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.guw | 31.4 | 0.505 |
Helsinki-NLP/opus-mt-fr-ha
Helsinki-NLP
marian
10
28
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fr-ha
* source languages: fr
* target languages: ha
* OPUS readme: [fr-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ha/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ha/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ha/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ha/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ha | 24.4 | 0.447 |
Helsinki-NLP/opus-mt-fr-he
Helsinki-NLP
marian
12
18
transformers
0
translation
true
true
false
apache-2.0
['fr', 'he']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,007
### fr-he
* source group: French
* target group: Hebrew
* OPUS readme: [fra-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-heb/README.md)
* model: transformer
* source language(s): fra
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.heb | 39.2 | 0.598 |

### System Info:
- hf_name: fr-he
- source_languages: fra
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'he']
- src_constituents: ('French', {'fra'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: fra-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.test.txt
- src_alpha3: fra
- tgt_alpha3: heb
- chrF2_score: 0.598
- bleu: 39.2
- brevity_penalty: 1.0
- ref_len: 20655.0
- src_name: French
- tgt_name: Hebrew
- train_date: 2020-12-10 00:00:00
- src_alpha2: fr
- tgt_alpha2: he
- prefer_old: False
- short_pair: fr-he
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-16:02
Helsinki-NLP/opus-mt-fr-hil
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-hil
* source languages: fr
* target languages: hil
* OPUS readme: [fr-hil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-hil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-hil/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hil/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hil/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.hil | 34.7 | 0.559 |
Helsinki-NLP/opus-mt-fr-ho
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fr-ho
* source languages: fr
* target languages: ho
* OPUS readme: [fr-ho](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ho/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ho/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ho/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ho/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ho | 25.4 | 0.480 |
Helsinki-NLP/opus-mt-fr-hr
Helsinki-NLP
marian
10
15
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fr-hr
* source languages: fr
* target languages: hr
* OPUS readme: [fr-hr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-hr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-hr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hr/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.hr | 20.7 | 0.442 |
Helsinki-NLP/opus-mt-fr-ht
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fr-ht
* source languages: fr
* target languages: ht
* OPUS readme: [fr-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ht/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ht | 29.2 | 0.461 |
Helsinki-NLP/opus-mt-fr-hu
Helsinki-NLP
marian
10
45
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
### opus-mt-fr-hu
* source languages: fr
* target languages: hu
* OPUS readme: [fr-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-hu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-hu/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hu/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hu/opus-2020-01-26.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.hu | 41.3 | 0.629 |
Helsinki-NLP/opus-mt-fr-id
Helsinki-NLP
marian
10
34
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
### opus-mt-fr-id
* source languages: fr
* target languages: id
* OPUS readme: [fr-id](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-id/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-id/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-id/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-id/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.id | 37.2 | 0.636 |
Helsinki-NLP/opus-mt-fr-ig
Helsinki-NLP
marian
10
9
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fr-ig
* source languages: fr
* target languages: ig
* OPUS readme: [fr-ig](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ig/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ig/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ig/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ig/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ig | 29.0 | 0.445 |
Helsinki-NLP/opus-mt-fr-ilo
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-ilo
* source languages: fr
* target languages: ilo
* OPUS readme: [fr-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ilo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ilo/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ilo/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ilo/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ilo | 30.6 | 0.528 |
Helsinki-NLP/opus-mt-fr-iso
Helsinki-NLP
marian
10
9
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-iso
* source languages: fr
* target languages: iso
* OPUS readme: [fr-iso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-iso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-iso/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-iso/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-iso/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.iso | 26.7 | 0.429 |
Helsinki-NLP/opus-mt-fr-kg
Helsinki-NLP
marian
10
24
transformers
2
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fr-kg
* source languages: fr
* target languages: kg
* OPUS readme: [fr-kg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-kg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-kg/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kg/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kg/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.kg | 30.4 | 0.523 |
Helsinki-NLP/opus-mt-fr-kqn
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-kqn
* source languages: fr
* target languages: kqn
* OPUS readme: [fr-kqn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-kqn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-kqn/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kqn/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kqn/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.kqn | 23.3 | 0.469 |
Helsinki-NLP/opus-mt-fr-kwy
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-kwy
* source languages: fr
* target languages: kwy
* OPUS readme: [fr-kwy](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-kwy/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-kwy/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kwy/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kwy/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.kwy | 22.5 | 0.428 |
Helsinki-NLP/opus-mt-fr-lg
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fr-lg
* source languages: fr
* target languages: lg
* OPUS readme: [fr-lg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lg | 21.7 | 0.454 |
Helsinki-NLP/opus-mt-fr-ln
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fr-ln
* source languages: fr
* target languages: ln
* OPUS readme: [fr-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ln/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ln/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ln/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ln | 30.5 | 0.527 |
Helsinki-NLP/opus-mt-fr-loz
Helsinki-NLP
marian
10
11
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-loz
* source languages: fr
* target languages: loz
* OPUS readme: [fr-loz](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-loz/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-loz/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-loz/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-loz/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.loz | 30.0 | 0.498 |
Helsinki-NLP/opus-mt-fr-lu
Helsinki-NLP
marian
10
9
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fr-lu
* source languages: fr
* target languages: lu
* OPUS readme: [fr-lu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lu/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lu/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lu/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lu | 25.5 | 0.471 |
Helsinki-NLP/opus-mt-fr-lua
Helsinki-NLP
marian
10
12
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-lua
* source languages: fr
* target languages: lua
* OPUS readme: [fr-lua](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lua/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lua/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lua/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lua/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lua | 27.3 | 0.496 |
Helsinki-NLP/opus-mt-fr-lue
Helsinki-NLP
marian
10
14
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-lue
* source languages: fr
* target languages: lue
* OPUS readme: [fr-lue](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lue/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lue/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lue/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lue/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lue | 23.1 | 0.485 |
Helsinki-NLP/opus-mt-fr-lus
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-lus
* source languages: fr
* target languages: lus
* OPUS readme: [fr-lus](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lus/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lus/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lus/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lus/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lus | 25.5 | 0.455 |
Helsinki-NLP/opus-mt-fr-mfe
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-mfe
* source languages: fr
* target languages: mfe
* OPUS readme: [fr-mfe](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mfe/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.mfe | 26.1 | 0.451 |
Helsinki-NLP/opus-mt-fr-mh
Helsinki-NLP
marian
10
25
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fr-mh
* source languages: fr
* target languages: mh
* OPUS readme: [fr-mh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mh/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mh/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mh/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.mh | 21.7 | 0.399 |
Helsinki-NLP/opus-mt-fr-mos
Helsinki-NLP
marian
10
10
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-fr-mos
* source languages: fr
* target languages: mos
* OPUS readme: [fr-mos](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mos/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mos/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mos/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mos/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.mos | 21.1 | 0.353 |
Helsinki-NLP/opus-mt-fr-ms
Helsinki-NLP
marian
11
8
transformers
0
translation
true
true
false
apache-2.0
['fr', 'ms']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,167
### fra-msa
* source group: French
* target group: Malay (macrolanguage)
* OPUS readme: [fra-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-msa/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): ind zsm_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.msa | 35.3 | 0.617 |

### System Info:
- hf_name: fra-msa
- source_languages: fra
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'ms']
- src_constituents: {'fra'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: msa
- short_pair: fr-ms
- chrF2_score: 0.617
- bleu: 35.3
- brevity_penalty: 0.978
- ref_len: 6696.0
- src_name: French
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: ms
- prefer_old: False
- long_pair: fra-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-fr-mt
Helsinki-NLP
marian
10
15
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-fr-mt
* source languages: fr
* target languages: mt
* OPUS readme: [fr-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mt/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mt/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mt/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.mt | 28.7 | 0.466 |