Dataset columns: pipeline_tag (string, 48 classes), library_name (string, 205 classes), text (string, 0-18.3M chars), metadata (string, 2-1.07B chars), id (string, 5-122 chars), last_modified (null), tags (list, 1-1.84k items), sha (null), created_at (string, 25 chars)
| pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at |
|---|---|---|---|---|---|---|---|---|
translation | transformers |
### opus-mt-sv-yap
* source languages: sv
* target languages: yap
* OPUS readme: [sv-yap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-yap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-yap/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-yap/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-yap/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.yap | 27.3 | 0.461 |
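The checkpoint listed above can be loaded through the Hugging Face transformers pipeline like any other Marian-based OPUS-MT model. The sketch below is illustrative only: the Swedish input sentence is an assumption, and its Yapese output is not shown in the card.

```python
from transformers import pipeline

# Load the Swedish->Yapese checkpoint from the Hugging Face Hub
translate_sv_yap = pipeline("translation", model="Helsinki-NLP/opus-mt-sv-yap")

# Illustrative Swedish input; the resulting Yapese text is not verified against the card
result = translate_sv_yap("Jag läser en bok.")
print(result[0]["translation_text"])
```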
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sv-yap | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sv",
"yap",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-sv-yo
* source languages: sv
* target languages: yo
* OPUS readme: [sv-yo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-yo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-yo/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-yo/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-yo/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.yo | 26.4 | 0.432 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sv-yo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sv",
"yo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-sv-zne
* source languages: sv
* target languages: zne
* OPUS readme: [sv-zne](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-zne/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-zne/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-zne/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-zne/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.zne | 23.8 | 0.474 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sv-zne | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sv",
"zne",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-swc-en
* source languages: swc
* target languages: en
* OPUS readme: [swc-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/swc-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.swc.en | 41.1 | 0.569 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-swc-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"swc",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-swc-es
* source languages: swc
* target languages: es
* OPUS readme: [swc-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/swc-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/swc-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.swc.es | 27.4 | 0.458 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-swc-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"swc",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-swc-fi
* source languages: swc
* target languages: fi
* OPUS readme: [swc-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/swc-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/swc-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.swc.fi | 26.0 | 0.489 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-swc-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"swc",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-swc-fr
* source languages: swc
* target languages: fr
* OPUS readme: [swc-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/swc-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/swc-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.swc.fr | 28.6 | 0.470 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-swc-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"swc",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-swc-sv
* source languages: swc
* target languages: sv
* OPUS readme: [swc-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/swc-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/swc-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/swc-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.swc.sv | 30.7 | 0.495 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-swc-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"swc",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### taw-eng
* source group: Tai
* target group: English
* OPUS readme: [taw-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/taw-eng/README.md)
* model: transformer
* source language(s): lao tha
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/taw-eng/opus-2020-06-28.zip)
* test set translations: [opus-2020-06-28.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/taw-eng/opus-2020-06-28.test.txt)
* test set scores: [opus-2020-06-28.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/taw-eng/opus-2020-06-28.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.lao-eng.lao.eng | 1.1 | 0.133 |
| Tatoeba-test.multi.eng | 38.9 | 0.572 |
| Tatoeba-test.tha-eng.tha.eng | 40.6 | 0.588 |
### System Info:
- hf_name: taw-eng
- source_languages: taw
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/taw-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['lo', 'th', 'taw', 'en']
- src_constituents: {'lao', 'tha'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/taw-eng/opus-2020-06-28.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/taw-eng/opus-2020-06-28.test.txt
- src_alpha3: taw
- tgt_alpha3: eng
- short_pair: taw-en
- chrF2_score: 0.5720000000000001
- bleu: 38.9
- brevity_penalty: 1.0
- ref_len: 7630.0
- src_name: Tai
- tgt_name: English
- train_date: 2020-06-28
- src_alpha2: taw
- tgt_alpha2: en
- prefer_old: False
- long_pair: taw-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["lo", "th", "taw", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-taw-en | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"lo",
"th",
"taw",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers | # opus-mt-tc-base-gmw-gmw
Neural machine translation model for translating from West Germanic languages (gmw) to West Germanic languages (gmw).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite these if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2021-02-23
* source language(s): afr deu eng fry gos hrx ltz nds nld pdc yid
* target language(s): afr deu eng fry nds nld
* valid target language labels: >>afr<< >>ang_Latn<< >>deu<< >>eng<< >>fry<< >>ltz<< >>nds<< >>nld<< >>sco<< >>yid<<
* model: transformer (base)
* data: opus ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opus-2021-02-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2021-02-23.zip)
* more information about released models: [OPUS-MT gmw-gmw README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-gmw/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>afr<<`.
## Usage
A short code example:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>nld<< You need help.",
">>afr<< I love your son."
]
model_name = "pytorch-models/opus-mt-tc-base-gmw-gmw"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Je hebt hulp nodig.
# Ek is lief vir jou seun.
```
You can also use OPUS-MT models with the transformers pipeline, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-gmw-gmw")
print(pipe(">>nld<< You need help."))
# expected output: Je hebt hulp nodig.
```
## Benchmarks
* test set translations: [opus-2021-02-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2021-02-23.test.txt)
* test set scores: [opus-2021-02-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2021-02-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| afr-deu | tatoeba-test-v2021-08-07 | 0.674 | 48.1 | 1583 | 9105 |
| afr-eng | tatoeba-test-v2021-08-07 | 0.728 | 58.8 | 1374 | 9622 |
| afr-nld | tatoeba-test-v2021-08-07 | 0.711 | 54.5 | 1056 | 6710 |
| deu-afr | tatoeba-test-v2021-08-07 | 0.696 | 52.4 | 1583 | 9507 |
| deu-eng | tatoeba-test-v2021-08-07 | 0.609 | 42.1 | 17565 | 149462 |
| deu-nds | tatoeba-test-v2021-08-07 | 0.442 | 18.6 | 9999 | 76137 |
| deu-nld | tatoeba-test-v2021-08-07 | 0.672 | 48.7 | 10218 | 75235 |
| eng-afr | tatoeba-test-v2021-08-07 | 0.735 | 56.5 | 1374 | 10317 |
| eng-deu | tatoeba-test-v2021-08-07 | 0.580 | 35.9 | 17565 | 151568 |
| eng-nds | tatoeba-test-v2021-08-07 | 0.412 | 16.6 | 2500 | 18264 |
| eng-nld | tatoeba-test-v2021-08-07 | 0.663 | 48.3 | 12696 | 91796 |
| fry-eng | tatoeba-test-v2021-08-07 | 0.500 | 32.5 | 220 | 1573 |
| fry-nld | tatoeba-test-v2021-08-07 | 0.633 | 43.1 | 260 | 1854 |
| gos-nld | tatoeba-test-v2021-08-07 | 0.405 | 15.6 | 1852 | 9903 |
| hrx-deu | tatoeba-test-v2021-08-07 | 0.484 | 24.7 | 471 | 2805 |
| hrx-eng | tatoeba-test-v2021-08-07 | 0.362 | 20.4 | 221 | 1235 |
| ltz-deu | tatoeba-test-v2021-08-07 | 0.556 | 37.2 | 347 | 2208 |
| ltz-eng | tatoeba-test-v2021-08-07 | 0.485 | 32.4 | 293 | 1840 |
| ltz-nld | tatoeba-test-v2021-08-07 | 0.534 | 39.3 | 292 | 1685 |
| nds-deu | tatoeba-test-v2021-08-07 | 0.572 | 34.5 | 9999 | 74564 |
| nds-eng | tatoeba-test-v2021-08-07 | 0.493 | 29.9 | 2500 | 17589 |
| nds-nld | tatoeba-test-v2021-08-07 | 0.621 | 42.3 | 1657 | 11490 |
| nld-afr | tatoeba-test-v2021-08-07 | 0.755 | 58.8 | 1056 | 6823 |
| nld-deu | tatoeba-test-v2021-08-07 | 0.686 | 50.4 | 10218 | 74131 |
| nld-eng | tatoeba-test-v2021-08-07 | 0.690 | 53.1 | 12696 | 89978 |
| nld-fry | tatoeba-test-v2021-08-07 | 0.478 | 25.1 | 260 | 1857 |
| nld-nds | tatoeba-test-v2021-08-07 | 0.462 | 21.4 | 1657 | 11711 |
| afr-deu | flores101-devtest | 0.524 | 21.6 | 1012 | 25094 |
| afr-eng | flores101-devtest | 0.693 | 46.8 | 1012 | 24721 |
| afr-nld | flores101-devtest | 0.509 | 18.4 | 1012 | 25467 |
| deu-afr | flores101-devtest | 0.534 | 21.4 | 1012 | 25740 |
| deu-eng | flores101-devtest | 0.616 | 33.8 | 1012 | 24721 |
| deu-nld | flores101-devtest | 0.516 | 19.2 | 1012 | 25467 |
| eng-afr | flores101-devtest | 0.628 | 33.8 | 1012 | 25740 |
| eng-deu | flores101-devtest | 0.581 | 29.1 | 1012 | 25094 |
| eng-nld | flores101-devtest | 0.533 | 21.0 | 1012 | 25467 |
| ltz-afr | flores101-devtest | 0.430 | 12.9 | 1012 | 25740 |
| ltz-deu | flores101-devtest | 0.482 | 17.1 | 1012 | 25094 |
| ltz-eng | flores101-devtest | 0.468 | 18.8 | 1012 | 24721 |
| ltz-nld | flores101-devtest | 0.409 | 10.7 | 1012 | 25467 |
| nld-afr | flores101-devtest | 0.494 | 16.8 | 1012 | 25740 |
| nld-deu | flores101-devtest | 0.501 | 17.9 | 1012 | 25094 |
| nld-eng | flores101-devtest | 0.551 | 25.6 | 1012 | 24721 |
| deu-eng | multi30k_test_2016_flickr | 0.546 | 32.2 | 1000 | 12955 |
| eng-deu | multi30k_test_2016_flickr | 0.582 | 28.8 | 1000 | 12106 |
| deu-eng | multi30k_test_2017_flickr | 0.561 | 32.7 | 1000 | 11374 |
| eng-deu | multi30k_test_2017_flickr | 0.573 | 27.6 | 1000 | 10755 |
| deu-eng | multi30k_test_2017_mscoco | 0.499 | 25.5 | 461 | 5231 |
| eng-deu | multi30k_test_2017_mscoco | 0.514 | 22.0 | 461 | 5158 |
| deu-eng | multi30k_test_2018_flickr | 0.535 | 30.0 | 1071 | 14689 |
| eng-deu | multi30k_test_2018_flickr | 0.547 | 25.3 | 1071 | 13703 |
| deu-eng | newssyscomb2009 | 0.527 | 25.4 | 502 | 11818 |
| eng-deu | newssyscomb2009 | 0.504 | 19.3 | 502 | 11271 |
| deu-eng | news-test2008 | 0.518 | 23.8 | 2051 | 49380 |
| eng-deu | news-test2008 | 0.492 | 19.3 | 2051 | 47447 |
| deu-eng | newstest2009 | 0.516 | 23.4 | 2525 | 65399 |
| eng-deu | newstest2009 | 0.498 | 18.8 | 2525 | 62816 |
| deu-eng | newstest2010 | 0.546 | 25.8 | 2489 | 61711 |
| eng-deu | newstest2010 | 0.508 | 20.7 | 2489 | 61503 |
| deu-eng | newstest2011 | 0.524 | 23.7 | 3003 | 74681 |
| eng-deu | newstest2011 | 0.493 | 19.2 | 3003 | 72981 |
| deu-eng | newstest2012 | 0.532 | 24.8 | 3003 | 72812 |
| eng-deu | newstest2012 | 0.493 | 19.5 | 3003 | 72886 |
| deu-eng | newstest2013 | 0.548 | 27.7 | 3000 | 64505 |
| eng-deu | newstest2013 | 0.517 | 22.5 | 3000 | 63737 |
| deu-eng | newstest2014-deen | 0.548 | 27.3 | 3003 | 67337 |
| eng-deu | newstest2014-deen | 0.532 | 22.0 | 3003 | 62688 |
| deu-eng | newstest2015-deen | 0.553 | 28.6 | 2169 | 46443 |
| eng-deu | newstest2015-ende | 0.544 | 25.7 | 2169 | 44260 |
| deu-eng | newstest2016-deen | 0.596 | 33.3 | 2999 | 64119 |
| eng-deu | newstest2016-ende | 0.580 | 30.0 | 2999 | 62669 |
| deu-eng | newstest2017-deen | 0.561 | 29.5 | 3004 | 64399 |
| eng-deu | newstest2017-ende | 0.535 | 24.1 | 3004 | 61287 |
| deu-eng | newstest2018-deen | 0.610 | 36.1 | 2998 | 67012 |
| eng-deu | newstest2018-ende | 0.613 | 35.4 | 2998 | 64276 |
| deu-eng | newstest2019-deen | 0.582 | 32.3 | 2000 | 39227 |
| eng-deu | newstest2019-ende | 0.583 | 31.2 | 1997 | 48746 |
| deu-eng | newstest2020-deen | 0.604 | 32.0 | 785 | 38220 |
| eng-deu | newstest2020-ende | 0.542 | 23.9 | 1418 | 52383 |
| deu-eng | newstestB2020-deen | 0.598 | 31.2 | 785 | 37696 |
| eng-deu | newstestB2020-ende | 0.532 | 23.3 | 1418 | 53092 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.12.3
* OPUS-MT git hash: e56a06b
* port time: Sun Feb 13 14:42:10 EET 2022
* port machine: LM0-400-22516.local
| {"language": ["af", "de", "en", "fy", "gmw", "gos", "hrx", "lb", "nds", "nl", "pdc", "yi"], "license": "cc-by-4.0", "tags": ["translation", "opus-mt-tc"], "model-index": [{"name": "opus-mt-tc-base-gmw-gmw", "results": [{"task": {"type": "translation", "name": "Translation afr-deu"}, "dataset": {"name": "flores101-devtest", "type": "flores_101", "args": "afr deu devtest"}, "metrics": [{"type": "bleu", "value": 21.6, "name": "BLEU"}, {"type": "bleu", "value": 46.8, "name": "BLEU"}, {"type": "bleu", "value": 21.4, "name": "BLEU"}, {"type": "bleu", "value": 33.8, "name": "BLEU"}, {"type": "bleu", "value": 33.8, "name": "BLEU"}, {"type": "bleu", "value": 29.1, "name": "BLEU"}, {"type": "bleu", "value": 21.0, "name": "BLEU"}, {"type": "bleu", "value": 25.6, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "multi30k_test_2016_flickr", "type": "multi30k-2016_flickr", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 32.2, "name": "BLEU"}, {"type": "bleu", "value": 28.8, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "multi30k_test_2017_flickr", "type": "multi30k-2017_flickr", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 32.7, "name": "BLEU"}, {"type": "bleu", "value": 27.6, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "multi30k_test_2017_mscoco", "type": "multi30k-2017_mscoco", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 25.5, "name": "BLEU"}, {"type": "bleu", "value": 22.0, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "multi30k_test_2018_flickr", "type": "multi30k-2018_flickr", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 30.0, "name": "BLEU"}, {"type": "bleu", "value": 25.3, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "news-test2008", "type": "news-test2008", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 23.8, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation afr-deu"}, "dataset": {"name": "tatoeba-test-v2021-08-07", "type": "tatoeba_mt", "args": "afr-deu"}, "metrics": [{"type": "bleu", "value": 48.1, "name": "BLEU"}, {"type": "bleu", "value": 58.8, "name": "BLEU"}, {"type": "bleu", "value": 54.5, "name": "BLEU"}, {"type": "bleu", "value": 52.4, "name": "BLEU"}, {"type": "bleu", "value": 42.1, "name": "BLEU"}, {"type": "bleu", "value": 48.7, "name": "BLEU"}, {"type": "bleu", "value": 56.5, "name": "BLEU"}, {"type": "bleu", "value": 35.9, "name": "BLEU"}, {"type": "bleu", "value": 48.3, "name": "BLEU"}, {"type": "bleu", "value": 32.5, "name": "BLEU"}, {"type": "bleu", "value": 43.1, "name": "BLEU"}, {"type": "bleu", "value": 24.7, "name": "BLEU"}, {"type": "bleu", "value": 20.4, "name": "BLEU"}, {"type": "bleu", "value": 37.2, "name": "BLEU"}, {"type": "bleu", "value": 32.4, "name": "BLEU"}, {"type": "bleu", "value": 39.3, "name": "BLEU"}, {"type": "bleu", "value": 34.5, "name": "BLEU"}, {"type": "bleu", "value": 29.9, "name": "BLEU"}, {"type": "bleu", "value": 42.3, "name": "BLEU"}, {"type": "bleu", "value": 58.8, "name": "BLEU"}, {"type": "bleu", "value": 50.4, "name": "BLEU"}, {"type": "bleu", "value": 53.1, "name": "BLEU"}, {"type": "bleu", "value": 25.1, "name": "BLEU"}, {"type": "bleu", "value": 21.4, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "newstest2009", 
"type": "wmt-2009-news", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 23.4, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "newstest2010", "type": "wmt-2010-news", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 25.8, "name": "BLEU"}, {"type": "bleu", "value": 20.7, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "newstest2011", "type": "wmt-2011-news", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 23.7, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "newstest2012", "type": "wmt-2012-news", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 24.8, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "newstest2013", "type": "wmt-2013-news", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 27.7, "name": "BLEU"}, {"type": "bleu", "value": 22.5, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "newstest2014-deen", "type": "wmt-2014-news", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 27.3, "name": "BLEU"}, {"type": "bleu", "value": 22.0, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "newstest2015-deen", "type": "wmt-2015-news", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 28.6, "name": "BLEU"}, {"type": "bleu", "value": 25.7, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "newstest2016-deen", "type": "wmt-2016-news", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 33.3, "name": "BLEU"}, {"type": "bleu", "value": 30.0, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "newstest2017-deen", "type": "wmt-2017-news", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 29.5, "name": "BLEU"}, {"type": "bleu", "value": 24.1, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "newstest2018-deen", "type": "wmt-2018-news", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 36.1, "name": "BLEU"}, {"type": "bleu", "value": 35.4, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "newstest2019-deen", "type": "wmt-2019-news", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 32.3, "name": "BLEU"}, {"type": "bleu", "value": 31.2, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "newstest2020-deen", "type": "wmt-2020-news", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 32.0, "name": "BLEU"}, {"type": "bleu", "value": 23.9, "name": "BLEU"}]}]}]} | Helsinki-NLP/opus-mt-tc-base-gmw-gmw | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"af",
"de",
"en",
"fy",
"gmw",
"gos",
"hrx",
"lb",
"nds",
"nl",
"pdc",
"yi",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### tha-eng
* source group: Thai
* target group: English
* OPUS readme: [tha-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tha-eng/README.md)
* model: transformer-align
* source language(s): tha
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tha.eng | 48.1 | 0.644 |
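As with the other OPUS-MT checkpoints in this collection, the model can also be loaded directly with the Marian classes in transformers. The following is a minimal sketch, assuming the Hub id Helsinki-NLP/opus-mt-th-en listed for this card; the Thai input sentence is illustrative and its English translation is not taken from the card.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-th-en"  # Hub id listed for this card
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Illustrative Thai input; output not verified against the model card
batch = tokenizer(["ฉันรักคุณ"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```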
### System Info:
- hf_name: tha-eng
- source_languages: tha
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tha-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['th', 'en']
- src_constituents: {'tha'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.test.txt
- src_alpha3: tha
- tgt_alpha3: eng
- short_pair: th-en
- chrF2_score: 0.644
- bleu: 48.1
- brevity_penalty: 0.9740000000000001
- ref_len: 7407.0
- src_name: Thai
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: th
- tgt_alpha2: en
- prefer_old: False
- long_pair: tha-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["th", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-th-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"th",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-th-fr
* source languages: th
* target languages: fr
* OPUS readme: [th-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/th-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/th-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/th-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/th-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.th.fr | 20.4 | 0.363 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-th-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"th",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ti-en
* source languages: ti
* target languages: en
* OPUS readme: [ti-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ti-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ti-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ti-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ti-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ti.en | 30.4 | 0.461 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ti-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ti",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tiv-en
* source languages: tiv
* target languages: en
* OPUS readme: [tiv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tiv-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tiv-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tiv.en | 31.5 | 0.473 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tiv-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tiv",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tiv-fr
* source languages: tiv
* target languages: fr
* OPUS readme: [tiv-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tiv-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tiv.fr | 22.3 | 0.389 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tiv-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tiv",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tiv-sv
* source languages: tiv
* target languages: sv
* OPUS readme: [tiv-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tiv-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tiv-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tiv.sv | 23.7 | 0.416 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tiv-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tiv",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### tgl-deu
* source group: Tagalog
* target group: German
* OPUS readme: [tgl-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-deu/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.deu | 22.7 | 0.473 |
### System Info:
- hf_name: tgl-deu
- source_languages: tgl
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'de']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: deu
- short_pair: tl-de
- chrF2_score: 0.473
- bleu: 22.7
- brevity_penalty: 0.9690000000000001
- ref_len: 2453.0
- src_name: Tagalog
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: de
- prefer_old: False
- long_pair: tgl-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["tl", "de"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tl-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tl",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### tgl-eng
* source group: Tagalog
* target group: English
* OPUS readme: [tgl-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-eng/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.eng | 35.0 | 0.542 |
### System Info:
- hf_name: tgl-eng
- source_languages: tgl
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'en']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: eng
- short_pair: tl-en
- chrF2_score: 0.542
- bleu: 35.0
- brevity_penalty: 0.975
- ref_len: 18168.0
- src_name: Tagalog
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: en
- prefer_old: False
- long_pair: tgl-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["tl", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tl-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tl",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### tgl-spa
* source group: Tagalog
* target group: Spanish
* OPUS readme: [tgl-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-spa/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.spa | 31.6 | 0.531 |
### System Info:
- hf_name: tgl-spa
- source_languages: tgl
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'es']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: spa
- short_pair: tl-es
- chrF2_score: 0.531
- bleu: 31.6
- brevity_penalty: 0.997
- ref_len: 4327.0
- src_name: Tagalog
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: es
- prefer_old: False
- long_pair: tgl-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["tl", "es"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tl-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tl",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### tgl-por
* source group: Tagalog
* target group: Portuguese
* OPUS readme: [tgl-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-por/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): por
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.por | 28.8 | 0.522 |
### System Info:
- hf_name: tgl-por
- source_languages: tgl
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'pt']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: por
- short_pair: tl-pt
- chrF2_score: 0.522
- bleu: 28.8
- brevity_penalty: 0.981
- ref_len: 12826.0
- src_name: Tagalog
- tgt_name: Portuguese
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: pt
- prefer_old: False
- long_pair: tgl-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["tl", "pt"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tl-pt | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tl",
"pt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tll-en
* source languages: tll
* target languages: en
* OPUS readme: [tll-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tll.en | 34.5 | 0.500 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tll-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tll",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tll-es
* source languages: tll
* target languages: es
* OPUS readme: [tll-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tll.es | 22.9 | 0.403 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tll-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tll",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tll-fi
* source languages: tll
* target languages: fi
* OPUS readme: [tll-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tll.fi | 22.4 | 0.441 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tll-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tll",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tll-fr
* source languages: tll
* target languages: fr
* OPUS readme: [tll-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tll.fr | 25.2 | 0.426 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tll-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tll",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tll-sv
* source languages: tll
* target languages: sv
* OPUS readme: [tll-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tll.sv | 25.6 | 0.436 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tll-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tll",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tn-en
* source languages: tn
* target languages: en
* OPUS readme: [tn-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tn.en | 43.4 | 0.589 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tn-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tn",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tn-es
* source languages: tn
* target languages: es
* OPUS readme: [tn-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tn.es | 29.1 | 0.479 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tn-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tn",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tn-fr
* source languages: tn
* target languages: fr
* OPUS readme: [tn-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tn.fr | 29.0 | 0.474 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tn-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tn",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tn-sv
* source languages: tn
* target languages: sv
* OPUS readme: [tn-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tn.sv | 32.0 | 0.508 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tn-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tn",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-to-en
* source languages: to
* target languages: en
* OPUS readme: [to-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/to-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/to-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.to.en | 49.3 | 0.627 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-to-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"to",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-to-es
* source languages: to
* target languages: es
* OPUS readme: [to-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/to-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/to-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.to.es | 26.6 | 0.447 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-to-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"to",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-to-fr
* source languages: to
* target languages: fr
* OPUS readme: [to-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/to-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/to-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.to.fr | 27.9 | 0.456 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-to-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"to",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-to-sv
* source languages: to
* target languages: sv
* OPUS readme: [to-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/to-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/to-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.to.sv | 30.7 | 0.493 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-to-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"to",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-toi-en
* source languages: toi
* target languages: en
* OPUS readme: [toi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.toi.en | 39.0 | 0.539 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-toi-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"toi",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-toi-es
* source languages: toi
* target languages: es
* OPUS readme: [toi-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.toi.es | 24.6 | 0.416 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-toi-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"toi",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-toi-fi
* source languages: toi
* target languages: fi
* OPUS readme: [toi-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.toi.fi | 24.5 | 0.464 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-toi-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"toi",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-toi-fr
* source languages: toi
* target languages: fr
* OPUS readme: [toi-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.toi.fr | 26.5 | 0.432 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-toi-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"toi",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-toi-sv
* source languages: toi
* target languages: sv
* OPUS readme: [toi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.toi.sv | 27.0 | 0.448 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-toi-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"toi",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tpi-en
* source languages: tpi
* target languages: en
* OPUS readme: [tpi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tpi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tpi-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tpi.en | 29.1 | 0.448 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tpi-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tpi",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tpi-sv
* source languages: tpi
* target languages: sv
* OPUS readme: [tpi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tpi-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tpi-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tpi.sv | 21.6 | 0.396 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tpi-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tpi",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### tur-ara
* source group: Turkish
* target group: Arabic
* OPUS readme: [tur-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ara/README.md)
* model: transformer
* source language(s): tur
* target language(s): apc_Latn ara ara_Latn arq_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.eval.txt)
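A minimal usage sketch (added for illustration, not part of the original card) of how the required `>>id<<` token is prepended when calling this model through the `transformers` Marian classes; the model id `Helsinki-NLP/opus-mt-tr-ar` is taken from this card's metadata, and `>>ara<<` is one of the target ids listed above.

```python
# Minimal sketch: Turkish -> Arabic with the sentence-initial ">>id<<" target-language token.
# Assumes the transformers Marian classes; any target id listed above (e.g. ">>apc_Latn<<") works the same way.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tr-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = [">>ara<< Merhaba, nasılsın?"]  # the token selects the target variant
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```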
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tur.ara | 14.9 | 0.455 |
### System Info:
- hf_name: tur-ara
- source_languages: tur
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tr', 'ar']
- src_constituents: {'tur'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.test.txt
- src_alpha3: tur
- tgt_alpha3: ara
- short_pair: tr-ar
- chrF2_score: 0.455
- bleu: 14.9
- brevity_penalty: 0.988
- ref_len: 6944.0
- src_name: Turkish
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: tr
- tgt_alpha2: ar
- prefer_old: False
- long_pair: tur-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["tr", "ar"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tr-ar | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tr",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### tur-aze
* source group: Turkish
* target group: Azerbaijani
* OPUS readme: [tur-aze](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-aze/README.md)
* model: transformer-align
* source language(s): tur
* target language(s): aze_Latn
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tur.aze | 27.7 | 0.551 |
### System Info:
- hf_name: tur-aze
- source_languages: tur
- target_languages: aze
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-aze/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tr', 'az']
- src_constituents: {'tur'}
- tgt_constituents: {'aze_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.test.txt
- src_alpha3: tur
- tgt_alpha3: aze
- short_pair: tr-az
- chrF2_score: 0.551
- bleu: 27.7
- brevity_penalty: 1.0
- ref_len: 5436.0
- src_name: Turkish
- tgt_name: Azerbaijani
- train_date: 2020-06-16
- src_alpha2: tr
- tgt_alpha2: az
- prefer_old: False
- long_pair: tur-aze
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["tr", "az"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tr-az | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tr",
"az",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tr-en
* source languages: tr
* target languages: en
* OPUS readme: [tr-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-entr.tr.en | 27.6 | 0.548 |
| newstest2016-entr.tr.en | 25.2 | 0.532 |
| newstest2017-entr.tr.en | 24.7 | 0.530 |
| newstest2018-entr.tr.en | 27.0 | 0.547 |
| Tatoeba.tr.en | 63.5 | 0.760 |
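As a quick illustration (not part of the original card), the checkpoint can also be driven through the high-level `pipeline` API; the model id `Helsinki-NLP/opus-mt-tr-en` comes from this card's metadata.

```python
# Sketch: Turkish -> English translation via the transformers pipeline API.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tr-en")
result = translator("Bu model Türkçeden İngilizceye çeviri yapar.", max_length=128)
print(result[0]["translation_text"])
```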
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tr-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tr",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### tur-epo
* source group: Turkish
* target group: Esperanto
* OPUS readme: [tur-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-epo/README.md)
* model: transformer-align
* source language(s): tur
* target language(s): epo
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tur.epo | 17.0 | 0.373 |
### System Info:
- hf_name: tur-epo
- source_languages: tur
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tr', 'eo']
- src_constituents: {'tur'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.test.txt
- src_alpha3: tur
- tgt_alpha3: epo
- short_pair: tr-eo
- chrF2_score: 0.373
- bleu: 17.0
- brevity_penalty: 0.881
- ref_len: 33762.0
- src_name: Turkish
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: tr
- tgt_alpha2: eo
- prefer_old: False
- long_pair: tur-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["tr", "eo"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tr-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tr",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tr-es
* source languages: tr
* target languages: es
* OPUS readme: [tr-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-es/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-es/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-es/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.tr.es | 56.3 | 0.722 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tr-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tr",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tr-fr
* source languages: tr
* target languages: fr
* OPUS readme: [tr-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.tr.fr | 45.3 | 0.627 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tr-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tr",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### tur-lit
* source group: Turkish
* target group: Lithuanian
* OPUS readme: [tur-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-lit/README.md)
* model: transformer-align
* source language(s): tur
* target language(s): lit
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tur.lit | 35.6 | 0.631 |
### System Info:
- hf_name: tur-lit
- source_languages: tur
- target_languages: lit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-lit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tr', 'lt']
- src_constituents: {'tur'}
- tgt_constituents: {'lit'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.test.txt
- src_alpha3: tur
- tgt_alpha3: lit
- short_pair: tr-lt
- chrF2_score: 0.631
- bleu: 35.6
- brevity_penalty: 0.949
- ref_len: 8285.0
- src_name: Turkish
- tgt_name: Lithuanian
- train_date: 2020-06-17
- src_alpha2: tr
- tgt_alpha2: lt
- prefer_old: False
- long_pair: tur-lit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["tr", "lt"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tr-lt | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"tr",
"lt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tr-sv
* source languages: tr
* target languages: sv
* OPUS readme: [tr-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-sv/opus-2020-01-16.eval.txt)
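The "download original weights" link above points at a plain zip archive on the OPUS object store; below is a small fetch-and-inspect sketch (an illustration, not part of the original card) using only the Python standard library.

```python
# Sketch: download the original OPUS-MT archive for tr-sv and list its contents.
# The URL is the one given above; the other cards' zip links work the same way.
import io
import zipfile
import urllib.request

url = "https://object.pouta.csc.fi/OPUS-MT-models/tr-sv/opus-2020-01-16.zip"
with urllib.request.urlopen(url) as resp:
    archive = zipfile.ZipFile(io.BytesIO(resp.read()))

for name in archive.namelist():
    print(name)
```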
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tr.sv | 26.3 | 0.478 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tr-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tr",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### tur-ukr
* source group: Turkish
* target group: Ukrainian
* OPUS readme: [tur-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ukr/README.md)
* model: transformer-align
* source language(s): tur
* target language(s): ukr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tur.ukr | 42.5 | 0.624 |
### System Info:
- hf_name: tur-ukr
- source_languages: tur
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tr', 'uk']
- src_constituents: {'tur'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.test.txt
- src_alpha3: tur
- tgt_alpha3: ukr
- short_pair: tr-uk
- chrF2_score: 0.624
- bleu: 42.5
- brevity_penalty: 0.983
- ref_len: 12988.0
- src_name: Turkish
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: tr
- tgt_alpha2: uk
- prefer_old: False
- long_pair: tur-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["tr", "uk"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tr-uk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tr",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### trk-eng
* source group: Turkic languages
* target group: English
* OPUS readme: [trk-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/trk-eng/README.md)
* model: transformer
* source language(s): aze_Latn bak chv crh crh_Latn kaz_Cyrl kaz_Latn kir_Cyrl kjh kum ota_Arab ota_Latn sah tat tat_Arab tat_Latn tuk tuk_Latn tur tyv uig_Arab uig_Cyrl uzb_Cyrl uzb_Latn
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-entr-tureng.tur.eng | 5.0 | 0.242 |
| newstest2016-entr-tureng.tur.eng | 3.7 | 0.231 |
| newstest2017-entr-tureng.tur.eng | 3.7 | 0.229 |
| newstest2018-entr-tureng.tur.eng | 4.1 | 0.230 |
| Tatoeba-test.aze-eng.aze.eng | 15.1 | 0.330 |
| Tatoeba-test.bak-eng.bak.eng | 3.3 | 0.185 |
| Tatoeba-test.chv-eng.chv.eng | 1.3 | 0.161 |
| Tatoeba-test.crh-eng.crh.eng | 10.8 | 0.325 |
| Tatoeba-test.kaz-eng.kaz.eng | 9.6 | 0.264 |
| Tatoeba-test.kir-eng.kir.eng | 15.3 | 0.328 |
| Tatoeba-test.kjh-eng.kjh.eng | 1.8 | 0.121 |
| Tatoeba-test.kum-eng.kum.eng | 16.1 | 0.277 |
| Tatoeba-test.multi.eng | 12.0 | 0.304 |
| Tatoeba-test.ota-eng.ota.eng | 2.0 | 0.149 |
| Tatoeba-test.sah-eng.sah.eng | 0.7 | 0.140 |
| Tatoeba-test.tat-eng.tat.eng | 4.0 | 0.215 |
| Tatoeba-test.tuk-eng.tuk.eng | 5.5 | 0.243 |
| Tatoeba-test.tur-eng.tur.eng | 26.8 | 0.443 |
| Tatoeba-test.tyv-eng.tyv.eng | 1.3 | 0.111 |
| Tatoeba-test.uig-eng.uig.eng | 0.2 | 0.111 |
| Tatoeba-test.uzb-eng.uzb.eng | 4.6 | 0.195 |
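The BLEU and chr-F columns are corpus-level scores over the linked test-set translations. The sketch below is an illustration only (the exact evaluation tooling is not stated in this card): it shows how such scores are commonly reproduced with `sacrebleu` from hypothesis and reference sentence lists.

```python
# Sketch: recomputing corpus BLEU and chrF with sacrebleu.
# hyps/refs are stand-in lists; in practice they would be read from the linked *.test.txt file.
import sacrebleu

hyps = ["the cat sat on the mat .", "hello world"]
refs = ["the cat sat on the mat .", "hello , world"]

bleu = sacrebleu.corpus_bleu(hyps, [refs])  # BLEU includes a brevity penalty (the brevity_penalty field below)
chrf = sacrebleu.corpus_chrf(hyps, [refs])  # character n-gram F-score (the chr-F column)

# Note: depending on the sacrebleu version, chrF may be reported on a 0-1 or 0-100 scale.
print(f"BLEU = {bleu.score:.1f}")
print(f"chrF = {chrf.score}")
```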
### System Info:
- hf_name: trk-eng
- source_languages: trk
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/trk-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tt', 'cv', 'tk', 'tr', 'ba', 'trk', 'en']
- src_constituents: {'kir_Cyrl', 'tat_Latn', 'tat', 'chv', 'uzb_Cyrl', 'kaz_Latn', 'aze_Latn', 'crh', 'kjh', 'uzb_Latn', 'ota_Arab', 'tuk_Latn', 'tuk', 'tat_Arab', 'sah', 'tyv', 'tur', 'uig_Arab', 'crh_Latn', 'kaz_Cyrl', 'uig_Cyrl', 'kum', 'ota_Latn', 'bak'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.test.txt
- src_alpha3: trk
- tgt_alpha3: eng
- short_pair: trk-en
- chrF2_score: 0.304
- bleu: 12.0
- brevity_penalty: 1.0
- ref_len: 18733.0
- src_name: Turkic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: trk
- tgt_alpha2: en
- prefer_old: False
- long_pair: trk-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["tt", "cv", "tk", "tr", "ba", "trk", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-trk-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tt",
"cv",
"tk",
"tr",
"ba",
"trk",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ts-en
* source languages: ts
* target languages: en
* OPUS readme: [ts-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ts.en | 44.0 | 0.590 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ts-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ts",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ts-es
* source languages: ts
* target languages: es
* OPUS readme: [ts-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ts.es | 28.1 | 0.468 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ts-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ts",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ts-fi
* source languages: ts
* target languages: fi
* OPUS readme: [ts-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ts.fi | 27.7 | 0.509 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ts-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ts",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ts-fr
* source languages: ts
* target languages: fr
* OPUS readme: [ts-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ts.fr | 29.9 | 0.475 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ts-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ts",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ts-sv
* source languages: ts
* target languages: sv
* OPUS readme: [ts-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ts.sv | 32.6 | 0.510 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ts-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ts",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tum-en
* source languages: tum
* target languages: en
* OPUS readme: [tum-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tum-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/tum-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tum.en | 31.7 | 0.470 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tum-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tum",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tum-es
* source languages: tum
* target languages: es
* OPUS readme: [tum-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tum-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tum-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tum.es | 22.6 | 0.390 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tum-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tum",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tum-fr
* source languages: tum
* target languages: fr
* OPUS readme: [tum-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tum-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tum-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tum.fr | 24.0 | 0.403 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tum-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tum",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tum-sv
* source languages: tum
* target languages: sv
* OPUS readme: [tum-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tum-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tum-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tum.sv | 23.3 | 0.410 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tum-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tum",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tvl-en
* source languages: tvl
* target languages: en
* OPUS readme: [tvl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tvl-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/tvl-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tvl.en | 37.3 | 0.528 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tvl-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tvl",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tvl-es
* source languages: tvl
* target languages: es
* OPUS readme: [tvl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tvl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tvl-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tvl.es | 21.0 | 0.388 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tvl-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tvl",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tvl-fi
* source languages: tvl
* target languages: fi
* OPUS readme: [tvl-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tvl-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tvl.fi | 22.0 | 0.439 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tvl-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tvl",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tvl-fr
* source languages: tvl
* target languages: fr
* OPUS readme: [tvl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tvl-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tvl.fr | 24.0 | 0.410 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tvl-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tvl",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tvl-sv
* source languages: tvl
* target languages: sv
* OPUS readme: [tvl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tvl-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tvl-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tvl.sv | 24.7 | 0.427 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tvl-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tvl",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tw-es
* source languages: tw
* target languages: es
* OPUS readme: [tw-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tw-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tw-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tw.es | 25.9 | 0.441 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tw-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tw",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tw-fi
* source languages: tw
* target languages: fi
* OPUS readme: [tw-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tw-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/tw-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tw.fi | 25.6 | 0.488 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tw-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tw",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tw-fr
* source languages: tw
* target languages: fr
* OPUS readme: [tw-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tw-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tw-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tw.fr | 26.7 | 0.442 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tw-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tw",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tw-sv
* source languages: tw
* target languages: sv
* OPUS readme: [tw-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tw-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tw-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tw.sv | 29.0 | 0.471 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tw-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tw",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ty-es
* source languages: ty
* target languages: es
* OPUS readme: [ty-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ty-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ty-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ty.es | 27.3 | 0.457 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ty-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ty",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ty-fi
* source languages: ty
* target languages: fi
* OPUS readme: [ty-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ty-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ty-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ty.fi | 21.7 | 0.451 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ty-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ty",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ty-fr
* source languages: ty
* target languages: fr
* OPUS readme: [ty-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ty-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ty-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ty.fr | 30.2 | 0.480 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ty-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ty",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ty-sv
* source languages: ty
* target languages: sv
* OPUS readme: [ty-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ty-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ty-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ty.sv | 28.9 | 0.472 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ty-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ty",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-tzo-es
* source languages: tzo
* target languages: es
* OPUS readme: [tzo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tzo-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tzo.es | 20.8 | 0.381 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-tzo-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"tzo",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### ukr-bul
* source group: Ukrainian
* target group: Bulgarian
* OPUS readme: [ukr-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-bul/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): bul
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.bul | 55.7 | 0.734 |
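The BLEU and chr-F figures reported in these cards are corpus-level sacrebleu scores. A minimal sketch of how such numbers can be reproduced (the file names `refs.txt` and `hyps.txt` are hypothetical, line-aligned plain-text files extracted from the released test set, one sentence per line):

```python
# Sketch: recompute corpus BLEU and chrF with sacrebleu.
# refs.txt / hyps.txt are hypothetical files with one reference /
# one system translation per line, in the same order.
from sacrebleu.metrics import BLEU, CHRF

with open("refs.txt", encoding="utf-8") as f:
    refs = [line.strip() for line in f]
with open("hyps.txt", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]

bleu = BLEU()
chrf = CHRF()

print(bleu.corpus_score(hyps, [refs]))  # e.g. BLEU = 55.7, as in the table above
print(chrf.corpus_score(hyps, [refs]))  # sacrebleu prints chrF2 as a percentage;
                                        # the tables list it as a fraction (0.734)
```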
### System Info:
- hf_name: ukr-bul
- source_languages: ukr
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'bg']
- src_constituents: {'ukr'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: bul
- short_pair: uk-bg
- chrF2_score: 0.7340000000000001
- bleu: 55.7
- brevity_penalty: 0.976
- ref_len: 5181.0
- src_name: Ukrainian
- tgt_name: Bulgarian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: bg
- prefer_old: False
- long_pair: ukr-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["uk", "bg"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-bg | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"bg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### ukr-cat
* source group: Ukrainian
* target group: Catalan
* OPUS readme: [ukr-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-cat/README.md)
* source language(s): ukr
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.cat | 33.7 | 0.538 |
### System Info:
- hf_name: ukr-cat
- source_languages: ukr
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'ca']
- src_constituents: {'ukr'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.test.txt
- src_alpha3: ukr
- tgt_alpha3: cat
- short_pair: uk-ca
- chrF2_score: 0.5379999999999999
- bleu: 33.7
- brevity_penalty: 0.972
- ref_len: 2670.0
- src_name: Ukrainian
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: uk
- tgt_alpha2: ca
- prefer_old: False
- long_pair: ukr-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["uk", "ca"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-ca | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"ca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### ukr-ces
* source group: Ukrainian
* target group: Czech
* OPUS readme: [ukr-ces](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ces/README.md)
* source language(s): ukr
* target language(s): ces
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.ces | 52.0 | 0.686 |
### System Info:
- hf_name: ukr-ces
- source_languages: ukr
- target_languages: ces
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ces/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'cs']
- src_constituents: {'ukr'}
- tgt_constituents: {'ces'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: ces
- short_pair: uk-cs
- chrF2_score: 0.6859999999999999
- bleu: 52.0
- brevity_penalty: 0.993
- ref_len: 8550.0
- src_name: Ukrainian
- tgt_name: Czech
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: cs
- prefer_old: False
- long_pair: ukr-ces
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["uk", "cs"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-cs | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"cs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### ukr-deu
* source group: Ukrainian
* target group: German
* OPUS readme: [ukr-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-deu/README.md)
* source language(s): ukr
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.deu | 48.2 | 0.661 |
### System Info:
- hf_name: ukr-deu
- source_languages: ukr
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'de']
- src_constituents: {'ukr'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-deu/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: deu
- short_pair: uk-de
- chrF2_score: 0.6609999999999999
- bleu: 48.2
- brevity_penalty: 0.98
- ref_len: 62298.0
- src_name: Ukrainian
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: de
- prefer_old: False
- long_pair: ukr-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["uk", "de"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-uk-en
* source languages: uk
* target languages: en
* OPUS readme: [uk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.uk.en | 64.1 | 0.757 |
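A minimal usage sketch with the Hugging Face `transformers` library (the input sentence is an invented example, and the exact output may differ between library versions):

```python
# Sketch: Ukrainian -> English translation via the transformers pipeline.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-uk-en")
result = translator("Привіт! Як справи?")  # invented example sentence
print(result[0]["translation_text"])       # e.g. "Hello! How are you?"
```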
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-uk-es
* source languages: uk
* target languages: es
* OPUS readme: [uk-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.uk.es | 50.4 | 0.680 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-uk-fi
* source languages: uk
* target languages: fi
* OPUS readme: [uk-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.uk.fi | 24.4 | 0.490 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-fi | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"uk",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-uk-fr
* source languages: uk
* target languages: fr
* OPUS readme: [uk-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.uk.fr | 52.1 | 0.681 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### ukr-heb
* source group: Ukrainian
* target group: Hebrew
* OPUS readme: [ukr-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-heb/README.md)
* source language(s): ukr
* target language(s): heb
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.heb | 35.7 | 0.557 |
### System Info:
- hf_name: ukr-heb
- source_languages: ukr
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'he']
- src_constituents: {'ukr'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: heb
- short_pair: uk-he
- chrF2_score: 0.557
- bleu: 35.7
- brevity_penalty: 1.0
- ref_len: 4765.0
- src_name: Ukrainian
- tgt_name: Hebrew
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: he
- prefer_old: False
- long_pair: ukr-heb
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["uk", "he"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-he | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### ukr-hun
* source group: Ukrainian
* target group: Hungarian
* OPUS readme: [ukr-hun](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-hun/README.md)
* source language(s): ukr
* target language(s): hun
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.hun | 41.4 | 0.649 |
### System Info:
- hf_name: ukr-hun
- source_languages: ukr
- target_languages: hun
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-hun/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'hu']
- src_constituents: {'ukr'}
- tgt_constituents: {'hun'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: hun
- short_pair: uk-hu
- chrF2_score: 0.649
- bleu: 41.4
- brevity_penalty: 0.9740000000000001
- ref_len: 2433.0
- src_name: Ukrainian
- tgt_name: Hungarian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: hu
- prefer_old: False
- long_pair: ukr-hun
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["uk", "hu"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-hu | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"hu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### ukr-ita
* source group: Ukrainian
* target group: Italian
* OPUS readme: [ukr-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ita/README.md)
* source language(s): ukr
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.ita | 46.0 | 0.662 |
### System Info:
- hf_name: ukr-ita
- source_languages: ukr
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'it']
- src_constituents: {'ukr'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: ita
- short_pair: uk-it
- chrF2_score: 0.662
- bleu: 46.0
- brevity_penalty: 0.9490000000000001
- ref_len: 27846.0
- src_name: Ukrainian
- tgt_name: Italian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: it
- prefer_old: False
- long_pair: ukr-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["uk", "it"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-it | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### ukr-nld
* source group: Ukrainian
* target group: Dutch
* OPUS readme: [ukr-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-nld/README.md)
* source language(s): ukr
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nld/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nld/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nld/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.nld | 48.7 | 0.656 |
### System Info:
- hf_name: ukr-nld
- source_languages: ukr
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'nl']
- src_constituents: {'ukr'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nld/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nld/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: nld
- short_pair: uk-nl
- chrF2_score: 0.6559999999999999
- bleu: 48.7
- brevity_penalty: 0.985
- ref_len: 59943.0
- src_name: Ukrainian
- tgt_name: Dutch
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: nl
- prefer_old: False
- long_pair: ukr-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["uk", "nl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-nl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### ukr-nor
* source group: Ukrainian
* target group: Norwegian
* OPUS readme: [ukr-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-nor/README.md)
* source language(s): ukr
* target language(s): nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.nor | 21.3 | 0.397 |
### System Info:
- hf_name: ukr-nor
- source_languages: ukr
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'no']
- src_constituents: {'ukr'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: nor
- short_pair: uk-no
- chrF2_score: 0.397
- bleu: 21.3
- brevity_penalty: 0.966
- ref_len: 4378.0
- src_name: Ukrainian
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: no
- prefer_old: False
- long_pair: ukr-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["uk", false], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-no | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"no",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### ukr-pol
* source group: Ukrainian
* target group: Polish
* OPUS readme: [ukr-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-pol/README.md)
* source language(s): ukr
* target language(s): pol
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-pol/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-pol/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-pol/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.pol | 49.9 | 0.685 |
### System Info:
- hf_name: ukr-pol
- source_languages: ukr
- target_languages: pol
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-pol/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'pl']
- src_constituents: {'ukr'}
- tgt_constituents: {'pol'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-pol/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-pol/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: pol
- short_pair: uk-pl
- chrF2_score: 0.685
- bleu: 49.9
- brevity_penalty: 0.9470000000000001
- ref_len: 13098.0
- src_name: Ukrainian
- tgt_name: Polish
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: pl
- prefer_old: False
- long_pair: ukr-pol
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["uk", "pl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-pl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"pl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### ukr-por
* source group: Ukrainian
* target group: Portuguese
* OPUS readme: [ukr-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-por/README.md)
* source language(s): ukr
* target language(s): por
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-por/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-por/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-por/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.por | 38.1 | 0.601 |
### System Info:
- hf_name: ukr-por
- source_languages: ukr
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'pt']
- src_constituents: {'ukr'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-por/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-por/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: por
- short_pair: uk-pt
- chrF2_score: 0.601
- bleu: 38.1
- brevity_penalty: 0.981
- ref_len: 21315.0
- src_name: Ukrainian
- tgt_name: Portuguese
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: pt
- prefer_old: False
- long_pair: ukr-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["uk", "pt"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-pt | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"pt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### ukr-rus
* source group: Ukrainian
* target group: Russian
* OPUS readme: [ukr-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-rus/README.md)
* source language(s): ukr
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.rus | 69.2 | 0.826 |
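For batch translation without the pipeline wrapper, the Marian model and tokenizer classes can be used directly; a sketch (the source sentences are invented examples):

```python
# Sketch: batch Ukrainian -> Russian translation with the Marian classes.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-uk-ru"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_texts = ["Я люблю читати книжки.", "Де знаходиться вокзал?"]  # invented examples
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```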
### System Info:
- hf_name: ukr-rus
- source_languages: ukr
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'ru']
- src_constituents: {'ukr'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-rus/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: rus
- short_pair: uk-ru
- chrF2_score: 0.826
- bleu: 69.2
- brevity_penalty: 0.992
- ref_len: 60387.0
- src_name: Ukrainian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: ru
- prefer_old: False
- long_pair: ukr-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["uk", "ru"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-ru | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### ukr-hbs
* source group: Ukrainian
* target group: Serbo-Croatian
* OPUS readme: [ukr-hbs](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-hbs/README.md)
* source language(s): ukr
* target language(s): hrv srp_Cyrl srp_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID; see the usage sketch after this list)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hbs/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hbs/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hbs/opus-2020-06-17.eval.txt)
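Because this model has several target variants, the output language is selected by prepending one of the valid tokens (`>>hrv<<`, `>>srp_Cyrl<<` or `>>srp_Latn<<`) to the source text; a minimal sketch (the source sentence is an invented example):

```python
# Sketch: selecting the output language of the multi-target uk-sh model
# by prepending a >>id<< token to the source sentence.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-uk-sh"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = ">>hrv<< Доброго ранку!"  # invented example; requests Croatian output
batch = tokenizer([src], return_tensors="pt")
output = model.generate(**batch)
print(tokenizer.batch_decode(output, skip_special_tokens=True))
```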
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.hbs | 42.8 | 0.631 |
### System Info:
- hf_name: ukr-hbs
- source_languages: ukr
- target_languages: hbs
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-hbs/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'sh']
- src_constituents: {'ukr'}
- tgt_constituents: {'hrv', 'srp_Cyrl', 'bos_Latn', 'srp_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hbs/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hbs/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: hbs
- short_pair: uk-sh
- chrF2_score: 0.631
- bleu: 42.8
- brevity_penalty: 0.96
- ref_len: 5128.0
- src_name: Ukrainian
- tgt_name: Serbo-Croatian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: sh
- prefer_old: False
- long_pair: ukr-hbs
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["uk", "sh"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-sh | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"sh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### ukr-slv
* source group: Ukrainian
* target group: Slovenian
* OPUS readme: [ukr-slv](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-slv/README.md)
* source language(s): ukr
* target language(s): slv
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-slv/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-slv/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-slv/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.slv | 11.8 | 0.280 |
### System Info:
- hf_name: ukr-slv
- source_languages: ukr
- target_languages: slv
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-slv/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'sl']
- src_constituents: {'ukr'}
- tgt_constituents: {'slv'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-slv/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-slv/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: slv
- short_pair: uk-sl
- chrF2_score: 0.28
- bleu: 11.8
- brevity_penalty: 1.0
- ref_len: 3823.0
- src_name: Ukrainian
- tgt_name: Slovenian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: sl
- prefer_old: False
- long_pair: ukr-slv
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["uk", "sl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-sl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"sl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-uk-sv
* source languages: uk
* target languages: sv
* OPUS readme: [uk-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.uk.sv | 27.8 | 0.474 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### ukr-tur
* source group: Ukrainian
* target group: Turkish
* OPUS readme: [ukr-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-tur/README.md)
* source language(s): ukr
* target language(s): tur
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.tur | 39.3 | 0.655 |
### System Info:
- hf_name: ukr-tur
- source_languages: ukr
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'tr']
- src_constituents: {'ukr'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: tur
- short_pair: uk-tr
- chrF2_score: 0.655
- bleu: 39.3
- brevity_penalty: 0.934
- ref_len: 11844.0
- src_name: Ukrainian
- tgt_name: Turkish
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: tr
- prefer_old: False
- long_pair: ukr-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["uk", "tr"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-uk-tr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"uk",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-umb-en
* source languages: umb
* target languages: en
* OPUS readme: [umb-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/umb-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/umb-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/umb-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/umb-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.umb.en | 27.5 | 0.425 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-umb-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"umb",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### urd-eng
* source group: Urdu
* target group: English
* OPUS readme: [urd-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urd-eng/README.md)
* source language(s): urd
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.urd.eng | 23.2 | 0.435 |
### System Info:
- hf_name: urd-eng
- source_languages: urd
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urd-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ur', 'en']
- src_constituents: {'urd'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.test.txt
- src_alpha3: urd
- tgt_alpha3: eng
- short_pair: ur-en
- chrF2_score: 0.435
- bleu: 23.2
- brevity_penalty: 0.975
- ref_len: 12029.0
- src_name: Urdu
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: ur
- tgt_alpha2: en
- prefer_old: False
- long_pair: urd-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ur", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ur-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ur",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### urj-eng
* source group: Uralic languages
* target group: English
* OPUS readme: [urj-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urj-eng/README.md)
* source language(s): est fin fkv_Latn hun izh kpv krl liv_Latn mdf mhr myv sma sme udm vro
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2015-enfi-fineng.fin.eng | 22.7 | 0.511 |
| newsdev2018-enet-esteng.est.eng | 26.6 | 0.545 |
| newssyscomb2009-huneng.hun.eng | 21.3 | 0.493 |
| newstest2009-huneng.hun.eng | 20.1 | 0.487 |
| newstest2015-enfi-fineng.fin.eng | 23.9 | 0.521 |
| newstest2016-enfi-fineng.fin.eng | 25.8 | 0.542 |
| newstest2017-enfi-fineng.fin.eng | 28.9 | 0.562 |
| newstest2018-enet-esteng.est.eng | 27.0 | 0.552 |
| newstest2018-enfi-fineng.fin.eng | 21.2 | 0.492 |
| newstest2019-fien-fineng.fin.eng | 25.3 | 0.531 |
| newstestB2016-enfi-fineng.fin.eng | 21.3 | 0.500 |
| newstestB2017-enfi-fineng.fin.eng | 24.4 | 0.528 |
| newstestB2017-fien-fineng.fin.eng | 24.4 | 0.528 |
| Tatoeba-test.chm-eng.chm.eng | 0.8 | 0.131 |
| Tatoeba-test.est-eng.est.eng | 34.5 | 0.526 |
| Tatoeba-test.fin-eng.fin.eng | 28.1 | 0.485 |
| Tatoeba-test.fkv-eng.fkv.eng | 6.8 | 0.335 |
| Tatoeba-test.hun-eng.hun.eng | 25.1 | 0.452 |
| Tatoeba-test.izh-eng.izh.eng | 11.6 | 0.224 |
| Tatoeba-test.kom-eng.kom.eng | 2.4 | 0.110 |
| Tatoeba-test.krl-eng.krl.eng | 18.6 | 0.365 |
| Tatoeba-test.liv-eng.liv.eng | 0.5 | 0.078 |
| Tatoeba-test.mdf-eng.mdf.eng | 1.5 | 0.117 |
| Tatoeba-test.multi.eng | 47.8 | 0.646 |
| Tatoeba-test.myv-eng.myv.eng | 0.5 | 0.101 |
| Tatoeba-test.sma-eng.sma.eng | 1.2 | 0.110 |
| Tatoeba-test.sme-eng.sme.eng | 1.5 | 0.147 |
| Tatoeba-test.udm-eng.udm.eng | 1.0 | 0.130 |
### System Info:
- hf_name: urj-eng
- source_languages: urj
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urj-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['se', 'fi', 'hu', 'et', 'urj', 'en']
- src_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/urj-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/urj-eng/opus2m-2020-08-01.test.txt
- src_alpha3: urj
- tgt_alpha3: eng
- short_pair: urj-en
- chrF2_score: 0.6459999999999999
- bleu: 47.8
- brevity_penalty: 0.993
- ref_len: 70882.0
- src_name: Uralic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: urj
- tgt_alpha2: en
- prefer_old: False
- long_pair: urj-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["se", "fi", "hu", "et", "urj", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-urj-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"se",
"fi",
"hu",
"et",
"urj",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### urj-urj
* source group: Uralic languages
* target group: Uralic languages
* OPUS readme: [urj-urj](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urj-urj/README.md)
* source language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro
* target language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.est-est.est.est | 5.1 | 0.288 |
| Tatoeba-test.est-fin.est.fin | 50.9 | 0.709 |
| Tatoeba-test.est-fkv.est.fkv | 0.7 | 0.215 |
| Tatoeba-test.est-vep.est.vep | 1.0 | 0.154 |
| Tatoeba-test.fin-est.fin.est | 55.5 | 0.718 |
| Tatoeba-test.fin-fkv.fin.fkv | 1.8 | 0.254 |
| Tatoeba-test.fin-hun.fin.hun | 45.0 | 0.672 |
| Tatoeba-test.fin-izh.fin.izh | 7.1 | 0.492 |
| Tatoeba-test.fin-krl.fin.krl | 2.6 | 0.278 |
| Tatoeba-test.fkv-est.fkv.est | 0.6 | 0.099 |
| Tatoeba-test.fkv-fin.fkv.fin | 15.5 | 0.444 |
| Tatoeba-test.fkv-liv.fkv.liv | 0.6 | 0.101 |
| Tatoeba-test.fkv-vep.fkv.vep | 0.6 | 0.113 |
| Tatoeba-test.hun-fin.hun.fin | 46.3 | 0.675 |
| Tatoeba-test.izh-fin.izh.fin | 13.4 | 0.431 |
| Tatoeba-test.izh-krl.izh.krl | 2.9 | 0.078 |
| Tatoeba-test.krl-fin.krl.fin | 14.1 | 0.439 |
| Tatoeba-test.krl-izh.krl.izh | 1.0 | 0.125 |
| Tatoeba-test.liv-fkv.liv.fkv | 0.9 | 0.170 |
| Tatoeba-test.liv-vep.liv.vep | 2.6 | 0.176 |
| Tatoeba-test.multi.multi | 32.9 | 0.580 |
| Tatoeba-test.vep-est.vep.est | 3.4 | 0.265 |
| Tatoeba-test.vep-fkv.vep.fkv | 0.9 | 0.239 |
| Tatoeba-test.vep-liv.vep.liv | 2.6 | 0.190 |
### System Info:
- hf_name: urj-urj
- source_languages: urj
- target_languages: urj
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urj-urj/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['se', 'fi', 'hu', 'et', 'urj']
- src_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.test.txt
- src_alpha3: urj
- tgt_alpha3: urj
- short_pair: urj-urj
- chrF2_score: 0.58
- bleu: 32.9
- brevity_penalty: 1.0
- ref_len: 19444.0
- src_name: Uralic languages
- tgt_name: Uralic languages
- train_date: 2020-07-27
- src_alpha2: urj
- tgt_alpha2: urj
- prefer_old: False
- long_pair: urj-urj
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["se", "fi", "hu", "et", "urj"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-urj-urj | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"se",
"fi",
"hu",
"et",
"urj",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ve-en
* source languages: ve
* target languages: en
* OPUS readme: [ve-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ve-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ve-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ve-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ve-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ve.en | 41.3 | 0.566 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ve-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ve",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-ve-es
* source languages: ve
* target languages: es
* OPUS readme: [ve-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ve-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ve-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ve-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ve-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ve.es | 23.1 | 0.413 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ve-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ve",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### vie-deu
* source group: Vietnamese
* target group: German
* OPUS readme: [vie-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-deu/README.md)
* source language(s): vie
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.deu | 27.6 | 0.484 |
### System Info:
- hf_name: vie-deu
- source_languages: vie
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'de']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-deu/opus-2020-06-17.test.txt
- src_alpha3: vie
- tgt_alpha3: deu
- short_pair: vi-de
- chrF2_score: 0.484
- bleu: 27.6
- brevity_penalty: 0.958
- ref_len: 3365.0
- src_name: Vietnamese
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: vi
- tgt_alpha2: de
- prefer_old: False
- long_pair: vie-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["vi", "de"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-vi-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"vi",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |