Dataset columns:

| Column | Type | Range / cardinality |
|---|---|---|
| pipeline_tag | string (categorical) | 48 values |
| library_name | string (categorical) | 205 values |
| text | string | length 0 to 18.3M |
| metadata | string | length 2 to 1.07B |
| id | string | length 5 to 122 |
| last_modified | null | |
| tags | list | 1 to 1.84k items |
| sha | null | |
| created_at | string | length 25 |
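As a rough sketch of how rows with this schema can be consumed, assuming the dump is published as a Hugging Face dataset (the repository id below is a placeholder, not the actual name of this dataset):

```python
from datasets import load_dataset

# Hypothetical dataset id -- substitute the real repository name for this dump.
ds = load_dataset("username/model-cards-dump", split="train")

# Inspect the columns described above: pipeline_tag, library_name, text,
# metadata, id, last_modified, tags, sha, created_at.
print(ds.column_names)
print(ds[0]["id"], ds[0]["pipeline_tag"])
```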
text-classification
transformers
This model is used to detect **Offensive Content** in **Kannada Code-Mixed language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Kannada (pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and further pretrained with Masked Language Modelling on the target dataset before fine-tuning with Cross-Entropy Loss. This model is the best of several models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-Algorithm-based ensembling of the test predictions achieved the second-highest weighted F1 score on the leaderboard (weighted F1 on the held-out test set: this model 0.73, ensemble 0.74). ### For more details, see our paper Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)". ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @inproceedings{saha-etal-2021-hate, title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection", author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh", booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages", month = apr, year = "2021", address = "Kyiv", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38", pages = "270--276", abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.", } ~~~
{"language": "kn", "license": "apache-2.0"}
Hate-speech-CNERG/deoffxlmr-mono-kannada
null
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "kn", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
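A minimal usage sketch for these offensive-content classifiers (the same pattern applies to the Malayalam and Tamil variants below), assuming the standard `transformers` text-classification pipeline; the example input is illustrative and the label names come from the model's own config, which the card does not document:

```python
from transformers import pipeline

# Kannada mono model from the record above; the input is a made-up
# code-mixed example sentence.
classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/deoffxlmr-mono-kannada",
)
print(classifier("ಇದು ಒಂದು ಉದಾಹರಣೆ sentence"))
# -> [{'label': ..., 'score': ...}]  (labels depend on the model config)
```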
text-classification
transformers
This model is used to detect **Offensive Content** in **Malayalam Code-Mixed language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Malayalam (pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and further pretrained with Masked Language Modelling on the target dataset before fine-tuning with Cross-Entropy Loss. This model is the best of several models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-Algorithm-based ensembling of the test predictions achieved the highest weighted F1 score on the leaderboard (weighted F1 on the held-out test set: this model 0.97, ensemble 0.97). ### For more details, see our paper Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)". ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @inproceedings{saha-etal-2021-hate, title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection", author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh", booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages", month = apr, year = "2021", address = "Kyiv", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38", pages = "270--276", abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.", } ~~~
{"language": "ml", "license": "apache-2.0"}
Hate-speech-CNERG/deoffxlmr-mono-malyalam
null
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "ml", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
This model is used to detect **Offensive Content** in **Tamil Code-Mixed language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Tamil (pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and further pretrained with Masked Language Modelling on the target dataset before fine-tuning with Cross-Entropy Loss. This model is the best of several models trained for the **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-Algorithm-based ensembling of the test predictions achieved the highest weighted F1 score on the leaderboard (weighted F1 on the held-out test set: this model 0.76, ensemble 0.78). ### For more details, see our paper Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)". ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @inproceedings{saha-etal-2021-hate, title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection", author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh", booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages", month = apr, year = "2021", address = "Kyiv", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38", pages = "270--276", abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.", } ~~~
{"language": "ta", "license": "apache-2.0"}
Hate-speech-CNERG/deoffxlmr-mono-tamil
null
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "ta", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Haus11/DialoGPT-small-obiwan
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
HavokMaster/DialogGPT-small-Natasha
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
HavokMaster/Natasha
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Rick Sanchez DialoGPT Model
{"tags": ["conversational"]}
Havokx/DialoGPT-small-Rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Hawvesy/Antonella
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Hax/filipino-text-version1
null
[ "transformers", "pytorch", "electra", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
HayeonLee/distilbert-base-uncased-finetuned-ner
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Haz/PL
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Heavenborne/DialoGPT-medium-hermione
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Heavenborne/hermioneBot
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Heerak/style_transfer
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Hei/Heiinlei
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
null
# My Awesome Model
{"tags": ["conversational"]}
Heldhy/DialoGPT-small-tony
null
[ "conversational", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# My Awesome Model
{"tags": ["conversational"]}
Heldhy/testingAgain
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4568 - Wer: 0.3422 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.3896 | 4.0 | 500 | 1.1573 | 0.8886 | | 0.5667 | 8.0 | 1000 | 0.4841 | 0.4470 | | 0.2126 | 12.0 | 1500 | 0.4201 | 0.3852 | | 0.1235 | 16.0 | 2000 | 0.4381 | 0.3623 | | 0.0909 | 20.0 | 2500 | 0.4784 | 0.3748 | | 0.0611 | 24.0 | 3000 | 0.4390 | 0.3577 | | 0.0454 | 28.0 | 3500 | 0.4568 | 0.3422 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
Heldhy/wav2vec2-base-timit-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
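A sketch of how this fine-tuned wav2vec2 checkpoint might be used for inference, assuming the generic `transformers` ASR pipeline; the audio file path is a placeholder and the model expects 16 kHz mono audio:

```python
from transformers import pipeline

# Hypothetical input file; replace with a real 16 kHz WAV recording.
asr = pipeline(
    "automatic-speech-recognition",
    model="Heldhy/wav2vec2-base-timit-demo-colab",
)
print(asr("sample.wav")["text"])
```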
text2text-generation
transformers
https://imgs.xkcd.com/comics/reassuring.png
{}
Hellisotherpeople/T5_Reassuring_Parables
null
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
fasttext
# debate2vec Word vectors created from a large corpus of competitive debate evidence, plus data extraction / processing scripts. # Usage ``` import fasttext.util ft = fasttext.load_model('debate2vec.bin') ft.get_word_vector('dialectics') ``` # Download Link GitHub won't let me store large files in its repos. * [FastText Vectors Here](https://drive.google.com/file/d/1m-CwPcaIUun4qvg69Hx2gom9dMScuQwS/view?usp=sharing) (~260mb) # About Created from all publicly available Cross Examination Competitive debate evidence posted by the community on [Open Evidence](https://openev.debatecoaches.org/) (from 2013-2020). Search through the original evidence by going to [debate.cards](http://debate.cards/). Stats about this corpus: * 222485 unique documents larger than 200 words (DebateSum plus some additional debate docs that weren't well-formed enough for inclusion into DebateSum) * 107555 unique words (showing up more than 10 times in the corpus) * 101 million total words Stats about debate2vec vectors: * 300 dimensions, minimum number of appearances of a word was 10, trained for 100 epochs with lr set to 0.10 using FastText * lowercased (will release cased) * No subword information The corpus includes the following topics: * 2013-2014 Cuba/Mexico/Venezuela Economic Engagement * 2014-2015 Oceans * 2015-2016 Domestic Surveillance * 2016-2017 China * 2017-2018 Education * 2018-2019 Immigration * 2019-2020 Reducing Arms Sales Other topics that this word vector model will handle extremely well: * Philosophy (Especially Left-Wing / Post-modernist) * Law * Government * Politics Initial release is of fasttext vectors without subword information. Future releases will include fine-tuned GPT-2 and other high-end models as my GPU compute allows. # Screenshots ![](https://github.com/Hellisotherpeople/debate2vec/blob/master/debate2vec.jpg) ![](https://github.com/Hellisotherpeople/debate2vec/blob/master/debate2vec2.jpg) ![](https://github.com/Hellisotherpeople/debate2vec/blob/master/debate2vec3.jpg)
{"library_name": "fasttext", "tags": ["text-classification"], "widget": [{"text": "dialectics", "example_title": "dialectics"}, {"text": "schizoanalysis", "example_title": "schizoanalysis"}, {"text": "praxis", "example_title": "praxis"}, {"text": "topicality", "example_title": "topicality"}]}
Hellisotherpeople/debate2vec
null
[ "fasttext", "text-classification", "region:us" ]
null
2022-03-02T23:29:04+00:00
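Beyond the `get_word_vector` call shown in the card above, a small sketch of querying nearest neighbours with the same FastText model, assuming `debate2vec.bin` has already been downloaded from the Google Drive link:

```python
import fasttext

# Load the released vectors (path assumes the file sits in the working directory).
ft = fasttext.load_model("debate2vec.bin")

# 300-dimensional vector for a single word.
vec = ft.get_word_vector("dialectics")

# Nearest neighbours in the embedding space.
for score, word in ft.get_nearest_neighbors("topicality", k=5):
    print(f"{score:.3f}  {word}")
```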
text2text-generation
transformers
{}
HelloRusk/t5-base-parasci
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
HelloRusk/t5-small-parasci
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-NORTH_EU-NORTH_EU * source languages: de,nl,fy,af,da,fo,is,no,nb,nn,sv * target languages: de,nl,fy,af,da,fo,is,no,nb,nn,sv * OPUS readme: [de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/opus-2020-01-15.zip) * test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/opus-2020-01-15.test.txt) * test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/opus-2020-01-15.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.de.sv | 48.1 | 0.663 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "multilingual", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
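Because this multi-target model requires a sentence-initial `>>id<<` token, a minimal sketch with `MarianMTModel` may help; the German source sentence and the Swedish target token are illustrative, and the exact token form follows the card's `>>id<<` convention:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Select the target language with a sentence-initial >>id<< token (here Swedish).
src = [">>sv<< Das ist ein kleiner Test."]
batch = tokenizer(src, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```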
translation
transformers
### opus-mt-ROMANCE-en * source languages: fr,fr_BE,fr_CA,fr_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es_AR,es_CL,es_CO,es_CR,es_DO,es_EC,es_ES,es_GT,es_HN,es_MX,es_NI,es_PA,es_PE,es_PR,es_SV,es_UY,es_VE,pt,pt_br,pt_BR,pt_PT,gl,lad,an,mwl,it,it_IT,co,nap,scn,vec,sc,ro,la * target languages: en * OPUS readme: [fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/README.md) * dataset: opus * model: transformer * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-04-01.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.zip) * test set translations: [opus-2020-04-01.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.test.txt) * test set scores: [opus-2020-04-01.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.fr.en | 62.2 | 0.750 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ROMANCE-en
null
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "roa", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
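For single-target checkpoints like this one (and the pair-specific `opus-mt-*` models in the records that follow), no language token is needed; a sketch with the plain translation pipeline, using a placeholder French sentence:

```python
from transformers import pipeline

# Any single-target opus-mt checkpoint can be swapped in for the model id below.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ROMANCE-en")
print(translator("Ceci est une phrase d'exemple.")[0]["translation_text"])
```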
translation
transformers
### opus-mt-SCANDINAVIA-SCANDINAVIA * source languages: da,fo,is,no,nb,nn,sv * target languages: da,fo,is,no,nb,nn,sv * OPUS readme: [da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.da.sv | 69.2 | 0.811 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "scandinavia", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### aav-eng * source group: Austro-Asiatic languages * target group: English * OPUS readme: [aav-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aav-eng/README.md) * model: transformer * source language(s): hoc hoc_Latn kha khm khm_Latn mnw vie vie_Hani * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.hoc-eng.hoc.eng | 0.3 | 0.095 | | Tatoeba-test.kha-eng.kha.eng | 1.0 | 0.115 | | Tatoeba-test.khm-eng.khm.eng | 8.9 | 0.271 | | Tatoeba-test.mnw-eng.mnw.eng | 0.8 | 0.118 | | Tatoeba-test.multi.eng | 24.8 | 0.391 | | Tatoeba-test.vie-eng.vie.eng | 38.7 | 0.567 | ### System Info: - hf_name: aav-eng - source_languages: aav - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aav-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['vi', 'km', 'aav', 'en'] - src_constituents: {'mnw', 'vie', 'kha', 'khm', 'vie_Hani', 'khm_Latn', 'hoc_Latn', 'hoc'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.test.txt - src_alpha3: aav - tgt_alpha3: eng - short_pair: aav-en - chrF2_score: 0.391 - bleu: 24.8 - brevity_penalty: 0.968 - ref_len: 36693.0 - src_name: Austro-Asiatic languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: aav - tgt_alpha2: en - prefer_old: False - long_pair: aav-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["vi", "km", "aav", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-aav-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "vi", "km", "aav", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-aed-es * source languages: aed * target languages: es * OPUS readme: [aed-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/aed-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/aed-es/opus-2020-01-15.zip) * test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/aed-es/opus-2020-01-15.test.txt) * test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/aed-es/opus-2020-01-15.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.aed.es | 89.1 | 0.915 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-aed-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "aed", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-af-de * source languages: af * target languages: de * OPUS readme: [af-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-19.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-de/opus-2020-01-19.zip) * test set translations: [opus-2020-01-19.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-de/opus-2020-01-19.test.txt) * test set scores: [opus-2020-01-19.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-de/opus-2020-01-19.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.af.de | 48.6 | 0.681 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-af-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-af-en * source languages: af * target languages: en * OPUS readme: [af-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.af.en | 60.8 | 0.736 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-af-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### afr-epo * source group: Afrikaans * target group: Esperanto * OPUS readme: [afr-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-epo/README.md) * model: transformer-align * source language(s): afr * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.afr.epo | 18.3 | 0.411 | ### System Info: - hf_name: afr-epo - source_languages: afr - target_languages: epo - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-epo/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['af', 'eo'] - src_constituents: {'afr'} - tgt_constituents: {'epo'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.test.txt - src_alpha3: afr - tgt_alpha3: epo - short_pair: af-eo - chrF2_score: 0.41100000000000003 - bleu: 18.3 - brevity_penalty: 0.995 - ref_len: 7517.0 - src_name: Afrikaans - tgt_name: Esperanto - train_date: 2020-06-16 - src_alpha2: af - tgt_alpha2: eo - prefer_old: False - long_pair: afr-epo - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["af", "eo"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-af-eo
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### afr-spa * source group: Afrikaans * target group: Spanish * OPUS readme: [afr-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md) * model: transformer-align * source language(s): afr * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.afr.spa | 49.9 | 0.680 | ### System Info: - hf_name: afr-spa - source_languages: afr - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['af', 'es'] - src_constituents: {'afr'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt - src_alpha3: afr - tgt_alpha3: spa - short_pair: af-es - chrF2_score: 0.68 - bleu: 49.9 - brevity_penalty: 1.0 - ref_len: 2783.0 - src_name: Afrikaans - tgt_name: Spanish - train_date: 2020-06-17 - src_alpha2: af - tgt_alpha2: es - prefer_old: False - long_pair: afr-spa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["af", "es"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-af-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-af-fi * source languages: af * target languages: fi * OPUS readme: [af-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-fi/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fi/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fi/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.af.fi | 32.3 | 0.576 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-af-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-af-fr * source languages: af * target languages: fr * OPUS readme: [af-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.af.fr | 35.3 | 0.543 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-af-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### afr-nld * source group: Afrikaans * target group: Dutch * OPUS readme: [afr-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-nld/README.md) * model: transformer-align * source language(s): afr * target language(s): nld * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.afr.nld | 55.2 | 0.715 | ### System Info: - hf_name: afr-nld - source_languages: afr - target_languages: nld - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-nld/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['af', 'nl'] - src_constituents: {'afr'} - tgt_constituents: {'nld'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.test.txt - src_alpha3: afr - tgt_alpha3: nld - short_pair: af-nl - chrF2_score: 0.715 - bleu: 55.2 - brevity_penalty: 0.995 - ref_len: 6710.0 - src_name: Afrikaans - tgt_name: Dutch - train_date: 2020-06-17 - src_alpha2: af - tgt_alpha2: nl - prefer_old: False - long_pair: afr-nld - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["af", "nl"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-af-nl
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "nl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### afr-rus * source group: Afrikaans * target group: Russian * OPUS readme: [afr-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md) * model: transformer-align * source language(s): afr * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.afr.rus | 38.2 | 0.580 | ### System Info: - hf_name: afr-rus - source_languages: afr - target_languages: rus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['af', 'ru'] - src_constituents: {'afr'} - tgt_constituents: {'rus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt - src_alpha3: afr - tgt_alpha3: rus - short_pair: af-ru - chrF2_score: 0.58 - bleu: 38.2 - brevity_penalty: 0.992 - ref_len: 1213.0 - src_name: Afrikaans - tgt_name: Russian - train_date: 2020-06-17 - src_alpha2: af - tgt_alpha2: ru - prefer_old: False - long_pair: afr-rus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["af", "ru"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-af-ru
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-af-sv * source languages: af * target languages: sv * OPUS readme: [af-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.af.sv | 40.4 | 0.599 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-af-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### afa-afa * source group: Afro-Asiatic languages * target group: Afro-Asiatic languages * OPUS readme: [afa-afa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-afa/README.md) * model: transformer * source language(s): apc ara arq arz heb kab mlt shy_Latn thv * target language(s): apc ara arq arz heb kab mlt shy_Latn thv * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.zip) * test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.test.txt) * test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara-ara.ara.ara | 4.3 | 0.148 | | Tatoeba-test.ara-heb.ara.heb | 31.9 | 0.525 | | Tatoeba-test.ara-kab.ara.kab | 0.3 | 0.120 | | Tatoeba-test.ara-mlt.ara.mlt | 14.0 | 0.428 | | Tatoeba-test.ara-shy.ara.shy | 1.3 | 0.050 | | Tatoeba-test.heb-ara.heb.ara | 17.0 | 0.464 | | Tatoeba-test.heb-kab.heb.kab | 1.9 | 0.104 | | Tatoeba-test.kab-ara.kab.ara | 0.3 | 0.044 | | Tatoeba-test.kab-heb.kab.heb | 5.1 | 0.099 | | Tatoeba-test.kab-shy.kab.shy | 2.2 | 0.009 | | Tatoeba-test.kab-tmh.kab.tmh | 10.7 | 0.007 | | Tatoeba-test.mlt-ara.mlt.ara | 29.1 | 0.498 | | Tatoeba-test.multi.multi | 20.8 | 0.434 | | Tatoeba-test.shy-ara.shy.ara | 1.2 | 0.053 | | Tatoeba-test.shy-kab.shy.kab | 2.0 | 0.134 | | Tatoeba-test.tmh-kab.tmh.kab | 0.0 | 0.047 | ### System Info: - hf_name: afa-afa - source_languages: afa - target_languages: afa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-afa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa'] - src_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'} - tgt_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'} - src_multilingual: True - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.test.txt - src_alpha3: afa - tgt_alpha3: afa - short_pair: afa-afa - chrF2_score: 0.434 - bleu: 20.8 - brevity_penalty: 1.0 - ref_len: 15215.0 - src_name: Afro-Asiatic languages - tgt_name: Afro-Asiatic languages - train_date: 2020-07-26 - src_alpha2: afa - tgt_alpha2: afa - prefer_old: False - long_pair: afa-afa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["so", "ti", "am", "he", "mt", "ar", "afa"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-afa-afa
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "so", "ti", "am", "he", "mt", "ar", "afa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### afa-eng * source group: Afro-Asiatic languages * target group: English * OPUS readme: [afa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-eng/README.md) * model: transformer * source language(s): acm afb amh apc ara arq ary arz hau_Latn heb kab mlt rif_Latn shy_Latn som tir * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.amh-eng.amh.eng | 35.9 | 0.550 | | Tatoeba-test.ara-eng.ara.eng | 36.6 | 0.543 | | Tatoeba-test.hau-eng.hau.eng | 11.9 | 0.327 | | Tatoeba-test.heb-eng.heb.eng | 42.7 | 0.591 | | Tatoeba-test.kab-eng.kab.eng | 4.3 | 0.213 | | Tatoeba-test.mlt-eng.mlt.eng | 44.3 | 0.618 | | Tatoeba-test.multi.eng | 27.1 | 0.464 | | Tatoeba-test.rif-eng.rif.eng | 3.5 | 0.141 | | Tatoeba-test.shy-eng.shy.eng | 0.6 | 0.125 | | Tatoeba-test.som-eng.som.eng | 23.6 | 0.472 | | Tatoeba-test.tir-eng.tir.eng | 13.1 | 0.328 | ### System Info: - hf_name: afa-eng - source_languages: afa - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa', 'en'] - src_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.test.txt - src_alpha3: afa - tgt_alpha3: eng - short_pair: afa-en - chrF2_score: 0.46399999999999997 - bleu: 27.1 - brevity_penalty: 1.0 - ref_len: 69373.0 - src_name: Afro-Asiatic languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: afa - tgt_alpha2: en - prefer_old: False - long_pair: afa-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["so", "ti", "am", "he", "mt", "ar", "afa", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-afa-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "so", "ti", "am", "he", "mt", "ar", "afa", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### alv-eng * source group: Atlantic-Congo languages * target group: English * OPUS readme: [alv-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/alv-eng/README.md) * model: transformer * source language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ewe-eng.ewe.eng | 6.3 | 0.328 | | Tatoeba-test.ful-eng.ful.eng | 0.4 | 0.108 | | Tatoeba-test.ibo-eng.ibo.eng | 4.5 | 0.196 | | Tatoeba-test.kin-eng.kin.eng | 30.7 | 0.511 | | Tatoeba-test.lin-eng.lin.eng | 2.8 | 0.213 | | Tatoeba-test.lug-eng.lug.eng | 3.4 | 0.140 | | Tatoeba-test.multi.eng | 20.9 | 0.376 | | Tatoeba-test.nya-eng.nya.eng | 38.7 | 0.492 | | Tatoeba-test.run-eng.run.eng | 24.5 | 0.417 | | Tatoeba-test.sag-eng.sag.eng | 5.5 | 0.177 | | Tatoeba-test.sna-eng.sna.eng | 26.9 | 0.412 | | Tatoeba-test.swa-eng.swa.eng | 4.9 | 0.196 | | Tatoeba-test.toi-eng.toi.eng | 3.9 | 0.147 | | Tatoeba-test.tso-eng.tso.eng | 76.7 | 0.957 | | Tatoeba-test.umb-eng.umb.eng | 4.0 | 0.195 | | Tatoeba-test.wol-eng.wol.eng | 3.7 | 0.170 | | Tatoeba-test.xho-eng.xho.eng | 38.9 | 0.556 | | Tatoeba-test.yor-eng.yor.eng | 25.1 | 0.412 | | Tatoeba-test.zul-eng.zul.eng | 46.1 | 0.623 | ### System Info: - hf_name: alv-eng - source_languages: alv - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/alv-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv', 'en'] - src_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.test.txt - src_alpha3: alv - tgt_alpha3: eng - short_pair: alv-en - chrF2_score: 0.376 - bleu: 20.9 - brevity_penalty: 1.0 - ref_len: 15208.0 - src_name: Atlantic-Congo languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: alv - tgt_alpha2: en - prefer_old: False - long_pair: alv-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["sn", "rw", "wo", "ig", "sg", "ee", "zu", "lg", "ts", "ln", "ny", "yo", "rn", "xh", "alv", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-alv-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "sn", "rw", "wo", "ig", "sg", "ee", "zu", "lg", "ts", "ln", "ny", "yo", "rn", "xh", "alv", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-am-sv * source languages: am * target languages: sv * OPUS readme: [am-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/am-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.am.sv | 21.0 | 0.377 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-am-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "am", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### ara-deu * source group: Arabic * target group: German * OPUS readme: [ara-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-deu/README.md) * model: transformer-align * source language(s): afb apc ara ara_Latn arq arz * target language(s): deu * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.deu | 44.7 | 0.629 | ### System Info: - hf_name: ara-deu - source_languages: ara - target_languages: deu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-deu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'de'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'deu'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.test.txt - src_alpha3: ara - tgt_alpha3: deu - short_pair: ar-de - chrF2_score: 0.629 - bleu: 44.7 - brevity_penalty: 0.986 - ref_len: 8371.0 - src_name: Arabic - tgt_name: German - train_date: 2020-07-03 - src_alpha2: ar - tgt_alpha2: de - prefer_old: False - long_pair: ara-deu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ar", "de"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ar-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### ara-ell * source group: Arabic * target group: Modern Greek (1453-) * OPUS readme: [ara-ell](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ell/README.md) * model: transformer-align * source language(s): ara arz * target language(s): ell * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.ell | 43.9 | 0.636 | ### System Info: - hf_name: ara-ell - source_languages: ara - target_languages: ell - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ell/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'el'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'ell'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.test.txt - src_alpha3: ara - tgt_alpha3: ell - short_pair: ar-el - chrF2_score: 0.636 - bleu: 43.9 - brevity_penalty: 0.993 - ref_len: 2009.0 - src_name: Arabic - tgt_name: Modern Greek (1453-) - train_date: 2020-07-03 - src_alpha2: ar - tgt_alpha2: el - prefer_old: False - long_pair: ara-ell - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ar", "el"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ar-el
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "el", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ar-en * source languages: ar * target languages: en * OPUS readme: [ar-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ar-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ar-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ar.en | 49.4 | 0.661 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ar-en
null
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "ar", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### ara-epo * source group: Arabic * target group: Esperanto * OPUS readme: [ara-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-epo/README.md) * model: transformer-align * source language(s): apc apc_Latn ara arq arq_Latn arz * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.epo | 18.9 | 0.376 | ### System Info: - hf_name: ara-epo - source_languages: ara - target_languages: epo - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-epo/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'eo'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'epo'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.test.txt - src_alpha3: ara - tgt_alpha3: epo - short_pair: ar-eo - chrF2_score: 0.376 - bleu: 18.9 - brevity_penalty: 0.948 - ref_len: 4506.0 - src_name: Arabic - tgt_name: Esperanto - train_date: 2020-06-16 - src_alpha2: ar - tgt_alpha2: eo - prefer_old: False - long_pair: ara-epo - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ar", "eo"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ar-eo
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### ara-spa * source group: Arabic * target group: Spanish * OPUS readme: [ara-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-spa/README.md) * model: transformer * source language(s): apc apc_Latn ara arq * target language(s): spa * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.spa | 46.0 | 0.641 | ### System Info: - hf_name: ara-spa - source_languages: ara - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'es'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.test.txt - src_alpha3: ara - tgt_alpha3: spa - short_pair: ar-es - chrF2_score: 0.6409999999999999 - bleu: 46.0 - brevity_penalty: 0.9620000000000001 - ref_len: 9708.0 - src_name: Arabic - tgt_name: Spanish - train_date: 2020-07-03 - src_alpha2: ar - tgt_alpha2: es - prefer_old: False - long_pair: ara-spa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ar", "es"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ar-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ar-fr * source languages: ar * target languages: fr * OPUS readme: [ar-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ar-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ar.fr | 43.5 | 0.602 |
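The repository tags advertise TensorFlow weights alongside the PyTorch ones; a sketch of the TF path, assuming `tensorflow` is installed and that `TFMarianMTModel` can load this checkpoint:

~~~python
# Sketch: the same checkpoint through the TensorFlow classes ("tf" weights in the repo tags).
from transformers import MarianTokenizer, TFMarianMTModel

model_name = "Helsinki-NLP/opus-mt-ar-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = TFMarianMTModel.from_pretrained(model_name)

batch = tokenizer(["صباح الخير"], return_tensors="tf", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
~~~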
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ar-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### ara-heb * source group: Arabic * target group: Hebrew * OPUS readme: [ara-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-heb/README.md) * model: transformer * source language(s): apc apc_Latn ara arq arz * target language(s): heb * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.heb | 40.4 | 0.605 | ### System Info: - hf_name: ara-heb - source_languages: ara - target_languages: heb - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-heb/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'he'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'heb'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.test.txt - src_alpha3: ara - tgt_alpha3: heb - short_pair: ar-he - chrF2_score: 0.605 - bleu: 40.4 - brevity_penalty: 1.0 - ref_len: 6801.0 - src_name: Arabic - tgt_name: Hebrew - train_date: 2020-07-03 - src_alpha2: ar - tgt_alpha2: he - prefer_old: False - long_pair: ara-heb - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ar", "he"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ar-he
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "he", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### ara-ita * source group: Arabic * target group: Italian * OPUS readme: [ara-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ita/README.md) * model: transformer * source language(s): ara * target language(s): ita * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.ita | 44.2 | 0.658 | ### System Info: - hf_name: ara-ita - source_languages: ara - target_languages: ita - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ita/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'it'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'ita'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.test.txt - src_alpha3: ara - tgt_alpha3: ita - short_pair: ar-it - chrF2_score: 0.6579999999999999 - bleu: 44.2 - brevity_penalty: 0.9890000000000001 - ref_len: 1495.0 - src_name: Arabic - tgt_name: Italian - train_date: 2020-07-03 - src_alpha2: ar - tgt_alpha2: it - prefer_old: False - long_pair: ara-ita - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ar", "it"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ar-it
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### ara-pol * source group: Arabic * target group: Polish * OPUS readme: [ara-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-pol/README.md) * model: transformer * source language(s): ara arz * target language(s): pol * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.pol | 38.0 | 0.623 | ### System Info: - hf_name: ara-pol - source_languages: ara - target_languages: pol - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-pol/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'pl'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'pol'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.test.txt - src_alpha3: ara - tgt_alpha3: pol - short_pair: ar-pl - chrF2_score: 0.623 - bleu: 38.0 - brevity_penalty: 0.948 - ref_len: 1171.0 - src_name: Arabic - tgt_name: Polish - train_date: 2020-07-03 - src_alpha2: ar - tgt_alpha2: pl - prefer_old: False - long_pair: ara-pol - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ar", "pl"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ar-pl
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "pl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### ara-rus * source group: Arabic * target group: Russian * OPUS readme: [ara-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-rus/README.md) * model: transformer * source language(s): apc ara arz * target language(s): rus * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.rus | 42.5 | 0.605 | ### System Info: - hf_name: ara-rus - source_languages: ara - target_languages: rus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-rus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'ru'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'rus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.test.txt - src_alpha3: ara - tgt_alpha3: rus - short_pair: ar-ru - chrF2_score: 0.605 - bleu: 42.5 - brevity_penalty: 0.97 - ref_len: 21830.0 - src_name: Arabic - tgt_name: Russian - train_date: 2020-07-03 - src_alpha2: ar - tgt_alpha2: ru - prefer_old: False - long_pair: ara-rus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ar", "ru"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ar-ru
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### ara-tur * source group: Arabic * target group: Turkish * OPUS readme: [ara-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md) * model: transformer * source language(s): apc_Latn ara ara_Latn arq_Latn * target language(s): tur * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.tur | 33.1 | 0.619 | ### System Info: - hf_name: ara-tur - source_languages: ara - target_languages: tur - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'tr'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'tur'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt - src_alpha3: ara - tgt_alpha3: tur - short_pair: ar-tr - chrF2_score: 0.619 - bleu: 33.1 - brevity_penalty: 0.9570000000000001 - ref_len: 6949.0 - src_name: Arabic - tgt_name: Turkish - train_date: 2020-07-03 - src_alpha2: ar - tgt_alpha2: tr - prefer_old: False - long_pair: ara-tur - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ar", "tr"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ar-tr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### art-eng

* source group: Artificial languages
* target group: English
* OPUS readme: [art-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/art-eng/README.md)
* model: transformer
* source language(s): afh_Latn avk_Latn dws_Latn epo ido ido_Latn ile_Latn ina_Latn jbo jbo_Cyrl jbo_Latn ldn_Latn lfn_Cyrl lfn_Latn nov_Latn qya qya_Latn sjn_Latn tlh_Latn tzl tzl_Latn vol_Latn
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afh-eng.afh.eng | 1.2 | 0.099 |
| Tatoeba-test.avk-eng.avk.eng | 0.4 | 0.105 |
| Tatoeba-test.dws-eng.dws.eng | 1.6 | 0.076 |
| Tatoeba-test.epo-eng.epo.eng | 34.6 | 0.530 |
| Tatoeba-test.ido-eng.ido.eng | 12.7 | 0.310 |
| Tatoeba-test.ile-eng.ile.eng | 4.6 | 0.218 |
| Tatoeba-test.ina-eng.ina.eng | 5.8 | 0.254 |
| Tatoeba-test.jbo-eng.jbo.eng | 0.2 | 0.115 |
| Tatoeba-test.ldn-eng.ldn.eng | 0.7 | 0.083 |
| Tatoeba-test.lfn-eng.lfn.eng | 1.8 | 0.172 |
| Tatoeba-test.multi.eng | 11.6 | 0.287 |
| Tatoeba-test.nov-eng.nov.eng | 5.1 | 0.215 |
| Tatoeba-test.qya-eng.qya.eng | 0.7 | 0.113 |
| Tatoeba-test.sjn-eng.sjn.eng | 0.9 | 0.090 |
| Tatoeba-test.tlh-eng.tlh.eng | 0.2 | 0.124 |
| Tatoeba-test.tzl-eng.tzl.eng | 1.4 | 0.109 |
| Tatoeba-test.vol-eng.vol.eng | 0.5 | 0.115 |

### System Info:
- hf_name: art-eng
- source_languages: art
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/art-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'io', 'art', 'en']
- src_constituents: {'sjn_Latn', 'tzl', 'vol_Latn', 'qya', 'tlh_Latn', 'ile_Latn', 'ido_Latn', 'tzl_Latn', 'jbo_Cyrl', 'jbo', 'lfn_Latn', 'nov_Latn', 'dws_Latn', 'ldn_Latn', 'avk_Latn', 'lfn_Cyrl', 'ina_Latn', 'jbo_Latn', 'epo', 'afh_Latn', 'qya_Latn', 'ido'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.test.txt
- src_alpha3: art
- tgt_alpha3: eng
- short_pair: art-en
- chrF2_score: 0.287
- bleu: 11.6
- brevity_penalty: 1.0
- ref_len: 73037.0
- src_name: Artificial languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: art
- tgt_alpha2: en
- prefer_old: False
- long_pair: art-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
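Because `src_multilingual` is `True`, a single checkpoint serves all of the constructed source languages listed under `src_constituents`; since the target side is English only, no target-language prefix token should be required. A sketch under those assumptions, with illustrative Esperanto and Ido inputs:

~~~python
# Sketch: one multilingual-source checkpoint, several constructed languages -> English.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-art-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = [
    "La suno brilas hodiaŭ.",  # Esperanto (epo)
    "Me amas libri.",          # Ido (ido)
]
batch = tokenizer(sentences, return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
~~~

As the benchmark table suggests, output quality varies widely across the constituent languages (relatively strong for Esperanto, very weak for Klingon or Quenya).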
{"language": ["eo", "io", "art", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-art-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "eo", "io", "art", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ase-de * source languages: ase * target languages: de * OPUS readme: [ase-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-de/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-de/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-de/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ase.de | 27.2 | 0.478 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ase-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ase", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ase-en * source languages: ase * target languages: en * OPUS readme: [ase-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ase.en | 99.5 | 0.997 |
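The JW300 score above is unusually high, which is easier to interpret by looking at the released test-set translations directly; a sketch for fetching the linked file, assuming the OPUS-MT object storage URL is still reachable and that `requests` is installed:

~~~python
# Sketch: download the released ase-en test-set translations and inspect the first lines.
import requests

url = "https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.test.txt"
response = requests.get(url, timeout=30)
response.raise_for_status()
for line in response.text.splitlines()[:12]:
    print(line)
~~~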
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ase-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ase", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ase-es * source languages: ase * target languages: es * OPUS readme: [ase-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ase.es | 31.7 | 0.498 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ase-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ase", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ase-fr * source languages: ase * target languages: fr * OPUS readme: [ase-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-fr/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-fr/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-fr/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ase.fr | 37.8 | 0.553 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ase-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ase", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ase-sv * source languages: ase * target languages: sv * OPUS readme: [ase-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-sv/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-sv/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-sv/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ase.sv | 39.7 | 0.576 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ase-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ase", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### aze-eng * source group: Azerbaijani * target group: English * OPUS readme: [aze-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-eng/README.md) * model: transformer-align * source language(s): aze_Latn * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.aze.eng | 31.9 | 0.490 | ### System Info: - hf_name: aze-eng - source_languages: aze - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['az', 'en'] - src_constituents: {'aze_Latn'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.test.txt - src_alpha3: aze - tgt_alpha3: eng - short_pair: az-en - chrF2_score: 0.49 - bleu: 31.9 - brevity_penalty: 0.997 - ref_len: 16165.0 - src_name: Azerbaijani - tgt_name: English - train_date: 2020-06-16 - src_alpha2: az - tgt_alpha2: en - prefer_old: False - long_pair: aze-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
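For lower-resource pairs such as Azerbaijani-English, decoding settings can noticeably affect output quality; a sketch with explicit beam search and optional GPU placement (the values are illustrative, and the checkpoint's own generation defaults are usually reasonable):

~~~python
# Sketch: explicit decoding settings and device placement for the az-en checkpoint.
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-az-en"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name).to(device)

batch = tokenizer(["Bakı Azərbaycanın paytaxtıdır."], return_tensors="pt", padding=True).to(device)
generated = model.generate(**batch, num_beams=5, max_length=128)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
~~~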
{"language": ["az", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-az-en
null
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "az", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### aze-spa * source group: Azerbaijani * target group: Spanish * OPUS readme: [aze-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-spa/README.md) * model: transformer-align * source language(s): aze_Latn * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.aze.spa | 11.8 | 0.346 | ### System Info: - hf_name: aze-spa - source_languages: aze - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['az', 'es'] - src_constituents: {'aze_Latn'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.test.txt - src_alpha3: aze - tgt_alpha3: spa - short_pair: az-es - chrF2_score: 0.34600000000000003 - bleu: 11.8 - brevity_penalty: 1.0 - ref_len: 1144.0 - src_name: Azerbaijani - tgt_name: Spanish - train_date: 2020-06-16 - src_alpha2: az - tgt_alpha2: es - prefer_old: False - long_pair: aze-spa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["az", "es"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-az-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "az", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### aze-tur * source group: Azerbaijani * target group: Turkish * OPUS readme: [aze-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-tur/README.md) * model: transformer-align * source language(s): aze_Latn * target language(s): tur * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.aze.tur | 24.4 | 0.529 | ### System Info: - hf_name: aze-tur - source_languages: aze - target_languages: tur - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-tur/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['az', 'tr'] - src_constituents: {'aze_Latn'} - tgt_constituents: {'tur'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.test.txt - src_alpha3: aze - tgt_alpha3: tur - short_pair: az-tr - chrF2_score: 0.529 - bleu: 24.4 - brevity_penalty: 0.956 - ref_len: 5380.0 - src_name: Azerbaijani - tgt_name: Turkish - train_date: 2020-06-16 - src_alpha2: az - tgt_alpha2: tr - prefer_old: False - long_pair: aze-tur - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["az", "tr"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-az-tr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "az", "tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### bat-eng

* source group: Baltic languages
* target group: English
* OPUS readme: [bat-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bat-eng/README.md)
* model: transformer
* source language(s): lav lit ltg prg_Latn sgs
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2017-enlv-laveng.lav.eng | 27.5 | 0.566 |
| newsdev2019-enlt-liteng.lit.eng | 27.8 | 0.557 |
| newstest2017-enlv-laveng.lav.eng | 21.1 | 0.512 |
| newstest2019-lten-liteng.lit.eng | 30.2 | 0.592 |
| Tatoeba-test.lav-eng.lav.eng | 51.5 | 0.687 |
| Tatoeba-test.lit-eng.lit.eng | 55.1 | 0.703 |
| Tatoeba-test.multi.eng | 50.6 | 0.662 |
| Tatoeba-test.prg-eng.prg.eng | 1.0 | 0.159 |
| Tatoeba-test.sgs-eng.sgs.eng | 16.5 | 0.265 |

### System Info:
- hf_name: bat-eng
- source_languages: bat
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bat-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['lt', 'lv', 'bat', 'en']
- src_constituents: {'lit', 'lav', 'prg_Latn', 'ltg', 'sgs'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.test.txt
- src_alpha3: bat
- tgt_alpha3: eng
- short_pair: bat-en
- chrF2_score: 0.662
- bleu: 50.6
- brevity_penalty: 0.989
- ref_len: 30772.0
- src_name: Baltic languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: bat
- tgt_alpha2: en
- prefer_old: False
- long_pair: bat-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
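The BLEU and chr-F columns above come from the OPUS-MT evaluation pipeline; a local re-check can be sketched with `sacrebleu`, keeping in mind that test-set versions and preprocessing choices mean a rerun will not necessarily reproduce the table exactly (the hypothesis/reference pairs below are placeholders, not the Tatoeba data):

~~~python
# Sketch: compute corpus BLEU and chrF for system outputs with sacrebleu.
import sacrebleu

hyps = ["The cat sits on the mat.", "I like reading books."]
refs = ["The cat is sitting on the mat.", "I love reading books."]

bleu = sacrebleu.corpus_bleu(hyps, [refs])
chrf = sacrebleu.corpus_chrf(hyps, [refs])
print(f"BLEU  = {bleu.score:.1f}")
print(f"chr-F = {chrf.score / 100:.3f}  # the table above reports chr-F on a 0-1 scale")
~~~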
{"language": ["lt", "lv", "bat", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bat-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lt", "lv", "bat", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bcl-de * source languages: bcl * target languages: de * OPUS readme: [bcl-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-de/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-de/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-de/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bcl.de | 30.3 | 0.510 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bcl-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bcl", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bcl-en * source languages: bcl * target languages: en * OPUS readme: [bcl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-02-11.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-en/opus-2020-02-11.zip) * test set translations: [opus-2020-02-11.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-en/opus-2020-02-11.test.txt) * test set scores: [opus-2020-02-11.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-en/opus-2020-02-11.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bcl.en | 56.1 | 0.697 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bcl-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bcl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bcl-es * source languages: bcl * target languages: es * OPUS readme: [bcl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-es/opus-2020-01-15.zip) * test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-es/opus-2020-01-15.test.txt) * test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-es/opus-2020-01-15.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bcl.es | 37.0 | 0.551 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bcl-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bcl", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bcl-fi * source languages: bcl * target languages: fi * OPUS readme: [bcl-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fi/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fi/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fi/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bcl.fi | 33.3 | 0.573 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bcl-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bcl", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bcl-fr * source languages: bcl * target languages: fr * OPUS readme: [bcl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bcl.fr | 35.0 | 0.527 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bcl-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bcl", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bcl-sv * source languages: bcl * target languages: sv * OPUS readme: [bcl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bcl.sv | 38.0 | 0.565 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bcl-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bcl", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### bel-spa * source group: Belarusian * target group: Spanish * OPUS readme: [bel-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bel-spa/README.md) * model: transformer-align * source language(s): bel bel_Latn * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.bel.spa | 11.8 | 0.272 | ### System Info: - hf_name: bel-spa - source_languages: bel - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bel-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['be', 'es'] - src_constituents: {'bel', 'bel_Latn'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.test.txt - src_alpha3: bel - tgt_alpha3: spa - short_pair: be-es - chrF2_score: 0.272 - bleu: 11.8 - brevity_penalty: 0.892 - ref_len: 1412.0 - src_name: Belarusian - tgt_name: Spanish - train_date: 2020-06-16 - src_alpha2: be - tgt_alpha2: es - prefer_old: False - long_pair: bel-spa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["be", "es"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-be-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "be", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bem-en * source languages: bem * target languages: en * OPUS readme: [bem-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bem.en | 33.4 | 0.491 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bem-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bem", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bem-es * source languages: bem * target languages: es * OPUS readme: [bem-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-es/opus-2020-01-15.zip) * test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-es/opus-2020-01-15.test.txt) * test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-es/opus-2020-01-15.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bem.es | 22.8 | 0.403 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bem-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bem", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bem-fi * source languages: bem * target languages: fi * OPUS readme: [bem-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bem.fi | 22.8 | 0.439 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bem-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bem", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bem-fr * source languages: bem * target languages: fr * OPUS readme: [bem-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-fr/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fr/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fr/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bem.fr | 25.0 | 0.417 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bem-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bem", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bem-sv * source languages: bem * target languages: sv * OPUS readme: [bem-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bem.sv | 25.6 | 0.434 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bem-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bem", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ber-en * source languages: ber * target languages: en * OPUS readme: [ber-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ber.en | 37.3 | 0.566 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ber-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ber", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ber-es * source languages: ber * target languages: es * OPUS readme: [ber-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.zip) * test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.test.txt) * test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ber.es | 33.8 | 0.487 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ber-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ber", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ber-fr * source languages: ber * target languages: fr * OPUS readme: [ber-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-fr/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-fr/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-fr/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ber.fr | 60.2 | 0.754 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ber-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ber", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### bul-deu * source group: Bulgarian * target group: German * OPUS readme: [bul-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-deu/README.md) * model: transformer * source language(s): bul * target language(s): deu * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.bul.deu | 49.3 | 0.676 | ### System Info: - hf_name: bul-deu - source_languages: bul - target_languages: deu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-deu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bg', 'de'] - src_constituents: {'bul', 'bul_Latn'} - tgt_constituents: {'deu'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.test.txt - src_alpha3: bul - tgt_alpha3: deu - short_pair: bg-de - chrF2_score: 0.6759999999999999 - bleu: 49.3 - brevity_penalty: 1.0 - ref_len: 2218.0 - src_name: Bulgarian - tgt_name: German - train_date: 2020-07-03 - src_alpha2: bg - tgt_alpha2: de - prefer_old: False - long_pair: bul-deu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
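Related checkpoints (for example, every Bulgarian-source pair in this family) can be discovered programmatically; a sketch using the `huggingface_hub` client, with the caveat that the exact `list_models` arguments and result attributes depend on the installed client version:

~~~python
# Sketch: enumerate Helsinki-NLP opus-mt checkpoints with a Bulgarian source side.
from huggingface_hub import list_models

for info in list_models(author="Helsinki-NLP", search="opus-mt-bg-"):
    print(info.id)  # on older clients this attribute may be `modelId`
~~~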
{"language": ["bg", "de"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bg-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bg-en * source languages: bg * target languages: en * OPUS readme: [bg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.bg.en | 59.4 | 0.727 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bg-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### bul-epo * source group: Bulgarian * target group: Esperanto * OPUS readme: [bul-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-epo/README.md) * model: transformer-align * source language(s): bul * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.bul.epo | 24.5 | 0.438 | ### System Info: - hf_name: bul-epo - source_languages: bul - target_languages: epo - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-epo/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bg', 'eo'] - src_constituents: {'bul', 'bul_Latn'} - tgt_constituents: {'epo'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.test.txt - src_alpha3: bul - tgt_alpha3: epo - short_pair: bg-eo - chrF2_score: 0.43799999999999994 - bleu: 24.5 - brevity_penalty: 0.9670000000000001 - ref_len: 4043.0 - src_name: Bulgarian - tgt_name: Esperanto - train_date: 2020-06-16 - src_alpha2: bg - tgt_alpha2: eo - prefer_old: False - long_pair: bul-epo - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["bg", "eo"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bg-eo
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### bul-spa * source group: Bulgarian * target group: Spanish * OPUS readme: [bul-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-spa/README.md) * model: transformer * source language(s): bul * target language(s): spa * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.bul.spa | 49.1 | 0.661 | ### System Info: - hf_name: bul-spa - source_languages: bul - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bg', 'es'] - src_constituents: {'bul', 'bul_Latn'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.test.txt - src_alpha3: bul - tgt_alpha3: spa - short_pair: bg-es - chrF2_score: 0.6609999999999999 - bleu: 49.1 - brevity_penalty: 0.992 - ref_len: 1783.0 - src_name: Bulgarian - tgt_name: Spanish - train_date: 2020-07-03 - src_alpha2: bg - tgt_alpha2: es - prefer_old: False - long_pair: bul-spa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["bg", "es"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bg-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bg-fi * source languages: bg * target languages: fi * OPUS readme: [bg-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bg.fi | 23.7 | 0.505 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bg-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### bul-fra * source group: Bulgarian * target group: French * OPUS readme: [bul-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-fra/README.md) * model: transformer * source language(s): bul * target language(s): fra * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.bul.fra | 53.7 | 0.693 | ### System Info: - hf_name: bul-fra - source_languages: bul - target_languages: fra - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-fra/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bg', 'fr'] - src_constituents: {'bul', 'bul_Latn'} - tgt_constituents: {'fra'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.test.txt - src_alpha3: bul - tgt_alpha3: fra - short_pair: bg-fr - chrF2_score: 0.693 - bleu: 53.7 - brevity_penalty: 0.977 - ref_len: 3669.0 - src_name: Bulgarian - tgt_name: French - train_date: 2020-07-03 - src_alpha2: bg - tgt_alpha2: fr - prefer_old: False - long_pair: bul-fra - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["bg", "fr"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bg-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### bul-ita * source group: Bulgarian * target group: Italian * OPUS readme: [bul-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ita/README.md) * model: transformer * source language(s): bul * target language(s): ita * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.bul.ita | 43.1 | 0.653 | ### System Info: - hf_name: bul-ita - source_languages: bul - target_languages: ita - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ita/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bg', 'it'] - src_constituents: {'bul', 'bul_Latn'} - tgt_constituents: {'ita'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.test.txt - src_alpha3: bul - tgt_alpha3: ita - short_pair: bg-it - chrF2_score: 0.653 - bleu: 43.1 - brevity_penalty: 0.987 - ref_len: 16951.0 - src_name: Bulgarian - tgt_name: Italian - train_date: 2020-07-03 - src_alpha2: bg - tgt_alpha2: it - prefer_old: False - long_pair: bul-ita - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["bg", "it"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bg-it
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### bul-rus * source group: Bulgarian * target group: Russian * OPUS readme: [bul-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-rus/README.md) * model: transformer * source language(s): bul bul_Latn * target language(s): rus * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.bul.rus | 48.5 | 0.691 | ### System Info: - hf_name: bul-rus - source_languages: bul - target_languages: rus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-rus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bg', 'ru'] - src_constituents: {'bul', 'bul_Latn'} - tgt_constituents: {'rus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.test.txt - src_alpha3: bul - tgt_alpha3: rus - short_pair: bg-ru - chrF2_score: 0.691 - bleu: 48.5 - brevity_penalty: 1.0 - ref_len: 7870.0 - src_name: Bulgarian - tgt_name: Russian - train_date: 2020-07-03 - src_alpha2: bg - tgt_alpha2: ru - prefer_old: False - long_pair: bul-rus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["bg", "ru"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bg-ru
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bg-sv * source languages: bg * target languages: sv * OPUS readme: [bg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bg.sv | 29.1 | 0.494 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bg-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### bul-tur * source group: Bulgarian * target group: Turkish * OPUS readme: [bul-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-tur/README.md) * model: transformer * source language(s): bul bul_Latn * target language(s): tur * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.bul.tur | 40.9 | 0.687 | ### System Info: - hf_name: bul-tur - source_languages: bul - target_languages: tur - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-tur/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bg', 'tr'] - src_constituents: {'bul', 'bul_Latn'} - tgt_constituents: {'tur'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.test.txt - src_alpha3: bul - tgt_alpha3: tur - short_pair: bg-tr - chrF2_score: 0.687 - bleu: 40.9 - brevity_penalty: 0.946 - ref_len: 4948.0 - src_name: Bulgarian - tgt_name: Turkish - train_date: 2020-07-03 - src_alpha2: bg - tgt_alpha2: tr - prefer_old: False - long_pair: bul-tur - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["bg", "tr"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bg-tr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### bul-ukr * source group: Bulgarian * target group: Ukrainian * OPUS readme: [bul-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ukr/README.md) * model: transformer-align * source language(s): bul * target language(s): ukr * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.bul.ukr | 49.2 | 0.683 | ### System Info: - hf_name: bul-ukr - source_languages: bul - target_languages: ukr - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ukr/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bg', 'uk'] - src_constituents: {'bul', 'bul_Latn'} - tgt_constituents: {'ukr'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.test.txt - src_alpha3: bul - tgt_alpha3: ukr - short_pair: bg-uk - chrF2_score: 0.6829999999999999 - bleu: 49.2 - brevity_penalty: 0.983 - ref_len: 4932.0 - src_name: Bulgarian - tgt_name: Ukrainian - train_date: 2020-06-17 - src_alpha2: bg - tgt_alpha2: uk - prefer_old: False - long_pair: bul-ukr - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["bg", "uk"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bg-uk
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bg", "uk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bi-en * source languages: bi * target languages: en * OPUS readme: [bi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bi.en | 30.3 | 0.458 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bi-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bi", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bi-es * source languages: bi * target languages: es * OPUS readme: [bi-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-es/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-es/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-es/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bi.es | 21.1 | 0.388 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bi-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bi", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bi-fr * source languages: bi * target languages: fr * OPUS readme: [bi-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-fr/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-fr/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-fr/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bi.fr | 21.5 | 0.382 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bi-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bi", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bi-sv * source languages: bi * target languages: sv * OPUS readme: [bi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bi.sv | 22.7 | 0.403 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bi-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bi", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### ben-eng * source group: Bengali * target group: English * OPUS readme: [ben-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ben-eng/README.md) * model: transformer-align * source language(s): ben * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ben.eng | 49.7 | 0.641 | ### System Info: - hf_name: ben-eng - source_languages: ben - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ben-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bn', 'en'] - src_constituents: {'ben'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.test.txt - src_alpha3: ben - tgt_alpha3: eng - short_pair: bn-en - chrF2_score: 0.6409999999999999 - bleu: 49.7 - brevity_penalty: 0.976 - ref_len: 13978.0 - src_name: Bengali - tgt_name: English - train_date: 2020-06-17 - src_alpha2: bn - tgt_alpha2: en - prefer_old: False - long_pair: ben-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["bn", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bn-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bn", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### bnt-eng * source group: Bantu languages * target group: English * OPUS readme: [bnt-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bnt-eng/README.md) * model: transformer * source language(s): kin lin lug nya run sna swh toi_Latn tso umb xho zul * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kin-eng.kin.eng | 31.7 | 0.481 | | Tatoeba-test.lin-eng.lin.eng | 8.3 | 0.271 | | Tatoeba-test.lug-eng.lug.eng | 5.3 | 0.128 | | Tatoeba-test.multi.eng | 23.1 | 0.394 | | Tatoeba-test.nya-eng.nya.eng | 38.3 | 0.527 | | Tatoeba-test.run-eng.run.eng | 26.6 | 0.431 | | Tatoeba-test.sna-eng.sna.eng | 27.5 | 0.440 | | Tatoeba-test.swa-eng.swa.eng | 4.6 | 0.195 | | Tatoeba-test.toi-eng.toi.eng | 16.2 | 0.342 | | Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 | | Tatoeba-test.umb-eng.umb.eng | 8.4 | 0.231 | | Tatoeba-test.xho-eng.xho.eng | 37.2 | 0.554 | | Tatoeba-test.zul-eng.zul.eng | 40.9 | 0.576 | ### System Info: - hf_name: bnt-eng - source_languages: bnt - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bnt-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt', 'en'] - src_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi_Latn', 'umb'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.test.txt - src_alpha3: bnt - tgt_alpha3: eng - short_pair: bnt-en - chrF2_score: 0.39399999999999996 - bleu: 23.1 - brevity_penalty: 1.0 - ref_len: 14565.0 - src_name: Bantu languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: bnt - tgt_alpha2: en - prefer_old: False - long_pair: bnt-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
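Because the source side of this model mixes several Bantu languages while the target side is English only, no target-language token needs to be prepended at inference time. A minimal sketch (an addition, with an illustrative Swahili input; not part of the original card):

~~~python
# Hedged sketch: the same Marian classes serve the multilingual bnt->en checkpoint.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-bnt-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Any of the supported source languages (e.g. Swahili, Zulu, Shona) can be passed directly.
batch = tokenizer(["Habari ya asubuhi."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
~~~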
{"language": ["sn", "zu", "rw", "lg", "ts", "ln", "ny", "xh", "rn", "bnt", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bnt-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "sn", "zu", "rw", "lg", "ts", "ln", "ny", "xh", "rn", "bnt", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bzs-en * source languages: bzs * target languages: en * OPUS readme: [bzs-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bzs.en | 44.5 | 0.605 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bzs-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bzs", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bzs-es * source languages: bzs * target languages: es * OPUS readme: [bzs-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-es/opus-2020-01-15.zip) * test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-es/opus-2020-01-15.test.txt) * test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-es/opus-2020-01-15.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bzs.es | 28.1 | 0.464 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bzs-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bzs", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bzs-fi * source languages: bzs * target languages: fi * OPUS readme: [bzs-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fi/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fi/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fi/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bzs.fi | 24.7 | 0.464 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bzs-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bzs", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bzs-fr * source languages: bzs * target languages: fr * OPUS readme: [bzs-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fr/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fr/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fr/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bzs.fr | 30.0 | 0.479 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bzs-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bzs", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-bzs-sv * source languages: bzs * target languages: sv * OPUS readme: [bzs-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bzs.sv | 30.7 | 0.489 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-bzs-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bzs", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### cat-deu * source group: Catalan * target group: German * OPUS readme: [cat-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-deu/README.md) * model: transformer-align * source language(s): cat * target language(s): deu * model: transformer-align * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.cat.deu | 39.5 | 0.593 | ### System Info: - hf_name: cat-deu - source_languages: cat - target_languages: deu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-deu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ca', 'de'] - src_constituents: {'cat'} - tgt_constituents: {'deu'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.test.txt - src_alpha3: cat - tgt_alpha3: deu - short_pair: ca-de - chrF2_score: 0.593 - bleu: 39.5 - brevity_penalty: 1.0 - ref_len: 5643.0 - src_name: Catalan - tgt_name: German - train_date: 2020-06-16 - src_alpha2: ca - tgt_alpha2: de - prefer_old: False - long_pair: cat-deu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ca", "de"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ca-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ca", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
translation
transformers
### opus-mt-ca-en * source languages: ca * target languages: en * OPUS readme: [ca-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ca-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ca.en | 51.4 | 0.678 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ca-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ca", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00