urj-eng

  • source group: Uralic languages

  • target group: English

  • OPUS readme: urj-eng

  • model: transformer

  • source language(s): est fin fkv_Latn hun izh kpv krl liv_Latn mdf mhr myv sma sme udm vro

  • target language(s): eng

  • pre-processing: normalization + SentencePiece (spm32k,spm32k)

  • download original weights: opus2m-2020-08-01.zip

  • test set translations: opus2m-2020-08-01.test.txt

  • test set scores: opus2m-2020-08-01.eval.txt
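
A minimal usage sketch with the Hugging Face `transformers` library is shown below. It assumes `transformers` and `sentencepiece` are installed and loads the checkpoint by its repository name; the example sentences are illustrative only, not taken from the training or test data.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-urj-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Any of the listed Uralic source languages can be mixed in a batch.
# The target side is always English, so no target-language token should be needed.
src_texts = ["Hyvää huomenta!", "Tere hommikust!"]  # Finnish, Estonian (illustrative)

batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```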

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| newsdev2015-enfi-fineng.fin.eng | 22.7 | 0.511 |
| newsdev2018-enet-esteng.est.eng | 26.6 | 0.545 |
| newssyscomb2009-huneng.hun.eng | 21.3 | 0.493 |
| newstest2009-huneng.hun.eng | 20.1 | 0.487 |
| newstest2015-enfi-fineng.fin.eng | 23.9 | 0.521 |
| newstest2016-enfi-fineng.fin.eng | 25.8 | 0.542 |
| newstest2017-enfi-fineng.fin.eng | 28.9 | 0.562 |
| newstest2018-enet-esteng.est.eng | 27.0 | 0.552 |
| newstest2018-enfi-fineng.fin.eng | 21.2 | 0.492 |
| newstest2019-fien-fineng.fin.eng | 25.3 | 0.531 |
| newstestB2016-enfi-fineng.fin.eng | 21.3 | 0.500 |
| newstestB2017-enfi-fineng.fin.eng | 24.4 | 0.528 |
| newstestB2017-fien-fineng.fin.eng | 24.4 | 0.528 |
| Tatoeba-test.chm-eng.chm.eng | 0.8 | 0.131 |
| Tatoeba-test.est-eng.est.eng | 34.5 | 0.526 |
| Tatoeba-test.fin-eng.fin.eng | 28.1 | 0.485 |
| Tatoeba-test.fkv-eng.fkv.eng | 6.8 | 0.335 |
| Tatoeba-test.hun-eng.hun.eng | 25.1 | 0.452 |
| Tatoeba-test.izh-eng.izh.eng | 11.6 | 0.224 |
| Tatoeba-test.kom-eng.kom.eng | 2.4 | 0.110 |
| Tatoeba-test.krl-eng.krl.eng | 18.6 | 0.365 |
| Tatoeba-test.liv-eng.liv.eng | 0.5 | 0.078 |
| Tatoeba-test.mdf-eng.mdf.eng | 1.5 | 0.117 |
| Tatoeba-test.multi.eng | 47.8 | 0.646 |
| Tatoeba-test.myv-eng.myv.eng | 0.5 | 0.101 |
| Tatoeba-test.sma-eng.sma.eng | 1.2 | 0.110 |
| Tatoeba-test.sme-eng.sme.eng | 1.5 | 0.147 |
| Tatoeba-test.udm-eng.udm.eng | 1.0 | 0.130 |
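
The BLEU and chr-F numbers above come from the evaluation files linked in this card. As a hedged sketch (not the exact evaluation pipeline used for these scores), similar corpus-level metrics can be computed with the `sacrebleu` Python package; the sentences below are placeholders, and note that sacrebleu reports chrF on a 0-100 scale while the table uses 0-1.

```python
import sacrebleu

# Placeholder system outputs and a single reference stream (illustrative only).
hypotheses = ["The weather is nice today.", "I will come tomorrow."]
references = [["The weather is nice today.", "I am coming tomorrow."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)   # BLEU on a 0-100 scale
chrf = sacrebleu.corpus_chrf(hypotheses, references)   # chrF on a 0-100 scale
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score:.3f}")
```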
