|
--- |
|
language: sem |
|
tags: |
|
- translation |
|
|
|
license: apache-2.0 |
|
--- |
|
|
|
### sem-sem |
|
|
|
* source languages: sem |
|
* target languages: sem |
|
* OPUS readme: [sem-sem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sem-sem/README.md) |
|
|
|
* dataset: opus |
|
* model: transformer |
|
* source language(s): apc ara arq arz heb mlt |
|
* target language(s): apc ara arq arz heb mlt |
|
* pre-processing: normalization + SentencePiece (spm32k,spm32k) |
|
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID); see the usage sketch after this list
|
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.zip) |
|
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.test.txt) |
|
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.eval.txt) |
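
A minimal usage sketch with the 🤗 Transformers Marian classes, showing how the sentence-initial `>>id<<` token selects the target language. The Hub model id `Helsinki-NLP/opus-mt-sem-sem` is an assumption here and should be adjusted to the actual repository name of this checkpoint.

```python
from transformers import MarianMTModel, MarianTokenizer

# Assumed Hub id for this checkpoint; adjust if the repository name differs.
model_name = "Helsinki-NLP/opus-mt-sem-sem"

tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The target language is chosen with a sentence-initial >>id<< token,
# e.g. >>heb<< to translate into Hebrew.
src_text = [">>heb<< مرحبا بالعالم"]

batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```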
|
|
|
## Benchmarks |
|
|
|
| testset | BLEU | chr-F | |
|
|-----------------------|-------|-------| |
|
| Tatoeba-test.ara-ara.ara.ara | 4.2 | 0.200 | |
|
| Tatoeba-test.ara-heb.ara.heb | 34.0 | 0.542 | |
|
| Tatoeba-test.ara-mlt.ara.mlt | 16.6 | 0.513 | |
|
| Tatoeba-test.heb-ara.heb.ara | 18.8 | 0.477 | |
|
| Tatoeba-test.mlt-ara.mlt.ara | 20.7 | 0.388 | |
|
| Tatoeba-test.multi.multi | 27.1 | 0.507 | |
|
|
|
|