pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at
---|---|---|---|---|---|---|---|---
translation | transformers |
### vie-eng
* source group: Vietnamese
* target group: English
* OPUS readme: [vie-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-eng/README.md)
* source language(s): vie vie_Hani
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.eng | 42.8 | 0.608 |
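For a quick check of this checkpoint, a minimal sketch using the standard `transformers` pipeline API with the `Helsinki-NLP/opus-mt-vi-en` id that accompanies this card; the example sentence is illustrative:

```python
# Minimal usage sketch for this Vietnamese->English checkpoint.
# The example sentence is illustrative, not taken from the test set.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-vi-en")
result = translator("Tôi là sinh viên.", max_length=128)
print(result[0]["translation_text"])
```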
### System Info:
- hf_name: vie-eng
- source_languages: vie
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'en']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.test.txt
- src_alpha3: vie
- tgt_alpha3: eng
- short_pair: vi-en
- chrF2_score: 0.608
- bleu: 42.8
- brevity_penalty: 0.955
- ref_len: 20241.0
- src_name: Vietnamese
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: vi
- tgt_alpha2: en
- prefer_old: False
- long_pair: vie-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["vi", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-vi-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"vi",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### vie-epo
* source group: Vietnamese
* target group: Esperanto
* OPUS readme: [vie-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-epo/README.md)
* source language(s): vie
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.epo | 12.2 | 0.332 |
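The `spm4k,spm4k` note above refers to the SentencePiece models used for pre-processing. A minimal sketch with the `sentencepiece` library, assuming the downloaded zip contains a `source.spm` file (typical for OPUS-MT releases, but not stated on this card):

```python
# Hypothetical pre-processing sketch: segment a Vietnamese sentence with the
# SentencePiece model assumed to ship as "source.spm" in the OPUS-MT archive.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="source.spm")
pieces = sp.encode("Tôi là sinh viên.", out_type=str)
print(pieces)  # subword pieces of the kind the transformer-align model was trained on
```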
### System Info:
- hf_name: vie-epo
- source_languages: vie
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'eo']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-epo/opus-2020-06-16.test.txt
- src_alpha3: vie
- tgt_alpha3: epo
- short_pair: vi-eo
- chrF2_score: 0.332
- bleu: 12.2
- brevity_penalty: 0.99
- ref_len: 13637.0
- src_name: Vietnamese
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: vi
- tgt_alpha2: eo
- prefer_old: False
- long_pair: vie-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["vi", "eo"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-vi-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"vi",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### vie-spa
* source group: Vietnamese
* target group: Spanish
* OPUS readme: [vie-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-spa/README.md)
* source language(s): vie
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.spa | 32.9 | 0.540 |
### System Info:
- hf_name: vie-spa
- source_languages: vie
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'es']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-spa/opus-2020-06-17.test.txt
- src_alpha3: vie
- tgt_alpha3: spa
- short_pair: vi-es
- chrF2_score: 0.54
- bleu: 32.9
- brevity_penalty: 0.953
- ref_len: 3832.0
- src_name: Vietnamese
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: vi
- tgt_alpha2: es
- prefer_old: False
- long_pair: vie-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["vi", "es"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-vi-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"vi",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### vie-fra
* source group: Vietnamese
* target group: French
* OPUS readme: [vie-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-fra/README.md)
* source language(s): vie
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.fra | 34.2 | 0.544 |
### System Info:
- hf_name: vie-fra
- source_languages: vie
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'fr']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.test.txt
- src_alpha3: vie
- tgt_alpha3: fra
- short_pair: vi-fr
- chrF2_score: 0.544
- bleu: 34.2
- brevity_penalty: 0.955
- ref_len: 11519.0
- src_name: Vietnamese
- tgt_name: French
- train_date: 2020-06-17
- src_alpha2: vi
- tgt_alpha2: fr
- prefer_old: False
- long_pair: vie-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["vi", "fr"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-vi-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"vi",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### vie-ita
* source group: Vietnamese
* target group: Italian
* OPUS readme: [vie-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-ita/README.md)
* source language(s): vie
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.ita | 31.2 | 0.548 |
### System Info:
- hf_name: vie-ita
- source_languages: vie
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'it']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.test.txt
- src_alpha3: vie
- tgt_alpha3: ita
- short_pair: vi-it
- chrF2_score: 0.548
- bleu: 31.2
- brevity_penalty: 0.932
- ref_len: 1774.0
- src_name: Vietnamese
- tgt_name: Italian
- train_date: 2020-06-17
- src_alpha2: vi
- tgt_alpha2: it
- prefer_old: False
- long_pair: vie-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["vi", "it"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-vi-it | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"vi",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### vie-rus
* source group: Vietnamese
* target group: Russian
* OPUS readme: [vie-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-rus/README.md)
* source language(s): vie
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.rus | 16.9 | 0.331 |
### System Info:
- hf_name: vie-rus
- source_languages: vie
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'ru']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.test.txt
- src_alpha3: vie
- tgt_alpha3: rus
- short_pair: vi-ru
- chrF2_score: 0.331
- bleu: 16.9
- brevity_penalty: 0.878
- ref_len: 2207.0
- src_name: Vietnamese
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: vi
- tgt_alpha2: ru
- prefer_old: False
- long_pair: vie-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["vi", "ru"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-vi-ru | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"vi",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-vsl-es
* source languages: vsl
* target languages: es
* OPUS readme: [vsl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/vsl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/vsl-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/vsl-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/vsl-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.vsl.es | 91.9 | 0.944 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-vsl-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"vsl",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-wa-en
* source languages: wa
* target languages: en
* OPUS readme: [wa-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wa-en/README.md)
* dataset: opus-enwa
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: [opus-enwa-2020-03-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/wa-en/opus-enwa-2020-03-21.zip)
* test set translations: [opus-enwa-2020-03-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wa-en/opus-enwa-2020-03-21.test.txt)
* test set scores: [opus-enwa-2020-03-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wa-en/opus-enwa-2020-03-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| enwa.fr.en | 42.6 | 0.564 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-wa-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"wa",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-wal-en
* source languages: wal
* target languages: en
* OPUS readme: [wal-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wal-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/wal-en/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wal-en/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wal-en/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.wal.en | 22.5 | 0.386 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-wal-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"wal",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### war-eng
* source group: Waray (Philippines)
* target group: English
* OPUS readme: [war-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/war-eng/README.md)
* source language(s): war
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.war.eng | 12.3 | 0.308 |
### System Info:
- hf_name: war-eng
- source_languages: war
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/war-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['war', 'en']
- src_constituents: {'war'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.test.txt
- src_alpha3: war
- tgt_alpha3: eng
- short_pair: war-en
- chrF2_score: 0.308
- bleu: 12.3
- brevity_penalty: 1.0
- ref_len: 11345.0
- src_name: Waray (Philippines)
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: war
- tgt_alpha2: en
- prefer_old: False
- long_pair: war-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["war", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-war-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"war",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-war-es
* source languages: war
* target languages: es
* OPUS readme: [war-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/war-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/war-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.war.es | 28.7 | 0.470 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-war-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"war",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-war-fi
* source languages: war
* target languages: fi
* OPUS readme: [war-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/war-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/war-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.war.fi | 26.9 | 0.507 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-war-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"war",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-war-fr
* source languages: war
* target languages: fr
* OPUS readme: [war-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/war-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/war-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.war.fr | 30.2 | 0.482 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-war-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"war",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-war-sv
* source languages: war
* target languages: sv
* OPUS readme: [war-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/war-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/war-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.war.sv | 31.4 | 0.505 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-war-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"war",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-wls-en
* source languages: wls
* target languages: en
* OPUS readme: [wls-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wls-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/wls-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.wls.en | 31.8 | 0.471 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-wls-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"wls",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-wls-fr
* source languages: wls
* target languages: fr
* OPUS readme: [wls-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wls-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/wls-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.wls.fr | 22.6 | 0.389 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-wls-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"wls",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-wls-sv
* source languages: wls
* target languages: sv
* OPUS readme: [wls-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wls-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/wls-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.wls.sv | 23.8 | 0.408 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-wls-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"wls",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-xh-en
* source languages: xh
* target languages: en
* OPUS readme: [xh-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/xh-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.xh.en | 45.8 | 0.610 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-xh-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"xh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-xh-es
* source languages: xh
* target languages: es
* OPUS readme: [xh-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/xh-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/xh-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.xh.es | 32.3 | 0.505 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-xh-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"xh",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-xh-fr
* source languages: xh
* target languages: fr
* OPUS readme: [xh-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/xh-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/xh-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.xh.fr | 30.6 | 0.487 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-xh-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"xh",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-xh-sv
* source languages: xh
* target languages: sv
* OPUS readme: [xh-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/xh-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/xh-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.xh.sv | 33.1 | 0.522 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-xh-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"xh",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-yap-en
* source languages: yap
* target languages: en
* OPUS readme: [yap-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yap-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yap-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yap.en | 30.2 | 0.452 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yap-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yap",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-yap-fr
* source languages: yap
* target languages: fr
* OPUS readme: [yap-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yap-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yap-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yap.fr | 22.2 | 0.381 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yap-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yap",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-yap-sv
* source languages: yap
* target languages: sv
* OPUS readme: [yap-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yap-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yap-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yap.sv | 22.6 | 0.399 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yap-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yap",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-yo-en
* source languages: yo
* target languages: en
* OPUS readme: [yo-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.en | 33.8 | 0.496 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yo-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yo",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-yo-es
* source languages: yo
* target languages: es
* OPUS readme: [yo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.es | 22.0 | 0.393 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yo-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yo",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-yo-fi
* source languages: yo
* target languages: fi
* OPUS readme: [yo-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.fi | 21.5 | 0.434 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yo-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yo",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-yo-fr
* source languages: yo
* target languages: fr
* OPUS readme: [yo-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.fr | 24.1 | 0.408 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yo-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yo",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-yo-sv
* source languages: yo
* target languages: sv
* OPUS readme: [yo-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.sv | 25.2 | 0.434 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yo-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yo",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-zai-es
* source languages: zai
* target languages: es
* OPUS readme: [zai-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zai-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zai-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zai-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zai-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zai.es | 20.8 | 0.372 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zai-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zai",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zho-bul
* source group: Chinese
* target group: Bulgarian
* OPUS readme: [zho-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-bul/README.md)
* source language(s): cmn cmn_Hans cmn_Hant zho zho_Hans zho_Hant
* target language(s): bul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cmn_Hani.bul | 29.6 | 0.497 |
| Tatoeba-test.zho.bul | 29.6 | 0.497 |
### System Info:
- hf_name: zho-bul
- source_languages: zho
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'bg']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.test.txt
- src_alpha3: zho
- tgt_alpha3: bul
- short_pair: zh-bg
- chrF2_score: 0.497
- bleu: 29.6
- brevity_penalty: 0.883
- ref_len: 3113.0
- src_name: Chinese
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: zh
- tgt_alpha2: bg
- prefer_old: False
- long_pair: zho-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "bg"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-bg | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"bg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zho-deu
* source group: Chinese
* target group: German
* OPUS readme: [zho-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-deu/README.md)
* source language(s): cmn cmn_Bopo cmn_Hang cmn_Hani cmn_Hira cmn_Kana cmn_Latn lzh_Hani wuu_Hani yue_Hani
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.deu | 32.1 | 0.522 |
### System Info:
- hf_name: zho-deu
- source_languages: zho
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'de']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: deu
- short_pair: zh-de
- chrF2_score: 0.522
- bleu: 32.1
- brevity_penalty: 0.954
- ref_len: 19102.0
- src_name: Chinese
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: de
- prefer_old: False
- long_pair: zho-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "de"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zho-eng
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
- **Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation
- **Language(s):**
- Source Language: Chinese
- Target Language: English
- **License:** CC-BY-4.0
- **Resources for more information:**
- [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Uses
#### Direct Use
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
Further details about the dataset for this model can be found in the OPUS readme: [zho-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-eng/README.md)
## Training
#### System Information
* helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port_machine: brutasse
* port_time: 2020-08-21-14:41
* src_multilingual: False
* tgt_multilingual: False
#### Training Data
##### Preprocessing
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* ref_len: 82826.0
* dataset: [opus](https://github.com/Helsinki-NLP/Opus-MT)
* download original weights: [opus-2020-07-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.zip)
* test set translations: [opus-2020-07-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.test.txt)
## Evaluation
#### Results
* test set scores: [opus-2020-07-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.eval.txt)
* brevity_penalty: 0.948
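The brevity penalty above is the standard BLEU quantity BP = exp(1 - ref_len / hyp_len) for system output shorter than the reference (and 1 otherwise). As a worked sketch, the system output length implied by the reported values can be recovered; the hypothesis length itself is not listed on this card:

```python
# Illustrative back-calculation from the reported brevity_penalty and ref_len,
# using the standard BLEU formula BP = exp(1 - ref_len / hyp_len).
import math

bp, ref_len = 0.948, 82826.0
hyp_len = ref_len / (1.0 - math.log(bp))
print(round(hyp_len))  # roughly 78,600 tokens of system output implied by the reported BP
```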
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.eng | 36.1 | 0.548 |
## Citation Information
```bibtex
@InProceedings{TiedemannThottingal:EAMT2020,
author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld},
  booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
year = {2020},
address = {Lisbon, Portugal}
}
```
## How to Get Started With the Model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
```
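Continuing from the snippet above, a minimal end-to-end translation sketch; the Chinese example sentence and generation settings are illustrative rather than taken from the card:

```python
# Translate one sentence with the tokenizer and model loaded above.
inputs = tokenizer("我喜欢自然语言处理。", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```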
| {"language": ["zh", "en"], "license": "cc-by-4.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-en | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"zh",
"en",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zho-fin
* source group: Chinese
* target group: Finnish
* OPUS readme: [zho-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-fin/README.md)
* source language(s): cmn_Bopo cmn_Hani cmn_Latn nan_Hani yue yue_Hani
* target language(s): fin
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.fin | 35.1 | 0.579 |
### System Info:
- hf_name: zho-fin
- source_languages: zho
- target_languages: fin
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-fin/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'fi']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'fin'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: fin
- short_pair: zh-fi
- chrF2_score: 0.579
- bleu: 35.1
- brevity_penalty: 0.935
- ref_len: 1847.0
- src_name: Chinese
- tgt_name: Finnish
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: fi
- prefer_old: False
- long_pair: zho-fin
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "fi"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zho-heb
* source group: Chinese
* target group: Hebrew
* OPUS readme: [zho-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-heb/README.md)
* source language(s): cmn cmn_Bopo cmn_Hang cmn_Hani cmn_Hira cmn_Kana cmn_Latn cmn_Yiii lzh lzh_Bopo lzh_Hang lzh_Hani lzh_Hira lzh_Kana lzh_Yiii
* target language(s): heb
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-heb/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-heb/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-heb/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.heb | 28.5 | 0.469 |
### System Info:
- hf_name: zho-heb
- source_languages: zho
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'he']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-heb/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-heb/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: heb
- short_pair: zh-he
- chrF2_score: 0.469
- bleu: 28.5
- brevity_penalty: 0.986
- ref_len: 3654.0
- src_name: Chinese
- tgt_name: Hebrew
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: he
- prefer_old: False
- long_pair: zho-heb
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "he"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-he | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zho-ita
* source group: Chinese
* target group: Italian
* OPUS readme: [zho-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-ita/README.md)
* source language(s): cmn cmn_Bopo cmn_Hang cmn_Hani cmn_Hira cmn_Kana cmn_Latn lzh lzh_Hang lzh_Hani lzh_Hira lzh_Yiii wuu_Bopo wuu_Hani wuu_Latn yue_Hani
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ita/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ita/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ita/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.ita | 27.9 | 0.508 |
### System Info:
- hf_name: zho-ita
- source_languages: zho
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'it']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ita/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ita/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: ita
- short_pair: zh-it
- chrF2_score: 0.508
- bleu: 27.9
- brevity_penalty: 0.935
- ref_len: 19684.0
- src_name: Chinese
- tgt_name: Italian
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: it
- prefer_old: False
- long_pair: zho-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "it"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-it | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zho-msa
* source group: Chinese
* target group: Malay (macrolanguage)
* OPUS readme: [zho-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-msa/README.md)
* model: transformer-align
* source language(s): cmn_Bopo cmn_Hani cmn_Latn hak_Hani yue_Bopo yue_Hani
* target language(s): ind zsm_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); a usage sketch follows below
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-msa/opus-2020-06-17.eval.txt)
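A minimal sketch of the language-token requirement noted above, assuming `transformers` and `sentencepiece` are installed. The label is simply prepended to the source text; `>>zsm_Latn<<` and `>>ind<<` are the valid choices given the target language list, and the example sentence is illustrative only.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-ms"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target-language label (here Malay in Latin script) to the source text.
batch = tokenizer([">>zsm_Latn<< 谢谢你。"], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```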
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.msa | 13.9 | 0.390 |
### System Info:
- hf_name: zho-msa
- source_languages: zho
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'ms']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-msa/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: msa
- short_pair: zh-ms
- chrF2_score: 0.39
- bleu: 13.9
- brevity_penalty: 0.9229999999999999
- ref_len: 2762.0
- src_name: Chinese
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: ms
- prefer_old: False
- long_pair: zho-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "ms"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-ms | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"ms",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zho-nld
* source group: Chinese
* target group: Dutch
* OPUS readme: [zho-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-nld/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hani cmn_Hira cmn_Kana cmn_Latn
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.nld | 31.5 | 0.525 |
### System Info:
- hf_name: zho-nld
- source_languages: zho
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'nl']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: nld
- short_pair: zh-nl
- chrF2_score: 0.525
- bleu: 31.5
- brevity_penalty: 0.9309999999999999
- ref_len: 13575.0
- src_name: Chinese
- tgt_name: Dutch
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: nl
- prefer_old: False
- long_pair: zho-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "nl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-nl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zho-swe
* source group: Chinese
* target group: Swedish
* OPUS readme: [zho-swe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-swe/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hani cmn_Latn
* target language(s): swe
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-swe/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-swe/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-swe/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.swe | 46.1 | 0.621 |
### System Info:
- hf_name: zho-swe
- source_languages: zho
- target_languages: swe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-swe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'sv']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'swe'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-swe/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-swe/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: swe
- short_pair: zh-sv
- chrF2_score: 0.621
- bleu: 46.1
- brevity_penalty: 0.956
- ref_len: 6223.0
- src_name: Chinese
- tgt_name: Swedish
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: sv
- prefer_old: False
- long_pair: zho-swe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "sv"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zho-ukr
* source group: Chinese
* target group: Ukrainian
* OPUS readme: [zho-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-ukr/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hang cmn_Hani cmn_Kana cmn_Latn cmn_Yiii yue_Bopo yue_Hang yue_Hani yue_Hira yue_Kana
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ukr/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ukr/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ukr/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.ukr | 10.4 | 0.259 |
### System Info:
- hf_name: zho-ukr
- source_languages: zho
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'uk']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ukr/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ukr/opus-2020-06-16.test.txt
- src_alpha3: zho
- tgt_alpha3: ukr
- short_pair: zh-uk
- chrF2_score: 0.259
- bleu: 10.4
- brevity_penalty: 0.9059999999999999
- ref_len: 9193.0
- src_name: Chinese
- tgt_name: Ukrainian
- train_date: 2020-06-16
- src_alpha2: zh
- tgt_alpha2: uk
- prefer_old: False
- long_pair: zho-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "uk"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-uk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zho-vie
* source group: Chinese
* target group: Vietnamese
* OPUS readme: [zho-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-vie/README.md)
* model: transformer-align
* source language(s): cmn_Hani cmn_Latn
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.vie | 20.0 | 0.385 |
### System Info:
- hf_name: zho-vie
- source_languages: zho
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'vi']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: vie
- short_pair: zh-vi
- chrF2_score: 0.385
- bleu: 20.0
- brevity_penalty: 0.917
- ref_len: 4667.0
- src_name: Chinese
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: vi
- prefer_old: False
- long_pair: zho-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "vi"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-vi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"vi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zle-eng
* source group: East Slavic languages
* target group: English
* OPUS readme: [zle-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-eng/README.md)
* model: transformer
* source language(s): bel bel_Latn orv_Cyrl rue rus ukr
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012-ruseng.rus.eng | 31.1 | 0.579 |
| newstest2013-ruseng.rus.eng | 24.9 | 0.522 |
| newstest2014-ruen-ruseng.rus.eng | 27.9 | 0.563 |
| newstest2015-enru-ruseng.rus.eng | 26.8 | 0.541 |
| newstest2016-enru-ruseng.rus.eng | 25.8 | 0.535 |
| newstest2017-enru-ruseng.rus.eng | 29.1 | 0.561 |
| newstest2018-enru-ruseng.rus.eng | 25.4 | 0.537 |
| newstest2019-ruen-ruseng.rus.eng | 26.8 | 0.545 |
| Tatoeba-test.bel-eng.bel.eng | 38.3 | 0.569 |
| Tatoeba-test.multi.eng | 50.1 | 0.656 |
| Tatoeba-test.orv-eng.orv.eng | 6.9 | 0.217 |
| Tatoeba-test.rue-eng.rue.eng | 15.4 | 0.345 |
| Tatoeba-test.rus-eng.rus.eng | 52.5 | 0.674 |
| Tatoeba-test.ukr-eng.ukr.eng | 52.1 | 0.673 |
### System Info:
- hf_name: zle-eng
- source_languages: zle
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'ru', 'uk', 'zle', 'en']
- src_constituents: {'bel', 'orv_Cyrl', 'bel_Latn', 'rus', 'ukr', 'rue'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opus2m-2020-08-01.test.txt
- src_alpha3: zle
- tgt_alpha3: eng
- short_pair: zle-en
- chrF2_score: 0.6559999999999999
- bleu: 50.1
- brevity_penalty: 0.97
- ref_len: 69599.0
- src_name: East Slavic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: zle
- tgt_alpha2: en
- prefer_old: False
- long_pair: zle-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["be", "ru", "uk", "zle", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zle-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"be",
"ru",
"uk",
"zle",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zle-zle
* source group: East Slavic languages
* target group: East Slavic languages
* OPUS readme: [zle-zle](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-zle/README.md)
* model: transformer
* source language(s): bel bel_Latn orv_Cyrl rus ukr
* target language(s): bel bel_Latn orv_Cyrl rus ukr
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the sketch below
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opus-2020-07-27.eval.txt)
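Because this model is many-to-many, the target label can differ per sentence within a single batch. The following is a sketch only, assuming `transformers` and `sentencepiece` are installed; the sentences and outputs are illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zle-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# One batch, two target languages: Ukrainian and Belarusian, chosen per sentence.
src = [">>ukr<< Как дела?", ">>bel<< Как дела?"]
batch = tokenizer(src, return_tensors="pt", padding=True)
for s, t in zip(src, tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True)):
    print(s, "->", t)
```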
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bel-rus.bel.rus | 57.1 | 0.758 |
| Tatoeba-test.bel-ukr.bel.ukr | 55.5 | 0.751 |
| Tatoeba-test.multi.multi | 58.0 | 0.742 |
| Tatoeba-test.orv-rus.orv.rus | 5.8 | 0.226 |
| Tatoeba-test.orv-ukr.orv.ukr | 2.5 | 0.161 |
| Tatoeba-test.rus-bel.rus.bel | 50.5 | 0.714 |
| Tatoeba-test.rus-orv.rus.orv | 0.3 | 0.129 |
| Tatoeba-test.rus-ukr.rus.ukr | 63.9 | 0.794 |
| Tatoeba-test.ukr-bel.ukr.bel | 51.3 | 0.719 |
| Tatoeba-test.ukr-orv.ukr.orv | 0.3 | 0.106 |
| Tatoeba-test.ukr-rus.ukr.rus | 68.7 | 0.825 |
### System Info:
- hf_name: zle-zle
- source_languages: zle
- target_languages: zle
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-zle/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'ru', 'uk', 'zle']
- src_constituents: {'bel', 'orv_Cyrl', 'bel_Latn', 'rus', 'ukr', 'rue'}
- tgt_constituents: {'bel', 'orv_Cyrl', 'bel_Latn', 'rus', 'ukr', 'rue'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opus-2020-07-27.test.txt
- src_alpha3: zle
- tgt_alpha3: zle
- short_pair: zle-zle
- chrF2_score: 0.742
- bleu: 58.0
- brevity_penalty: 1.0
- ref_len: 62731.0
- src_name: East Slavic languages
- tgt_name: East Slavic languages
- train_date: 2020-07-27
- src_alpha2: zle
- tgt_alpha2: zle
- prefer_old: False
- long_pair: zle-zle
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["be", "ru", "uk", "zle"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zle-zle | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"be",
"ru",
"uk",
"zle",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zls-eng
* source group: South Slavic languages
* target group: English
* OPUS readme: [zls-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-eng/README.md)
* model: transformer
* source language(s): bos_Latn bul bul_Latn hrv mkd slv srp_Cyrl srp_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul-eng.bul.eng | 54.9 | 0.693 |
| Tatoeba-test.hbs-eng.hbs.eng | 55.7 | 0.700 |
| Tatoeba-test.mkd-eng.mkd.eng | 54.6 | 0.681 |
| Tatoeba-test.multi.eng | 53.6 | 0.676 |
| Tatoeba-test.slv-eng.slv.eng | 25.6 | 0.407 |
### System Info:
- hf_name: zls-eng
- source_languages: zls
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['hr', 'mk', 'bg', 'sl', 'zls', 'en']
- src_constituents: {'hrv', 'mkd', 'srp_Latn', 'srp_Cyrl', 'bul_Latn', 'bul', 'bos_Latn', 'slv'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opus2m-2020-08-01.test.txt
- src_alpha3: zls
- tgt_alpha3: eng
- short_pair: zls-en
- chrF2_score: 0.6759999999999999
- bleu: 53.6
- brevity_penalty: 0.98
- ref_len: 68623.0
- src_name: South Slavic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: zls
- tgt_alpha2: en
- prefer_old: False
- long_pair: zls-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["hr", "mk", "bg", "sl", "zls", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zls-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hr",
"mk",
"bg",
"sl",
"zls",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zls-zls
* source group: South Slavic languages
* target group: South Slavic languages
* OPUS readme: [zls-zls](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-zls/README.md)
* model: transformer
* source language(s): bul mkd srp_Cyrl
* target language(s): bul mkd srp_Cyrl
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zls/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zls/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zls/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul-hbs.bul.hbs | 19.3 | 0.514 |
| Tatoeba-test.bul-mkd.bul.mkd | 31.9 | 0.669 |
| Tatoeba-test.hbs-bul.hbs.bul | 18.0 | 0.636 |
| Tatoeba-test.hbs-mkd.hbs.mkd | 19.4 | 0.322 |
| Tatoeba-test.mkd-bul.mkd.bul | 44.6 | 0.679 |
| Tatoeba-test.mkd-hbs.mkd.hbs | 5.5 | 0.152 |
| Tatoeba-test.multi.multi | 26.5 | 0.563 |
### System Info:
- hf_name: zls-zls
- source_languages: zls
- target_languages: zls
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-zls/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['hr', 'mk', 'bg', 'sl', 'zls']
- src_constituents: {'hrv', 'mkd', 'srp_Latn', 'srp_Cyrl', 'bul_Latn', 'bul', 'bos_Latn', 'slv'}
- tgt_constituents: {'hrv', 'mkd', 'srp_Latn', 'srp_Cyrl', 'bul_Latn', 'bul', 'bos_Latn', 'slv'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zls/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zls/opus-2020-07-27.test.txt
- src_alpha3: zls
- tgt_alpha3: zls
- short_pair: zls-zls
- chrF2_score: 0.563
- bleu: 26.5
- brevity_penalty: 1.0
- ref_len: 58.0
- src_name: South Slavic languages
- tgt_name: South Slavic languages
- train_date: 2020-07-27
- src_alpha2: zls
- tgt_alpha2: zls
- prefer_old: False
- long_pair: zls-zls
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["hr", "mk", "bg", "sl", "zls"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zls-zls | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hr",
"mk",
"bg",
"sl",
"zls",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zlw-eng
* source group: West Slavic languages
* target group: English
* OPUS readme: [zlw-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zlw-eng/README.md)
* model: transformer
* source language(s): ces csb_Latn dsb hsb pol
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-ceseng.ces.eng | 25.7 | 0.536 |
| newstest2009-ceseng.ces.eng | 24.6 | 0.530 |
| newstest2010-ceseng.ces.eng | 25.0 | 0.540 |
| newstest2011-ceseng.ces.eng | 25.9 | 0.539 |
| newstest2012-ceseng.ces.eng | 24.8 | 0.533 |
| newstest2013-ceseng.ces.eng | 27.8 | 0.551 |
| newstest2014-csen-ceseng.ces.eng | 30.3 | 0.585 |
| newstest2015-encs-ceseng.ces.eng | 27.5 | 0.542 |
| newstest2016-encs-ceseng.ces.eng | 29.1 | 0.564 |
| newstest2017-encs-ceseng.ces.eng | 26.0 | 0.537 |
| newstest2018-encs-ceseng.ces.eng | 27.3 | 0.544 |
| Tatoeba-test.ces-eng.ces.eng | 53.3 | 0.691 |
| Tatoeba-test.csb-eng.csb.eng | 10.2 | 0.313 |
| Tatoeba-test.dsb-eng.dsb.eng | 11.7 | 0.296 |
| Tatoeba-test.hsb-eng.hsb.eng | 24.6 | 0.426 |
| Tatoeba-test.multi.eng | 51.8 | 0.680 |
| Tatoeba-test.pol-eng.pol.eng | 50.4 | 0.667 |
### System Info:
- hf_name: zlw-eng
- source_languages: zlw
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zlw-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pl', 'cs', 'zlw', 'en']
- src_constituents: {'csb_Latn', 'dsb', 'hsb', 'pol', 'ces'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-eng/opus2m-2020-08-01.test.txt
- src_alpha3: zlw
- tgt_alpha3: eng
- short_pair: zlw-en
- chrF2_score: 0.68
- bleu: 51.8
- brevity_penalty: 0.9620000000000001
- ref_len: 75742.0
- src_name: West Slavic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: zlw
- tgt_alpha2: en
- prefer_old: False
- long_pair: zlw-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["pl", "cs", "zlw", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zlw-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pl",
"cs",
"zlw",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers | ### zlw-fiu
* source language name: West Slavic languages
* target language name: Finno-Ugrian languages
* OPUS readme: [README.md](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/README.md)
* model: transformer
* source language codes: dsb, cs, csb_Latn, hsb, pl, zlw
* target language codes: hu, vro, fi, liv_Latn, mdf, krl, fkv_Latn, mhr, et, sma, udm, vep, myv, kpv, se, izh, fiu
* dataset: opus
* release date: 2021-02-18
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2021-02-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/opus-2021-02-18.zip)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid, usually three-letter target language ID)
* Training data:
* ces-fin: Tatoeba-train (1000000)
* ces-hun: Tatoeba-train (1000000)
* pol-est: Tatoeba-train (1000000)
* pol-fin: Tatoeba-train (1000000)
* pol-hun: Tatoeba-train (1000000)
* Validation data:
* ces-fin: Tatoeba-dev, 1000
* ces-hun: Tatoeba-dev, 1000
* est-pol: Tatoeba-dev, 1000
* fin-pol: Tatoeba-dev, 1000
* hun-pol: Tatoeba-dev, 1000
* mhr-pol: Tatoeba-dev, 461
* total-size-shuffled: 5426
* devset-selected: top 5000 lines of Tatoeba-dev.src.shuffled!
* Test data:
* newssyscomb2009.ces-hun: 502/9733
* newstest2009.ces-hun: 2525/54965
* Tatoeba-test.ces-fin: 88/408
* Tatoeba-test.ces-hun: 1911/10336
* Tatoeba-test.multi-multi: 4562/25497
* Tatoeba-test.pol-chm: 5/36
* Tatoeba-test.pol-est: 15/98
* Tatoeba-test.pol-fin: 609/3293
* Tatoeba-test.pol-hun: 1934/11285
* test set translations file: [test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/opus-2021-02-18.test.txt)
* test set scores file: [eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/opus-2021-02-18.eval.txt)
* BLEU-scores
|Test set|score|
|---|---|
|Tatoeba-test.ces-fin|57.2|
|Tatoeba-test.ces-hun|42.6|
|Tatoeba-test.multi-multi|39.4|
|Tatoeba-test.pol-hun|36.6|
|Tatoeba-test.pol-fin|36.1|
|Tatoeba-test.pol-est|20.9|
|newssyscomb2009.ces-hun|13.9|
|newstest2009.ces-hun|13.9|
|Tatoeba-test.pol-chm|2.0|
* chr-F-scores
|Test set|score|
|---|---|
|Tatoeba-test.ces-fin|0.71|
|Tatoeba-test.ces-hun|0.637|
|Tatoeba-test.multi-multi|0.616|
|Tatoeba-test.pol-hun|0.605|
|Tatoeba-test.pol-fin|0.592|
|newssyscomb2009.ces-hun|0.449|
|newstest2009.ces-hun|0.443|
|Tatoeba-test.pol-est|0.372|
|Tatoeba-test.pol-chm|0.007|
### System Info:
* hf_name: zlw-fiu
* source_languages: dsb,cs,csb_Latn,hsb,pl,zlw
* target_languages: hu,vro,fi,liv_Latn,mdf,krl,fkv_Latn,mhr,et,sma,udm,vep,myv,kpv,se,izh,fiu
* opus_readme_url: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/README.md
* original_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['dsb', 'cs', 'csb_Latn', 'hsb', 'pl', 'zlw', 'hu', 'vro', 'fi', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'et', 'sma', 'udm', 'vep', 'myv', 'kpv', 'se', 'izh', 'fiu']
* src_constituents: ['dsb', 'ces', 'csb_Latn', 'hsb', 'pol']
* tgt_constituents: ['hun', 'vro', 'fin', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'est', 'sma', 'udm', 'vep', 'myv', 'kpv', 'sme', 'izh']
* src_multilingual: True
* tgt_multilingual: True
* helsinki_git_sha: a0966db6db0ae616a28471ff0faf461b36fec07d
* transformers_git_sha: 3857f2b4e34912c942694489c2b667d9476e55f5
* port_machine: bungle
* port_time: 2021-06-29-15:24 | {"language": ["dsb", "cs", "csb_Latn", "hsb", "pl", "zlw", "hu", "vro", "fi", "liv_Latn", "mdf", "krl", "fkv_Latn", "mhr", "et", "sma", "udm", "vep", "myv", "kpv", "se", "izh", "fiu"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zlw-fiu | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"zlw",
"fiu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### zlw-zlw
* source group: West Slavic languages
* target group: West Slavic languages
* OPUS readme: [zlw-zlw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zlw-zlw/README.md)
* model: transformer
* source language(s): ces dsb hsb pol
* target language(s): ces dsb hsb pol
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zlw/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zlw/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zlw/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ces-hsb.ces.hsb | 2.6 | 0.167 |
| Tatoeba-test.ces-pol.ces.pol | 44.0 | 0.649 |
| Tatoeba-test.dsb-pol.dsb.pol | 8.5 | 0.250 |
| Tatoeba-test.hsb-ces.hsb.ces | 9.6 | 0.276 |
| Tatoeba-test.multi.multi | 38.8 | 0.580 |
| Tatoeba-test.pol-ces.pol.ces | 43.4 | 0.620 |
| Tatoeba-test.pol-dsb.pol.dsb | 2.1 | 0.159 |
### System Info:
- hf_name: zlw-zlw
- source_languages: zlw
- target_languages: zlw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zlw-zlw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pl', 'cs', 'zlw']
- src_constituents: {'csb_Latn', 'dsb', 'hsb', 'pol', 'ces'}
- tgt_constituents: {'csb_Latn', 'dsb', 'hsb', 'pol', 'ces'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zlw/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zlw/opus-2020-07-27.test.txt
- src_alpha3: zlw
- tgt_alpha3: zlw
- short_pair: zlw-zlw
- chrF2_score: 0.58
- bleu: 38.8
- brevity_penalty: 0.99
- ref_len: 7792.0
- src_name: West Slavic languages
- tgt_name: West Slavic languages
- train_date: 2020-07-27
- src_alpha2: zlw
- tgt_alpha2: zlw
- prefer_old: False
- long_pair: zlw-zlw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["pl", "cs", "zlw"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zlw-zlw | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"pl",
"cs",
"zlw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-zne-es
* source languages: zne
* target languages: es
* OPUS readme: [zne-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zne-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zne-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zne.es | 21.1 | 0.382 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zne-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zne",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-zne-fi
* source languages: zne
* target languages: fi
* OPUS readme: [zne-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zne-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zne-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zne.fi | 22.8 | 0.432 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zne-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zne",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-zne-fr
* source languages: zne
* target languages: fr
* OPUS readme: [zne-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zne-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zne-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zne.fr | 25.3 | 0.416 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zne-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zne",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers |
### opus-mt-zne-sv
* source languages: zne
* target languages: sv
* OPUS readme: [zne-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zne-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zne-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zne.sv | 25.2 | 0.425 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zne-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zne",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers | ### af-ru
* source group: Afrikaans
* target group: Russian
* OPUS readme: [afr-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-09-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-09-10.zip)
* test set translations: [opus-2020-09-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-09-10.test.txt)
* test set scores: [opus-2020-09-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-09-10.eval.txt)
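For quick experiments the `pipeline` helper can be used instead of loading the tokenizer and model separately. The sketch below assumes `transformers` and `sentencepiece` are installed, and the Afrikaans sentence is illustrative.

```python
from transformers import pipeline

# The generic "translation" task resolves to this Marian model's own settings.
translator = pipeline("translation", model="Helsinki-NLP/opus-tatoeba-af-ru")
print(translator("Ek hou van tee.")[0]["translation_text"])
```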
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.rus | 38.2 | 0.580 |
### System Info:
- hf_name: af-ru
- source_languages: afr
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'ru']
- src_constituents: ('Afrikaans', {'afr'})
- tgt_constituents: ('Russian', {'rus'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: afr-rus
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-09-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-09-10.test.txt
- src_alpha3: afr
- tgt_alpha3: rus
- chrF2_score: 0.58
- bleu: 38.2
- brevity_penalty: 0.992
- ref_len: 1213
- src_name: Afrikaans
- tgt_name: Russian
- train_date: 2020-01-01 00:00:00
- src_alpha2: af
- tgt_alpha2: ru
- prefer_old: False
- short_pair: af-ru
- helsinki_git_sha: e8c308a96c1bd0b4ca6a8ce174783f93c3e30f25
- transformers_git_sha: 31245775e5772fbded1ac07ed89fbba3b5af0cb9
- port_machine: LM0-400-22516.local
- port_time: 2021-02-12-14:52 | {"language": ["af", "ru"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-af-ru | null | [
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"af",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers | ### de-ro
* source group: German
* target group: Romanian
* OPUS readme: [deu-ron](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-ron/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): mol ron
* raw source language(s): deu
* raw target language(s): mol ron
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* valid language labels: >>mol<< >>ron<<
* download original weights: [opusTCv20210807-2021-10-22.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ron/opusTCv20210807-2021-10-22.zip)
* test set translations: [opusTCv20210807-2021-10-22.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ron/opusTCv20210807-2021-10-22.test.txt)
* test set scores: [opusTCv20210807-2021-10-22.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ron/opusTCv20210807-2021-10-22.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| Tatoeba-test-v2021-08-07.deu-ron | 42.0 | 0.636 | 1141 | 7432 | 0.976 |
### System Info:
- hf_name: de-ro
- source_languages: deu
- target_languages: ron
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-ron/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'ro']
- src_constituents: ('German', {'deu'})
- tgt_constituents: ('Romanian', {'ron'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: deu-ron
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ron/opusTCv20210807-2021-10-22.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ron/opusTCv20210807-2021-10-22.test.txt
- src_alpha3: deu
- tgt_alpha3: ron
- chrF2_score: 0.636
- bleu: 42.0
- src_name: German
- tgt_name: Romanian
- train_date: 2021-10-22 00:00:00
- src_alpha2: de
- tgt_alpha2: ro
- prefer_old: False
- short_pair: de-ro
- helsinki_git_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002
- transformers_git_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e
- port_machine: LM0-400-22516.local
- port_time: 2021-11-08-16:45 | {"language": ["de", "ro"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-de-ro | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"de",
"ro",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers | ### en-ja
* source group: English
* target group: Japanese
* OPUS readme: [eng-jpn](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-jpn/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): jpn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus+bt-2021-04-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.zip)
* test set translations: [opus+bt-2021-04-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.test.txt)
* test set scores: [opus+bt-2021-04-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| Tatoeba-test.eng-jpn | 15.2 | 0.258 | 10000 | 99206 | 1.000 |
### System Info:
- hf_name: en-ja
- source_languages: eng
- target_languages: jpn
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-jpn/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ja']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Japanese', {'jpn', 'jpn_Latn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hira', 'jpn_Hang', 'jpn_Bopo', 'jpn_Hani'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-jpn
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.test.txt
- src_alpha3: eng
- tgt_alpha3: jpn
- chrF2_score: 0.258
- bleu: 15.2
- src_name: English
- tgt_name: Japanese
- train_date: 2021-04-10 00:00:00
- src_alpha2: en
- tgt_alpha2: ja
- prefer_old: False
- short_pair: en-ja
- helsinki_git_sha: 70b0a9621f054ef1d8ea81f7d55595d7f64d19ff
- transformers_git_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
- port_machine: LM0-400-22516.local
- port_time: 2021-10-12-11:13 | {"language": ["en", "ja"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-en-ja | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers | ### en-ro
* source group: English
* target group: Romanian
* OPUS readme: [eng-ron](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ron/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): mol ron
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* valid language labels:
* download original weights: [opus+bt-2021-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opus+bt-2021-03-07.zip)
* test set translations: [opus+bt-2021-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opus+bt-2021-03-07.test.txt)
* test set scores: [opus+bt-2021-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opus+bt-2021-03-07.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| newsdev2016-enro.eng-ron | 33.5 | 0.610 | 1999 | 51566 | 0.984 |
| newstest2016-enro.eng-ron | 31.7 | 0.591 | 1999 | 49094 | 0.998 |
| Tatoeba-test.eng-ron | 46.9 | 0.678 | 5000 | 36851 | 0.983 |
### System Info:
- hf_name: en-ro
- source_languages: eng
- target_languages: ron
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ron/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ro']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Romanian', {'ron'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-ron
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opus+bt-2021-03-07.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opus+bt-2021-03-07.test.txt
- src_alpha3: eng
- tgt_alpha3: ron
- chrF2_score: 0.678
- bleu: 46.9
- src_name: English
- tgt_name: Romanian
- train_date: 2021-03-07 00:00:00
- src_alpha2: en
- tgt_alpha2: ro
- prefer_old: False
- short_pair: en-ro
- helsinki_git_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002
- transformers_git_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
- port_machine: LM0-400-22516.local
- port_time: 2021-11-08-09:31 | {"language": ["en", "ro"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-en-ro | null | [
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"en",
"ro",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers | ### en-tr
* source group: English
* target group: Turkish
* OPUS readme: [eng-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tur/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): tur
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus+bt-2021-04-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opus+bt-2021-04-10.zip)
* test set translations: [opus+bt-2021-04-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opus+bt-2021-04-10.test.txt)
* test set scores: [opus+bt-2021-04-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opus+bt-2021-04-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| newsdev2016-entr.eng-tur | 21.5 | 0.575 | 1001 | 16127 | 1.000 |
| newstest2016-entr.eng-tur | 21.4 | 0.558 | 3000 | 50782 | 0.986 |
| newstest2017-entr.eng-tur | 22.8 | 0.572 | 3007 | 51977 | 0.960 |
| newstest2018-entr.eng-tur | 20.8 | 0.561 | 3000 | 53731 | 0.963 |
| Tatoeba-test.eng-tur | 41.5 | 0.684 | 10000 | 60469 | 0.932 |
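Scores like those in the table can be recomputed with `sacrebleu` once system outputs are available. The snippet below is a sketch with made-up hypothesis and reference lists; the exact settings behind the published numbers may differ.

```python
import sacrebleu

# Illustrative strings only; real hypotheses come from translating the linked
# test set with the model, and real references from its gold Turkish side.
hyps = ["Merhaba dünya.", "Bu bir deneme."]
refs = ["Merhaba dünya!", "Bu bir testtir."]

bleu = sacrebleu.corpus_bleu(hyps, [refs])  # corpus-level BLEU, as in the BLEU column
chrf = sacrebleu.corpus_chrf(hyps, [refs])  # chr-F; note sacrebleu reports it on a 0-100 scale
print(round(bleu.score, 1), round(chrf.score, 3))
```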
### System Info:
- hf_name: en-tr
- source_languages: eng
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'tr']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Turkish', {'tur'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-tur
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opus+bt-2021-04-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opus+bt-2021-04-10.test.txt
- src_alpha3: eng
- tgt_alpha3: tur
- chrF2_score: 0.684
- bleu: 41.5
- src_name: English
- tgt_name: Turkish
- train_date: 2021-04-10 00:00:00
- src_alpha2: en
- tgt_alpha2: tr
- prefer_old: False
- short_pair: en-tr
- helsinki_git_sha: a6bd0607aec9603811b2b635aec3f566f3add79d
- transformers_git_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
- port_machine: LM0-400-22516.local
- port_time: 2021-10-05-12:13 | {"language": ["en", "tr"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-en-tr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers | ### es-zh
* source group: Spanish
* target group: Chinese
* OPUS readme: [spa-zho](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-zho/README.md)
* model: transformer
* source language(s): spa
* target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant hsn hsn_Hani lzh nan wuu yue_Hans yue_Hant
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2021-01-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zho/opus-2021-01-04.zip)
* test set translations: [opus-2021-01-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zho/opus-2021-01-04.test.txt)
* test set scores: [opus-2021-01-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zho/opus-2021-01-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.zho | 38.8 | 0.324 |
### System Info:
- hf_name: es-zh
- source_languages: spa
- target_languages: zho
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-zho/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'zh']
- src_constituents: ('Spanish', {'spa'})
- tgt_constituents: ('Chinese', {'wuu_Bopo', 'wuu', 'cmn_Hang', 'lzh_Kana', 'lzh', 'wuu_Hani', 'lzh_Yiii', 'yue_Hans', 'cmn_Hani', 'cjy_Hans', 'cmn_Hans', 'cmn_Kana', 'zho_Hans', 'zho_Hant', 'yue', 'cmn_Bopo', 'yue_Hang', 'lzh_Hans', 'wuu_Latn', 'yue_Hant', 'hak_Hani', 'lzh_Bopo', 'cmn_Hant', 'lzh_Hani', 'lzh_Hang', 'cmn', 'lzh_Hira', 'yue_Bopo', 'yue_Hani', 'gan', 'zho', 'cmn_Yiii', 'yue_Hira', 'cmn_Latn', 'yue_Kana', 'cjy_Hant', 'cmn_Hira', 'nan_Hani', 'nan'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: spa-zho
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zho/opus-2021-01-04.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zho/opus-2021-01-04.test.txt
- src_alpha3: spa
- tgt_alpha3: zho
- chrF2_score: 0.324
- bleu: 38.8
- brevity_penalty: 0.878
- ref_len: 22762.0
- src_name: Spanish
- tgt_name: Chinese
- train_date: 2021-01-04 00:00:00
- src_alpha2: es
- tgt_alpha2: zh
- prefer_old: False
- short_pair: es-zh
- helsinki_git_sha: dfdcef114ffb8a8dbb7a3fcf84bde5af50309500
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2021-01-04-18:53 | {"language": ["es", "zh"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-es-zh | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers | ### fi-en
* source group: Finnish
* target group: English
* OPUS readme: [fin-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md)
* model: transformer-align
* source language(s): fin
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opusTCv20210807+bt-2021-08-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.zip)
* test set translations: [opusTCv20210807+bt-2021-08-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.test.txt)
* test set scores: [opusTCv20210807+bt-2021-08-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| newsdev2015-enfi.fin-eng | 27.1 | 0.550 | 1500 | 32104 | 0.988 |
| newstest2015-enfi.fin-eng | 28.5 | 0.560 | 1370 | 27356 | 0.980 |
| newstest2016-enfi.fin-eng | 31.7 | 0.586 | 3000 | 63043 | 1.000 |
| newstest2017-enfi.fin-eng | 34.6 | 0.610 | 3002 | 61936 | 0.988 |
| newstest2018-enfi.fin-eng | 25.4 | 0.530 | 3000 | 62325 | 0.981 |
| newstest2019-fien.fin-eng | 30.6 | 0.577 | 1996 | 36227 | 0.994 |
| newstestB2016-enfi.fin-eng | 25.8 | 0.538 | 3000 | 63043 | 0.987 |
| newstestB2017-enfi.fin-eng | 29.6 | 0.572 | 3002 | 61936 | 0.999 |
| newstestB2017-fien.fin-eng | 29.6 | 0.572 | 3002 | 61936 | 0.999 |
| Tatoeba-test-v2021-08-07.fin-eng | 54.1 | 0.700 | 10000 | 75212 | 0.988 |
### System Info:
- hf_name: fi-en
- source_languages: fin
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fi', 'en']
- src_constituents: ('Finnish', {'fin'})
- tgt_constituents: ('English', {'eng'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: fin-eng
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.test.txt
- src_alpha3: fin
- tgt_alpha3: eng
- chrF2_score: 0.7
- bleu: 54.1
- src_name: Finnish
- tgt_name: English
- train_date: 2021-08-25 00:00:00
- src_alpha2: fi
- tgt_alpha2: en
- prefer_old: False
- short_pair: fi-en
- helsinki_git_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002
- transformers_git_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
- port_machine: LM0-400-22516.local
- port_time: 2021-11-04-21:36 | {"language": ["fi", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-fi-en | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"fi",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers | ### fr-it
* source group: French
* target group: Italian
* OPUS readme: [fra-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ita/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): ita
* raw source language(s): fra
* raw target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opusTCv20210807-2021-11-11.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.zip)
* test set translations: [opusTCv20210807-2021-11-11.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.test.txt)
* test set scores: [opusTCv20210807-2021-11-11.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| Tatoeba-test-v2021-08-07.fra-ita | 54.8 | 0.737 | 10000 | 61517 | 0.953 |
### System Info:
- hf_name: fr-it
- source_languages: fra
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'it']
- src_constituents: ('French', {'fra'})
- tgt_constituents: ('Italian', {'ita'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: fra-ita
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.test.txt
- src_alpha3: fra
- tgt_alpha3: ita
- chrF2_score: 0.737
- bleu: 54.8
- src_name: French
- tgt_name: Italian
- train_date: 2021-11-11 00:00:00
- src_alpha2: fr
- tgt_alpha2: it
- prefer_old: False
- short_pair: fr-it
- helsinki_git_sha: 7ab0c987850187e0b10342bfc616cd47c027ba18
- transformers_git_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e
- port_machine: LM0-400-22516.local
- port_time: 2021-11-11-19:40 | {"language": ["fr", "it"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-fr-it | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers | ### he-fr
* source group: Hebrew
* target group: French
* OPUS readme: [heb-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-fra/README.md)
* model: transformer
* source language(s): heb
* target language(s): fra
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.fra | 47.3 | 0.644 |
### System Info:
- hf_name: he-fr
- source_languages: heb
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'fr']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('French', {'fra'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-fra
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.test.txt
- src_alpha3: heb
- tgt_alpha3: fra
- chrF2_score: 0.644
- bleu: 47.3
- brevity_penalty: 0.9740000000000001
- ref_len: 26123.0
- src_name: Hebrew
- tgt_name: French
- train_date: 2020-12-10 00:00:00
- src_alpha2: he
- tgt_alpha2: fr
- prefer_old: False
- short_pair: he-fr
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-16:03 | {"language": ["he", "fr"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-he-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"he",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers | ### he-it
* source group: Hebrew
* target group: Italian
* OPUS readme: [heb-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ita/README.md)
* model: transformer
* source language(s): heb
* target language(s): ita
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.ita | 41.1 | 0.643 |
### System Info:
- hf_name: he-it
- source_languages: heb
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'it']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('Italian', {'ita'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-ita
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.test.txt
- src_alpha3: heb
- tgt_alpha3: ita
- chrF2_score: 0.643
- bleu: 41.1
- brevity_penalty: 0.997
- ref_len: 11464.0
- src_name: Hebrew
- tgt_name: Italian
- train_date: 2020-12-10 00:00:00
- src_alpha2: he
- tgt_alpha2: it
- prefer_old: False
- short_pair: he-it
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-16:01 | {"language": ["he", "it"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-he-it | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"he",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
translation | transformers | ### it-he
* source group: Italian
* target group: Hebrew
* OPUS readme: [ita-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-heb/README.md)
* model: transformer
* source language(s): ita
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ita.heb | 38.5 | 0.593 |
### System Info:
- hf_name: it-he
- source_languages: ita
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'he']
- src_constituents: ('Italian', {'ita'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: ita-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.test.txt
- src_alpha3: ita
- tgt_alpha3: heb
- chrF2_score: 0.593
- bleu: 38.5
- brevity_penalty: 0.985
- ref_len: 9796.0
- src_name: Italian
- tgt_name: Hebrew
- train_date: 2020-12-10 00:00:00
- src_alpha2: it
- tgt_alpha2: he
- prefer_old: False
- short_pair: it-he
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-16:02 | {"language": ["it", "he"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-it-he | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"it",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Hemang/DialoGPT-small-mickeymousebot | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | Thanks for checking this out! <br />
This video explains the ideas behind KerasBERT (still very much a work in progress)
https://www.youtube.com/watch?v=J3P8WLAELqk | {} | HenryAI/KerasBERTv1 | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-scitldr
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0232
- Rouge1: 35.2134
- Rouge2: 16.8919
- Rougel: 30.8442
- Rougelsum: 30.9316
- Gen Len: 18.7981
## Model description
More information needed
## Intended uses & limitations
More information needed
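That said, the checkpoint can be used for abstract summarization; a hedged inference sketch is shown below (the `summarize:` prefix is the usual T5 convention and is an assumption here, since the exact fine-tuning prompt is not recorded in this card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "HenryHXR/t5-base-finetuned-scitldr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

abstract = "We propose a new method for ..."  # replace with a real paper abstract
inputs = tokenizer("summarize: " + abstract, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=40, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```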
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.0533 | 1.0 | 996 | 2.0285 | 34.9774 | 16.6163 | 30.6177 | 30.7038 | 18.7981 |
| 2.0994 | 2.0 | 1992 | 2.0232 | 35.2134 | 16.8919 | 30.8442 | 30.9316 | 18.7981 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "t5-base-finetuned-scitldr", "results": []}]} | HenryHXR/t5-base-finetuned-scitldr | null | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
This model predicts the genre given a synopsis of about 200 Chinese characters.
The model is trained on TV and Movie datasets and takes simplified Chinese as input.
We trained the model from the "hfl/chinese-bert-wwm-ext" checkpoint.
#### Sample Usage
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

checkpoint = "Herais/pred_genre"
tokenizer = BertTokenizer.from_pretrained(checkpoint)
model = BertForSequenceClassification.from_pretrained(
    checkpoint, problem_type="single_label_classification"
).to(device)

label2id_genre = {'涉案': 7, '都市': 10, '革命': 12, '农村': 4, '传奇': 0,
                  '其它': 2, '传记': 1, '青少': 11, '军旅': 3, '武打': 6,
                  '科幻': 9, '神话': 8, '宫廷': 5}

id2label_genre = {7: '涉案', 10: '都市', 12: '革命', 4: '农村', 0: '传奇',
                  2: '其它', 1: '传记', 11: '青少', 3: '军旅', 6: '武打',
                  9: '科幻', 8: '神话', 5: '宫廷'}

synopsis = """加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\
他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\
成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\
为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\
也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\
继续为检察事业贡献自己的青春。 """

inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt').to(device)

model.eval()
with torch.no_grad():
    outputs = model(**inputs)

label_ids_pred = torch.argmax(outputs.logits, dim=1).to('cpu').numpy()
labels_pred = [id2label_genre[label_id] for label_id in label_ids_pred]

print(labels_pred)
# ['涉案']
```
Citation
TBA | {"language": ["zh"], "license": "apache-2.0", "tags": ["classification"], "datasets": ["Custom"], "metrics": ["rouge"]} | Herais/pred_genre | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"classification",
"zh",
"dataset:Custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | This model predicts the time period given a synopsis of about 200 Chinese characters.
The model is trained on TV and Movie datasets and takes simplified Chinese as input.
We trained the model from the "hfl/chinese-bert-wwm-ext" checkpoint.
#### Sample Usage
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

checkpoint = "Herais/pred_timeperiod"
tokenizer = BertTokenizer.from_pretrained(checkpoint)
model = BertForSequenceClassification.from_pretrained(
    checkpoint, problem_type="single_label_classification"
).to(device)

label2id_timeperiod = {'古代': 0, '当代': 1, '现代': 2, '近代': 3, '重大': 4}
id2label_timeperiod = {0: '古代', 1: '当代', 2: '现代', 3: '近代', 4: '重大'}

synopsis = """加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\
他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\
成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\
为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\
也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\
继续为检察事业贡献自己的青春。 """

inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt').to(device)

model.eval()
with torch.no_grad():
    outputs = model(**inputs)

label_ids_pred = torch.argmax(outputs.logits, dim=1).to('cpu').numpy()
labels_pred = [id2label_timeperiod[label_id] for label_id in label_ids_pred]

print(labels_pred)
# ['当代']
```
Citation
{} | {"language": ["zh"], "license": "apache-2.0", "tags": ["classification"], "datasets": ["Custom"], "metrics": ["rouge"]} | Herais/pred_timeperiod | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"classification",
"zh",
"dataset:Custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# marian-finetuned-hi-hinglish
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.1869
- Validation Loss: 4.0607
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 279, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
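The optimizer dictionary above matches what `transformers.create_optimizer` produces for a PolynomialDecay schedule; a sketch of how such a configuration is typically built is shown below (the exact call used for this run is not documented here, so treat it as an assumption):

```python
from transformers import create_optimizer

# 279 decay steps and a 5e-5 peak learning rate match the PolynomialDecay config above.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=279,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```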
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.1869 | 4.0607 | 0 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "marian-finetuned-hi-hinglish", "results": []}]} | Hetarth/marian-finetuned-hi-hinglish | null | [
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Hexious/Jimrie | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers |
## ByT5 Base Portuguese Product Reviews
#### Model Description
This is a fine-tuned version of ByT5 Base by Google for sentiment analysis of product reviews in Portuguese.
##### Paper: https://arxiv.org/abs/2105.13626
#### Training data
It was trained on product reviews from Americanas.com. You can find the data here: https://github.com/HeyLucasLeao/finetuning-byt5-model.
#### Training Procedure
It was fine-tuned using the Trainer class available in the Hugging Face library. Accuracy, precision, recall, and F1 score were used for evaluation.
##### Learning Rate: **1e-4**
##### Epochs: **1**
##### Colab for Finetuning: https://drive.google.com/file/d/17TcaN52moq7i7TE2EbcVbwQEQuAIQU63/view?usp=sharing
##### Colab for Metrics: https://colab.research.google.com/drive/1wbTDfOsE45UL8Q3ZD1_FTUmdVOKCcJFf#scrollTo=S4nuLkAFrlZ6
#### Score:
```python
Training Set:
'accuracy': 0.9019706922688226,
'f1': 0.9305820610687022,
'precision': 0.9596555965559656,
'recall': 0.9032183375781431
Test Set:
'accuracy': 0.9019409684035312,
'f1': 0.9303758732034697,
'precision': 0.9006660401258529,
'recall': 0.9621126145787866
Validation Set:
'accuracy': 0.9044948078526491,
'f1': 0.9321924443009364,
'precision': 0.9024426549173129,
'recall': 0.9639705531617191
```
#### Goals
My true intention was purely educational: making this version of the model available as an example for future purposes.
#### How to use
``` python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

if torch.cuda.is_available():
    device = torch.device('cuda')
else:
    device = torch.device('cpu')
print(device)

tokenizer = AutoTokenizer.from_pretrained("HeyLucasLeao/byt5-base-pt-product-reviews")
model = AutoModelForSeq2SeqLM.from_pretrained("HeyLucasLeao/byt5-base-pt-product-reviews")
model.to(device)

def classificar_review(review):
    inputs = tokenizer([review], padding='max_length', truncation=True, max_length=512, return_tensors='pt')
    input_ids = inputs.input_ids.to(device)
    attention_mask = inputs.attention_mask.to(device)
    output = model.generate(input_ids, attention_mask=attention_mask)
    pred = np.argmax(output.cpu(), axis=1)
    dici = {0: 'Review Negativo', 1: 'Review Positivo'}
    return dici[pred.item()]

review = "Gostei muito do produto, chegou rápido."  # example review; replace with your own text
classificar_review(review)
``` | {} | HeyLucasLeao/byt5-base-pt-product-reviews | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2105.13626",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
## ByT5 Small Portuguese Product Reviews
#### Model Description
This is a fine-tuned version of ByT5 Small by Google for sentiment analysis of product reviews in Portuguese.
##### Paper: https://arxiv.org/abs/2105.13626
#### Training data
It was trained on product reviews from Americanas.com. You can find the data here: https://github.com/HeyLucasLeao/finetuning-byt5-model.
#### Training Procedure
It was fine-tuned using the Trainer class available in the Hugging Face library. Accuracy, precision, recall, and F1 score were used for evaluation.
##### Learning Rate: **1e-4**
##### Epochs: **1**
##### Colab for Finetuning: https://colab.research.google.com/drive/1EChTeQkGeXi_52lClBNazHVuSNKEHN2f
##### Colab for Metrics: https://colab.research.google.com/drive/1o4tcsP3lpr1TobtE3Txhp9fllxPWXxlw#scrollTo=PXAoog5vQaTn
#### Score:
```python
Training Set:
'accuracy': 0.8974239585927603,
'f1': 0.927229848590765,
'precision': 0.9580290812115055,
'recall': 0.8983492356469835
Test Set:
'accuracy': 0.8957881282882026,
'f1': 0.9261366030421776,
'precision': 0.9559431131213848,
'recall': 0.8981326359661668
Validation Set:
'accuracy': 0.8925383190163382,
'f1': 0.9239208204149773,
'precision': 0.9525448733710351,
'recall': 0.8969668904839083
```
#### Goals
My true intention was purely educational: making this version of the model available as an example for future purposes.
#### How to use
``` python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

if torch.cuda.is_available():
    device = torch.device('cuda')
else:
    device = torch.device('cpu')
print(device)

tokenizer = AutoTokenizer.from_pretrained("HeyLucasLeao/byt5-small-pt-product-reviews")
model = AutoModelForSeq2SeqLM.from_pretrained("HeyLucasLeao/byt5-small-pt-product-reviews")
model.to(device)

def classificar_review(review):
    inputs = tokenizer([review], padding='max_length', truncation=True, max_length=512, return_tensors='pt')
    input_ids = inputs.input_ids.to(device)
    attention_mask = inputs.attention_mask.to(device)
    output = model.generate(input_ids, attention_mask=attention_mask)
    pred = np.argmax(output.cpu(), axis=1)
    dici = {0: 'Review Negativo', 1: 'Review Positivo'}
    return dici[pred.item()]

review = "Gostei muito do produto, chegou rápido."  # example review; replace with your own text
classificar_review(review)
``` | {} | HeyLucasLeao/byt5-small-pt-product-reviews | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2105.13626",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
## Emo Bot
#### Model Description
This is a fine-tuned version of GPT-Neo-125M for generating music lyrics in the emo genre.
#### Training data
It was trained on 2,381 songs by 15 bands that were important to emo culture in the early 2000s, even if not all of them played directly in the genre.
#### Training Procedure
It was fine-tuned using the Trainer class available in the Hugging Face library.
##### Learning Rate: **2e-4**
##### Epochs: **40**
##### Colab for Finetuning: https://colab.research.google.com/drive/1jwTYI1AygQf7FV9vCHTWA4Gf5i--sjsD?usp=sharing
##### Colab for Testing: https://colab.research.google.com/drive/1wSP4Wyr1-DTTNQbQps_RCO3ThhH-eeZc?usp=sharing
#### Goals
My true intention was purely educational: making this version of the model available as an example for future purposes.
#### How to use
``` python
import re
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

if torch.cuda.is_available():
    device = torch.device('cuda')
else:
    device = torch.device('cpu')
print(device)

tokenizer = AutoTokenizer.from_pretrained("HeyLucasLeao/gpt-neo-small-emo-lyrics")
model = AutoModelForCausalLM.from_pretrained("HeyLucasLeao/gpt-neo-small-emo-lyrics")
model.to(device)

generated = tokenizer('I miss you', return_tensors='pt').input_ids.to(device)

# Generating texts
sample_outputs = model.generate(generated,
                                # Use sampling instead of greedy decoding
                                do_sample=True,
                                # Keep only the 10 tokens with the highest probability
                                top_k=10,
                                # Maximum sequence length
                                max_length=200,
                                # Keep only the most probable tokens with cumulative probability of 95%
                                top_p=0.95,
                                # Changes randomness of generated sequences
                                temperature=2.,
                                # Number of sequences to generate
                                num_return_sequences=3)

# Decoding and printing sequences
for i, sample_output in enumerate(sample_outputs):
    texto = tokenizer.decode(sample_output.tolist())
    regex_padding = re.sub('<|pad|>', '', texto)
    regex_barra = re.sub('[|+]', '', regex_padding)
    espaço = re.sub('[ +]', ' ', regex_barra)
    resultado = re.sub('[\n]{2,}', '\n', espaço)
    print(">> Text {}: {}".format(i + 1, resultado + '\n'))
""">> Texto 1: I miss you
I miss you more than anything
And if you change your mind
I do it like a change of mind
I always do it like theeah
Everybody wants a surprise
Everybody needs to stay collected
I keep your locked and numbered
Use this instead: Run like the wind
Use this instead: Run like the sun
And come back down: You've been replaced
Don't want to be the same
Tomorrow
I don't even need your name
The message is on the way
make it while you're holding on
It's better than it is
Everything more security than a parade
Im getting security
angs the world like a damned soul
We're hanging on a queue
and the truth is on the way
Are you listening?
We're getting security
Send me your soldiers
We're getting blood on"""
""">> Texto 2: I miss you
And I could forget your name
All the words we'd hear
You miss me
I need you
And I need you
You were all by my side
When we'd talk to no one
And I
Just to talk to you
It's easier than it has to be
Except for you
You missed my know-all
You meant to hug me
And I
Just want to feel you touch me
We'll work up
Something wild, just from the inside
Just get closer to me
I need you
You were all by my side
When we*d talk to you
, you better admit
That I'm too broken to be small
You're part of me
And I need you
But I
Don't know how
But I know I need you
Must"""
""">> Texto 3: I miss you
And I can't lie
Inside my head
All the hours you've been through
If I could change your mind
I would give it all away
And I'd give it all away
Just to give it away
To you
Now I wish that I could change
Just to you
I miss you so much
If I could change
So much
I'm looking down
At the road
The one that's already been
Searching for a better way to go
So much I need to see it clear
topk wish me an ehive
I wish I wish I wish I knew
I can give well
In this lonely night
The lonely night
I miss you
I wish it well
If I could change
So much
I need you"""
``` | {} | HeyLucasLeao/gpt-neo-small-emo-lyrics | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | ## GPT-Neo Small Portuguese
#### Model Description
This is a fine-tuned version of GPT-Neo 125M by EleutherAI for the Portuguese language.
#### Training data
It was trained on 227,382 selected texts from a PTWiki dump. You can find all the data here: https://archive.org/details/ptwiki-dump-20210520
#### Training Procedure
Every text was passed through a GPT-2 tokenizer, with bos and eos tokens to separate them, using the maximum sequence length that GPT-Neo supports. It was fine-tuned using the default settings of the Trainer class, available in the Hugging Face library.
##### Learning Rate: **2e-4**
##### Epochs: **1**
#### Goals
My true intention was purely educational: making a Portuguese version of this model available.
#### How to use
``` python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

tokenizer = AutoTokenizer.from_pretrained("HeyLucasLeao/gpt-neo-small-portuguese")
model = AutoModelForCausalLM.from_pretrained("HeyLucasLeao/gpt-neo-small-portuguese")
model.to(device)

text = 'eu amo o brasil.'
generated = tokenizer(f'<|startoftext|> {text}',
                      return_tensors='pt').input_ids.to(device)

# Generating texts
sample_outputs = model.generate(generated,
                                # Use sampling instead of greedy decoding
                                do_sample=True,
                                # Keep only the 3 tokens with the highest probability
                                top_k=3,
                                # Maximum sequence length
                                max_length=200,
                                # Keep only the most probable tokens with cumulative probability of 95%
                                top_p=0.95,
                                # Changes randomness of generated sequences
                                temperature=1.9,
                                # Number of sequences to generate
                                num_return_sequences=3)

# Decoding and printing sequences
for i, sample_output in enumerate(sample_outputs):
    print(">> Generated text {}\n\n{}".format(i + 1, tokenizer.decode(sample_output.tolist())))
# >> Generated text
#Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
#>> Generated text 1
#<|startoftext|> eu amo o brasil. O termo foi usado por alguns autores como uma forma de designar a formação do poder político do Brasil. A partir da década de 1960, o termo passou a ser usado para designar a formação política do Brasil. A partir de meados da década de 1970 e até o inicio dos anos 2000, o termo foi aplicado à formação político-administrativo do país, sendo utilizado por alguns autores como uma expressão de "política de direita". História Antecedentes O termo "político-administrário" foi usado pela primeira vez em 1891 por um gru
#>> Generated text 2
#<|startoftext|> eu amo o brasil. É uma das muitas pessoas do mundo, ao contrário da maioria das pessoas, que são chamados de "pessoas do Brasil", que são chamados de "brincos do país" e que têm uma carreira de mais de um século. O termo "brincal de ouro" é usado em referências às pessoas que vivem no Brasil, e que são chamados "brincos do país", que são "cidade" e que vivem na cidade de Nova York e que vive em um país onde a maior parte das pessoas são chamados de "cidades". Hist
#>> Generated text 3
#<|startoftext|> eu amo o brasil. É uma expressão que se refere ao uso de um instrumento musical em particular para se referir à qualidade musical, o que é uma expressão da qualidade da qualidade musical de uma pessoa. A expressão "amor" (em inglês, amo), é a expressão que pode ser usada com o intuito empregado em qualquer situação em que a vontade de uma pessoa de se sentir amado ou amoroso é mais do que um desejo de uma vontade. Em geral, a expressão "amoro" (do inglês, amo) pode também se referir tanto a uma pessoa como um instrumento de cordas ou de uma
``` | {} | HeyLucasLeao/gpt-neo-small-portuguese | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | # Convert Fairseq Wav2Vec2 to HF
This repo has two scripts that can show how to convert a fairseq checkpoint to HF Transformers.
It's important to always check in a forward pass that the two checkpoints are the same. The procedure should be as follows:
1. Download original model
2. Create HF version of the model:
```
huggingface-cli repo create <name_of_model> --organization <org_of_model>
git clone https://huggingface.co/<org_of_model>/<name_of_model>
```
3. Convert the model
```
./run_convert.sh <name_of_model> <path/to/orig/checkpoint/> 0
```
The "0" means that checkpoint is **not** a fine-tuned one.
4. Verify that models are equal:
```
./run_forward.py <name_of_model> <path/to/orig/checkpoint/> 0
```
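Conceptually, the equivalence check boils down to running both checkpoints on the same input and comparing the outputs. The sketch below only illustrates that idea: `load_original_model` is a placeholder, not a function from this repo, and the real `run_forward.py` may differ.

```python
import torch
from transformers import Wav2Vec2Model

def load_original_model(checkpoint_path):
    # Placeholder: load the fairseq checkpoint with the original framework's own loading code.
    raise NotImplementedError

hf_model = Wav2Vec2Model.from_pretrained("<org_of_model>/<name_of_model>").eval()
orig_model = load_original_model("path/to/orig/checkpoint").eval()

waveform = torch.randn(1, 16000)  # one second of dummy 16 kHz audio
with torch.no_grad():
    hf_hidden = hf_model(waveform).last_hidden_state
    orig_hidden = orig_model(waveform)  # the exact call depends on the original model

assert torch.allclose(hf_hidden, orig_hidden, atol=1e-3), "Checkpoints do not match!"
```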
Check the scripts to better understand how they work or contact https://huggingface.co/patrickvonplaten | {} | HfSpeechUtils/convert_wav2vec2_to_hf | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | # Run any CTC model
```python
./run_ctc_model.py "yourModelId" "yourLanguageCode" "yourPhonemeLang" "NumSamplesToDecode"
```
| {} | HfSpeechUtils/run_ctc_common_voice.py | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | HfaceDevGl96/DialoGPT-small-harrypotter | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Hidde/iFlow | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | {} | HieuLV3/QA_UIT_xlm_roberta_large | null | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | HighCWu/rudalle-paddle-utils | null | [
"paddlepaddle",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | HighVoltage/imp | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8301
- Matthews Correlation: 0.5481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
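These values map directly onto `TrainingArguments`; a sketch of an equivalent configuration (not the exact script that produced this model) is:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    seed=42,
)
```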
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5252 | 1.0 | 535 | 0.5094 | 0.4268 |
| 0.3515 | 2.0 | 1070 | 0.5040 | 0.4948 |
| 0.2403 | 3.0 | 1605 | 0.5869 | 0.5449 |
| 0.1731 | 4.0 | 2140 | 0.7338 | 0.5474 |
| 0.1219 | 5.0 | 2675 | 0.8301 | 0.5481 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model_index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metric": {"name": "Matthews Correlation", "type": "matthews_correlation", "value": 0.5481326292844919}}]}]} | Hinova/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Hipanda/distilbert-base-uncased-finetuned-mnli | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Hitham/FirstModel | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1582
## Model description
More information needed
## Intended uses & limitations
More information needed
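That said, the fine-tuned checkpoint can be loaded with the question-answering pipeline for extractive QA; a minimal sketch (the question/context pair is illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Hoang/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```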
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2176 | 1.0 | 5533 | 1.1429 |
| 0.9425 | 2.0 | 11066 | 1.1196 |
| 0.7586 | 3.0 | 16599 | 1.1582 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"]} | Hoang/distilbert-base-uncased-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Hoang/my-new-shiny-tokenizer | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Hoang/vn-tokenizer | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | KOD file | {} | HoeioUser/kod | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Hokuto/testrinna | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | Testing NER | {} | Holako/NER_CAMELBERT | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Holako/NER_model_holako")
model = AutoModelForTokenClassification.from_pretrained("Holako/NER_model_holako")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "اسمي احمد"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
Language|Dataset
-|-
Arabic | [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/)
| {} | Holako/NER_model_holako | null | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | MagnusChase7/DialoGPT-medium-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | HolyFish/testing123 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Homerzz/test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Hooray/housing | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers |
# AlbertNER
This model fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/) that covered ten types of entities:
- Date (DAT)
- Event (EVE)
- Facility (FAC)
- Location (LOC)
- Money (MON)
- Organization (ORG)
- Percent (PCT)
- Person (PER)
- Product (PRO)
- Time (TIM)
## Dataset Information
| | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM |
|:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|
| Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 |
| Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 |
| Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 |
## Evaluation
The following tables summarize the scores obtained by model overall and per each class.
**Overall**
| Model | accuracy | precision | recall | f1 |
|:----------:|:--------:|:---------:|:--------:|:--------:|
| Albert | 0.993405 | 0.938907 | 0.943966 | 0.941429 |
**Per entities**
| | number | precision | recall | f1 |
|:---: |:------: |:---------: |:--------: |:--------: |
| DAT | 407 | 0.820639 | 0.820639 | 0.820639 |
| EVE | 256 | 0.936803 | 0.984375 | 0.960000 |
| FAC | 248 | 0.925373 | 1.000000 | 0.961240 |
| LOC | 2884 | 0.960818 | 0.960818 | 0.960818 |
| MON | 98 | 0.913978 | 0.867347 | 0.890052 |
| ORG | 3216 | 0.920892 | 0.937500 | 0.929122 |
| PCT | 94 | 0.946809 | 0.946809 | 0.946809 |
| PER | 2644 | 0.960000 | 0.944024 | 0.951945 |
| PRO | 318 | 0.942943 | 0.987421 | 0.964670 |
| TIM | 43 | 0.780488 | 0.744186 | 0.761905 |
## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements
```bash
pip install sentencepiece
pip install transformers
```
### How to predict using pipeline
```python
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification # for pytorch
from transformers import TFAutoModelForTokenClassification # for tensorflow
from transformers import pipeline
model_name_or_path = "HooshvareLab/albert-fa-zwnj-base-v2-ner" # Albert
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch
# model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند."
ner_results = nlp(example)
print(ner_results)
```
## Questions?
Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo. | {"language": "fa"} | HooshvareLab/albert-fa-zwnj-base-v2-ner | null | [
"transformers",
"pytorch",
"tf",
"albert",
"token-classification",
"fa",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# ALBERT-Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> میتونی بهش بگی برت_کوچولو
> Call it little_berty
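A minimal fill-mask sketch for this checkpoint (the example sentence is illustrative, and `[MASK]` is assumed to be the tokenizer's mask token):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="HooshvareLab/albert-fa-zwnj-base-v2")
for prediction in fill_mask("ما در هوشواره معتقدیم [MASK] دانش می‌تواند جامعه را تغییر دهد."):
    print(prediction["token_str"], prediction["score"])
```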
### BibTeX entry and citation info
Please cite in your publication as the following:
```bibtex
@misc{ALBERTPersian,
author = {Hooshvare Team},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
```
## Questions?
Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. | {"language": "fa", "license": "apache-2.0"} | HooshvareLab/albert-fa-zwnj-base-v2 | null | [
"transformers",
"pytorch",
"tf",
"albert",
"fill-mask",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
## ParsBERT: Transformer-based Model for Persian Language Understanding
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)
All the models (downstream tasks) are uncased and trained with whole word masking. (coming soon stay tuned)
## Persian NER [ARMAN, PEYMA, ARMAN+PEYMA]
This task aims to extract named entities in the text, such as names and label with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked with `IOB` format. In this format, tokens that are not part of an entity are tagged as `”O”` the `”B”`tag corresponds to the first word of an object, and the `”I”` tag corresponds to the rest of the terms of the same entity. Both `”B”` and `”I”` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, `ARMAN`, and `PEYMA`. In ParsBERT, we prepared ner for both datasets as well as a combination of both datasets.
### ARMAN
The ARMAN dataset holds 7,682 sentences with 250,015 tokens tagged over six different classes.
1. Organization
2. Location
3. Facility
4. Event
5. Product
6. Person
| Label | # |
|:------------:|:-----:|
| Organization | 30108 |
| Location | 12924 |
| Facility | 4458 |
| Event | 7557 |
| Product | 4389 |
| Person | 15645 |
**Download**
You can download the dataset from [here](https://github.com/HaniehP/PersianNER)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|---------|----------|------------|--------------|----------|----------------|------------|
| ARMAN | 93.10* | 89.9 | 84.03 | 86.55 | - | 77.45 |
## How to use :hugs:
| Notebook | Description | |
|:----------|:-------------|------:|
| [How to use Pipelines](https://github.com/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers | [](https://colab.research.google.com/github/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) |
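For an inline alternative to the notebook above, a minimal pipeline sketch (the example sentence is illustrative) might look like:

```python
from transformers import pipeline

ner = pipeline("ner", model="HooshvareLab/bert-base-parsbert-armanner-uncased")
example = "دبیرکل سازمان ملل امروز در تهران سخنرانی کرد."
for entity in ner(example):
    print(entity["word"], entity["entity"], round(float(entity["score"]), 3))
```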
## Cite
Please cite the following paper in your publication if you are using [ParsBERT](https://arxiv.org/abs/2005.12515) in your research:
```markdown
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Acknowledgments
We hereby, express our gratitude to the [Tensorflow Research Cloud (TFRC) program](https://tensorflow.org/tfrc) for providing us with the necessary computation resources. We also thank [Hooshvare](https://hooshvare.com) Research Group for facilitating dataset gathering and scraping online text resources.
## Contributors
- Mehrdad Farahani: [Linkedin](https://www.linkedin.com/in/m3hrdadfi/), [Twitter](https://twitter.com/m3hrdadfi), [Github](https://github.com/m3hrdadfi)
- Mohammad Gharachorloo: [Linkedin](https://www.linkedin.com/in/mohammad-gharachorloo/), [Twitter](https://twitter.com/MGharachorloo), [Github](https://github.com/baarsaam)
- Marzieh Farahani: [Linkedin](https://www.linkedin.com/in/marziehphi/), [Twitter](https://twitter.com/marziehphi), [Github](https://github.com/marziehphi)
- Mohammad Manthouri: [Linkedin](https://www.linkedin.com/in/mohammad-manthouri-aka-mansouri-07030766/), [Twitter](https://twitter.com/mmanthouri), [Github](https://github.com/mmanthouri)
- Hooshvare Team: [Official Website](https://hooshvare.com/), [Linkedin](https://www.linkedin.com/company/hooshvare), [Twitter](https://twitter.com/hooshvare), [Github](https://github.com/hooshvare), [Instagram](https://www.instagram.com/hooshvare/)
+ And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: [Linkedin](https://www.linkedin.com/in/sara-tabrizi-64548b79/), [Behance](https://www.behance.net/saratabrizi), [Instagram](https://www.instagram.com/sara_b_tabrizi/)
## Releases
### Release v0.1 (May 29, 2019)
This is the first version of our ParsBERT NER!
| {"language": "fa", "license": "apache-2.0"} | HooshvareLab/bert-base-parsbert-armanner-uncased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"fa",
"arxiv:2005.12515",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
## ParsBERT: Transformer-based Model for Persian Language Understanding
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)
All the models (downstream tasks) are uncased and trained with whole word masking. (coming soon stay tuned)
## Persian NER [ARMAN, PEYMA, ARMAN+PEYMA]
This task aims to extract named entities in the text, such as names and label with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked with `IOB` format. In this format, tokens that are not part of an entity are tagged as `”O”` the `”B”`tag corresponds to the first word of an object, and the `”I”` tag corresponds to the rest of the terms of the same entity. Both `”B”` and `”I”` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, `ARMAN`, and `PEYMA`. In ParsBERT, we prepared ner for both datasets as well as a combination of both datasets.
### PEYMA
PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.
1. Organization
2. Money
3. Location
4. Date
5. Time
6. Person
7. Percent
| Label | # |
|:------------:|:-----:|
| Organization | 16964 |
| Money | 2037 |
| Location | 8782 |
| Date | 4259 |
| Time | 732 |
| Person | 7675 |
| Percent | 699 |
**Download**
You can download the dataset from [here](http://nsurl.org/tasks/task-7-named-entity-recognition-ner-for-farsi/)
---
### ARMAN
The ARMAN dataset holds 7,682 sentences with 250,015 tokens tagged over six different classes.
1. Organization
2. Location
3. Facility
4. Event
5. Product
6. Person
| Label | # |
|:------------:|:-----:|
| Organization | 30108 |
| Location | 12924 |
| Facility | 4458 |
| Event | 7557 |
| Product | 4389 |
| Person | 15645 |
**Download**
You can download the dataset from [here](https://github.com/HaniehP/PersianNER)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|:---------------:|:--------:|:----------:|:--------------:|:----------:|:----------------:|:------------:|
| ARMAN + PEYMA | 95.13* | - | - | - | - | - |
| PEYMA | 98.79* | - | 90.59 | - | 84.00 | - |
| ARMAN | 93.10* | 89.9 | 84.03 | 86.55 | - | 77.45 |
## How to use :hugs:
| Notebook | Description | |
|:----------|:-------------|------:|
| [How to use Pipelines](https://github.com/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers | [](https://colab.research.google.com/github/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) |
## Cite
Please cite the following paper in your publication if you are using [ParsBERT](https://arxiv.org/abs/2005.12515) in your research:
```markdown
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Acknowledgments
We hereby express our gratitude to the [Tensorflow Research Cloud (TFRC) program](https://tensorflow.org/tfrc) for providing us with the necessary computational resources. We also thank the [Hooshvare](https://hooshvare.com) Research Group for facilitating dataset gathering and scraping online text resources.
## Contributors
- Mehrdad Farahani: [Linkedin](https://www.linkedin.com/in/m3hrdadfi/), [Twitter](https://twitter.com/m3hrdadfi), [Github](https://github.com/m3hrdadfi)
- Mohammad Gharachorloo: [Linkedin](https://www.linkedin.com/in/mohammad-gharachorloo/), [Twitter](https://twitter.com/MGharachorloo), [Github](https://github.com/baarsaam)
- Marzieh Farahani: [Linkedin](https://www.linkedin.com/in/marziehphi/), [Twitter](https://twitter.com/marziehphi), [Github](https://github.com/marziehphi)
- Mohammad Manthouri: [Linkedin](https://www.linkedin.com/in/mohammad-manthouri-aka-mansouri-07030766/), [Twitter](https://twitter.com/mmanthouri), [Github](https://github.com/mmanthouri)
- Hooshvare Team: [Official Website](https://hooshvare.com/), [Linkedin](https://www.linkedin.com/company/hooshvare), [Twitter](https://twitter.com/hooshvare), [Github](https://github.com/hooshvare), [Instagram](https://www.instagram.com/hooshvare/)
+ And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: [Linkedin](https://www.linkedin.com/in/sara-tabrizi-64548b79/), [Behance](https://www.behance.net/saratabrizi), [Instagram](https://www.instagram.com/sara_b_tabrizi/)
## Releases
### Release v0.1 (May 29, 2019)
This is the first version of our ParsBERT NER!
| {"language": "fa", "license": "apache-2.0"} | HooshvareLab/bert-base-parsbert-ner-uncased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"fa",
"arxiv:2005.12515",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
## ParsBERT: Transformer-based Model for Persian Language Understanding
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)
All the models (downstream tasks) are uncased and trained with whole word masking. (Coming soon, stay tuned.)
## Persian NER [ARMAN, PEYMA, ARMAN+PEYMA]
This task aims to extract named entities in the text, such as names, and label them with appropriate `NER` classes, such as locations, organizations, etc. The datasets used for this task contain sentences marked in the `IOB` format. In this format, tokens that are not part of an entity are tagged as `”O”`, the `”B”` tag corresponds to the first word of an entity, and the `”I”` tag corresponds to the remaining words of the same entity. Both `”B”` and `”I”` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens when fed a raw text. There are two primary datasets used in Persian NER, `ARMAN` and `PEYMA`. In ParsBERT, we prepared NER data for both datasets as well as a combination of the two.
### PEYMA
The PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens, of which 41,148 are tagged with seven different classes.
1. Organization
2. Money
3. Location
4. Date
5. Time
6. Person
7. Percent
| Label | # |
|:------------:|:-----:|
| Organization | 16964 |
| Money | 2037 |
| Location | 8782 |
| Date | 4259 |
| Time | 732 |
| Person | 7675 |
| Percent | 699 |
**Download**
You can download the dataset from [here](http://nsurl.org/tasks/task-7-named-entity-recognition-ner-for-farsi/)
## Results
The following table summarizes the F1 scores obtained by ParsBERT compared to other models and architectures.
| Dataset | ParsBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|---------|----------|------------|--------------|----------|----------------|------------|
| PEYMA | 98.79* | - | 90.59 | - | 84.00 | - |
## How to use :hugs:
| Notebook | Description | |
|:----------|:-------------|------:|
| [How to use Pipelines](https://github.com/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers | [](https://colab.research.google.com/github/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) |
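As with the combined model above, a minimal sketch (assuming a recent `transformers` release and the model published under the `HooshvareLab/bert-base-parsbert-peymaner-uncased` identifier) looks like the following; the linked notebook covers the full workflow:
```python
from transformers import pipeline

# Load the PEYMA-only ParsBERT NER model; "simple" aggregation merges
# B-/I- word pieces of one entity into a single labelled span.
ner = pipeline(
    "ner",
    model="HooshvareLab/bert-base-parsbert-peymaner-uncased",
    aggregation_strategy="simple",
)

# The sentence below is only an illustrative example.
print(ner("سازمان ملل متحد در سال ۱۹۴۵ در سان فرانسیسکو تأسیس شد."))
```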
## Cite
Please cite the following paper in your publication if you are using [ParsBERT](https://arxiv.org/abs/2005.12515) in your research:
```bibtex
@article{ParsBERT,
  title={ParsBERT: Transformer-based Model for Persian Language Understanding},
  author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.12515}
}
```
## Acknowledgments
We hereby express our gratitude to the [Tensorflow Research Cloud (TFRC) program](https://tensorflow.org/tfrc) for providing us with the necessary computational resources. We also thank the [Hooshvare](https://hooshvare.com) Research Group for facilitating dataset gathering and scraping online text resources.
## Contributors
- Mehrdad Farahani: [Linkedin](https://www.linkedin.com/in/m3hrdadfi/), [Twitter](https://twitter.com/m3hrdadfi), [Github](https://github.com/m3hrdadfi)
- Mohammad Gharachorloo: [Linkedin](https://www.linkedin.com/in/mohammad-gharachorloo/), [Twitter](https://twitter.com/MGharachorloo), [Github](https://github.com/baarsaam)
- Marzieh Farahani: [Linkedin](https://www.linkedin.com/in/marziehphi/), [Twitter](https://twitter.com/marziehphi), [Github](https://github.com/marziehphi)
- Mohammad Manthouri: [Linkedin](https://www.linkedin.com/in/mohammad-manthouri-aka-mansouri-07030766/), [Twitter](https://twitter.com/mmanthouri), [Github](https://github.com/mmanthouri)
- Hooshvare Team: [Official Website](https://hooshvare.com/), [Linkedin](https://www.linkedin.com/company/hooshvare), [Twitter](https://twitter.com/hooshvare), [Github](https://github.com/hooshvare), [Instagram](https://www.instagram.com/hooshvare/)
+ And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: [Linkedin](https://www.linkedin.com/in/sara-tabrizi-64548b79/), [Behance](https://www.behance.net/saratabrizi), [Instagram](https://www.instagram.com/sara_b_tabrizi/)
## Releases
### Release v0.1 (May 29, 2019)
This is the first version of our ParsBERT NER!
| {"language": "fa", "license": "apache-2.0"} | HooshvareLab/bert-base-parsbert-peymaner-uncased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"fa",
"arxiv:2005.12515",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |