---
language: art
tags:
- translation

license: apache-2.0
---

### art-eng

* source group: Artificial languages
* target group: English
* OPUS readme: [art-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/art-eng/README.md)

* model: transformer
* source language(s): afh_Latn avk_Latn dws_Latn epo ido ido_Latn ile_Latn ina_Latn jbo jbo_Cyrl jbo_Latn ldn_Latn lfn_Cyrl lfn_Latn nov_Latn qya qya_Latn sjn_Latn tlh_Latn tzl tzl_Latn vol_Latn
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.eval.txt)
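The ported checkpoint can be used with the Marian classes in 🤗 Transformers. The sketch below is a minimal example and not part of the original card; the Hub model ID `Helsinki-NLP/opus-mt-art-en` is inferred from `short_pair: art-en` in the system info, and the Esperanto input sentence is only an illustration.

```python
# Minimal usage sketch (assumptions: Hub id "Helsinki-NLP/opus-mt-art-en",
# Esperanto example input; any listed source language should work the same way).
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-art-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["La hundo dormas sur la sofo."]  # Esperanto: "The dog sleeps on the sofa."

batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

Because the model was trained with SentencePiece (spm32k), the tokenizer loaded from the checkpoint already applies the matching subword segmentation, so plain text input is sufficient.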
## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afh-eng.afh.eng | 1.2 | 0.099 |
| Tatoeba-test.avk-eng.avk.eng | 0.4 | 0.105 |
| Tatoeba-test.dws-eng.dws.eng | 1.6 | 0.076 |
| Tatoeba-test.epo-eng.epo.eng | 34.6 | 0.530 |
| Tatoeba-test.ido-eng.ido.eng | 12.7 | 0.310 |
| Tatoeba-test.ile-eng.ile.eng | 4.6 | 0.218 |
| Tatoeba-test.ina-eng.ina.eng | 5.8 | 0.254 |
| Tatoeba-test.jbo-eng.jbo.eng | 0.2 | 0.115 |
| Tatoeba-test.ldn-eng.ldn.eng | 0.7 | 0.083 |
| Tatoeba-test.lfn-eng.lfn.eng | 1.8 | 0.172 |
| Tatoeba-test.multi.eng | 11.6 | 0.287 |
| Tatoeba-test.nov-eng.nov.eng | 5.1 | 0.215 |
| Tatoeba-test.qya-eng.qya.eng | 0.7 | 0.113 |
| Tatoeba-test.sjn-eng.sjn.eng | 0.9 | 0.090 |
| Tatoeba-test.tlh-eng.tlh.eng | 0.2 | 0.124 |
| Tatoeba-test.tzl-eng.tzl.eng | 1.4 | 0.109 |
| Tatoeba-test.vol-eng.vol.eng | 0.5 | 0.115 |

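The card does not spell out the scoring pipeline, but BLEU and chr-F figures like those above are typically computed with sacrebleu over the linked test set translations. The snippet below is a self-contained illustration with placeholder sentences, not data from the actual test set.

```python
# Illustrative only: placeholder hypothesis/reference pairs, not the real test data.
from sacrebleu.metrics import BLEU, CHRF

hypotheses = ["The dog sleeps on the sofa."]           # system outputs
references = [["The dog is sleeping on the couch."]]   # one list per reference set

print(BLEU().corpus_score(hypotheses, references))  # corpus BLEU
print(CHRF().corpus_score(hypotheses, references))  # corpus chrF
```

Depending on the tool and version, chr-F may be reported on a 0-100 or a 0-1 scale; the table above uses the 0-1 convention.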
### System Info:
- hf_name: art-eng
- source_languages: art
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/art-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.test.txt
- src_alpha3: art
- tgt_alpha3: eng
- short_pair: art-en
- chrF2_score: 0.287
- bleu: 11.6
- brevity_penalty: 1.0
- ref_len: 73037.0
- src_name: Artificial languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: art
- tgt_alpha2: en
- prefer_old: False
- long_pair: art-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 46e9f53347bbe9e989f0335f98465f30886d8173
- port_machine: brutasse
- port_time: 2020-08-18-01:48