entai2965 committed on
Commit
af447d0
1 Parent(s): 9ceb990

Upload 9 files

README.md CHANGED
@@ -1,3 +1,361 @@
1
- ---
2
- license: mit
3
- ---
1
+ ---
2
+ language:
3
+ - multilingual
4
+ - af
5
+ - am
6
+ - ar
7
+ - ast
8
+ - az
9
+ - ba
10
+ - be
11
+ - bg
12
+ - bn
13
+ - br
14
+ - bs
15
+ - ca
16
+ - ceb
17
+ - cs
18
+ - cy
19
+ - da
20
+ - de
21
+ - el
22
+ - en
23
+ - es
24
+ - et
25
+ - fa
26
+ - ff
27
+ - fi
28
+ - fr
29
+ - fy
30
+ - ga
31
+ - gd
32
+ - gl
33
+ - gu
34
+ - ha
35
+ - he
36
+ - hi
37
+ - hr
38
+ - ht
39
+ - hu
40
+ - hy
41
+ - id
42
+ - ig
43
+ - ilo
44
+ - is
45
+ - it
46
+ - ja
47
+ - jv
48
+ - ka
49
+ - kk
50
+ - km
51
+ - kn
52
+ - ko
53
+ - lb
54
+ - lg
55
+ - ln
56
+ - lo
57
+ - lt
58
+ - lv
59
+ - mg
60
+ - mk
61
+ - ml
62
+ - mn
63
+ - mr
64
+ - ms
65
+ - my
66
+ - ne
67
+ - nl
68
+ - 'no'
69
+ - ns
70
+ - oc
71
+ - or
72
+ - pa
73
+ - pl
74
+ - ps
75
+ - pt
76
+ - ro
77
+ - ru
78
+ - sd
79
+ - si
80
+ - sk
81
+ - sl
82
+ - so
83
+ - sq
84
+ - sr
85
+ - ss
86
+ - su
87
+ - sv
88
+ - sw
89
+ - ta
90
+ - th
91
+ - tl
92
+ - tn
93
+ - tr
94
+ - uk
95
+ - ur
96
+ - uz
97
+ - vi
98
+ - wo
99
+ - xh
100
+ - yi
101
+ - yo
102
+ - zh
103
+ - zu
104
+ license: mit
105
+ tags:
106
+ - small100
107
+ - translation
108
+ - flores101
109
+ - gsarti/flores_101
110
+ - tico19
111
+ - gmnlp/tico19
112
+ - tatoeba
113
+ datasets:
114
+ - tico19
115
+ - flores101
116
+ - tatoeba
117
+ ---
118
+
119
+ From: https://huggingface.co/alirezamsh/small100
120
+
121
+ # SMALL-100 Model
122
+
123
+ SMaLL-100 is a compact and fast massively multilingual machine translation model covering more than 10K language pairs that achieves results competitive with M2M-100 while being much smaller and faster. It was introduced in [this paper](https://arxiv.org/abs/2210.11621) (accepted at EMNLP 2022) and initially released in [this repository](https://github.com/alirezamshi/small100).
124
+
125
+ The model architecture and config are the same as the [M2M-100](https://huggingface.co/facebook/m2m100_418M/tree/main) implementation, but the tokenizer is modified to adjust the language codes. For now, you should therefore load the tokenizer locally from the [tokenization_small100.py](https://huggingface.co/alirezamsh/small100/blob/main/tokenization_small100.py) file.
126
+
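+ A minimal sketch of fetching that file with `huggingface_hub` so it can be imported next to your own script (assumes a recent `huggingface_hub`; the `local_dir` choice is only an example):
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ # download tokenization_small100.py into the current directory so `from tokenization_small100 import ...` works
+ hf_hub_download(repo_id="alirezamsh/small100", filename="tokenization_small100.py", local_dir=".")
+ ```
+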
127
+ **Demo**: https://huggingface.co/spaces/alirezamsh/small100
128
+
129
+ **Note**: SMALL100Tokenizer requires sentencepiece, so make sure to install it first:
130
+
131
+ ```pip install sentencepiece```
132
+
133
+ - **Supervised Training**
134
+
135
+ SMaLL-100 is a sequence-to-sequence model for the translation task. The input to the model is ```source: [tgt_lang_code] + src_tokens + [EOS]``` and ```target: tgt_tokens + [EOS]```.
136
+
137
+ An example of supervised training is shown below:
138
+
139
+ ```python
140
+ from transformers import M2M100ForConditionalGeneration
141
+ from tokenization_small100 import SMALL100Tokenizer
142
+
143
+ model = M2M100ForConditionalGeneration.from_pretrained("alirezamsh/small100")
144
+ tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100", tgt_lang="fr")
145
+
146
+ src_text = "Life is like a box of chocolates."
147
+ tgt_text = "La vie est comme une boîte de chocolat."
148
+
149
+ model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
150
+
151
+ loss = model(**model_inputs).loss # forward pass
152
+ ```
153
+
154
+ Training data can be provided upon request.
155
+
156
+ - **Generation**
157
+
158
+ A beam size of 5 and a maximum target length of 256 are used for generation, as sketched below.
159
+
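+ As a sketch, these settings can be passed explicitly to `generate` in `transformers` (the target language, sample text, and parameter placement below are illustrative, not prescribed by this repository):
+
+ ```python
+ from transformers import M2M100ForConditionalGeneration
+ from tokenization_small100 import SMALL100Tokenizer
+
+ model = M2M100ForConditionalGeneration.from_pretrained("alirezamsh/small100")
+ tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100")
+ tokenizer.tgt_lang = "fr"  # target language code
+
+ src_text = "Life is like a box of chocolates."
+ model_inputs = tokenizer(src_text, return_tensors="pt")
+
+ # beam search with the settings described above
+ generated_tokens = model.generate(**model_inputs, num_beams=5, max_length=256)
+ print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
+ ```
+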
160
+ - **Evaluation**
161
+
162
+ Please refer to the [original repository](https://github.com/alirezamshi/small100) for spBLEU computation; a rough local approximation is sketched below.
163
+
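+ A hedged sketch of such an approximation with `sacrebleu`'s SentencePiece tokenizer (it assumes a sacrebleu version that ships the `spm` tokenizer and may not match the original repository's exact setup; the sentences are illustrative):
+
+ ```python
+ from sacrebleu.metrics import BLEU
+
+ hypotheses = ["La vie est comme une boîte de chocolat."]      # system outputs, one per sentence
+ references = [["La vie est comme une boîte de chocolats."]]   # one inner list per reference set
+
+ bleu = BLEU(tokenize="spm")  # SentencePiece-based tokenization, as used for spBLEU
+ print(bleu.corpus_score(hypotheses, references))
+ ```
+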
164
+ - **Languages Covered**
165
+
166
+ Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)
167
+
168
+ # Citation
169
+
170
+ If you use this model for your research, please cite the following work:
171
+ ```bibtex
172
+ @inproceedings{mohammadshahi-etal-2022-small,
173
+ title = "{SM}a{LL}-100: Introducing Shallow Multilingual Machine Translation Model for Low-Resource Languages",
174
+ author = "Mohammadshahi, Alireza and
175
+ Nikoulina, Vassilina and
176
+ Berard, Alexandre and
177
+ Brun, Caroline and
178
+ Henderson, James and
179
+ Besacier, Laurent",
180
+ booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
181
+ month = dec,
182
+ year = "2022",
183
+ address = "Abu Dhabi, United Arab Emirates",
184
+ publisher = "Association for Computational Linguistics",
185
+ url = "https://aclanthology.org/2022.emnlp-main.571",
186
+ pages = "8348--8359",
187
+ abstract = "In recent years, multilingual machine translation models have achieved promising performance on low-resource language pairs by sharing information between similar languages, thus enabling zero-shot translation. To overcome the {``}curse of multilinguality{''}, these models often opt for scaling up the number of parameters, which makes their use in resource-constrained environments challenging. We introduce SMaLL-100, a distilled version of the M2M-100(12B) model, a massively multilingual machine translation model covering 100 languages. We train SMaLL-100 with uniform sampling across all language pairs and therefore focus on preserving the performance of low-resource languages. We evaluate SMaLL-100 on different low-resource benchmarks: FLORES-101, Tatoeba, and TICO-19 and demonstrate that it outperforms previous massively multilingual models of comparable sizes (200-600M) while improving inference latency and memory usage. Additionally, our model achieves comparable results to M2M-100 (1.2B), while being 3.6x smaller and 4.3x faster at inference.",
188
+ }
189
+
190
+ @inproceedings{mohammadshahi-etal-2022-compressed,
191
+ title = "What Do Compressed Multilingual Machine Translation Models Forget?",
192
+ author = "Mohammadshahi, Alireza and
193
+ Nikoulina, Vassilina and
194
+ Berard, Alexandre and
195
+ Brun, Caroline and
196
+ Henderson, James and
197
+ Besacier, Laurent",
198
+ booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
199
+ month = dec,
200
+ year = "2022",
201
+ address = "Abu Dhabi, United Arab Emirates",
202
+ publisher = "Association for Computational Linguistics",
203
+ url = "https://aclanthology.org/2022.findings-emnlp.317",
204
+ pages = "4308--4329",
205
+ abstract = "Recently, very large pre-trained models achieve state-of-the-art results in various natural language processing (NLP) tasks, but their size makes it more challenging to apply them in resource-constrained environments. Compression techniques allow to drastically reduce the size of the models and therefore their inference time with negligible impact on top-tier metrics. However, the general performance averaged across multiple tasks and/or languages may hide a drastic performance drop on under-represented features, which could result in the amplification of biases encoded by the models. In this work, we assess the impact of compression methods on Multilingual Neural Machine Translation models (MNMT) for various language groups, gender, and semantic biases by extensive analysis of compressed models on different machine translation benchmarks, i.e. FLORES-101, MT-Gender, and DiBiMT. We show that the performance of under-represented languages drops significantly, while the average BLEU metric only slightly decreases. Interestingly, the removal of noisy memorization with compression leads to a significant improvement for some medium-resource languages. Finally, we demonstrate that compression amplifies intrinsic gender and semantic biases, even in high-resource languages.",
206
+ }
207
+
208
+ ```
209
+
210
+ ## How to download this model using Python
211
+
212
+ - Install Python https://www.python.org/downloads/
213
+ - Open a command prompt (cmd)
214
+ - python --version
215
+ - python -m pip install huggingface_hub
216
+ - python
217
+
218
+ ```python
219
+ import huggingface_hub
220
+ huggingface_hub.snapshot_download('entai2965/small100-ctranslate2',local_dir='small100-ctranslate2')
221
+ ```
222
+
223
+ ## How to run this model
224
+
225
+ - https://opennmt.net/CTranslate2/guides/transformers.html#m2m-100
226
+ - https://huggingface.co/alirezamsh/small100
227
+ - Open a command prompt (cmd)
228
+ - python -m pip install ctranslate2 transformers
229
+ - python
230
+
231
+ ```python
232
+ import sys
233
+ import ctranslate2
234
+
235
+ #model_path=r'Downloads\models\small100-ctranslate2'
236
+ model_path='Downloads/models/small100-ctranslate2'
237
+
238
+ sys.path.insert(1,model_path)
239
+ from tokenization_small100 import SMALL100Tokenizer
240
+
241
+ string1='जीवन एक चॉकलेट बॉक्स की तरह है।'
242
+
243
+ translator=ctranslate2.Translator(model_path,device='cpu')
244
+ tokenizer=SMALL100Tokenizer.from_pretrained(model_path, clean_up_tokenization_spaces=True)
245
+
246
+ #this tokenizer only uses tgt_lang; src_lang is kept here for readability but is effectively ignored
+ tokenizer.src_lang='hi'
247
+ tokenizer.tgt_lang='es'
248
+ target_language_token=[tokenizer.lang_code_to_token['es']]
249
+
250
+ encoded_string=tokenizer.convert_ids_to_tokens(tokenizer.encode(string1))
251
+
252
+ output=translator.translate_batch([encoded_string], target_prefix=[target_language_token])
253
+
254
+ #drop the leading target-language token added via target_prefix, then convert the tokens back to text
+ output=tokenizer.decode(tokenizer.convert_tokens_to_ids(output[0].hypotheses[0][1:]))
255
+
256
+ print(output)
257
+ ```
258
+
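+ If a CUDA-capable GPU and a CUDA-enabled build of CTranslate2 are available, the translator can be placed on the GPU instead (a sketch; the model path mirrors the example above, and device='cpu' always works as a fallback):
+
+ ```python
+ import ctranslate2
+
+ model_path='Downloads/models/small100-ctranslate2'
+
+ # hypothetical GPU placement; requires a CUDA-enabled CTranslate2 build and a compatible GPU
+ translator=ctranslate2.Translator(model_path, device='cuda', device_index=0)
+ ```
+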
259
+ ## How to run this model (batch syntax)
260
+
261
+ ```python
262
+ import sys
263
+ import os
264
+ import ctranslate2
265
+
266
+ #set defaults
267
+ model_name='alirezamsh/small100'
268
+ home_path=os.path.expanduser('~')
269
+ model_path=home_path+'/Downloads/models/small100-ctranslate2'
270
+
271
+ source_language_code='hi'
272
+ #target_language_code='ar'
273
+ #target_language_code='fr'
274
+ #target_language_code='en'
275
+ target_language_code='es'
276
+
277
+ device='cpu'
278
+ #device='cuda'
279
+
280
+ #import tokenizer.py library
281
+ #https://stackoverflow.com/questions/16114391/adding-directory-to-sys-path-pythonpath
282
+ sys.path.insert(1,model_path)
283
+ from tokenization_small100 import SMALL100Tokenizer
284
+
285
+ #load data, languages list -> https://huggingface.co/alirezamsh/small100 <-
286
+ string1='जीवन एक चॉकलेट बॉक्स की तरह है।'
287
+ string2='生活就像一盒巧克力。'
288
+ string3="You never know what you are going to get."
289
+ raw_list=[string1,string2,string3]
290
+
291
+ #load models
292
+ translator=ctranslate2.Translator(model_path,device=device)
293
+ tokenizer=SMALL100Tokenizer.from_pretrained(model_path, clean_up_tokenization_spaces=True)
294
+
295
+ #configure languages
296
+ tokenizer.src_lang=source_language_code #this tokenizer seems to completely ignore this setting
297
+ tokenizer.tgt_lang=target_language_code
298
+ target_language_token=[tokenizer.lang_code_to_token[target_language_code]]
299
+
300
+ #encode
301
+ encoded_list=[]
302
+ for text in raw_list:
303
+ encoded_list.append(tokenizer.convert_ids_to_tokens(tokenizer.encode(text)))
304
+
305
+ # translate
306
+ translated_list=translator.translate_batch(encoded_list,target_prefix=[target_language_token]*len(raw_list))
307
+
308
+ #decode
309
+ for counter,token in enumerate(translated_list):
310
+ translated_list[counter]=tokenizer.decode(tokenizer.convert_tokens_to_ids(token.hypotheses[0][1:]))
311
+
312
+ #output
313
+ for text in translated_list:
314
+ print(text)
315
+ ```
316
+
317
+ [Functional programming](https://docs.python.org/3/howto/functional.html) version
318
+
319
+ ```python
320
+ import sys
321
+ import os
322
+ import ctranslate2
323
+
324
+ #set defaults
325
+ model_name='alirezamsh/small100'
326
+ home_path=os.path.expanduser('~')
327
+ model_path=home_path+'/Downloads/models/small100-ctranslate2'
328
+
329
+ source_language_code='hi'
330
+ #target_language_code='ar'
331
+ #target_language_code='fr'
332
+ #target_language_code='en'
333
+ target_language_code='es'
334
+
335
+ device='cpu'
336
+ #device='cuda'
337
+
338
+ #import tokenizer.py library
339
+ #https://stackoverflow.com/questions/16114391/adding-directory-to-sys-path-pythonpath
340
+ sys.path.insert(1,model_path)
341
+ from tokenization_small100 import SMALL100Tokenizer
342
+
343
+ #load data, languages list -> https://huggingface.co/alirezamsh/small100 <-
344
+ string1='जीवन एक चॉकलेट बॉक्स की तरह है।'
345
+ string2='生活就像一盒巧克力。'
346
+ string3="You never know what you are going to get."
347
+ raw_list=[string1,string2,string3]
348
+
349
+ #load models
350
+ translator=ctranslate2.Translator(model_path,device=device)
351
+ tokenizer=SMALL100Tokenizer.from_pretrained(model_path, clean_up_tokenization_spaces=True)
352
+ tokenizer.tgt_lang=target_language_code
353
+
354
+ #invoke witchcraft: encode, translate, and decode in a single expression
355
+ translated_list=[tokenizer.decode(tokenizer.convert_tokens_to_ids(token.hypotheses[0][1:])) for token in translator.translate_batch([tokenizer.convert_ids_to_tokens(tokenizer.encode(text)) for text in raw_list],target_prefix=[[tokenizer.lang_code_to_token[target_language_code]]]*len(raw_list))]
356
+
357
+ #output
358
+ for text in translated_list:
359
+ print(text)
360
+ ```
361
+
config.json ADDED
@@ -0,0 +1,9 @@
1
+ {
2
+ "add_source_bos": false,
3
+ "add_source_eos": false,
4
+ "bos_token": "<s>",
5
+ "decoder_start_token": "</s>",
6
+ "eos_token": "</s>",
7
+ "layer_norm_epsilon": null,
8
+ "unk_token": "<unk>"
9
+ }
model.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4fae54c20aa744b25ecee3ec84b0c2ffa5338179d4cb2dd4e03783f5cc7740d5
3
+ size 1335148325
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d8f7c76ed2a5e0822be39f0a4f95a55eb19c78f4593ce609e2edbc2aea4d380a
3
+ size 2423393
shared_vocabulary.json ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,109 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "__af__",
4
+ "__am__",
5
+ "__ar__",
6
+ "__ast__",
7
+ "__az__",
8
+ "__ba__",
9
+ "__be__",
10
+ "__bg__",
11
+ "__bn__",
12
+ "__br__",
13
+ "__bs__",
14
+ "__ca__",
15
+ "__ceb__",
16
+ "__cs__",
17
+ "__cy__",
18
+ "__da__",
19
+ "__de__",
20
+ "__el__",
21
+ "__en__",
22
+ "__es__",
23
+ "__et__",
24
+ "__fa__",
25
+ "__ff__",
26
+ "__fi__",
27
+ "__fr__",
28
+ "__fy__",
29
+ "__ga__",
30
+ "__gd__",
31
+ "__gl__",
32
+ "__gu__",
33
+ "__ha__",
34
+ "__he__",
35
+ "__hi__",
36
+ "__hr__",
37
+ "__ht__",
38
+ "__hu__",
39
+ "__hy__",
40
+ "__id__",
41
+ "__ig__",
42
+ "__ilo__",
43
+ "__is__",
44
+ "__it__",
45
+ "__ja__",
46
+ "__jv__",
47
+ "__ka__",
48
+ "__kk__",
49
+ "__km__",
50
+ "__kn__",
51
+ "__ko__",
52
+ "__lb__",
53
+ "__lg__",
54
+ "__ln__",
55
+ "__lo__",
56
+ "__lt__",
57
+ "__lv__",
58
+ "__mg__",
59
+ "__mk__",
60
+ "__ml__",
61
+ "__mn__",
62
+ "__mr__",
63
+ "__ms__",
64
+ "__my__",
65
+ "__ne__",
66
+ "__nl__",
67
+ "__no__",
68
+ "__ns__",
69
+ "__oc__",
70
+ "__or__",
71
+ "__pa__",
72
+ "__pl__",
73
+ "__ps__",
74
+ "__pt__",
75
+ "__ro__",
76
+ "__ru__",
77
+ "__sd__",
78
+ "__si__",
79
+ "__sk__",
80
+ "__sl__",
81
+ "__so__",
82
+ "__sq__",
83
+ "__sr__",
84
+ "__ss__",
85
+ "__su__",
86
+ "__sv__",
87
+ "__sw__",
88
+ "__ta__",
89
+ "__th__",
90
+ "__tl__",
91
+ "__tn__",
92
+ "__tr__",
93
+ "__uk__",
94
+ "__ur__",
95
+ "__uz__",
96
+ "__vi__",
97
+ "__wo__",
98
+ "__xh__",
99
+ "__yi__",
100
+ "__yo__",
101
+ "__zh__",
102
+ "__zu__"
103
+ ],
104
+ "bos_token": "<s>",
105
+ "eos_token": "</s>",
106
+ "pad_token": "<pad>",
107
+ "sep_token": "</s>",
108
+ "unk_token": "<unk>"
109
+ }
tokenization_small100.py ADDED
@@ -0,0 +1,365 @@
1
+ # Copyright (c) 2022 Idiap Research Institute, http://www.idiap.ch/
2
+ # Written by Alireza Mohammadshahi <[email protected]>
3
+ # This is a modified version of https://github.com/huggingface/transformers/blob/main/src/transformers/models/m2m_100/tokenization_m2m_100.py
4
+ # which is owned by the Fairseq Authors and The HuggingFace Inc. team.
5
+ #
6
+ #
7
+ # Licensed under the Apache License, Version 2.0 (the "License");
8
+ # you may not use this file except in compliance with the License.
9
+ # You may obtain a copy of the License at
10
+ #
11
+ # http://www.apache.org/licenses/LICENSE-2.0
12
+ #
13
+ # Unless required by applicable law or agreed to in writing, software
14
+ # distributed under the License is distributed on an "AS IS" BASIS,
15
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
16
+ # See the License for the specific language governing permissions and
17
+ # limitations under the License.
18
+ """Tokenization classes for SMALL100."""
19
+ import json
20
+ import os
21
+ from pathlib import Path
22
+ from shutil import copyfile
23
+ from typing import Any, Dict, List, Optional, Tuple, Union
24
+
25
+ import sentencepiece
26
+
27
+ from transformers.tokenization_utils import BatchEncoding, PreTrainedTokenizer
28
+ from transformers.utils import logging
29
+
30
+
31
+ logger = logging.get_logger(__name__)
32
+
33
+ SPIECE_UNDERLINE = "▁"
34
+
35
+ VOCAB_FILES_NAMES = {
36
+ "vocab_file": "vocab.json",
37
+ "spm_file": "sentencepiece.bpe.model",
38
+ "tokenizer_config_file": "tokenizer_config.json",
39
+ }
40
+
41
+ PRETRAINED_VOCAB_FILES_MAP = {
42
+ "vocab_file": {
43
+ "alirezamsh/small100": "https://huggingface.co/alirezamsh/small100/resolve/main/vocab.json",
44
+ },
45
+ "spm_file": {
46
+ "alirezamsh/small100": "https://huggingface.co/alirezamsh/small100/resolve/main/sentencepiece.bpe.model",
47
+ },
48
+ "tokenizer_config_file": {
49
+ "alirezamsh/small100": "https://huggingface.co/alirezamsh/small100/resolve/main/tokenizer_config.json",
50
+ },
51
+ }
52
+
53
+ PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
54
+ "alirezamsh/small100": 1024,
55
+ }
56
+
57
+ # fmt: off
58
+ FAIRSEQ_LANGUAGE_CODES = {
59
+ "m2m100": ["af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu"]
60
+ }
61
+ # fmt: on
62
+
63
+
64
+ class SMALL100Tokenizer(PreTrainedTokenizer):
65
+ """
66
+ Construct a SMALL100 tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).
67
+ This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
68
+ this superclass for more information regarding those methods.
69
+ Args:
70
+ vocab_file (`str`):
71
+ Path to the vocabulary file.
72
+ spm_file (`str`):
73
+ Path to [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .spm extension) that
74
+ contains the vocabulary.
75
+ tgt_lang (`str`, *optional*):
76
+ A string representing the target language.
77
+ eos_token (`str`, *optional*, defaults to `"</s>"`):
78
+ The end of sequence token.
79
+ sep_token (`str`, *optional*, defaults to `"</s>"`):
80
+ The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
81
+ sequence classification or for a text and a question for question answering. It is also used as the last
82
+ token of a sequence built with special tokens.
83
+ unk_token (`str`, *optional*, defaults to `"<unk>"`):
84
+ The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
85
+ token instead.
86
+ pad_token (`str`, *optional*, defaults to `"<pad>"`):
87
+ The token used for padding, for example when batching sequences of different lengths.
88
+ language_codes (`str`, *optional*):
89
+ What language codes to use. Should be `"m2m100"`.
90
+ sp_model_kwargs (`dict`, *optional*):
91
+ Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
92
+ SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
93
+ to set:
94
+ - `enable_sampling`: Enable subword regularization.
95
+ - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
96
+ - `nbest_size = {0,1}`: No sampling is performed.
97
+ - `nbest_size > 1`: samples from the nbest_size results.
98
+ - `nbest_size < 0`: assumes that nbest_size is infinite and samples from all hypotheses (lattice)
99
+ using forward-filtering-and-backward-sampling algorithm.
100
+ - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
101
+ BPE-dropout.
102
+ Examples:
103
+ ```python
104
+ >>> from tokenization_small100 import SMALL100Tokenizer
105
+ >>> tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100", tgt_lang="ro")
106
+ >>> src_text = " UN Chief Says There Is No Military Solution in Syria"
107
+ >>> tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
108
+ >>> model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
109
+ >>> model(**model_inputs) # should work
110
+ ```"""
111
+
112
+ vocab_files_names = VOCAB_FILES_NAMES
113
+ max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
114
+ pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
115
+ model_input_names = ["input_ids", "attention_mask"]
116
+
117
+ prefix_tokens: List[int] = []
118
+ suffix_tokens: List[int] = []
119
+
120
+ def __init__(
121
+ self,
122
+ vocab_file,
123
+ spm_file,
124
+ tgt_lang=None,
125
+ bos_token="<s>",
126
+ eos_token="</s>",
127
+ sep_token="</s>",
128
+ pad_token="<pad>",
129
+ unk_token="<unk>",
130
+ language_codes="m2m100",
131
+ sp_model_kwargs: Optional[Dict[str, Any]] = None,
132
+ num_madeup_words=8,
133
+ **kwargs,
134
+ ) -> None:
135
+ self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
136
+
137
+ self.language_codes = language_codes
138
+ fairseq_language_code = FAIRSEQ_LANGUAGE_CODES[language_codes]
139
+ self.lang_code_to_token = {lang_code: f"__{lang_code}__" for lang_code in fairseq_language_code}
140
+
141
+ kwargs["additional_special_tokens"] = kwargs.get("additional_special_tokens", [])
142
+ kwargs["additional_special_tokens"] += [
143
+ self.get_lang_token(lang_code)
144
+ for lang_code in fairseq_language_code
145
+ if self.get_lang_token(lang_code) not in kwargs["additional_special_tokens"]
146
+ ]
147
+
148
+ self.vocab_file = vocab_file
149
+ self.encoder = load_json(vocab_file)
150
+ self.decoder = {v: k for k, v in self.encoder.items()}
151
+ self.spm_file = spm_file
152
+ self.sp_model = load_spm(spm_file, self.sp_model_kwargs)
153
+
154
+ self.encoder_size = len(self.encoder)
155
+
156
+ self.lang_token_to_id = {
157
+ self.get_lang_token(lang_code): self.encoder_size + i for i, lang_code in enumerate(fairseq_language_code)
158
+ }
159
+ self.lang_code_to_id = {lang_code: self.encoder_size + i for i, lang_code in enumerate(fairseq_language_code)}
160
+ self.id_to_lang_token = {v: k for k, v in self.lang_token_to_id.items()}
161
+
162
+ self._tgt_lang = tgt_lang if tgt_lang is not None else "en"
163
+ self.cur_lang_id = self.get_lang_id(self._tgt_lang)
164
+ self.num_madeup_words = num_madeup_words
165
+
166
+ super().__init__(
167
+ tgt_lang=tgt_lang,
168
+ bos_token=bos_token,
169
+ eos_token=eos_token,
170
+ sep_token=sep_token,
171
+ unk_token=unk_token,
172
+ pad_token=pad_token,
173
+ language_codes=language_codes,
174
+ sp_model_kwargs=self.sp_model_kwargs,
175
+ num_madeup_words=num_madeup_words,
176
+ **kwargs,
177
+ )
178
+
179
+ self.set_lang_special_tokens(self._tgt_lang)
180
+
181
+
182
+ @property
183
+ def vocab_size(self) -> int:
184
+ return len(self.encoder) + len(self.lang_token_to_id) + self.num_madeup_words
185
+
186
+ @property
187
+ def tgt_lang(self) -> str:
188
+ return self._tgt_lang
189
+
190
+ @tgt_lang.setter
191
+ def tgt_lang(self, new_tgt_lang: str) -> None:
192
+ self._tgt_lang = new_tgt_lang
193
+ self.set_lang_special_tokens(self._tgt_lang)
194
+
195
+ def _tokenize(self, text: str) -> List[str]:
196
+ return self.sp_model.encode(text, out_type=str)
197
+
198
+ def _convert_token_to_id(self, token):
199
+ if token in self.lang_token_to_id:
200
+ return self.lang_token_to_id[token]
201
+ return self.encoder.get(token, self.encoder[self.unk_token])
202
+
203
+ def _convert_id_to_token(self, index: int) -> str:
204
+ """Converts an index (integer) in a token (str) using the decoder."""
205
+ if index in self.id_to_lang_token:
206
+ return self.id_to_lang_token[index]
207
+ return self.decoder.get(index, self.unk_token)
208
+
209
+ def convert_tokens_to_string(self, tokens: List[str]) -> str:
210
+ """Converts a sequence of tokens (strings for sub-words) in a single string."""
211
+ return self.sp_model.decode(tokens)
212
+
213
+ def get_special_tokens_mask(
214
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
215
+ ) -> List[int]:
216
+ """
217
+ Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
218
+ special tokens using the tokenizer `prepare_for_model` method.
219
+ Args:
220
+ token_ids_0 (`List[int]`):
221
+ List of IDs.
222
+ token_ids_1 (`List[int]`, *optional*):
223
+ Optional second list of IDs for sequence pairs.
224
+ already_has_special_tokens (`bool`, *optional*, defaults to `False`):
225
+ Whether or not the token list is already formatted with special tokens for the model.
226
+ Returns:
227
+ `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
228
+ """
229
+
230
+ if already_has_special_tokens:
231
+ return super().get_special_tokens_mask(
232
+ token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
233
+ )
234
+
235
+ prefix_ones = [1] * len(self.prefix_tokens)
236
+ suffix_ones = [1] * len(self.suffix_tokens)
237
+ if token_ids_1 is None:
238
+ return prefix_ones + ([0] * len(token_ids_0)) + suffix_ones
239
+ return prefix_ones + ([0] * len(token_ids_0)) + ([0] * len(token_ids_1)) + suffix_ones
240
+
241
+ def build_inputs_with_special_tokens(
242
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
243
+ ) -> List[int]:
244
+ """
245
+ Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
246
+ adding special tokens. An MBART sequence has the following format, where `X` represents the sequence:
247
+ - `input_ids` (for encoder) `X [eos, src_lang_code]`
248
+ - `decoder_input_ids`: (for decoder) `X [eos, tgt_lang_code]`
249
+ BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a
250
+ separator.
251
+ Args:
252
+ token_ids_0 (`List[int]`):
253
+ List of IDs to which the special tokens will be added.
254
+ token_ids_1 (`List[int]`, *optional*):
255
+ Optional second list of IDs for sequence pairs.
256
+ Returns:
257
+ `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
258
+ """
259
+ if token_ids_1 is None:
260
+ if self.prefix_tokens is None:
261
+ return token_ids_0 + self.suffix_tokens
262
+ else:
263
+ return self.prefix_tokens + token_ids_0 + self.suffix_tokens
264
+ # We don't expect to process pairs, but leave the pair logic for API consistency
265
+ if self.prefix_tokens is None:
266
+ return token_ids_0 + token_ids_1 + self.suffix_tokens
267
+ else:
268
+ return self.prefix_tokens + token_ids_0 + token_ids_1 + self.suffix_tokens
269
+
270
+ def get_vocab(self) -> Dict:
271
+ vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
272
+ vocab.update(self.added_tokens_encoder)
273
+ return vocab
274
+
275
+ def __getstate__(self) -> Dict:
276
+ state = self.__dict__.copy()
277
+ state["sp_model"] = None
278
+ return state
279
+
280
+ def __setstate__(self, d: Dict) -> None:
281
+ self.__dict__ = d
282
+
283
+ # for backward compatibility
284
+ if not hasattr(self, "sp_model_kwargs"):
285
+ self.sp_model_kwargs = {}
286
+
287
+ self.sp_model = load_spm(self.spm_file, self.sp_model_kwargs)
288
+
289
+ def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
290
+ save_dir = Path(save_directory)
291
+ if not save_dir.is_dir():
292
+ raise OSError(f"{save_directory} should be a directory")
293
+ vocab_save_path = save_dir / (
294
+ (filename_prefix + "-" if filename_prefix else "") + self.vocab_files_names["vocab_file"]
295
+ )
296
+ spm_save_path = save_dir / (
297
+ (filename_prefix + "-" if filename_prefix else "") + self.vocab_files_names["spm_file"]
298
+ )
299
+
300
+ save_json(self.encoder, vocab_save_path)
301
+
302
+ if os.path.abspath(self.spm_file) != os.path.abspath(spm_save_path) and os.path.isfile(self.spm_file):
303
+ copyfile(self.spm_file, spm_save_path)
304
+ elif not os.path.isfile(self.spm_file):
305
+ with open(spm_save_path, "wb") as fi:
306
+ content_spiece_model = self.sp_model.serialized_model_proto()
307
+ fi.write(content_spiece_model)
308
+
309
+ return (str(vocab_save_path), str(spm_save_path))
310
+
311
+ def prepare_seq2seq_batch(
312
+ self,
313
+ src_texts: List[str],
314
+ tgt_texts: Optional[List[str]] = None,
315
+ tgt_lang: str = "ro",
316
+ **kwargs,
317
+ ) -> BatchEncoding:
318
+ self.tgt_lang = tgt_lang
319
+ self.set_lang_special_tokens(self.tgt_lang)
320
+ return super().prepare_seq2seq_batch(src_texts, tgt_texts, **kwargs)
321
+
322
+ def _build_translation_inputs(self, raw_inputs, tgt_lang: Optional[str], **extra_kwargs):
323
+ """Used by translation pipeline, to prepare inputs for the generate function"""
324
+ if tgt_lang is None:
325
+ raise ValueError("Translation requires a `tgt_lang` for this model")
326
+ self.tgt_lang = tgt_lang
327
+ inputs = self(raw_inputs, add_special_tokens=True, **extra_kwargs)
328
+ return inputs
329
+
330
+ def _switch_to_input_mode(self):
331
+ self.set_lang_special_tokens(self.tgt_lang)
332
+
333
+ def _switch_to_target_mode(self):
334
+ self.prefix_tokens = None
335
+ self.suffix_tokens = [self.eos_token_id]
336
+
337
+ def set_lang_special_tokens(self, src_lang: str) -> None:
338
+ """Reset the special tokens to the tgt lang setting. No prefix and suffix=[eos, tgt_lang_code]."""
339
+ lang_token = self.get_lang_token(src_lang)
340
+ self.cur_lang_id = self.lang_token_to_id[lang_token]
341
+ self.prefix_tokens = [self.cur_lang_id]
342
+ self.suffix_tokens = [self.eos_token_id]
343
+
344
+ def get_lang_token(self, lang: str) -> str:
345
+ return self.lang_code_to_token[lang]
346
+
347
+ def get_lang_id(self, lang: str) -> int:
348
+ lang_token = self.get_lang_token(lang)
349
+ return self.lang_token_to_id[lang_token]
350
+
351
+
352
+ def load_spm(path: str, sp_model_kwargs: Dict[str, Any]) -> sentencepiece.SentencePieceProcessor:
353
+ spm = sentencepiece.SentencePieceProcessor(**sp_model_kwargs)
354
+ spm.Load(str(path))
355
+ return spm
356
+
357
+
358
+ def load_json(path: str) -> Union[Dict, List]:
359
+ with open(path, "r") as f:
360
+ return json.load(f)
361
+
362
+
363
+ def save_json(data, path: str) -> None:
364
+ with open(path, "w") as f:
365
+ json.dump(data, f, indent=2)
tokenizer_config.json ADDED
@@ -0,0 +1,118 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "__af__",
4
+ "__am__",
5
+ "__ar__",
6
+ "__ast__",
7
+ "__az__",
8
+ "__ba__",
9
+ "__be__",
10
+ "__bg__",
11
+ "__bn__",
12
+ "__br__",
13
+ "__bs__",
14
+ "__ca__",
15
+ "__ceb__",
16
+ "__cs__",
17
+ "__cy__",
18
+ "__da__",
19
+ "__de__",
20
+ "__el__",
21
+ "__en__",
22
+ "__es__",
23
+ "__et__",
24
+ "__fa__",
25
+ "__ff__",
26
+ "__fi__",
27
+ "__fr__",
28
+ "__fy__",
29
+ "__ga__",
30
+ "__gd__",
31
+ "__gl__",
32
+ "__gu__",
33
+ "__ha__",
34
+ "__he__",
35
+ "__hi__",
36
+ "__hr__",
37
+ "__ht__",
38
+ "__hu__",
39
+ "__hy__",
40
+ "__id__",
41
+ "__ig__",
42
+ "__ilo__",
43
+ "__is__",
44
+ "__it__",
45
+ "__ja__",
46
+ "__jv__",
47
+ "__ka__",
48
+ "__kk__",
49
+ "__km__",
50
+ "__kn__",
51
+ "__ko__",
52
+ "__lb__",
53
+ "__lg__",
54
+ "__ln__",
55
+ "__lo__",
56
+ "__lt__",
57
+ "__lv__",
58
+ "__mg__",
59
+ "__mk__",
60
+ "__ml__",
61
+ "__mn__",
62
+ "__mr__",
63
+ "__ms__",
64
+ "__my__",
65
+ "__ne__",
66
+ "__nl__",
67
+ "__no__",
68
+ "__ns__",
69
+ "__oc__",
70
+ "__or__",
71
+ "__pa__",
72
+ "__pl__",
73
+ "__ps__",
74
+ "__pt__",
75
+ "__ro__",
76
+ "__ru__",
77
+ "__sd__",
78
+ "__si__",
79
+ "__sk__",
80
+ "__sl__",
81
+ "__so__",
82
+ "__sq__",
83
+ "__sr__",
84
+ "__ss__",
85
+ "__su__",
86
+ "__sv__",
87
+ "__sw__",
88
+ "__ta__",
89
+ "__th__",
90
+ "__tl__",
91
+ "__tn__",
92
+ "__tr__",
93
+ "__uk__",
94
+ "__ur__",
95
+ "__uz__",
96
+ "__vi__",
97
+ "__wo__",
98
+ "__xh__",
99
+ "__yi__",
100
+ "__yo__",
101
+ "__zh__",
102
+ "__zu__"
103
+ ],
104
+ "bos_token": "<s>",
105
+ "eos_token": "</s>",
106
+ "language_codes": "m2m100",
107
+ "model_max_length": 1024,
108
+ "name_or_path": "facebook/m2m100_418M",
109
+ "num_madeup_words": 8,
110
+ "pad_token": "<pad>",
111
+ "sep_token": "</s>",
112
+ "sp_model_kwargs": {},
113
+ "special_tokens_map_file": "m2m_100_1.2B_v2/special_tokens_map.json",
114
+ "tgt_lang": null,
115
+ "tokenizer_class": "M2M100Tokenizer",
116
+ "tokenizer_file": null,
117
+ "unk_token": "<unk>"
118
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff