{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:15:09.996822Z" }, "title": "Multiple Captions Embellished Multilingual MultiModal Neural Machine Translation", "authors": [ { "first": "Salam", "middle": [ "Michael" ], "last": "Singh", "suffix": "", "affiliation": { "laboratory": "", "institution": "NIT Silchar", "location": { "country": "India" } }, "email": "" }, { "first": "Loitongbam", "middle": [ "Sanayai" ], "last": "Meetei", "suffix": "", "affiliation": { "laboratory": "", "institution": "NIT Silchar", "location": { "country": "India" } }, "email": "" }, { "first": "Thoudam", "middle": [], "last": "Doren Singh", "suffix": "", "affiliation": { "laboratory": "", "institution": "NIT Silchar", "location": { "country": "India" } }, "email": "thoudam.doren@gmail.com" }, { "first": "Sivaji", "middle": [], "last": "Bandyopadhyay", "suffix": "", "affiliation": { "laboratory": "", "institution": "NIT Silchar", "location": { "country": "India" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Neural machine translation based on bilingual text with limited training data suffers from lexical diversity, which lowers the rare word translation accuracy and reduces the general izability of the translation system. In this work, we utilise the multiple captions from the Multi-30K dataset to increase the lexical di versity aided with the crosslingual transfer of information among the languages in a multi lingual setup. In this multilingual and multi modal setting, the inclusion of the visual fea tures boosts the translation quality by a signif icant margin. Empirical study affirms that our proposed multimodal approach achieves sub stantial gain in terms of the automatic score and shows robustness in handling the rare word translation in the pretext of English to/from Hindi and Telugu translation tasks. 2 Related Works CallisonBurch et al. (2006) used paraphrase in a phrasebased statistical machine translation model", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Neural machine translation based on bilingual text with limited training data suffers from lexical diversity, which lowers the rare word translation accuracy and reduces the general izability of the translation system. In this work, we utilise the multiple captions from the Multi-30K dataset to increase the lexical di versity aided with the crosslingual transfer of information among the languages in a multi lingual setup. In this multilingual and multi modal setting, the inclusion of the visual fea tures boosts the translation quality by a signif icant margin. Empirical study affirms that our proposed multimodal approach achieves sub stantial gain in terms of the automatic score and shows robustness in handling the rare word translation in the pretext of English to/from Hindi and Telugu translation tasks. 2 Related Works CallisonBurch et al. (2006) used paraphrase in a phrasebased statistical machine translation model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The machine translation (MT) systems by (Koehn et al., 2003\u037e Sutskever et al., 2014\u037e Gehring et al., 2017\u037e Vaswani et al., 2017 has been the defacto standard which are based on parallel dataset. 
But, in recent times, the use of monolingual data (Singh and Singh, 2020) or the incorporation of multiple languages in a jointly trained single multilingual model (Johnson et al., 2017; Fan et al., 2020) has improved the translation quality of low resource languages. Compared to training separate bilingual models with the same parameters, the ability to handle translation between multiple language pairs provides the inherent advantage of relatively compact model parameters. Typically, in such models, the encoder, the decoder and the attention are shared among all the languages. The sharing of the encoder is crucial for learning the initial crosslingual information (Sachan and Neubig, 2018); however, a single shared decoder is often insufficient for handling the translation of multiple languages. This decoder degeneracy is addressed by partial sharing of the decoder and attention parameters (Sachan and Neubig, 2018) or through language-agnostic universal models (Bapna and Firat, 2019).", "cite_spans": [ { "start": 40, "end": 127, "text": "(Koehn et al., 2003; Sutskever et al., 2014; Gehring et al., 2017; Vaswani et al., 2017)", "ref_id": null }, { "start": 241, "end": 264, "text": "(Singh and Singh, 2020)", "ref_id": "BIBREF32" }, { "start": 349, "end": 388, "text": "(Johnson et al., 2017; Fan et al., 2020)", "ref_id": null }, { "start": 874, "end": 899, "text": "(Sachan and Neubig, 2018)", "ref_id": "BIBREF28" }, { "start": 1101, "end": 1126, "text": "(Sachan and Neubig, 2018)", "ref_id": "BIBREF28" }, { "start": 1174, "end": 1197, "text": "(Bapna and Firat, 2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Image features along with the text data have been used in sequence generation tasks such as image caption generation (Singh et al., 2021a,b) and multimodal machine translation (MMT), which incorporates visual features into ordinary NMT systems for low resource languages. With the introduction of MMT datasets such as Multi-30K (Elliott et al., 2016) and Hindi Visual Genome (Parida et al., 2019), MT researchers (Huang et al., 2016; Caglayan et al., 2016, 2019; Meetei et al., 2019) have highlighted improvements in translation quality by incorporating image features in the MT systems.", "cite_spans": [ { "start": 121, "end": 144, "text": "(Singh et al., 2021a,b)", "ref_id": null }, { "start": 334, "end": 357, "text": "(Elliott et al., 2016)", "ref_id": null }, { "start": 382, "end": 402, "text": "(Parida et al., 2019)", "ref_id": "BIBREF24" }, { "start": 420, "end": 462, "text": "(Huang et al., 2016; Caglayan et al., 2016", "ref_id": null }, { "start": 463, "end": 490, "text": ", 2019; Meetei et al., 2019", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we adopt a single shared multilingual machine translation system between English and under-resourced languages, viz., Hindi and Telugu, aided by linguistic information in the form of multiple captions. The inclusion of multiple captions during training makes the system implicitly robust to lexical and syntactic diversity. In addition to the multiple captions, we infuse our multi-caption multilingual model with the visual information in a multimodal (Calixto et al., 2017a; Elliott and Kádár, 2017; Yao and Wan, 2020) setting. 
English and Hindi belong to the Indo-European language family, while Telugu is a Dravidian language. All three languages use different scripts: Roman for English, Devanagari for Hindi, and Telugu is written in the Telugu script, an abugida writing system from the Brahmic family of scripts. Callison-Burch et al. (2006) used paraphrase in a phrase-based statistical machine translation model and found that their method improves over the single parallel corpus PBSMT baseline in terms of the overall word coverage and the translation quality. Paraphrasing has also been leveraged as a data augmentation technique to improve dialogue generation (Gao et al., 2020) and question generation (Jia et al., 2020) tasks. Most similar to our proposed work, Zhou et al. (2019) treated the paraphrase as a foreign language in a multilingual scenario. Similar to the findings of Callison-Burch et al. (2006), Zhou et al. (2019) found that their method improves the word coverage with diverse lexical choices.", "cite_spans": [ { "start": 471, "end": 538, "text": "(Calixto et al., 2017a; Elliott and Kádár, 2017; Yao and Wan, 2020", "ref_id": null }, { "start": 1087, "end": 1105, "text": "(Gao et al., 2020)", "ref_id": "BIBREF12" }, { "start": 1130, "end": 1148, "text": "(Jia et al., 2020)", "ref_id": "BIBREF16" }, { "start": 1191, "end": 1209, "text": "Zhou et al. (2019)", "ref_id": "BIBREF37" }, { "start": 1313, "end": 1340, "text": "Callison-Burch et al. (2006)", "ref_id": "BIBREF6" }, { "start": 1343, "end": 1361, "text": "Zhou et al. (2019)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, these works are purely unimodal. A visually informed multimodal system, on the other hand, involves extracting global semantic features from the image, which initialize either the encoder or the decoder to fuse the visual context along with the textual input (Calixto et al., 2017b). In cases where the textual context is restricted, Caglayan et al. (2019) showed that visual features could help to generate better translations. Similar to the proposed work, Chakravarthi et al. (2019) trained a multimodal machine translation system for Tamil, Kannada and Malayalam by generating a synthetic dataset from Flickr30k (Plummer et al., 2015). They showed that transliterating the Dravidian languages into Latin script, together with the multilingual setup, improves the multimodal system over the bilingual multimodal baseline.", "cite_spans": [ { "start": 278, "end": 301, "text": "(Calixto et al., 2017b)", "ref_id": "BIBREF5" }, { "start": 355, "end": 377, "text": "Caglayan et al. (2019)", "ref_id": "BIBREF3" }, { "start": 655, "end": 677, "text": "(Plummer et al., 2015)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The proposed multi-caption enabled multimodal multilingual machine translation system employs two major steps: first, we create the training corpus, and then we train the multi-caption multilingual system fused with the visual features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "The creation of a training corpus for the experimentation is the first step. Multilingual machine translation with visual features for English (en) to/from {Hindi (hi), Telugu (te)} is the experiment's premise. Multi-30K (Elliott et al., 2016), however, lacks the hi and te data. 
As a result, for the training, validation, and test data, a publicly available machine translation model (Ramesh et al., 2021) generates the hi and te translations corresponding to the English captions (caption-1 and caption-2); further details are provided in the Dataset section.", "cite_spans": [ { "start": 223, "end": 245, "text": "(Elliott et al., 2016)", "ref_id": "BIBREF9" }, { "start": 395, "end": 416, "text": "(Ramesh et al., 2021)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Creation", "sec_num": "3.1" }, { "text": "Initially, all the caption-1 instances are stored as prefix.lang1 and all the caption-2 instances as prefix.lang2, where prefix ∈ {train, validation, test} and lang ∈ {en, hi, te}. Furthermore, during the many-to-one (m2o) training, all the en instances of the train and validation sets are merged into a single compound target language, while all the non-English instances are merged as the source language. Here, we do not prepend any artificial target token on the source side to denote the target language, as English is the sole target language. On the other hand, for the one-to-many (o2m) training, all the en instances of the train and validation sets are merged as the source language, while all the non-English instances are merged as the target language. In this case, an artificial target language token <__tgt__lang> is prepended to the source sentence to denote the target language, where lang ∈ {hi1, te1, hi2, te2} (see the sketch below).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Creation", "sec_num": "3.1" }, { "text": "NMT is an encoder-decoder based sequence-to-sequence approach to machine translation which jointly models the conditional probability p(y|x) of translating to a target sequence y = {y_1, ..., y_m} given a source sequence x = {x_1, ..., x_n} as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation (NMT)", "sec_num": "3.2" }, { "text": "p(y|x; \theta) = \prod_{j=1}^{m} p(y_j | y_{<j}, x; \theta)
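The factorization above is what standard NMT training optimizes through token-level cross-entropy under teacher forcing. As a minimal illustration (the helper name and the plain-dict distribution format are assumptions for exposition, not part of the paper), the log-probability of a target sequence decomposes into a sum of per-step log-probabilities:

```python
import math

def sequence_log_prob(step_dists, target_ids):
    """log p(y|x; theta) = sum_j log p(y_j | y_<j, x; theta).

    `step_dists[j]` stands in for the model's next-token distribution at
    step j (token_id -> probability), computed with teacher forcing on the
    gold prefix y_<j; here it is just a plain dict for illustration.
    """
    assert len(step_dists) == len(target_ids)
    return sum(math.log(dist[y_j]) for dist, y_j in zip(step_dists, target_ids))

# Toy example with a 2-token target sequence:
dists = [{7: 0.6, 8: 0.4}, {3: 0.9, 5: 0.1}]
print(sequence_log_prob(dists, [7, 3]))  # log(0.6) + log(0.9) ≈ -0.616
```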
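For the corpus creation scheme of Section 3.1, the following is a minimal sketch of the one-to-many (o2m) preparation, assuming the captions are already loaded as parallel lists of sentences; the function name and the .src/.tgt file layout are hypothetical, while the <__tgt__lang> token format follows the description above. The m2o direction is the same merge with source and target swapped and no artificial token.

```python
def make_o2m_split(prefix, en_captions, targets):
    """Build the one-to-many (o2m) source/target files: every English
    sentence appears once per target language, with the artificial
    target-language token prepended on the source side."""
    src_lines, tgt_lines = [], []
    for lang, tgt_captions in targets.items():  # lang in {hi1, te1, hi2, te2}
        for en_sent, tgt_sent in zip(en_captions, tgt_captions):
            src_lines.append(f"<__tgt__{lang}> {en_sent}")
            tgt_lines.append(tgt_sent)
    # Hypothetical file layout: e.g. train.src / train.tgt.
    with open(f"{prefix}.src", "w", encoding="utf-8") as f:
        f.write("\n".join(src_lines) + "\n")
    with open(f"{prefix}.tgt", "w", encoding="utf-8") as f:
        f.write("\n".join(tgt_lines) + "\n")

# Toy usage with two English captions and their caption-1 translations:
en = ["a man rides a horse", "two dogs play in the snow"]
make_o2m_split("train", en, {
    "hi1": ["<hi translation 1>", "<hi translation 2>"],
    "te1": ["<te translation 1>", "<te translation 2>"],
})
```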