|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:15:13.136430Z" |
|
}, |
|
"title": "Multimodal Neural Machine Translation System for English to Bengali", |
|
"authors": [ |
|
{ |
|
"first": "Shantipriya", |
|
"middle": [], |
|
"last": "Parida", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Subhadarshi", |
|
"middle": [], |
|
"last": "Panda", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "City University of New York", |
|
"location": { |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Satya", |
|
"middle": [], |
|
"last": "Prakash", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Ketan", |
|
"middle": [], |
|
"last": "Kotwal", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Arghyadeep", |
|
"middle": [], |
|
"last": "Sen", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Satya", |
|
"middle": [], |
|
"last": "Ranjan", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Motlicek", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Multimodal Machine Translation (MMT) systems utilize additional information from other modalities beyond text to improve the quality of machine translation (MT). The additional modality is typically in the form of images. Despite proven advantages, it is indeed difficult to develop an MMT system for various languages primarily due to the lack of a suitable multimodal dataset. In this work, we develop an MMT for English\u2192Bengali using a recently published Bengali Visual Genome (BVG) dataset that contains images with associated bilingual textual description. Through a comparative study of the developed MMT system visa -vis a Text-totext translation, we demonstrate that the use of multimodal data not only improves the translation performance improvement in BLEU score of +1.3 on the development set, +3.9 on the evaluation test, and +0.9 on the challenge test set but also helps to resolve ambiguities in the pure text description. As per best of our knowledge, our English-Bengali MMT system is the first attempt in this direction, and thus, can act as a baseline for the subsequent research in MMT for low resource languages.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Multimodal Machine Translation (MMT) systems utilize additional information from other modalities beyond text to improve the quality of machine translation (MT). The additional modality is typically in the form of images. Despite proven advantages, it is indeed difficult to develop an MMT system for various languages primarily due to the lack of a suitable multimodal dataset. In this work, we develop an MMT for English\u2192Bengali using a recently published Bengali Visual Genome (BVG) dataset that contains images with associated bilingual textual description. Through a comparative study of the developed MMT system visa -vis a Text-totext translation, we demonstrate that the use of multimodal data not only improves the translation performance improvement in BLEU score of +1.3 on the development set, +3.9 on the evaluation test, and +0.9 on the challenge test set but also helps to resolve ambiguities in the pure text description. As per best of our knowledge, our English-Bengali MMT system is the first attempt in this direction, and thus, can act as a baseline for the subsequent research in MMT for low resource languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Over the last decade, deep neural networks (DNN) achieved state-of-the-art results for many tasks including computer vision, natural language processing, and speech processingwhich encouraged researchers to design a system that will get benefit from the fusion of multiple modalities (Caglayan et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 307, |
|
"text": "(Caglayan et al., 2016)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "English Text: A girl playing tennis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Bengali Text: \u098f\u0995\u09bf\u099f \u09c7\u09ae\u09c7\u09df \u09c7\u099f\u09bf\u09a8\u09b8 \u09c7\u0996\u09b2\u09c7\u099b Figure 1 : A sample from the BVG dataset: an image with a specific region marked and its description in English and Bengali.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 44, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Multimodal Translation refers to the extraction of information from more than one modality where it is assumed that alternative views would be used for input data (Sulubacak et al., 2020; Yao and Wan, 2020; Elliott, 2018) . The tasks and applications in multimodal translation involve translation of image captions, translation of video content, translation of spoken language, and others. These applications exploit more than one modality such as translation from video content includes audio and visual modality, and translation of image captions includes visual modality and caption text. Although there are different opinions on the performance of machine translation using visual modality but under limited resources, visual input generates better translation (Caglayan et al., 2019) . It is observed that multimodal translation systems do not leverage the visual signal to produce the correct translation in case of mistakes in the source language sentence (Chowdhury and Elliott, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 187, |
|
"text": "(Sulubacak et al., 2020;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 188, |
|
"end": 206, |
|
"text": "Yao and Wan, 2020;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 207, |
|
"end": 221, |
|
"text": "Elliott, 2018)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 765, |
|
"end": 788, |
|
"text": "(Caglayan et al., 2019)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 963, |
|
"end": 992, |
|
"text": "(Chowdhury and Elliott, 2019)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "When images are considered as an additional modality, the recent research can be divided into two major approaches based on their utilization of image features for the MMT: processing the global image features (Calixto and Liu, 2017) , and processing the object tags derived from the images (Gupta et al., 2021) . Cross-lingual visual pre-training which learns multimodal cross-lingual representations also found effective in MMT (Caglayan et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 233, |
|
"text": "(Calixto and Liu, 2017)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 311, |
|
"text": "(Gupta et al., 2021)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 430, |
|
"end": 453, |
|
"text": "(Caglayan et al., 2021)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Bengali (also known as Bangla) is an Indo-Aryan language widely spoken in India and Bangladesh and considered as the 6-th most spoken language of the world with approximately 230 million speakers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Although English-to-Bengali text-only parallel corpora (Haddow and Kirefu, 2020; Ramesh et al., 2021) are available for building MT systems (Hasan et al., 2019 (Hasan et al., , 2020 Parida et al., 2020) ; the multimodal dataset for Bengali did not exist. Thus, English-Bengali MMT systems have not been developed until now. Recently, the first English-Bengali multimodal dataset: Bengali Visual Genome (BVG) has been published (Sen et al., (in press ) -which has facilitated research and development of corresponding multimodal as well as image captioning tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 55, |
|
"end": 80, |
|
"text": "(Haddow and Kirefu, 2020;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 81, |
|
"end": 101, |
|
"text": "Ramesh et al., 2021)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 140, |
|
"end": 159, |
|
"text": "(Hasan et al., 2019", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 160, |
|
"end": 181, |
|
"text": "(Hasan et al., , 2020", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 182, |
|
"end": 202, |
|
"text": "Parida et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 449, |
|
"text": "(Sen et al., (in press", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The primary objective of this paper is to develop an MMT system for Bengali where the multimodal input is provided as an image and its description in English. We have used the BVG dataset to demonstrate our MMT system. The BVG consists of image descriptions (or captions) in the bilingual corpus for a specific rectangular region in the image as shown in Figure 1 . The bounded box region information (X, Y, width, height) for each of the images is provided in the dataset. The MMT system uses both text and the associated image to build the model to translate into the target Bengali text. We extracted the object tags as image features. Then the object tags are appended to the original English sentence which is then translated using mBART , a multilingual sequence to sequence model trained on millions of unsupervised multilingual sentences. We also perform a comparative study between the English-Bengali Text-to-text translation system and the built MMT system.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 355, |
|
"end": 363, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
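
{

"text": "For illustration, a single BVG training item can be thought of as the following record (a minimal sketch; the field names are ours, not the official BVG schema):\n\nsample = {\n    \"image_id\": 1,\n    \"region\": {\"X\": 100, \"Y\": 60, \"width\": 80, \"height\": 40},  # marked rectangle\n    \"english\": \"A girl playing tennis.\",\n    \"bengali\": \"<Bengali caption>\"\n}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},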
|
{ |
|
"text": "There is limited research conducted in the domain of multimodal machine translation for Indian languages. Other than Hindi, no MMT system is available in other Indian languages due to the unavailability of the multimodal dataset for translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The Flickr30k dataset with Hindi description is used for multimodal NMT task by Dutta Chowdhury et al. (2018) . They attempted to conduct multimodal translation from Hindi to English and examined whether visual image features can improve translation performance. They used synthetic Hindi descriptions for the Flickr30k dataset and provided validation and test corpus of English translations of the Flickr30k dataset. Similarly, Madaan et al. (2020) considered the Flickr30k dataset and asked five different crowd workers to provide Hindi translation of an image from the dataset and generated English captions with evaluating the quality of the translation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 109, |
|
"text": "(2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 429, |
|
"end": 449, |
|
"text": "Madaan et al. (2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Laskar et al. (2020) used Hindi Visual Genome 1.1 dataset (Parida et al., 2019) and used OpenNMT-py to build text-only NMT and multimodal NMT. They had used pretrained CNN with VGG19 for extracting local and global features from the images for the multimodal translation. The multimodal NMT performs better as compared to textonly NMT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 79, |
|
"text": "(Parida et al., 2019)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section, we describe the multimodal translation system developed for English \u2192Bengali using the multimodal data which consists of images accompanying text. Our model is adapted from ViTA (Gupta et al., 2021) 1 which uses mBART , a multilingual sequence-to-sequence denoising auto-encoder that has been pre-trained using the BART objective . Gupta et al. (2021) built a English\u2192Hindi Figure 2 : Multimodal machine translation. The object tags extracted from images along with the English source text input to the mBART to generate the Bengali translation output.", |
|
"cite_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 368, |
|
"text": "Gupta et al. (2021)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 391, |
|
"end": 399, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Description of the MMT System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "multimodal translation system by utilizing the object tags extracted from the images of the Hindi Visual Genome multimodal dataset (Parida et al., 2019) which is a dataset similar to the Bengali Visual Genome (see Section 4.1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 152, |
|
"text": "(Parida et al., 2019)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Description of the MMT System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Similar to the ViTA approach, we first derive the list of object tags for a given image using the pretrained Faster R-CNN with ResNet-101-C4 backbone. Based on their confidence scores, we pick the top 10 object tags. In cases where less than 10 object tags are detected, we consider all the tags. The object tags are then concatenated to the English sentence which needs to be translated to Bengali. The concatenation is done using the special token '##' as the separator. The separator is followed by comma-separated object tags. Adding objects enables the model to utilize visual concepts which may not be readily available in the original sentence. The English sentences along with the object tags are fed to the encoder of the mBART model. The mBART's decoder generates the Bengali translations autoregressively. The block diagram of the multimodal translation using object tags is shown in Figure 2 . Gupta et al. (2021) applied ViTA for English to Hindi translation by using the mBART-25 model which has been pre-trained using the BART objective . For this pre-training, only multilingual unsupervised data spanning 25 languages was used . Then they finetune the model for the machine translation task using 1.6 million English-Hindi parallel sentences. Finally, they finetune the model on the English-Hindi multimodal data with the addition of object tags by (a) first masking out 15% of the English tokens in the input and then (b) with no masking. For translating from English to Bengali, however, we do not perform a large-scale machine translation pre-training using a million training examples. We instead use the pre-trained mBART-50 2 model finetuned on the machine translation task in a one-to-many setup using multilingual data which contains merely 4487 English-Bengali parallel sentences (Tang et al., 2020) . We take this pre-trained model and train it further on the machine translation task using the Bengali Visual Genome multimodal data by adding object tags to the English source sentences. Because of the scarcity of the pretraining machine translation English-Bengali parallel data (nearly 320 times smaller) as compared English-Hindi parallel data, our system represents a low-resource scenario.", |
|
"cite_spans": [ |
|
{ |
|
"start": 906, |
|
"end": 925, |
|
"text": "Gupta et al. (2021)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1806, |
|
"end": 1825, |
|
"text": "(Tang et al., 2020)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 895, |
|
"end": 903, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Description of the MMT System", |
|
"sec_num": "3" |
|
}, |
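
{

"text": "To make the input construction concrete, the following minimal Python sketch (ours, not from the original ViTA code) builds the multimodal source string; the detector call is omitted and the exact whitespace around the '##' separator is an assumption:\n\ndef build_multimodal_input(english_sentence, scored_tags, max_tags=10):\n    # keep the 10 highest-confidence object tags (or all, if fewer are detected)\n    top_tags = [tag for tag, score in sorted(scored_tags, key=lambda x: -x[1])[:max_tags]]\n    # '##' separates the sentence from the comma-separated tags\n    return english_sentence + \" ## \" + \", \".join(top_tags)\n\nbuild_multimodal_input(\"A girl playing tennis.\", [(\"girl\", 0.95), (\"racket\", 0.90)])\n# -> 'A girl playing tennis. ## girl, racket'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Description of the MMT System",

"sec_num": "3"

},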
|
{ |
|
"text": "Although the main approach we use is similar to the ViTA method by Gupta et al. (2021) , we state the one modification in our implementation and the reason behind it. The original ViTA method stochastically masks out 15% of the tokens in the input English sentence. This is done to incentivize the model to utilize the object tags while generating the Bengali translation and not rely only on the English sentence. However, in our experiments, we do not mask out 15% of the English tokens. This is done because we already see gains above the text-only results without masking. Using masking can potentially improve the multimodal translation scores further.", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 86, |
|
"text": "Gupta et al. (2021)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Description of the MMT System", |
|
"sec_num": "3" |
|
}, |
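
{

"text": "For reference, the stochastic masking used in the original ViTA recipe (and omitted in ours) can be sketched as follows; we assume <mask> is the model's mask token and that the 15% rate applies only to the English sentence tokens, not to the appended object tags:\n\nimport random\n\ndef mask_tokens(tokens, rate=0.15, mask_token=\"<mask>\"):\n    # replace each token by <mask> independently with probability `rate`\n    return [mask_token if random.random() < rate else t for t in tokens]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Description of the MMT System",

"sec_num": "3"

},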
|
{ |
|
"text": "In this section, we first describe the dataset used followed by details of model training configurations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment Details", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Our experiments have been carried out using the BVG (Sen et al., (in press) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 75, |
|
"text": "(Sen et al., (in press)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BVG Dataset", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We used the large mBART one-to-many pretrained model 3 in Huggingface Transformers library (Wolf et al., 2020) . We did not freeze any model parameters during finetuning, therefore, the number of trainable parameters was 610M. The fine-tuning did not fit in the memory of a 28 GB GPU so we decreased the batch size to 1 and trained on a 48 GB GPU which was successful. The training time per epoch was 170 min. The model was fine-tuned for a maximum of 30 epochs. Adam optimizer (Kingma and Ba, 2014) was used with a learning rate of 1e-4. The training was stopped early if the development BLEU score did not improve for 5 consecutive epochs. The decoding beam size was set to 5. Model checkpoints were saved after every epoch and the best checkpoint was selected based on the development BLEU score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 110, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation using image and text modality", |
|
"sec_num": "4.2" |
|
}, |
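
{

"text": "A condensed sketch of this fine-tuning setup using the Huggingface Transformers API (data loading and the BLEU-based early-stopping loop are elided; the Bengali target string is a placeholder):\n\nimport torch\nfrom transformers import MBartForConditionalGeneration, MBart50TokenizerFast\n\nname = \"facebook/mbart-large-50-one-to-many-mmt\"\ntokenizer = MBart50TokenizerFast.from_pretrained(name, src_lang=\"en_XX\", tgt_lang=\"bn_IN\")\nmodel = MBartForConditionalGeneration.from_pretrained(name)  # all 610M parameters trainable\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-4)\n\n# one training step with batch size 1\nbatch = tokenizer(\"A girl playing tennis. ## girl, racket\",\n                  text_target=\"<Bengali reference>\", return_tensors=\"pt\")\nloss = model(**batch).loss\nloss.backward()\noptimizer.step()\noptimizer.zero_grad()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Translation using image and text modality",

"sec_num": "4.2"

},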
|
{ |
|
"text": "To demonstrate the impact of using image signals in the form of object tags, we conducted the experiments described in the previous section but without using any object tags. We did not modify any other configuration to ensure a fair comparison. We also note that adding object tags results in a large increase of tokens in each sentence. As a result, while not using object tags we observed that the training time per epoch reduced to 60 min, that is, the training was nearly 3 times faster.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation using text modality only", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In addition to the mBART pre-trained models, we also experimented with training a plain transformer model (Vaswani et al., 2017) from scratch. We first trained sentencepiece subword units (Kudo and Richardson, 2018) setting maximum vocabulary size to 8k. The vocabulary was learned jointly on the source and target sentences of the Bengali Visual Genome training dataset. The implementation was done using PyTorch (Paszke et al., 2019) . The number of encoder and decoder layers was set to 3 each and the number of heads was set to 8. The hidden size was set to 128, along with the dropout value of 0.1. We initialized the model parameters using Xavier initialization (Glorot and Bengio, 2010) and used the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 5e \u2212 4 for optimizing model parameters. Gradient clipping was used to clip gradients greater than 1. The training was stopped when the development loss did not improve for 5 consecutive epochs. For generating translations, we used greedy decoding and generated tokens auto-regressively till the end-ofsentence token was generated or the maximum translation length was reached, which was set to 100.", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 128, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 188, |
|
"end": 215, |
|
"text": "(Kudo and Richardson, 2018)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 435, |
|
"text": "(Paszke et al., 2019)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 668, |
|
"end": 693, |
|
"text": "(Glorot and Bengio, 2010)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation using Text-to-text transformer", |
|
"sec_num": "4.4" |
|
}, |
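
{

"text": "The from-scratch baseline configuration can be summarized in PyTorch as follows (a sketch; embeddings, positional encodings, and the training loop are omitted, and norm-based clipping is one common realization of clipping gradients greater than 1):\n\nimport torch\nimport torch.nn as nn\n\nmodel = nn.Transformer(d_model=128, nhead=8, num_encoder_layers=3,\n                       num_decoder_layers=3, dropout=0.1, batch_first=True)\nfor p in model.parameters():\n    if p.dim() > 1:\n        nn.init.xavier_uniform_(p)  # Xavier initialization\noptimizer = torch.optim.Adam(model.parameters(), lr=5e-4)\n\n# after loss.backward(), clip gradients before optimizer.step()\ntorch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Translation using Text-to-text transformer",

"sec_num": "4.4"

},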
|
{ |
|
"text": "We have used the popular machine translation metric BLEU (Papineni et al., 2002) for the automatic evaluation, computed using sacre-BLEU toolkit (Post, 2018) . The development BLEU scores during training are shown in Figure 3 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 80, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 145, |
|
"end": 157, |
|
"text": "(Post, 2018)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 225, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Result and Discussion", |
|
"sec_num": "5" |
|
}, |
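
{

"text": "The reported scores can be reproduced with the sacreBLEU Python API, e.g. (a sketch with placeholder data):\n\nimport sacrebleu\n\nhypotheses = [\"<system output 1>\", \"<system output 2>\"]  # one per test segment\nreferences = [[\"<reference 1>\", \"<reference 2>\"]]        # one reference stream\nbleu = sacrebleu.corpus_bleu(hypotheses, references)\nprint(bleu.score)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Result and Discussion",

"sec_num": "5"

},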
|
{ |
|
"text": "The development BLEU score increases as the training progress. The mBART based scores (both Text-to-text and Multimodal) reach a notably high BLEU score even after one epoch of training. This is because of the prior knowledge acquired from pre-training, which is missing in the case of the Text-to-text transformer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Result and Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The MMT results on the D-Test, E-Test, and C-Test are shown in Table 2 . The C-Test scores are consistently lower than D-Test and E-Test scores, indicating that the C-Test consists of more challenging segments which are harder to translate to Bengali. The Text-to- text mBART performs better as compared to the Text-to-text transformer model. The multimodal mBART performs the best overall. The object tags added to the original English sentences provide more context about the image and enable the generation of a better translation, which is indicated by the higher overall BLEU scores. The improvement is seen in translating C-Test as well.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 70, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Result and Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We performed the comparison of MMT system with the best Text-to-text translation system without using any image features. There is a performance improvement (BLEU score) of +1.3 on D-Test, +3.9 on E-Test, and +0.9 on C-Test. Apart from performance, MMT systems help to resolve ambiguities as shown in the Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 305, |
|
"end": 312, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Result and Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The MMT system can correctly translate the ambiguous word court which the Text-totext MT system fails. We compared the translation output between both Text-to-text and MMT systems and observed the MMT system produces a better translation, correct word order, no ambiguity as compared with the Textto-text MT system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Result and Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To validate the automatic scoring, we manually annotated 100 randomly selected sen-tences from the C-Test set as translated by the Text-to-text machine translation system and MMT system. The annotation was performed by the native Bengali speaker. Bengali Captions in the MMT translation outcomes fall under five different sets where some of them are translated perfectly without any issue, some of them are very close to perfect, some of them have parts of speech or grammar issues, some of them have ambiguity in meanings, and some of them have lack of words than original annotation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Manual Evaluation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In this annotation, each annotated segment gets exactly one label from the following set (Parida and Bojar, 2018) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 113, |
|
"text": "(Parida and Bojar, 2018)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Manual Evaluation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Flawless for translations without any error (typesetting issues with diacritic marks due to different tokenization are ignored),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Manual Evaluation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Good for translations which are generally OK and complete but need a small correction, Partly Correct for cases where a part of the segment is correct but some words are mistranslated,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Manual Evaluation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Ambiguity for segments where the MT system \"misunderstood\" a word's meaning, and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Manual Evaluation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Incomplete for segments that run well but stop too early, missing some content words. This category also includes the relatively rare cases where the Text-to-text or MMT system produced just a single word, unrelated to the source.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Manual Evaluation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The manual evaluation results are summarized in following graph Figure 4 . The MMT system generates more flawless and good translation output as compared to the Text-to-text system (see Figure 4) . The Text-to-text system obtained more partial correct and incomplete translation output. It observed that still Table 3 : Samples of Text-to-text and Multimodal Translation obtained from the Text-to-text mBART and the Multimodal mBART systems. First two columns from left provide the input image and its corresponding English caption. The third and fourth columns are the Bengali captions generated by Text-only and Multimodal translation systems. For each Bengali caption, we also provide the English translation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 72, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 186, |
|
"end": 195, |
|
"text": "Figure 4)", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 310, |
|
"end": 317, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Manual Evaluation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "there are ambiguities exist in the translation output of both systems. Some translation sam-ples are shown in Figure 5 . ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 118, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Manual Evaluation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In this paper, we build an English-Bengali MMT system utilizing bi-lingual text and the associated images which improves the translation quality (based on an automatic evaluation) and resolve ambiguities. Our work helps to build a better English-Bengali MT system and encourages researchers to explore the MMT system for Bengali. The future work includes exploring other state-of-the-art MMT systems on the BVG dataset and performs a comparison analysis (Tamura et al., 2020; Caglayan et al., 2021; Tan et al., 2020; Liu et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 454, |
|
"end": 475, |
|
"text": "(Tamura et al., 2020;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 476, |
|
"end": 498, |
|
"text": "Caglayan et al., 2021;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 499, |
|
"end": 516, |
|
"text": "Tan et al., 2020;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 517, |
|
"end": 534, |
|
"text": "Liu et al., 2021)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "https://github.com/kshitij98/vita", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The mBART-50 model is trained using the same objective as the mBART-25 model. The difference is that the former supports 50 languages one of which is Bengali. Bengali is not supported by the latter.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://huggingface.co/facebook/ mbart-large-50-one-to-many-mmt", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors Shantipriya Parida and Petr Motlicek were supported by the European Union's Horizon 2020 research and innovation program under grant agreement No. 833635 (project ROXANNE: Real-time network, text, and speaker analytics for combating organized crime, 2019-2022).The authors do not see any significant ethical or privacy concerns that would prevent the processing of the data used in the study. The datasets do contain personal data, and these are processed in compliance with the GDPR and national law.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Multimodal attention for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Ozan", |
|
"middle": [], |
|
"last": "Caglayan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lo\u00efc", |
|
"middle": [], |
|
"last": "Barrault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1609.03976" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ozan Caglayan, Lo\u00efc Barrault, and Fethi Bougares. 2016. Multimodal attention for neural machine translation. arXiv preprint arXiv:1609.03976.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Cross-lingual visual pre-training for multimodal machine translation", |
|
"authors": [ |
|
{

"first": "Ozan",

"middle": [],

"last": "Caglayan",

"suffix": ""

},

{

"first": "Menekse",

"middle": [],

"last": "Kuyu",

"suffix": ""

},

{

"first": "Mustafa Sercan",

"middle": [],

"last": "Amac",

"suffix": ""

},

{

"first": "Pranava Swaroop",

"middle": [],

"last": "Madhyastha",

"suffix": ""

},

{

"first": "Erkut",

"middle": [],

"last": "Erdem",

"suffix": ""

},

{

"first": "Aykut",

"middle": [],

"last": "Erdem",

"suffix": ""

},

{

"first": "Lucia",

"middle": [],

"last": "Specia",

"suffix": ""

}
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1317--1324", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ozan Caglayan, Menekse Kuyu, Mustafa Sercan Amac, Pranava Swaroop Madhyastha, Erkut Er- dem, Aykut Erdem, and Lucia Specia. 2021. Cross-lingual visual pre-training for multimodal machine translation. In Proceedings of the 16th Conference of the European Chapter of the As- sociation for Computational Linguistics: Main Volume, pages 1317-1324.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Probing the need for visual context in multimodal machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Ozan", |
|
"middle": [], |
|
"last": "Caglayan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pranava", |
|
"middle": [], |
|
"last": "Swaroop Madhyastha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lo\u00efc", |
|
"middle": [], |
|
"last": "Barrault", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4159--4170", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ozan Caglayan, Pranava Swaroop Madhyastha, Lucia Specia, and Lo\u00efc Barrault. 2019. Prob- ing the need for visual context in multimodal machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4159-4170.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Incorporating global visual features into attention-based neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Iacer", |
|
"middle": [], |
|
"last": "Calixto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iacer Calixto and Qun Liu. 2017. Incorporating global visual features into attention-based neu- ral machine translation. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Understanding the effect of textual adversaries in multimodal machine translation", |
|
"authors": [ |
|
{

"first": "Koel",

"middle": [],

"last": "Dutta Chowdhury",

"suffix": ""

},

{

"first": "Desmond",

"middle": [],

"last": "Elliott",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Beyond Vision and LANguage: in-TEgrating Real-world kNowledge (LANTERN)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "35--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koel Dutta Chowdhury and Desmond Elliott. 2019. Understanding the effect of textual adversaries in multimodal machine translation. In Proceed- ings of the Beyond Vision and LANguage: in- TEgrating Real-world kNowledge (LANTERN), pages 35-40.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Multimodal neural machine translation for low-resource language pairs using synthetic data", |
|
"authors": [ |
|
{

"first": "Koel",

"middle": [],

"last": "Dutta Chowdhury",

"suffix": ""

},

{

"first": "Mohammed",

"middle": [],

"last": "Hasanuzzaman",

"suffix": ""

},

{

"first": "Qun",

"middle": [],

"last": "Liu",

"suffix": ""

}
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--42", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-3405" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koel Dutta Chowdhury, Mohammed Hasanuzza- man, and Qun Liu. 2018. Multimodal neural machine translation for low-resource language pairs using synthetic data. In Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP, pages 33-42, Melbourne. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Adversarial evaluation of multimodal machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Desmond", |
|
"middle": [], |
|
"last": "Elliott", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2974--2978", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Desmond Elliott. 2018. Adversarial evaluation of multimodal machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2974-2978.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Understanding the difficulty of training deep feedforward neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Xavier", |
|
"middle": [], |
|
"last": "Glorot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "249--256", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Under- standing the difficulty of training deep feedfor- ward neural networks. In Proceedings of the Thirteenth International Conference on Artifi- cial Intelligence and Statistics, volume 9 of Pro- ceedings of Machine Learning Research, pages 249-256, Chia Laguna Resort, Sardinia, Italy. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Vita: Visual-linguistic translation by aligning object tags", |
|
"authors": [ |
|
{ |
|
"first": "Kshitij", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Devansh", |
|
"middle": [], |
|
"last": "Gautam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radhika", |
|
"middle": [], |
|
"last": "Mamidi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2106.00250" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kshitij Gupta, Devansh Gautam, and Radhika Mamidi. 2021. Vita: Visual-linguistic transla- tion by aligning object tags. arXiv preprint arXiv:2106.00250.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Pmindiaa collection of parallel corpora of languages of india", |
|
"authors": [ |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Faheem", |
|
"middle": [], |
|
"last": "Kirefu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2001.09907" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barry Haddow and Faheem Kirefu. 2020. Pmindia- a collection of parallel corpora of languages of india. arXiv preprint arXiv:2001.09907.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Neural machine translation for the bangla-english language pair", |
|
"authors": [ |
|
{

"first": "Md Arid",

"middle": [],

"last": "Hasan",

"suffix": ""

},

{

"first": "Firoj",

"middle": [],

"last": "Alam",

"suffix": ""

},

{

"first": "Shammur Absar",

"middle": [],

"last": "Chowdhury",

"suffix": ""

},

{

"first": "Naira",

"middle": [],

"last": "Khan",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "2019 22nd International Conference on Computer and Information Technology (ICCIT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Md Arid Hasan, Firoj Alam, Shammur Absar Chowdhury, and Naira Khan. 2019. Neural ma- chine translation for the bangla-english language pair. In 2019 22nd International Conference on Computer and Information Technology (ICCIT), pages 1-6. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Not lowresource anymore: Aligner ensembling, batch filtering, and new datasets for bengali-english machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Tahmid", |
|
"middle": [], |
|
"last": "Hasan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abhik", |
|
"middle": [], |
|
"last": "Bhattacharjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kazi", |
|
"middle": [], |
|
"last": "Samin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masum", |
|
"middle": [], |
|
"last": "Hasan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Madhusudan", |
|
"middle": [], |
|
"last": "Basak", |
|
"suffix": "" |
|
}, |
|
{

"first": "M Sohel",

"middle": [],

"last": "Rahman",

"suffix": ""

},

{

"first": "Rifat",

"middle": [],

"last": "Shahriyar",

"suffix": ""

}
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2612--2623", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tahmid Hasan, Abhik Bhattacharjee, Kazi Samin, Masum Hasan, Madhusudan Basak, M Sohel Rahman, and Rifat Shahriyar. 2020. Not low- resource anymore: Aligner ensembling, batch filtering, and new datasets for bengali-english machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP), pages 2612- 2623.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{

"first": "Diederik",

"middle": [

"P"

],

"last": "Kingma",

"suffix": ""

},

{

"first": "Jimmy",

"middle": [],

"last": "Ba",

"suffix": ""

}
|
], |
|
"year": 2014, |
|
"venue": "3rd International Conference for Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Cite arxiv:1412.6980Comment: Published as a con- ference paper at the 3rd International Confer- ence for Learning Representations, San Diego, 2015.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Sentence-Piece: A simple and language independent subword tokenizer and detokenizer for neural text processing", |
|
"authors": [ |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "66--71", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-2012" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taku Kudo and John Richardson. 2018. Sentence- Piece: A simple and language independent sub- word tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Confer- ence on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66- 71, Brussels, Belgium. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Multimodal neural machine translation for english to hindi", |
|
"authors": [ |
|
{

"first": "Sahinur Rahman",

"middle": [],

"last": "Laskar",

"suffix": ""

},

{

"first": "Abdullah Faiz Ur Rahman",

"middle": [],

"last": "Khilji",

"suffix": ""

},

{

"first": "Partha",

"middle": [],

"last": "Pakray",

"suffix": ""

},

{

"first": "Sivaji",

"middle": [],

"last": "Bandyopadhyay",

"suffix": ""

}
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 7th Workshop on Asian Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "109--113", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sahinur Rahman Laskar, Abdullah Faiz Ur Rah- man Khilji, Partha Pakray, and Sivaji Bandy- opadhyay. 2020. Multimodal neural machine translation for english to hindi. In Proceedings of the 7th Workshop on Asian Translation, pages 109-113.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marjan", |
|
"middle": [], |
|
"last": "Ghazvininejad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abdelrahman", |
|
"middle": [], |
|
"last": "Mohamed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7871--7880", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.703" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Variational multimodal machine translation with underlying semantic alignment", |
|
"authors": [ |
|
{ |
|
"first": "Xiao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shiliang", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huawen", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Information Fusion", |
|
"volume": "69", |
|
"issue": "", |
|
"pages": "73--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiao Liu, Jing Zhao, Shiliang Sun, Huawen Liu, and Hao Yang. 2021. Variational multimodal machine translation with underlying semantic alignment. Information Fusion, 69:73-80.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Multilingual denoising pre-training for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiatao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xian", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marjan", |
|
"middle": [], |
|
"last": "Ghazvininejad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "726--742", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilin- gual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Antonios Anastasopoulos, Yiming Yang, and Graham Neubig. 2020. Practical comparable data collection for low-resource languages via images", |
|
"authors": [ |
|
{ |
|
"first": "Aman", |
|
"middle": [], |
|
"last": "Madaan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shruti", |
|
"middle": [], |
|
"last": "Rijhwani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.11954" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aman Madaan, Shruti Rijhwani, Antonios Anas- tasopoulos, Yiming Yang, and Graham Neu- big. 2020. Practical comparable data collection for low-resource languages via images. arXiv preprint arXiv:2004.11954.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Bleu: a Method for Automatic Evaluation of Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a Method for Au- tomatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Translating short segments with nmt: A case study in english-to-hindi", |
|
"authors": [ |
|
{ |
|
"first": "Shantipriya", |
|
"middle": [], |
|
"last": "Parida", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shantipriya Parida and Ond\u0159ej Bojar. 2018. Trans- lating short segments with nmt: A case study in english-to-hindi.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Hindi visual genome: A dataset for multimodal english-to-hindi machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Shantipriya", |
|
"middle": [], |
|
"last": "Parida", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Satya", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.08948" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shantipriya Parida, Ond\u0159ej Bojar, and Satya Ran- jan Dash. 2019. Hindi visual genome: A dataset for multimodal english-to-hindi machine transla- tion. arXiv preprint arXiv:1907.08948.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Amulya Ratna Dash, Satya Ranjan Dash, Debasish Kumar Mallick, Satya Prakash Biswal, Priyanka Pattnaik, Biranchi Narayan Nayak, and Ond\u0159ej Bojar. 2020. Odianlp's participation in wat2020", |
|
"authors": [ |
|
{ |
|
"first": "Shantipriya", |
|
"middle": [], |
|
"last": "Parida", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Motlicek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 7th Workshop on Asian Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "103--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shantipriya Parida, Petr Motlicek, Amulya Ratna Dash, Satya Ranjan Dash, Debasish Kumar Mallick, Satya Prakash Biswal, Priyanka Pat- tnaik, Biranchi Narayan Nayak, and Ond\u0159ej Bo- jar. 2020. Odianlp's participation in wat2020. In Proceedings of the 7th Workshop on Asian Translation, pages 103-108.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Pytorch: An imperative style, highperformance deep learning library", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Paszke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Massa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lerer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Bradbury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [], |
|
"last": "Chanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Killeen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeming", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natalia", |
|
"middle": [], |
|
"last": "Gimelshein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luca", |
|
"middle": [], |
|
"last": "Antiga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alban", |
|
"middle": [], |
|
"last": "Desmaison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Kopf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [], |
|
"last": "Devito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Raison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alykhan", |
|
"middle": [], |
|
"last": "Tejani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sasank", |
|
"middle": [], |
|
"last": "Chilamkurthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benoit", |
|
"middle": [], |
|
"last": "Steiner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "8024--8035", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chin- tala. 2019. Pytorch: An imperative style, high- performance deep learning library. In H. Wal- lach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9- Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A call for clarity in reporting BLEU scores", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "186--191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Con- ference on Machine Translation: Research Pa- pers, pages 186-191, Belgium, Brussels. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Samanantar: The largest publicly available parallel corpora collection for 11 indic languages", |
|
"authors": [ |
|
{ |
|
"first": "Gowtham", |
|
"middle": [], |
|
"last": "Ramesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sumanth", |
|
"middle": [], |
|
"last": "Doddapaneni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravinth", |
|
"middle": [], |
|
"last": "Bheemaraj", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mayank", |
|
"middle": [], |
|
"last": "Jobanputra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Raghavan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ajitesh", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sujit", |
|
"middle": [], |
|
"last": "Sahoo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harshita", |
|
"middle": [], |
|
"last": "Diddee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Divyanshu", |
|
"middle": [], |
|
"last": "Kakwani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Navneet", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2104.05596" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gowtham Ramesh, Sumanth Doddapaneni, Ar- avinth Bheemaraj, Mayank Jobanputra, Ragha- van AK, Ajitesh Sharma, Sujit Sahoo, Harshita Diddee, Divyanshu Kakwani, Navneet Kumar, et al. 2021. Samanantar: The largest publicly available parallel corpora collection for 11 indic languages. arXiv preprint arXiv:2104.05596.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Ond\u0159ej Bojar, and Satya Ranjan Dash. (in press) 2021. Bengali visual genome: A multimodal datasetfor machine translation and image captioning", |
|
"authors": [ |
|
{ |
|
"first": "Arghyadeep", |
|
"middle": [], |
|
"last": "Sen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shantipriya", |
|
"middle": [], |
|
"last": "Parida", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ketan", |
|
"middle": [], |
|
"last": "Kotwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Subhadarshi", |
|
"middle": [], |
|
"last": "Panda", |
|
"suffix": "" |
|
},

{

"first": "Ond\u0159ej",

"middle": [],

"last": "Bojar",

"suffix": ""

},

{

"first": "Satya",

"middle": [

"Ranjan"

],

"last": "Dash",

"suffix": ""

}
|
], |
|
"year": null, |
|
"venue": "Proceedings of 9th International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arghyadeep Sen, Shantipriya Parida, Ketan Kot- wal, Subhadarshi Panda, Ond\u0159ej Bojar, and Satya Ranjan Dash. (in press) 2021. Bengali vi- sual genome: A multimodal datasetfor machine translation and image captioning. In Proceed- ings of 9th International Conference on Fron- tiers of Intelligent Computing: Theory and Ap- plications (FICTA). Springer.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Multimodal machine translation through visuals and speech", |
|
"authors": [ |
|
{ |
|
"first": "Umut", |
|
"middle": [], |
|
"last": "Sulubacak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ozan", |
|
"middle": [], |
|
"last": "Caglayan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stig-Arne", |
|
"middle": [], |
|
"last": "Gr\u00f6nroos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aku", |
|
"middle": [], |
|
"last": "Rouhe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Desmond", |
|
"middle": [], |
|
"last": "Elliott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Machine Translation", |
|
"volume": "34", |
|
"issue": "2", |
|
"pages": "97--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Umut Sulubacak, Ozan Caglayan, Stig-Arne Gr\u00f6n- roos, Aku Rouhe, Desmond Elliott, Lucia Spe- cia, and J\u00f6rg Tiedemann. 2020. Multimodal machine translation through visuals and speech. Machine Translation, 34(2):97-147.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Tmu japanese-english multimodal machine translation system for wat 2020", |
|
"authors": [ |
|
{ |
|
"first": "Hiroto", |
|
"middle": [], |
|
"last": "Tamura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tosho", |
|
"middle": [], |
|
"last": "Hirasawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masahiro", |
|
"middle": [], |
|
"last": "Kaneko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mamoru", |
|
"middle": [], |
|
"last": "Komachi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 7th Workshop on Asian Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "80--91", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hiroto Tamura, Tosho Hirasawa, Masahiro Kaneko, and Mamoru Komachi. 2020. Tmu japanese-english multimodal machine transla- tion system for wat 2020. In Proceedings of the 7th Workshop on Asian Translation, pages 80- 91.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "An empirical study on ensemble learning of multimodal machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lin", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yifeng", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dong", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kaixi", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dong", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peipei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "2020 IEEE Sixth International Conference on Multimedia Big Data (BigMM)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "63--69", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang Tan, Lin Li, Yifeng Han, Dong Li, Kaixi Hu, Dong Zhou, and Peipei Wang. 2020. An empir- ical study on ensemble learning of multimodal machine translation. In 2020 IEEE Sixth In- ternational Conference on Multimedia Big Data (BigMM), pages 63-69. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Multilingual translation with extensible multilingual pretraining and finetuning", |
|
"authors": [ |
|
{ |
|
"first": "Yuqing", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chau", |
|
"middle": [], |
|
"last": "Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xian", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peng-Jen", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiatao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and fine- tuning.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. At- tention is all you need. In Advances in Neu- ral Information Processing Systems, volume 30. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Transformers: Stateof-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Shleifer", |
|
"suffix": "" |
|
}, |
|
{

"first": "Patrick",

"middle": [],

"last": "von Platen",

"suffix": ""

},

{

"first": "Clara",

"middle": [],

"last": "Ma",

"suffix": ""

},

{

"first": "Yacine",

"middle": [],

"last": "Jernite",

"suffix": ""

},

{

"first": "Julien",

"middle": [],

"last": "Plu",

"suffix": ""

},

{

"first": "Canwen",

"middle": [],

"last": "Xu",

"suffix": ""

},

{

"first": "Teven",

"middle": [

"Le"

],

"last": "Scao",

"suffix": ""

},

{

"first": "Sylvain",

"middle": [],

"last": "Gugger",

"suffix": ""

},

{

"first": "Mariama",

"middle": [],

"last": "Drame",

"suffix": ""

},

{

"first": "Quentin",

"middle": [],

"last": "Lhoest",

"suffix": ""

},

{

"first": "Alexander",

"middle": [

"M"

],

"last": "Rush",

"suffix": ""

}
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "38--45", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State- of-the-art natural language processing. In Pro- ceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Sys- tem Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Multimodal transformer for multimodal machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Shaowei", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaojun", |
|
"middle": [], |
|
"last": "Wan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4346--4350", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shaowei Yao and Xiaojun Wan. 2020. Multimodal transformer for multimodal machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4346-4350.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Development BLEU scores during training.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Manual evaluation summary. Out of 100 translation samples, five categories are chosen for observing Text-to-text and Multimodal Machine Translation Accuracy", |
|
"type_str": "figure" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Statistics of BVG for experiments. The number of tokens for English (EN) and Bengali (BN) for each set are reported.", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Text only and multimodal translation performance on the BVG dataset.", |
|
"content": "<table><tr><td>Input Image</td><td>Input Caption</td><td colspan=\"2\">Text-to-text</td><td/><td>MMT Result</td></tr><tr><td/><td/><td>Result</td><td/><td/></tr><tr><td/><td>The water bottle on the stand</td><td colspan=\"3\">\u09af\u09cd\u09be\u09c7 \u099c\u09c7\u09b2\u09b0 \u09c7\u09ac\u09be-\u09a4\u09b2</td><td>\u09af\u09cd\u09be\u09c7 \u099c\u09c7\u09b2\u09b0 \u09c7\u09ac\u09be-\u09a4\u09b2</td></tr><tr><td/><td/><td colspan=\"3\">\"Water bottle on</td><td>\"Water bottle on</td></tr><tr><td/><td/><td colspan=\"2\">the stand\"</td><td/><td>the stand\"</td></tr><tr><td/><td>Two people wait-ing to cross</td><td colspan=\"2\">\u09a6\u09c1 \u099c\u09a8 \u09c7\u09b2\u09be\u0995 \u0985\u09c7\u09aa\u0995\u09cd\u09b7\u09be \u0995\u09b0\u09c7\u099b</td><td>\u09b8</td><td>\u09a6\u09c1 \u099c\u09a8 \u09c7\u09b2\u09be\u0995 \u0985\u09c7\u09aa\u0995\u09cd\u09b7\u09be \u0995\u09b0\u09c7\u099b</td><td>\u09b8</td></tr><tr><td/><td/><td colspan=\"3\">\"Two people are</td><td>\"Two people are</td></tr><tr><td/><td/><td colspan=\"2\">waiting cross\"</td><td/><td>waiting cross\"</td></tr><tr><td/><td>Man standing on a tennis court</td><td colspan=\"3\">\u09c7\u099f\u09bf\u09a8\u09b8 \u09c7\u0995\u09be\u09c7\u099f\u09b0\u09cd \u09a6\u0981 \u09be\u09bf\u09dc-\u09c7\u09df \u09c7\u09b2\u09be\u0995</td><td>\u09c7\u099f\u09bf\u09a8\u09b8 \u09c7\u0995\u09be\u09c7\u099f\u09b0\u09cd \u09a6\u0981 \u09be\u09bf\u09dc-\u09c7\u09df \u09c7\u09b2\u09be\u0995</td></tr><tr><td/><td/><td colspan=\"3\">\"Man standing on</td><td>\"Man standing on</td></tr><tr><td/><td/><td colspan=\"2\">a tennis court\"</td><td/><td>a tennis court\"</td></tr><tr><td/><td>stamp on boy's left hand</td><td colspan=\"3\">\u09c7\u099b\u09c7\u09b2\u09bf\u099f\u09b0 \u09ac\u09be\u09ae \u09b9\u09be\u09c7\u09a4 \u09af\u09cd\u09be</td><td>\u09c7\u099b\u09c7\u09b2\u09bf\u099f\u09b0 \u09ac\u09be\u09ae \u09b9\u09be\u09c7\u09a4 \u09af\u09cd\u09be</td></tr><tr><td/><td/><td colspan=\"3\">\"Stank on boy's</td><td>\"Stamp on boy's</td></tr><tr><td/><td/><td colspan=\"3\">left hand\" (in-</td><td>left hand\" (cor-</td></tr><tr><td/><td/><td>correct</td><td colspan=\"2\">Bengali</td><td>rect Bengali word</td></tr><tr><td/><td/><td colspan=\"3\">word 'Stank' ob-</td><td>'stamp' obtained</td></tr><tr><td/><td/><td colspan=\"3\">tained in T2T</td><td>in MMT transla-</td></tr><tr><td/><td/><td colspan=\"2\">translation)</td><td/><td>tion)</td></tr><tr><td/><td>fence around the court</td><td colspan=\"3\">\u0986\u09a6\u09be\u09b2\u09c7\u09a4\u09b0 \u099a\u09be\u09b0\u09bf\u09a6-\u09c7\u0995 \u09c7\u09ac\u09dc\u09be</td><td>\u09c7\u0995\u09be\u09c7\u099f\u09b0\u09cd \u09b0 \u099a\u09be\u09b0\u09aa\u09be\u09c7\u09b6 \u09c7\u09ac\u09dc\u09be</td></tr><tr><td/><td/><td>\"Fence</td><td colspan=\"2\">around</td><td>\"Fence</td><td>around</td></tr><tr><td/><td/><td colspan=\"3\">the court\" (court</td><td>the court\" (court</td></tr><tr><td/><td/><td colspan=\"3\">is translated by</td><td>is translated by</td></tr><tr><td/><td/><td colspan=\"3\">T2T as Judicial</td><td>MMT as Tennis</td></tr><tr><td/><td/><td colspan=\"3\">Court in Bengali)</td><td>Court in Bengali)</td></tr></table>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |