{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:33:28.883696Z" }, "title": "Product Review Translation: Parallel Corpus Creation and Robustness towards User-generated Noisy Text", "authors": [ { "first": "Kumar", "middle": [], "last": "Kamal", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Soumya", "middle": [], "last": "Gupta", "suffix": "", "affiliation": {}, "email": "" }, { "first": "", "middle": [], "last": "Chennabasavraj", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Nikesh", "middle": [], "last": "Garera", "suffix": "", "affiliation": {}, "email": "nikesh.garera@flipkart.com" }, { "first": "Asif", "middle": [], "last": "Ekbal", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Reviews written by the users for a particular product or service play an influencing role for the customers to make an informative decision. Although online e-commerce portals have immensely impacted our lives, available contents predominantly are in English language-often limiting its widespread usage. There is an exponential growth in the number of e-commerce users who are not proficient in English. Hence, there is a necessity to make these services available in non-English languages, especially in a multilingual country like India. This can be achieved by an in-domain robust machine translation (MT) system. However, the reviews written by the users pose unique challenges to MT, such as misspelled words, ungrammatical constructions, presence of colloquial terms, lack of resources such as in-domain parallel corpus etc. We address the above challenges by presenting an English-Hindi review domain parallel corpus. We train an English-to-Hindi neural machine translation (NMT) system to translate the product reviews available on e-commerce websites. By training the Transformer based NMT model over the generated data, we achieve a score of 33.26 BLEU points for English-to-Hindi translation. In order to make our NMT model robust enough to handle the noisy tokens in the reviews, we integrate a character based language model to generate word vectors and map the noisy tokens with their correct forms. Experiments on four language pairs, viz.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Reviews written by the users for a particular product or service play an influencing role for the customers to make an informative decision. Although online e-commerce portals have immensely impacted our lives, available contents predominantly are in English language-often limiting its widespread usage. There is an exponential growth in the number of e-commerce users who are not proficient in English. Hence, there is a necessity to make these services available in non-English languages, especially in a multilingual country like India. This can be achieved by an in-domain robust machine translation (MT) system. However, the reviews written by the users pose unique challenges to MT, such as misspelled words, ungrammatical constructions, presence of colloquial terms, lack of resources such as in-domain parallel corpus etc. We address the above challenges by presenting an English-Hindi review domain parallel corpus. We train an English-to-Hindi neural machine translation (NMT) system to translate the product reviews available on e-commerce websites. 
By training the Transformer based NMT model over the generated data, we achieve a score of 33.26 BLEU points for English-to-Hindi translation. In order to make our NMT model robust enough to handle the noisy tokens in the reviews, we integrate a character based language model to generate word vectors and map the noisy tokens with their correct forms. Experiments on four language pairs, viz.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In the era of exponentially rising internet users, contents over social media, e-commerce portals are increasing rapidly. In recent times, there has been a phenomenal growth in the number of e-commerce users, especially during this COVID pandemic situation. However, the contents in such e-commerce portals are mostly in English, limiting the scope of these services to only a section of the society who can read and/or write in English. India is a multilingual country with 22 officially spoken languages. The number of internet users in India has increased dramatically in the last few years with the widespread usage of low-cost android phones. Users find it very difficult to understand the English contents written in these service portals. Hence, there is a great demand to translate these contents from English to Indian languages. As the manual translation is both time-consuming and cost-sensitive, building an automated machine translation (MT) system that could translate these enormous amounts of reviews will be of great interest. However, there are several challenges for this, such as the non-availability of in-domain parallel training corpus, noisy nature of the text, ungrammatical constructions, and the mixing of more than one language (i.e. code-mixed contents) (ref. Table 1 ). Product reviews are user generated content where writing inconsistencies are common as shown in Table 1 . It is possible to make mistakes in writing words in a sentence due to various reasons, for example, weak grasp on the language, fast writing, writing just to convey the message without concerning more about the sentence formation etc. In our current work, we take up English-to-Hindi translation as there are 57.09% Hindi 1. Source the perfomence of the phone is bad.", "cite_spans": [], "ref_spans": [ { "start": 1289, "end": 1296, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 1396, "end": 1403, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Reference \u092b\u094b\u0928 \u0915 \u092a\u0930\u092b\u0949\u092e \u0938 \u0916\u0930\u093e\u092c \u0939\u0948 \u0964 (phone of performance bad is.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Output \u092b\u094b\u0928 \u0915\u093e \u092a\u0930 \u0942 \u092e \u0916\u0930\u093e\u092c \u0939\u0948 \u0964 (phone of perfume bad is.) 2. Source The content is a disgrce to the page. Reference Der Inhalt ist eine Schande f\u00fcr die Seite.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Der Inhalt ist eine Abneigung gegen die Seite. 3. Source current procedure is more transpatent Reference la proc\u00e9dure courante est plus transparente . Output la proc\u00e9dure courante est plus transcriptive . speakers in India 1 . These two languages are morphologically and syntactically distant to each other, posing challenges to build a robust NMT system. 
We crawl the English review sentences (electronic gadgets) from the ecommerce websites. After pre-processing (ref.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output", "sec_num": null }, { "text": "Section 3.2) and filtering (ref. Section 3. 3), we translate the English sentences into Hindi language using our in-house English-Hindi translation system 2 . The generated Hindi output sentences are given to the professionals who are experts in Hindi and English languages. The language experts post-edit the Hindi output as per the guidelines (ref. Section 3.5) provided to them. In addition, we also crawl monolingual Hindi sentences (ref. Section 3.6) from electronics gadgets' description websites. These sentences are back-translated 3 (Sennrich et al., 2016a) using the Hindi-to-English translation model trained over the post-edited parallel corpus. Neural machine translation (NMT) (Bahdanau et al., 2015; Vaswani et al., 2017) is the dominant translation technology nowadays, and adapting this to the noisy text is very crucial due to the phenomenal growth in social media. Since NMT models learn from a fixed number of source and target vocabulary during training, any noisy word during the inference becomes an out-of-vocabulary (OOV) token because it does not belong to the NMT model's training vocabulary. It is not possible to train an NMT model with all the noisy versions of a correct token. In this case, models treat noisy tokens as OOV tokens and either miss their translation in the output sentence or translate them incorrectly. Incorrect translation of noisy tokens affects the translation quality of the whole output sequence. It affects the translation output and degrades the output quality. For example, English-to-Hindi (En-to-Hi) NMT model has one token performance as a part of its source vocabulary during training. As shown in example 1 in Table 2 , a noisy version performence appears in the input sentence which is incorrectly translated as '\u092a\u0930 \u0942 \u092e' perfume instead of '\u092a\u0930\u092b\u0949\u092e \u0938' performance. Similarly, in examples 2 and 3, we can see that disgrce and transpatent are the noisy tokens in the English to German (En-to-De) and English to French (En-to-Fr) models, respectively, and both of these noisy tokens are incorrectly translated by their respective translation models.", "cite_spans": [ { "start": 27, "end": 43, "text": "(ref. Section 3.", "ref_id": null }, { "start": 691, "end": 714, "text": "(Bahdanau et al., 2015;", "ref_id": "BIBREF1" }, { "start": 715, "end": 736, "text": "Vaswani et al., 2017)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 1672, "end": 1680, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Output", "sec_num": null }, { "text": "To handle the noisy tokens as source input, we integrate a similarity based token replacement model before word segmentation at inference time where the word vectors of noisy input tokens are matched with the correct and seen tokens in the source vocabulary, and replaced with the highest similar token. We use a character based language model to generate the word vectors and map the noisy and correct version of tokens in vector space. The generation of the word vectors depends on the characters present in that word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output", "sec_num": null }, { "text": "The remainder of the paper is organized as follows. In Section 2, we discuss the related work. 
Section 5 presents the approaches of training the character language model, word vector generation model, and handling of noisy source input tokens at inference time. Section 6 presents the details regarding the dataset used and the experimental setup. Results and analysis of our approach are discussed in Section 7. Finally, Section 8 concludes the work with future research directions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output", "sec_num": null }, { "text": "Machine translation with noisy text is, itself, a very challenging task. Noisy tokens (misspelled words) pose great challenges to develop the Neural Machine Translation (NMT) models (c.f. Table 2) (Michel and Neubig, 2018) . In the literature, there are a few existing works that focus on handling the noisy text by increasing the robustness of the translation model. An MTNT (machine translation of noisy text) test-bed was introduced in (Michel and Neubig, 2018) that discussed the challenges of noisy contents. It has been also observed that even small noise in the input sentence can degrade the translation quality of the NMT model significantly (Belinkov and Bisk, 2018; Karpukhin et al., 2019) . To improve the robustness of the translation model, they introduced synthetic errors like character swapping, replacement and drop in the corpus. Synthetic noise using back-translated corpus was also inserted in the original corpus to introduce the NMT model with noise at train- (Vaibhav et al., 2019; Anastasopoulos et al., 2019) . Since it is difficult for the NMT model to see all the noisy variants of a correct token at training time, the model hence treats the noisy tokens as the unseen tokens. Word segmentation is a popular method that deals with the unseen tokens. Byte-pair-encoding (BPE) (Sennrich et al., 2016b) segments the words based on the rare character combinations. In BPE, a word is converted into the subword units based on the fixed learned list of less frequent character combinations. Subword regularization (SR) (Kudo, 2018) was introduced as a more diverse word segmentation method which segments the words based on a unigram language model. For these segmentation models, it is difficult to capture all the noisy versions at training time. So before segmentation, we use a character based language model that maps the noisy and correct versions of tokens together in a vector space as shown in Figure 1 . 
It helps to replace the noisy token with its correct form before inference.", "cite_spans": [ { "start": 197, "end": 222, "text": "(Michel and Neubig, 2018)", "ref_id": "BIBREF13" }, { "start": 439, "end": 464, "text": "(Michel and Neubig, 2018)", "ref_id": "BIBREF13" }, { "start": 651, "end": 676, "text": "(Belinkov and Bisk, 2018;", "ref_id": "BIBREF2" }, { "start": 677, "end": 700, "text": "Karpukhin et al., 2019)", "ref_id": "BIBREF9" }, { "start": 983, "end": 1005, "text": "(Vaibhav et al., 2019;", "ref_id": "BIBREF16" }, { "start": 1006, "end": 1034, "text": "Anastasopoulos et al., 2019)", "ref_id": "BIBREF0" }, { "start": 1304, "end": 1328, "text": "(Sennrich et al., 2016b)", "ref_id": "BIBREF15" }, { "start": 1542, "end": 1554, "text": "(Kudo, 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 188, "end": 196, "text": "Table 2)", "ref_id": "TABREF2" }, { "start": 1926, "end": 1935, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "There has not been any significant attempt to translate the product reviews, except the one proposed in (Berard et al., 2019) that addressed the translation of English to French. In contrast, we develop product review translation system for English-Hindi. English and Hindi are morphologically and syntactically distant languages, which pose more challenges for machine translation. Further, Hindi is a resource-poor language for which we do not have sufficient resources and tools, even for the generic domain.", "cite_spans": [ { "start": 104, "end": 125, "text": "(Berard et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We crawl English product reviews from the ecommerce portal, Flipkart. Product reviews are user generated contents and contain various noises (inconsistencies) as shown in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 171, "end": 179, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Crawling reviews and challenges in pre-processing", "sec_num": "3.1" }, { "text": "We remove the emojis from the English sentence by providing their unicode range using regular expressions. Any character having repetition of more than 2 times is trimmed and then checked for its compatible correct word using spell-checker 4 , and a list provided by Facebook 5 (Edizel et al., 2019) . Writing the complete sentence in upper case is also very common in user generated content (i.e. NICE PHONE IN LOW BUDGET). Normalization is done to convert all such instances into the lower case. Since we focus on the product reviews data, we make the first character of brand's name 6 (Google, Moto, Nokia etc.) as capital. After the pre-processing steps as mentioned above (emoji removal, character repetition, casing etc.), we found that approximately 62.3% sentences from the total crawled sentences are corrected.", "cite_spans": [ { "start": 278, "end": 299, "text": "(Edizel et al., 2019)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-processing", "sec_num": "3.2" }, { "text": "We prepare the translation model to deal with the noises as mentioned in Section 3.1. Some sentences in reviews are written in Roman script 7 . We consider these sentences as nonstandard sentences. Before generating the target counterpart of the source sentences, we filter out the non-standard sentences using an autoencoder based NMT model. 
We use Sockeye toolkit (Hieber et al., 2018) to train our model, and the hyperparameters used are mentioned in Section 6.2. Steps involved in the filtering process are as follows:", "cite_spans": [ { "start": 366, "end": 387, "text": "(Hieber et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Filtering Standard vs. Non-standard Sentences", "sec_num": "3.3" }, { "text": "\u2022 Apply 30,000 BPE merge operations using subword technique (Sennrich et al., 2016b) \u2022 Train an English-to-English system. Here, the source and target are identical. \u2022 After training, infer the English sentence from the crawled product reviews and generate an English hypothesis. \u2022 Calculate the similarity between the input sentence and the inferred hypothesis using BLEU score. \u2022 If BLEU < 40 then the sentence will be filtered out. We consider 40 BLEU point as a threshold because BLEU in the range 30-40 is considered as \"understandable to good translations\" 8 .", "cite_spans": [ { "start": 60, "end": 84, "text": "(Sennrich et al., 2016b)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Filtering Standard vs. Non-standard Sentences", "sec_num": "3.3" }, { "text": "The objective of training the autoencoder is to generate an output sequence very similar to the input sequence. Model would not be able to regenerate a source input properly if it is not trained on the similar kind of samples. On an average from the total crawled reviews, 15 to 20 % reviews were filtered out as the non-standard sentences which were dropped and not considered further. After this filtering, there were still some sentences left, having grammar and spelling inconsistencies. These sentences have been considered as noisy sentences. Noise handling techniques as discussed in Section 5 are used to train the model to translate the noisy sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering Standard vs. Non-standard Sentences", "sec_num": "3.3" }, { "text": "Post-editing After pre-processing and filtering, we obtain 16,138 standard English sentences. Instead of translating sentences from scratch, we use an in-house English-Hindi machine translation system developed for the judicial domain. The model is trained for English-Hindi translation using judicial data (English-Hindi), and additional English-Hindi corpus (Kunchukuttan et al., 2018) 9 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gold Corpus Creation by Human", "sec_num": "3.4" }, { "text": "The sentences generated from this automatic translation are postedited by human experts. The experts are post-graduates in linguistics and have good command in Hindi and English both. The experts read the English sentences and its Hindi translation. They were instructed to make the correction in the sentences, if required. Some guidelines for making the corrections in the data are mentioned in Section 3.5. The human corrected parallel corpus is divided into training, development and test set consisting of 13000, 599 and 2,539 parallel sentences, respectively. Vocabulary size of English and Hindi training data is 9,331 and 8,367 tokens respectively. We also crawl Hindi sentences and back-translate them into English. In Table 4, 'Base+BT' shows the size of those samples. 
Section 3.6 describes the generation process of that synthetic (back-translated) data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gold Corpus Creation by Human", "sec_num": "3.4" }, { "text": "Creation Guidelines for making the corrections (ref.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guidelines for the Gold Corpus", "sec_num": "3.5" }, { "text": "Section 3.4) to generate the review domain parallel corpus are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guidelines for the Gold Corpus", "sec_num": "3.5" }, { "text": "\u2022 Source and target sentence should carry the same semantic structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guidelines for the Gold Corpus", "sec_num": "3.5" }, { "text": "\u2022 Product name should be transliterated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guidelines for the Gold Corpus", "sec_num": "3.5" }, { "text": "\u2022 User friendly vocabulary selection at Hindi (target) side. Too many complicated Hindi words which are not in much use should be avoided. Transliteration of an English word can also be used in the Hindi side because in India, people generally use Hinglish (mix of Hindi and English words) vocabulary, e.g. 'time', 'face recognition', 'premium' etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guidelines for the Gold Corpus", "sec_num": "3.5" }, { "text": "\u2022 If hyphen, slash, dot etc. symbols occur in the source side then the same pattern should be preserved at the translated side too.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guidelines for the Gold Corpus", "sec_num": "3.5" }, { "text": "\u2022 Literal translation can be avoided sometimes. For example, adjectives and nouns like terrible, great etc. which carry extreme intensity should be translated into understandable simple words as \u0918 \u091f\u092f\u093e (ghatiya), \u0936\u093e\u0928\u0926\u093e\u0930 (shaanadaar) respectively which preserve the sense and intensity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Guidelines for the Gold Corpus", "sec_num": "3.5" }, { "text": "A few samples from the generated parallel English-Hindi corpus are shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 83, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Guidelines for the Gold Corpus", "sec_num": "3.5" }, { "text": "We crawl 35,000 monolingual Hindi sentences from the various websites 101112 which provide Hindi descriptions of electronic gadgets. Since these are commercial websites, we randomly gave 3,000 sentences out of all the crawled Hindi sentences as a sample to our language experts to read, and they found them to be in-domain, relevant, correct in the sense of syntax and semantics, and hence useful for our use-case. We build a Hindito-English NMT model to back-translate the crawled Hindi sentences. We use IIT Bombay Hindi-English general domain parallel corpus (Kunchukuttan et al., 2018) to train a Hindito-English NMT model, and then fine-tune it over the human corrected review domain parallel corpus. The fine-tuned Hindi-to-English NMT model is used to back-translate the crawled 35,000 monolingual Hindi sentences into English. This back-translated (BT) English-Hindi synthetic parallel corpus is augmented with the original 13,000 parallel sentences. Table 4 gives the statistics about the dataset. 
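To make the augmentation step concrete, the following is a minimal sketch of the back-translation loop just described; translate_hi_en is a hypothetical stand-in for the fine-tuned Hindi-to-English model, not an interface provided by the paper or by Sockeye.

```python
# Sketch of the back-translation augmentation of Section 3.6 (illustrative only).
def back_translate_and_augment(monolingual_hi, gold_pairs, translate_hi_en):
    """Build synthetic (English, Hindi) pairs and append them to the gold corpus."""
    synthetic_pairs = []
    for hi_sentence in monolingual_hi:
        en_hypothesis = translate_hi_en(hi_sentence)  # back-translate Hindi -> English
        # The machine-generated English becomes the source side; the original,
        # clean Hindi sentence is kept as the target side.
        synthetic_pairs.append((en_hypothesis, hi_sentence))
    return gold_pairs + synthetic_pairs  # e.g. 13,000 gold + 35,000 synthetic pairs
```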
A new English-to-Hindi system, the Base+BT model, is trained using the human corrected + back-translated corpus. We will make the human corrected and back-translated parallel corpus available on request for research purposes 13 .", "cite_spans": [ { "start": 562, "end": 589, "text": "(Kunchukuttan et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 959, "end": 966, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Crawling Hindi Reviews and Back-translating into English", "sec_num": "3.6" }, { "text": "We train an English-to-Hindi baseline model using the human corrected corpus as mentioned in Table 4 . We use the Sockeye framework (Hieber et al., 2018) for training the Transformer based NMT model. We split the words into subwords (Sennrich et al., 2016b) using the BPE technique, with 4,000 BPE merge operations. Our model uses 6 encoder and 6 decoder layers, a hidden size and word embedding size of 512, a learning rate of 0.0002 and a minimum batch size of 3,800 tokens. We use early stopping over the validation set. [Figure 2: Token correction at inference time using character sequence based word embedding. tokenC and tokenN are the correct and noisy tokens, respectively, paired together for training. 0 and 1 denote the similar and non-similar token pairs, respectively.]", "cite_spans": [ { "start": 132, "end": 153, "text": "(Hieber et al., 2018)", "ref_id": "BIBREF7" }, { "start": 245, "end": 269, "text": "(Sennrich et al., 2016b)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 93, "end": 100, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 570, "end": 578, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Training NMT over Human Corrected Corpus", "sec_num": "4" }, { "text": "After training over the human corrected corpus, we perform testing over the review domain test set, achieve 33.26 BLEU points, and name this the 'Base' (baseline) model. In addition, we add the back-translated synthetic corpus to the human corrected corpus and train the NMT model over it. We call this the 'Base+BT' model; it yields 37.79 BLEU points.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training NMT over Human Corrected Corpus", "sec_num": "4" }, { "text": "In this section, we describe the methodology used in our work. Figure 2 presents the overall process of our proposed method. It consists of various steps: character language model (LM) training, word vector (embedding) generation, and finally noisy token replacement at inference time. Section 5.1 and Section 5.2 describe these steps in detail.", "cite_spans": [], "ref_spans": [ { "start": 63, "end": 71, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Handling Noisy Tokens", "sec_num": "5" }, { "text": "A word consists of a sequence of characters. Each character is represented as a one-hot vector, and the sequence of such vectors is passed through two Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) layers, which generate the embedding vector of that particular word. As the training model, chars2vec 14 is utilized for embedding generation and character sequence learning. More specifically, the network takes two sequences of one-hot vectors representing two different words as input, creates their embeddings with a single chars2vec model, computes the norm of the difference between these embedding vectors, and feeds it into the last layer of the network with a sigmoid activation function. 
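A minimal sketch of such a siamese character model, written in PyTorch rather than with the original chars2vec implementation; the character inventory and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    """Shared encoder: a one-hot character sequence -> a word embedding vector."""
    def __init__(self, n_chars, hidden_size=64, emb_size=50):
        super().__init__()
        self.lstm1 = nn.LSTM(n_chars, hidden_size, batch_first=True)
        self.lstm2 = nn.LSTM(hidden_size, emb_size, batch_first=True)

    def forward(self, one_hot_chars):            # (batch, seq_len, n_chars)
        out, _ = self.lstm1(one_hot_chars)
        _, (h, _) = self.lstm2(out)
        return h[-1]                             # (batch, emb_size) word embedding

class CharSimilarityModel(nn.Module):
    """Siamese network: two words in, one similarity score in (0, 1) out."""
    def __init__(self, n_chars):
        super().__init__()
        self.encoder = CharWordEncoder(n_chars)  # one encoder shared by both words
        self.scorer = nn.Linear(1, 1)

    def forward(self, word_a, word_b):
        emb_a = self.encoder(word_a)
        emb_b = self.encoder(word_b)
        dist = torch.norm(emb_a - emb_b, dim=1, keepdim=True)  # norm of the difference
        return torch.sigmoid(self.scorer(dist))                # squashed to (0, 1)
```

Training would minimize, for example, a binary cross-entropy loss against the 0/1 pair labels introduced next, pushing the output for similar (noisy, correct) pairs towards 0.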
The output of the neural network is a number that ranges from 0 to 1 because of the sigmoid as an output function. The network is trained to capture the similarity between the noisy and its non-noisy version. For similar word pairs, i.e. for noisy and its equivalent version, we use 0 as a class label. On the other hand, we use 1 to denote the non-similar word pairs. For example, panasonic and pansonic are similar pairs while panasonic and panorama are nonsimilar. As shown in Figure 3 , we are trying to reduce the distance of two embeddings Emb1 and Emb2 of two similar tokens so that they can be mapped as close as possible in vector space. This is why the label of similar pairs is 0 and the training objective is to reduce the distance between Emb1 and Emb2 close to 0.", "cite_spans": [ { "start": 188, "end": 222, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 1222, "end": 1230, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Training Character LM and Word Vector Generation", "sec_num": "5.1" }, { "text": "As shown in the Figure 1 , our objective is to map the correct and noisy versions of a token as close as possible in vector space. To train the character LM, we prepare the training data by creating noisy versions of tokens in the source vocabulary set trainX. The labelled training data can be generated by taking a multitude of words and then performing various changes (e.g. character drop and replacement) upon them to obtain the noisy versions of those words. These new noisy words, so produced by injecting character errors in one original word, would naturally be similar to this original word, and such pairs of words would have the label 0. As an example, two noisy versions performnce and performence are generated using character drop and character replacement, respectively, from the original source vocabulary word performance. So [(performance, performnce) : 0] and [(performance, performence) : 0] are two similar training pairs with label 0. It is to be noted that on a source token we apply at most two character operations to generate the similar pairs. To generate non-similar pairs with label 1, with token from the source vocabulary, we randomly pair the shuffled tokens, for example: [(performance, product) : 1] and [(performance, smartphone) : 1]. These training pairs are used to train and save the character LM model which learns the parameter in the process of mapping the similar word embeddings closer. Now this model is used to generate the vector representation of the source vocabulary tokens and tokens in the input sentence at in- ", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 24, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Training Character LM and Word Vector Generation", "sec_num": "5.1" }, { "text": "As discussed in Section 5.1, a trained model is saved which is used to generate the vector representation (embedding) of each word in the training source vocabulary. The vector representation is generated based on the characters in those words. Let us have a vector space S which contains the vector representation trainV i of training source vocabulary token trainX i . Here, trainV i is generated using the trained chars2vec model based on the characters appearing in the token trainX i . Now at the time of inference, each test input sentence input i consists of len tokens and j = 1, ....., len. 
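The replacement rule detailed in the rest of this section (and in Figure 4) can be sketched as follows; embed stands for the trained character-level embedding model (returning a vector per token) and train_vocab for the list of source training vocabulary tokens. Both names are hypothetical placeholders.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def replace_noisy_tokens(input_tokens, train_vocab, embed):
    """Replace every token unseen in the training vocabulary with the vocabulary
    token whose character-level embedding is most similar (cosine similarity)."""
    vocab_vectors = {w: embed(w) for w in train_vocab}  # trainV_i for every trainX_i
    seen = set(train_vocab)
    corrected = []
    for token in input_tokens:
        if token in seen:                                # seen token: keep as it is
            corrected.append(token)
        else:                                            # unseen, possibly noisy token
            vec = embed(token)
            best = max(train_vocab, key=lambda w: cosine(vec, vocab_vectors[w]))
            corrected.append(best)
    return corrected  # the corrected sequence is then segmented (BPE) and translated
```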
Here, we assume that if a noisy token or say a noisy version of a word appears in the test input sentence then it will not be a part of the training source vocabulary trainX. As shown in Figure 4 , for each token inf erX ij in the test input sentence input i , we find if inf erX ij belongs to the source train vocabulary trainX then we keep that token as it is in source input sentence. If inf erX ij does not belong to the source train vocabulary trainX, we find the most similar token from the vocabulary list trainX using the cosine similarity. Now inf erX ij will be replaced with the most similar token from trainX. Finally, the corrected (replaced) source sequence segmented by the subword model will be fed to the NMT model for the translation.", "cite_spans": [], "ref_spans": [ { "start": 787, "end": 795, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Noisy Token Replacement", "sec_num": "5.2" }, { "text": "In this section, we present the details of the datasets used in our experiments and the various setups.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset and Experimental Setup", "sec_num": "6" }, { "text": "We perform experiments with four different translation directions which are Englishto-Hindi (En-to-Hi), English-German (Ento-De), English-to-Czech (En-to-Cs) and English-to-French (En-to-Fr). Among these language pairs, English-Hindi is a low-resource and less-explored, and distant language pair. For English-to-Hindi translation, we use the IIT Bombay English-Hindi parallel corpus 15 . For English-to-Hindi, we also perform experiments over the generated review domain parallel corpus. For English-to-German, we use Europarl corpus from WMT 2014 16 (Bojar et al., 2014) . We use the IWSLT17 dataset for English-to-Czech and English-to-French 17 . We also use the MTNT 18 dataset for English-to-French translation. Table 5 presents the statistics of training, development and test sets.", "cite_spans": [ { "start": 552, "end": 572, "text": "(Bojar et al., 2014)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 717, "end": 724, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Dataset", "sec_num": "6.1" }, { "text": "In order to build our machine translation systems, we use the Sockeye 19 (Hieber et al., 2018) toolkit. Our training set-up is described below. The tokens of training, evaluation and validation sets are segmented into the subword units using the BPE technique (Gage, 1994) proposed by (Sennrich et al., 2016b) . We perform 20,000 join operations. We use 6 layers at encoder and decoder sides each, 8-head attention, hidden layer of size 512, embedding vector of size 512, learning rate of 0.0002, and the minimum batch size of 3800 tokens.", "cite_spans": [ { "start": 73, "end": 94, "text": "(Hieber et al., 2018)", "ref_id": "BIBREF7" }, { "start": 260, "end": 272, "text": "(Gage, 1994)", "ref_id": "BIBREF6" }, { "start": 285, "end": 309, "text": "(Sennrich et al., 2016b)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "6.2" }, { "text": "For the experiment, we introduce noise in the En-Hi, En-De, En-Fr and En-Cs test sets to make them noisy and suitable for testing the models' performance in the noisy environment. We introduce two kinds of noise in the source test sequence: 1. character drop and 2. character replacement. 
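A minimal sketch of these two corruption operations, the same character drop and character replacement used to build the similar training pairs in Section 5.1; the Latin alphabet used for replacement is an illustrative assumption.

```python
import random

def drop_char(token):
    """Character drop: remove one randomly chosen character from the token."""
    if len(token) < 2:
        return token
    i = random.randrange(len(token))
    return token[:i] + token[i + 1:]

def replace_char(token, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Character replacement: overwrite one randomly chosen character."""
    if not token:
        return token
    i = random.randrange(len(token))
    return token[:i] + random.choice(alphabet) + token[i + 1:]

# e.g. drop_char("performance") might yield "perfomance",
# and replace_char("performance") might yield "perfurmance".
```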
In character drop, we randomly drop any character from a source token and in character replacement, we replace the characters randomly with some other characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noise Injection in the Test Sets", "sec_num": "6.3" }, { "text": "We evaluate the models using BLEU, and these results are shown in Table 6 . We 15 http://www.cfilt.iitb.ac.in/iitb_parallel/ 16 http://www.statmt.org/wmt14/translationtask.html", "cite_spans": [], "ref_spans": [ { "start": 66, "end": 73, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Result and Analysis", "sec_num": "7" }, { "text": "17 https://wit3.fbk.eu/ 18 https://www.cs.cmu.edu/~pmichel1/mtnt/ 19 https://github.com/awslabs/sockeye Table 6 : Evaluation results of the proposed method in terms of BLEU score for different translation pairs. Here, SR: Subword regularization, BPE: Byte pair encoding also perform experiments using the word segmentation approaches, viz. BPE (Sennrich et al., 2016b) and subword regularization (Kudo, 2018) . For English-to-Hindi review domain translation, proposed method yields 35.09 BLEU points which significantly outperforms synthetic noise, SR and BPE with a difference of 0.82, 1.61 and 1.83 BLEU points, respectively. We also perform experiments for En-to-Hi translation using benchmark new-stest2014 as the test set. We achieve 0.96 and 1.29 BLEU improvements over subword regularization (SR) (Kudo, 2018) and bytepair-encoding BPE (Sennrich et al., 2016b) , respectively. We also evaluate the performance for En-to-De translation and achieve 1.05 and 1.07 BLEU improvements over SR and BPE, respectively. For En-to-Cs, we use the IWSLT17 testset, and the evaluation yields 1.94 and 2.48 BLEU improvements over SR and BPE, respectively. For En-to-Fr, we evaluate over two datasets, IWSLT17 and MTNT18. The MTNT is a noisy testset for En-Fr translation. For the MTNT testset, our model yields 1.55 and 2.18 BLEU improvements over SR and BPE, respectively. For IWSLT testset, En-to-Fr translation using our approach achieves the 1.63 and 1.57 BLEU improvements over SR and BPE, respectively. We also perform experiments by adding synthetic noise in the training corpus (Vaibhav et al., 2019) which is a noise handling technique. For En-to-Hi, En-to-De, En-to-Fr and En-to-Cs, our proposed method achieves 0.96, 1.05, 1.63 and 1.94 BLEU improvement, respectively, over the synthetic noise model (Vaibhav et al., 2019) . We perform statistical significance tests 20 (Koehn, 2004) , and found that the proposed model attains significant performance gain with 95% confidence level (with p=0.013 which is < 0.05). 
For En-to-Fr over MTNT18 testset, we achieve only 0.14 BLEU improvement over the synthetic noise model (Vaibhav et al., 2019) which is not a Table 7 : Performance evaluation in terms of BLEU scores by increasing the % of noisy tokens significant improvement.", "cite_spans": [ { "start": 344, "end": 368, "text": "(Sennrich et al., 2016b)", "ref_id": "BIBREF15" }, { "start": 396, "end": 408, "text": "(Kudo, 2018)", "ref_id": "BIBREF11" }, { "start": 804, "end": 816, "text": "(Kudo, 2018)", "ref_id": "BIBREF11" }, { "start": 843, "end": 867, "text": "(Sennrich et al., 2016b)", "ref_id": "BIBREF15" }, { "start": 1578, "end": 1600, "text": "(Vaibhav et al., 2019)", "ref_id": "BIBREF16" }, { "start": 1803, "end": 1825, "text": "(Vaibhav et al., 2019)", "ref_id": "BIBREF16" }, { "start": 1873, "end": 1886, "text": "(Koehn, 2004)", "ref_id": "BIBREF10" }, { "start": 2121, "end": 2143, "text": "(Vaibhav et al., 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 104, "end": 111, "text": "Table 6", "ref_id": null }, { "start": 2159, "end": 2166, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Result and Analysis", "sec_num": "7" }, { "text": "We evaluate the performance of our approach in the presence of varying amount of noisy tokens. We inject the noise (character drop and character replacement) in 20%, 25%, 30% and 40% tokens in source input sentences for En-Hi, En-De and En-Fr testsets. Table 7 shows the change in the BLEU scores by increasing the count of noisy tokens. As we increase the number of noisy source tokens for three translation tasks, viz. En-to-Hi, En-to-De and Ento-Fr, we observe a decrease in BLEU score in both the models (proposed and SR). But our proposed method preserves the robustness significantly as compared to the SR model.", "cite_spans": [], "ref_spans": [ { "start": 253, "end": 260, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Quantitative Analysis", "sec_num": "7.1" }, { "text": "We perform the qualitative analysis of outputs using human evaluation. We took 250 random samples from English-Hindi review test set. It is given to 3 language experts (postgraduate in linguistics and have experiences for the translation task) to rate the outputs on the basis of adequacy and fluency and assign the scores in the range of 0 to 4 (0: incorrect, 1: almost incorrect, 2: moderately correct, 3: almost correct and 4: correct). Table 8 shows the average ratings for the En-Hi translation. We also calculate the inter-annotatoragreement scores (IAA) using Fleiss's Kappa. The scores for \"En-to-Hi (proposed)\" translation are found to be 87.2 and 86.8 for adequacy and fluency rating, respectively. The \"En-to-Hi(SR)\" translation shows the scores of 89.5 and 84.0 for adequacy and fluency, respectively. We also present a few output samples and error analysis in the appendix A.", "cite_spans": [], "ref_spans": [ { "start": 440, "end": 447, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Human Evaluation", "sec_num": "7.2" }, { "text": "In this paper, we have developed a robust NMT model for product review translation that can handle noisy input text. Because of the absence of an in-domain parallel corpus, we introduce a parallel English-Hindi corpus for product review domain. We crawl the user reviews of electronic gadgets from e-commerce sites written into English language. 
[Table 8: Human evaluation for English-to-Hindi translation] These are pre-processed; passed through an in-house judicial domain NMT system; and a part of this dataset is post-edited by the language experts. It is also observed that product reviews, being user-generated content, contain noisy tokens that are a challenge for any MT system. Due to the limitation of a fixed vocabulary size at training time, it is not possible for NMT models to see all the noisy variants of input tokens. We have therefore integrated a token replacement approach at inference time. We trained a character based language model which generates the vector representation of each token in the source vocabulary based on the characters present in that word. The token replacement approach finds the most similar token from the source vocabulary for each noisy input token at inference time and replaces it with that correct token. We perform experiments over a variety of language pairs, namely En-to-Hi, En-to-De, En-to-Fr and En-to-Cs. Using the proposed approach, we achieve 35.09, 28.91, 34.68 and 14.52 BLEU points, respectively. We also observe the behaviour of the proposed method while varying the percentage (20, 25, 30 and 40%) of noisy tokens on the input side. The proposed method significantly outperforms the baseline in the presence of different quantities of noisy tokens. Human evaluation shows that our model achieves good fluency and adequacy levels.", "cite_spans": [], "ref_spans": [ { "start": 356, "end": 363, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "In example A, the English source sentence Names of other partiipatng lawmkers were to be released in coming days . is translated into Hindi. The two tokens partiipating and lawmkers are noisy and appear as OOV candidates for the trained NMT model. The SR model is not able to recognize those tokens and misses their translations in the output sentence. Our proposed method of replacing the tokens using the character LM finds the two most similar tokens, participating and lawmakers, as the correct tokens and updates the source English sentence, which results in the correct Hindi sentence as the output. Similarly, in example B, for English-to-German translation, prsents appears as a noisy as well as an OOV token, and it is eventually replaced by its correct version presents in the proposed method. Example C shows one limitation of the spell correction method. There are two misspelled tokens, featurs and lok, in the source sentence. Using the proposed method, features and lock appear as the replacement tokens, respectively. features is the correct replacement for featurs, but lock is not the correct replacement for lok; the correct token should be look. However, lok, lock and look contain almost the same character combinations, which makes them appear close to each other in the vector space. Hence, our method may struggle with very short (low character count) noisy tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A. 
En\u2192Hi Source", "sec_num": null }, { "text": "https://en.wikipedia.org/wiki/List_of_languages _by_number_of_native_speakers_in_India 2 This system has BLEU of 55.67 for judicial domain 3 Translating monolingual target data using target\u2192source NMT model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://pypi.org/project/pyspellchecker/ 5 https://github.com/facebookresearch/moe/ tree\\/master/data 6 https://en.wikipedia.org/wiki/List_of_mobile_ phone_brands_by_country7 We do not focus on the sentences written in the Roman script (Hindi words written in Roman script (English letters)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://cloud.google.com/translate/automl/ docs/evaluate 9 It achieves a 55.67 BLEU (En-to-Hi) on our inhouse judicial domain test set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.digit.in/hi/reviews/ 11 https://hindi.gadgets360.com/reviews 12 https://www.91mobiles.com/hi/tech/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.iitp.ac.in/~ai-nlp-ml/ resources/data/review-corpus.zip", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/ IntuitionEngineeringTeam/chars2vec", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/moses-smt/ mosesdecoder/blob/master/scripts\\/analysis/ bootstrap-hypothesis-difference-significance. pl", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Authors gratefully acknowledge the unrestricted research grant received from the Flipkart Internet Private Limited to carry out the research. Authors thank Muthusamy Chelliah for his continuous feedbacks and suggestions to improve the quality of work; and to Anubhav Tripathee for gold standard parallel corpus creation and translation quality evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation of text from non-native speakers", "authors": [ { "first": "Antonios", "middle": [], "last": "Anastasopoulos", "suffix": "" }, { "first": "Alison", "middle": [], "last": "Lui", "suffix": "" }, { "first": "Toan", "middle": [ "Q" ], "last": "Nguyen", "suffix": "" }, { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3070--3080", "other_ids": { "DOI": [ "10.18653/v1/N19-1311" ] }, "num": null, "urls": [], "raw_text": "Antonios Anastasopoulos, Alison Lui, Toan Q. Nguyen, and David Chiang. 2019. Neural ma- chine translation of text from non-native speak- ers. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3070-3080, Minneapolis, Minnesota. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural Machine Translation by Jointly Learning to Align and Translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 3rd International Conference on Learning Representation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Pro- ceedings of the 3rd International Conference on Learning Representation (ICLR 2015).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Synthetic and natural noise both break neural machine translation", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Bisk", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonatan Belinkov and Yonatan Bisk. 2018. Syn- thetic and natural noise both break neural ma- chine translation. In International Conference on Learning Representations.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Machine translation of restaurant reviews: New corpus for domain adaptation and robustness", "authors": [ { "first": "Alexandre", "middle": [], "last": "Berard", "suffix": "" }, { "first": "Ioan", "middle": [], "last": "Calapodescu", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Dymetman", "suffix": "" }, { "first": "Claude", "middle": [], "last": "Roux", "suffix": "" }, { "first": "Jean-Luc", "middle": [], "last": "Meunier", "suffix": "" }, { "first": "Vassilina", "middle": [], "last": "Nikoulina", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 3rd Workshop on Neural Generation and Translation", "volume": "", "issue": "", "pages": "168--176", "other_ids": { "DOI": [ "10.18653/v1/D19-5617" ] }, "num": null, "urls": [], "raw_text": "Alexandre Berard, Ioan Calapodescu, Marc Dymetman, Claude Roux, Jean-Luc Meunier, and Vassilina Nikoulina. 2019. Machine trans- lation of restaurant reviews: New corpus for do- main adaptation and robustness. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 168-176, Hong Kong. 
Associ- ation for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Findings of the 2014 Workshop on Statistical Machine Translation", "authors": [ { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Buck", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Leveling", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Pecina", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" }, { "first": "Herve", "middle": [], "last": "Saint-Amand", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the ninth workshop on statistical machine translation", "volume": "", "issue": "", "pages": "12--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ond\u0159ej Bojar, Christian Buck, Christian Feder- mann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, et al. 2014. Findings of the 2014 Workshop on Statistical Machine Translation. In Proceedings of the ninth work- shop on statistical machine translation (WMT 2014), pages 12-58.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Edouard Grave, and Fabrizio Silvestri", "authors": [ { "first": "Bora", "middle": [], "last": "Edizel", "suffix": "" }, { "first": "Aleksandra", "middle": [], "last": "Piktus", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Ferreira", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bora Edizel, Aleksandra Piktus, Piotr Bojanowski, Rui Ferreira, Edouard Grave, and Fabrizio Sil- vestri. 2019. Misspelling oblivious word embed- dings. ArXiv, abs/1905.09755.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A new algorithm for data compression", "authors": [ { "first": "Philip", "middle": [], "last": "Gage", "suffix": "" } ], "year": 1994, "venue": "C Users Journal", "volume": "12", "issue": "2", "pages": "23--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Gage. 1994. A new algorithm for data com- pression. C Users Journal, 12(2):23-38.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The sockeye neural machine translation toolkit at AMTA", "authors": [ { "first": "Felix", "middle": [], "last": "Hieber", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Domhan", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Denkowski", "suffix": "" }, { "first": "David", "middle": [], "last": "Vilar", "suffix": "" }, { "first": "Artem", "middle": [], "last": "Sokolov", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Clifton", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 13th Conference of the Association for Machine Translation in the Americas", "volume": "1", "issue": "", "pages": "200--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2018. The sockeye neural machine translation toolkit at AMTA 2018. 
In Pro- ceedings of the 13th Conference of the Associ- ation for Machine Translation in the Americas (Volume 1: Research Papers), pages 200-207, Boston, MA.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Training on synthetic noise improves robustness to natural noise in machine translation", "authors": [ { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)", "volume": "", "issue": "", "pages": "42--47", "other_ids": { "DOI": [ "10.18653/v1/D19-5506" ] }, "num": null, "urls": [], "raw_text": "Vladimir Karpukhin, Omer Levy, Jacob Eisenstein, and Marjan Ghazvininejad. 2019. Training on synthetic noise improves robustness to natural noise in machine translation. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 42-47, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Statistical significance tests for machine translation evaluation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "388--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceed- ings of the 2004 Conference on Empirical Meth- ods in Natural Language Processing, pages 388- 395, Barcelona, Spain. Association for Compu- tational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "66--75", "other_ids": { "DOI": [ "10.18653/v1/P18-1007" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo. 2018. Subword regularization: Im- proving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 66-75, Melbourne, Australia. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The IIT Bombay English-Hindi parallel corpus", "authors": [ { "first": "Anoop", "middle": [], "last": "Kunchukuttan", "suffix": "" }, { "first": "Pratik", "middle": [], "last": "Mehta", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anoop Kunchukuttan, Pratik Mehta, and Push- pak Bhattacharyya. 2018. The IIT Bombay English-Hindi parallel corpus. In Proceedings of the Eleventh International Conference on Lan- guage Resources and Evaluation (LREC 2018), Miyazaki, Japan.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "MTNT: A testbed for machine translation of noisy text", "authors": [ { "first": "Paul", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "543--553", "other_ids": { "DOI": [ "10.18653/v1/D18-1050" ] }, "num": null, "urls": [], "raw_text": "Paul Michel and Graham Neubig. 2018. MTNT: A testbed for machine translation of noisy text. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 543-553, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Improving Neural Machine Translation Models with Monolingual Data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "86--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving Neural Machine Trans- lation Models with Monolingual Data. In Pro- ceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (ACL 2016) (Volume 1: Long Papers), pages 86-96.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. 
In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Improving robustness of machine translation with synthetic noise", "authors": [ { "first": "Vaibhav", "middle": [], "last": "Vaibhav", "suffix": "" }, { "first": "Sumeet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Stewart", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1916--1920", "other_ids": { "DOI": [ "10.18653/v1/N19-1190" ] }, "num": null, "urls": [], "raw_text": "Vaibhav Vaibhav, Sumeet Singh, Craig Stewart, and Graham Neubig. 2019. Improving robust- ness of machine translation with synthetic noise. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1916-1920, Minneapolis, Minnesota. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. At- tention is all you need. In Advances in neural in- formation processing systems, pages 5998-6008.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Mapping the noisy and correct forms of tokens close to each other in a vector space" }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "Training the character based language model" }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "Flowchart of the noisy token replacement ference time." }, "TABREF0": { "type_str": "table", "content": "
Emoji                          Face unlock works well. even in dim light
Char Repetition                full package besttttt phone
Capital letter                 NICE PHONE IN LOW BUDGET.
Misspell                       Awsome prodct....loved it
Punctuation Irregularity       phone gives best photos!! awesome feeling
Article missing                It is best earphone I got with phone
Starting noun/pronoun missing  was a nice product i got
Code Mixed                     Good product. lekin price bahot high hai
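The noise categories listed in the table above can also be simulated on clean text, in the spirit of the synthetic-noise training work cited in the references (Karpukhin et al., 2019; Vaibhav et al., 2019). The short Python sketch below is illustrative only and is not the authors' pipeline; the function names, the choice of noise operations, and the corruption probability are our own assumptions.

# Illustrative sketch only, not the authors' code: inject review-style noise of the
# kinds catalogued in the table above into clean English text.
import random


def repeat_last_char(word, times=3):
    # char repetition, e.g. "best" -> "bestttt"
    return word + word[-1] * times if word else word


def drop_one_char(word):
    # misspelling, e.g. "product" -> "prodct"
    if len(word) < 4:
        return word
    i = random.randrange(1, len(word) - 1)
    return word[:i] + word[i + 1:]


def all_caps(word):
    # capitalization noise, e.g. "nice" -> "NICE"
    return word.upper()


def extra_punct(word):
    # punctuation irregularity, e.g. "photos" -> "photos!!"
    return word + "!!"


NOISERS = [repeat_last_char, drop_one_char, all_caps, extra_punct]


def add_noise(sentence, prob=0.2):
    """Corrupt each whitespace-separated token independently with probability prob."""
    tokens = []
    for tok in sentence.split():
        if random.random() < prob:
            tok = random.choice(NOISERS)(tok)
        tokens.append(tok)
    return " ".join(tokens)


if __name__ == "__main__":
    random.seed(7)
    print(add_noise("the performance of the phone is bad", prob=0.3))

Applied to clean source sentences before training, this kind of corruption exposes the model to review-style artifacts such as character repetition, misspellings, shouting case, and doubled punctuation.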
", "num": null, "text": "", "html": null }, "TABREF1": { "type_str": "table", "content": "", "num": null, "text": "Various noise present in product review sentences", "html": null }, "TABREF2": { "type_str": "table", "content": "
", "num": null, "text": "Sample outputs for 1 En\u2192Hi, 2. En\u2192De and 3. En\u2192Fr translation in presence of noisy input tokens.", "html": null }, "TABREF3": { "type_str": "table", "content": "
Sr.  English Sentence (crawled)                            Hindi Sentence (corrected, romanized)
1.   rounded corners make griping the phone very well .   raunded kornar phon ko bahut hee achchhee pakad dete hain
2.   one of the best phone ever at this price .           is keemat mein ab tak ke sabase achchhe phon mein se ek .
3.   but this is Apple and Apple is like that only        lekin yah aippal hai aur aippal aisa hee hota hai
4.   At first I want to say Thank u flipkart.             sabase pahale main thaink yoo phlipakaart kahana chaahata hoon .
5.   Rear camera image quality is very good.              riyar kaimara imej kvaalitee bahut achchhee hai
", "num": null, "text": "Samples from the generated English-Hindi parallel corpus", "html": null }, "TABREF4": { "type_str": "table", "content": "
", "num": null, "text": "", "html": null }, "TABREF6": { "type_str": "table", "content": "", "num": null, "text": "BLEU and TER scores for English-to-Hindi NMT system over review domain corpus", "html": null }, "TABREF8": { "type_str": "table", "content": "
", "num": null, "text": "Size of train, dev and test sets for different language pairs", "html": null }, "TABREF10": { "type_str": "table", "content": "
En-to-Hi (newstest2014)        20%    25%    30%    40%
Subword Regularization (SR)    14.37  13.68  10.31  9.81
Proposed                       14.84  14.54  12.86  11.27
En-to-De (newstest2014)
Subword Regularization (SR)    27.24  26.18  24.83  23.48
Proposed                       28.53  27.34  26.08  25.22
En-to-Fr (IWSLT17)
Subword Regularization (SR)    33.14  32.26  29.24  28.37
Proposed                       34.45  33.29  31.07  30.35
", "num": null, "text": "", "html": null }, "TABREF12": { "type_str": "table", "content": "
A. En\u2192Hi (Hindi shown romanized)
Reference                        sahabhaagita karane vaale any vidhi-nirmaataon ke naam aane vaale dinon mein jaaree kie jaenge
Output (Subword Regularization)  aane vaale dinon mein any dalon ke naam jaaree kie jaane the
Corrected Source                 names of other participating lawmakers were to be released in coming days .
Output (Proposed)                aagaamee dinon mein any bhaag lene vaale vidhinirmaata ke naam jaaree kie jaane the
B. En\u2192De
Source                           the European Commission's sixth report prsents very valuable conclusions .
Reference                        der Sechste Bericht der Europ\u00e4ischen Kommission bietet sehr wertvolle Schlu\u00dffolgerungen .
Output (Subword Regularization)  der Sechste Bericht der Europ\u00e4ischen Kommission ist sehr wertvoll .
Corrected Source                 the European Commission ' s sixth report presents very valuable conclusions .
Output (Proposed)                der Sechste Bericht der Europ\u00e4ischen Kommission enth\u00e4lt sehr wertvolle Schlu\u00dffolgerungen .
C. En\u2192Hi (Hindi shown romanized)
Source                           The new featurs of the phone lok nice.
Reference                        phon ke nae pheechar achchhe lagate hain .
Output (Subword Regularization)  phon ke nae saude bahut hee achchhe the
Corrected Source                 The new features of the phone lock nice.
Output (Proposed)                phon ke nae pheechar achchhe se lok hote hain .
", "num": null, "text": "", "html": null }, "TABREF13": { "type_str": "table", "content": "
", "num": null, "text": "Output samples for English\u2192Hindi and English\u2192German translation", "html": null } } } }
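The Corrected Source rows in the table above come from mapping noisy tokens to their closest correct forms using word vectors from a character-based language model, as indicated by the figure captions on vector-space mapping and character LM training. The Python sketch below is a minimal illustration of that idea and not the authors' implementation: it stands in for the character LM with plain character n-gram count vectors and a cosine nearest-neighbour lookup over a tiny in-vocabulary word list, and every function name, the threshold, and the toy vocabulary are our own assumptions.

# Illustrative sketch only: approximate character-LM word vectors with character
# n-gram counts and replace out-of-vocabulary (noisy) tokens by their nearest
# in-vocabulary neighbour before translation.
import math
from collections import Counter


def char_ngrams(word, n=3):
    # Character n-gram counts with boundary markers, a crude stand-in for char-LM vectors.
    padded = "<" + word.lower() + ">"
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))


def cosine(a, b):
    num = sum(a[k] * b[k] for k in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


def correct_token(token, vocabulary, threshold=0.5):
    # Keep in-vocabulary tokens; otherwise replace with the most similar vocabulary word.
    if token.lower() in vocabulary:
        return token
    vec = char_ngrams(token)
    best, best_sim = token, 0.0
    for cand in vocabulary:
        sim = cosine(vec, char_ngrams(cand))
        if sim > best_sim:
            best, best_sim = cand, sim
    return best if best_sim >= threshold else token


def clean_sentence(sentence, vocabulary):
    # Run before handing the sentence to the NMT model.
    return " ".join(correct_token(t, vocabulary) for t in sentence.split())


if __name__ == "__main__":
    vocab = {"the", "new", "performance", "of", "phone", "is", "bad", "features", "look", "nice"}
    print(clean_sentence("the perfomence of the phone is bad", vocab))
    print(clean_sentence("the new featurs of the phone lok nice", vocab))

In the paper's setting the replacement step runs before translation, so the NMT model only sees the corrected source at inference time; a trained character-level language model would provide far better token vectors than the n-gram counts used in this toy version.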