{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:29:02.608657Z"
},
"title": "Coding Textual Inputs Boosts the Accuracy of Neural Networks",
"authors": [
{
"first": "Abdul",
"middle": [
"Rafae"
],
"last": "Khan",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jia",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge \u2020",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Natural Language Processing (NLP) tasks are usually performed word by word on textual inputs. We can use arbitrary symbols to represent the linguistic meaning of a word and use these symbols as inputs. As \"alternatives\" to a text representation, we introduce Soundex, MetaPhone, NYSIIS, logogram to NLP, and develop fixed-output-length coding and its extension using Huffman coding. Each of those codings combines different character/digital sequences and constructs a new vocabulary based on codewords. We find that the integration of those codewords with text provides more reliable inputs to Neural-Networkbased NLP systems through redundancy than text-alone inputs. Experiments demonstrate that our approach outperforms the state-ofthe-art models on the application of machine translation, language modeling, and part-ofspeech tagging. The source code is available at https://github.com/abdulrafae/coding nmt.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Natural Language Processing (NLP) tasks are usually performed word by word on textual inputs. We can use arbitrary symbols to represent the linguistic meaning of a word and use these symbols as inputs. As \"alternatives\" to a text representation, we introduce Soundex, MetaPhone, NYSIIS, logogram to NLP, and develop fixed-output-length coding and its extension using Huffman coding. Each of those codings combines different character/digital sequences and constructs a new vocabulary based on codewords. We find that the integration of those codewords with text provides more reliable inputs to Neural-Networkbased NLP systems through redundancy than text-alone inputs. Experiments demonstrate that our approach outperforms the state-ofthe-art models on the application of machine translation, language modeling, and part-ofspeech tagging. The source code is available at https://github.com/abdulrafae/coding nmt.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We introduce novel coding schemes on the inputs of Neural-Network-based Natural Language Processing (NN-NLP) that significantly boost the accuracy in three applications. The inputs of NN-NLP rely on observable forms of mental representations of linguistic expressions, and allow alternative designs. For example, both logographic kanji and syllabic kana represent Japanese words, and emoticons and emojis can express sentiments. These showcase that alternative human language representation than text is possible and highlight a common belief of most linguists: the relationship between the mental representations and their phonological forms is highly arbitrary, even though a non-arbitrary (de Saussure, 1916) mapping exists for some special cases, e.g., the bouba/kiki effect.",
"cite_spans": [
{
"start": 692,
"end": 711,
"text": "(de Saussure, 1916)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our work, we ask -Are there alternative forms of mental representation in addition to text as we see in Japanese and Internet language to help language understanding in NN-NLP?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To answer this question, we blend concepts from linguistic phonetics, grammatology, and the statistics of Zipf law to find alternative language representations to text. More precisely, we code a textual word either naturally or artificially by exploring different facets of human languages, from phonetic and logogram codings to new coding constructions generalizable to all languages. Natural codings inspire the finding of artificial codings, which in turn helps us understand and explain natural codings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "All of our codings reinforce NLP inputs by reconstructing the character/symbol sequence of a word in various ways with a new alphabet. These variants and their \"decomposition\" are expressive because they contain insightful information about linguistic patterns in units smaller than words and even smaller than characters. For example, in the logogram Wubi (that lists in a coded form the strokes caligraphing a Chinese character), \"\u4f17\" (crowd) is coded as \"www\", which is made of three \"\u4eba\" (person, \"w\"), and \"\u4ece\" (follow, \"ww\") is a composition of two \"\u4eba\". A representation containing such granular details potentially reveals the semantic structure and linguistic meanings inside a word, thus enriching text and allowing a redundancy that ensures more reliable NLP inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Now that we have put our previous question in context let us give an overview of how we incorporate coding schemes into an NLP framework in Figure 1 . For an input sentence, we apply an alternative coding scheme word by word, then use Byte-Pair-Encoding (BPE) to recombine these symbols (to shorten the input lengths), and finally perform embeddings (EMD). In contrast to word em- beddings that map words to real number vectors, our coding range is discrete. The coded sentence and its original textual input are then combined in three ways: concatenation, linear-interpolation at the encoder level, and multi-source encoding with or without Bi-LSTM, attention, and multi-head attention. The combined input is fed into NN-NLP models as a black-box to decode outputs. Our approach is language-, task-, and system-independent and does not use any additional information besides our algorithms.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 148,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We conduct experiments on three NLP applications and five languages, including (1) Machine Translation (MT) on English-German, German-English, English-French, French-English, and Chinese-English; (2) Language Modeling (LM) on English; and (3) Part-of-Speech (POS) Tagging on English. Our approach significantly and consistently improves over state-of-the-art neural models: Transformer, ConvS2S, XLM, and Bi-LSTM with attention mechanisms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, our contribution mainly lies in the three consecutive folds:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Phonetic, logogram, and artificial codings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We introduce a variety of language representations by coding words through various schemes of Soundex, NYSIIS, Metaphone, Pinyin, Wubi, fixed-output-length, and Huffman codings, and propose different ways to incorporate them in NLP models. ( \u00a72)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Synergistic coding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We introduce effective ways of combining the textual inputs and their codewords with the state-of-the-art neural network architectures: concatenation, linear-interpolated encoder, and multi-source encoding with or without attention, 3. NLP Applications. Our method is generalizable to different languages and can be applied to any NN-NLP system. Experiments demonstrate that our methods improve over the stateof-the-art models (Transformer, XLM, and ConvS2S) on various tasks in applications including machine translation, language modeling, and part-of-speech tagging. ( \u00a74)",
"cite_spans": [
{
"start": 222,
"end": 232,
"text": "attention,",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We view each coding as a function \u03b3 that maps a textual word from x \u2208 V, a natural language vocabulary, into a codeword \u03b3(x) \u2208 V, a codeword vocabulary:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coding Words",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b3 : V \u2192 V",
"eq_num": "(1)"
}
],
"section": "Coding Words",
"sec_num": "2"
},
{
"text": "For simplicity of exposition we will consider V to be the image of V under \u03b3. Each codeword \u03b3(x) is a non-empty \u03c3-string over the alphabet \u03a3 of this coding:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coding Words",
"sec_num": "2"
},
{
"text": "\u03b3(x) = \u03c3 1 , \u03c3 2 , \u03c3 3 \u2022 \u2022 \u2022 \u03c3 L with code length L. \u03a3 +",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coding Words",
"sec_num": "2"
},
{
"text": "is an infinite set of all possible non-empty strings over \u03a3, and V \u2286 \u03a3 + . As an example (albeit one which is practically not useful) consider the mapping of four English words to three binary codewords: V = {\"to\", \"be\", \"or\", \"not\"}, \u03a3 = {0, 1}, \u03a3 + = {0, 1, 00, 01, 10, 11, \u2022 \u2022 \u2022}, V = {00, 01, 11}, L = 2, \u03b3(\"to\") = 00, \u03b3(\"be\") = 01, \u03b3(\"or\") = 11, \u03b3(\"not\") = 01, |V| = 4, and |V| = 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coding Words",
"sec_num": "2"
},
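To make the toy mapping above concrete, here is a minimal Python sketch (our own illustration, not the paper's code; the names gamma and to_codewords are hypothetical):

```python
# A minimal sketch of the toy coding gamma : V -> V-bar from the example above.
# The dictionary reproduces the (non-injective) mapping of four words to three
# binary codewords of fixed length L = 2.
gamma = {"to": "00", "be": "01", "or": "11", "not": "01"}

def to_codewords(sentence):
    """Code a whitespace-tokenized sentence word by word."""
    return [gamma[w] for w in sentence.split() if w in gamma]

print(to_codewords("to be or not to be"))      # ['00', '01', '11', '01', '00', '01']
print(len(gamma), len(set(gamma.values())))    # |V| = 4, |V-bar| = 3
```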
{
"text": "To instantiate this function, we start by introducing several existing linguistically-motivated coding schemes (and later on we will extend this to new coding schemes we develop): the phonetic and logogram coding as surjective functions, where in particular |V| \u2265 |V|; and the fixed-output-length and Huffman coding as bijections, where |V| = |V|. In traditional coding theory, a compression code has to be injective in order to be uniquely decodable. In our work, we only care about the taskspecific prediction and not in decoding the original message. Therefore, we relax the injective restriction on the codings to deviate a little from the standard typical coding theory applications for technical convenience.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coding Words",
"sec_num": "2"
},
{
"text": "Throughout this paper, we choose to name the function \u03b3 as \"coding\" (although sometimes it is also called \"encoding\") to distinguish from the encoder in the NN-NLP models. An overview of our coding schemes is illustrated in Figure 2 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 232,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Coding Words",
"sec_num": "2"
},
{
"text": "We introduce three phonetic codings: Soundex, NYSIIS, MetaPhone (and Pinyin just for comparison). A phonetic algorithm (coding) is an algorithm to index words by their pronunciation and produce the corresponding phonetic-phonological representations so that expressions, or sentences can be pronounced by the speaker. The phonetic form takes surface structure as its inputs and outputs an audible, pronounced sentence. Below are the detail of each phonetic coding listed:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonetic Coding",
"sec_num": "2.1"
},
{
"text": "Soundex is a widely known phonetic algorithm for indexing names by sound and avoids misspelling and alternative spelling problems. It maps homophones to the same representation despite minor differences in spelling (Russel, 1918) . Continental European family names share the 26 letters (A to Z) in English.",
"cite_spans": [
{
"start": 215,
"end": 229,
"text": "(Russel, 1918)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonetic Coding",
"sec_num": "2.1"
},
{
"text": "Intelligence System Phonetic Code) is a phonetic algorithm devised in 1970 (Rajkovic and Jankovic, 2007) . It takes special care to handle phonemes that occur in European and Hispanic surnames by adding rules to Soundex.",
"cite_spans": [
{
"start": 75,
"end": 104,
"text": "(Rajkovic and Jankovic, 2007)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NYSIIS (the New York State Identification and",
"sec_num": null
},
{
"text": "Metaphone is another algorithm (Philips, 1990) that improves on earlier systems such as Soundex and NYSIIS. The Metaphone algorithm is significantly more complicated than previous ones because it includes special rules for handling spelling inconsistencies and for looking at combinations of consonants in addition to some vowels.",
"cite_spans": [
{
"start": 31,
"end": 46,
"text": "(Philips, 1990)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NYSIIS (the New York State Identification and",
"sec_num": null
},
{
"text": "Hanyu Pinyin (or Pinyin for short) is the official romanization system for Standard Chinese in mainland China. Pinyin, which means \"spelled sound\", was originally developed to teach Mandarin. One Pinyin corresponds to multiple Chinese characters. One Chinese word is usually composed of one or more Chinese characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NYSIIS (the New York State Identification and",
"sec_num": null
},
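For illustration only (not the authors' implementation), the three phonetic codings can be computed word by word with the open-source jellyfish library; using its output strings as the codewords γ(x) is our assumption:

```python
# Sketch: phonetic codings of English words with the jellyfish library
# (pip install jellyfish). Each output string can serve as a codeword gamma(x).
import jellyfish

for w in ["firefighters", "were", "brilliant"]:
    print(w,
          jellyfish.soundex(w),     # Soundex: letter + digits, indexes by sound
          jellyfish.nysiis(w),      # NYSIIS: rule-based refinement of Soundex
          jellyfish.metaphone(w))   # Metaphone: handles spelling inconsistencies
```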
{
"text": "A logogram or logograph is a written character that represents a word or phrase. We introduce to use Wubi for Chinese characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logogram Coding",
"sec_num": "2.2"
},
{
"text": "Wubi Wubizixing (or Wubi for short) is a Chinese character input method primarily used to input Chinese text with a keyboard efficiently. It decomposes a character based on its structure rather than its pronunciation. It is named after the rule that every character can be written with at most 4 keystrokes including -, |, \u4e3f, hook, and \u4e36 with various combinations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logogram Coding",
"sec_num": "2.2"
},
{
"text": "Zipf (1935) made a key observation of human lexical systems: more frequent words tend to be shorter. This feature enables speakers to minimize articulatory effort by shortening the averaged word length in use. Modern work confirms Zipf's original observation with new refinements in illustrating key factors revealed by word frequency. In this work, we introduce artificial coding by diversifying word length to two extremes: (1) optimizing the averaged length to make it the shortest and (2) fixing the length of every word to make them equal. The method of fixing the output codeword lengths without optimization brings more diversity to the standard textual representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zipf Law-Motivated Artificial Coding",
"sec_num": "2.3"
},
{
"text": "Fixed-Output-Length Coding Given a vocabulary V of size |V| in any language, we convert each word in the vocabulary into a codeword, which is a sequence of symbols. All unique symbols make up the alphabet. The alphabet size is the base b, a parameter controlling the code length. Each word is mapped to a sequence of L symbols, where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zipf Law-Motivated Artificial Coding",
"sec_num": "2.3"
},
{
"text": "L = log |V| b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zipf Law-Motivated Artificial Coding",
"sec_num": "2.3"
},
{
"text": ". If b = 2 an example of a codeword is \"01011\", whereas for b = 3 another example is \"0201\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zipf Law-Motivated Artificial Coding",
"sec_num": "2.3"
},
{
"text": "The mapping (conversion) from a word in the textual form into a codeword follows Algorithm 1. Firstly, we generate all possible codewords of length L. The new codeword alphabet \u03a3 can be a Algorithm 1 Fixed-Output-Length Coding Input: A word sequence Parameter: base b Output: A code sequence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zipf Law-Motivated Artificial Coding",
"sec_num": "2.3"
},
{
"text": "1: L = log |V| b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zipf Law-Motivated Artificial Coding",
"sec_num": "2.3"
},
{
"text": "where |V| is the vocabulary size of the input word sequences, L is the code length, and b is the parameter of the alphabet size. 2: Generate all possible L-long code. 3: Shuffle the vocabulary words and assign oneto-one mapping between each word and the code. 4: for word in vocabulary do 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zipf Law-Motivated Artificial Coding",
"sec_num": "2.3"
},
{
"text": "Output its mapped code 6: end for 7: return subset of the Latin alphabets (if b \u2264 26) or that of decimal numbers (if b \u2264 10), for instance. Then, we uniform randomly assign each word x in the vocabulary V to a unique codeword \u03b3(x) with length L. This assignment is a one-to-one random mapping. A random function is completely irrelevant to noisy inputs. 1 Each word (in the text form) in a sentence will be replaced by its codeword. The coding of a word never changes regardless of the number of times it occurs in the NLP system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zipf Law-Motivated Artificial Coding",
"sec_num": "2.3"
},
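A minimal Python sketch of Algorithm 1 (our own illustration; the function and variable names are hypothetical, and decimal digits are used as the alphabet, which the text above permits for b <= 10):

```python
# Sketch of Algorithm 1 (fixed-output-length coding): every vocabulary word is
# assigned a distinct random codeword of the same length L = ceil(log_b |V|).
import itertools
import math
import random

def fixed_length_coding(vocab, b=2, seed=0):
    symbols = "0123456789"[:b]                       # alphabet of size b (digits)
    L = max(1, math.ceil(math.log(len(vocab), b)))   # fixed code length
    codes = ["".join(c) for c in itertools.product(symbols, repeat=L)]
    rng = random.Random(seed)
    rng.shuffle(codes)                               # the mapping is sampled once
    return dict(zip(vocab, codes))                   # one-to-one word -> codeword

vocab = ["to", "be", "or", "not", "that", "is", "the", "question"]
gamma = fixed_length_coding(vocab, b=3)
print([gamma[w] for w in "to be or not to be".split()])
```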
{
"text": "We consider Huffman coding (Huffman, 1952), a length-wise optimal prefix code with variable lengths, by applying Huffman coding on the fixed-output-length coding of the text input with its parameter base b. The fixed-outputlength coding is random and should be incompressible with significant probability. Therefore, the Huffman coding does not significantly improve the fixed-output-length coding with respect to the machine translation accuracy, because it saves (at best) an additive constant. Algorithm 2 shows the conversion of Huffman codes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Huffman Coding",
"sec_num": null
},
{
"text": "1 A random mapping does not mean that every time we see a word we output a random value. It means that the mapping as a whole is chosen at random. Here is an example on their difference: if we want to assign a random bit string of length 2 to the word \"hello\" then in an article, the first time we see \"hello\" we may output 01 the second time 11 and so on. However, if instead of assigning i.i.d. random values we choose a random mapping \u03b3, then the first time we evaluate \"hello\" with \u03b3(\"hello\")= 01, we will get a uniformly random value 01, but in every subsequent time in the article we evaluate the same word \"hello\" and get the same 01 value (the mapping \u03b3 is random, and is sampled at random but only once throughout its lifetime).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Huffman Coding",
"sec_num": null
},
{
"text": "Algorithm 2 Huffman Coding Input: A word sequence Parameter: base b Output: A code sequence 1: Create huffman tree on the word sequence having b children at each level 2: Shuffle the vocabulary words and assign oneto-one mapping between each word and the code. 3: for word in vocabulary do",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Huffman Coding",
"sec_num": null
},
{
"text": "Output its mapped code 5: end for 6: return",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4:",
"sec_num": null
},
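For illustration, here is a minimal binary (b = 2) Huffman coder over word frequencies, in the spirit of Algorithm 2; it is our own simplification and omits the b-ary tree and the shuffling step of the pseudocode:

```python
# Sketch: binary Huffman coding (b = 2) over word frequencies, so that more
# frequent words receive shorter codewords, in line with Zipf's observation.
import heapq
from collections import Counter

def huffman_coding(text):
    freq = Counter(text.split())
    if len(freq) == 1:                          # degenerate one-word vocabulary
        return {w: "0" for w in freq}
    # Heap items: (frequency, tie-breaker, {word: partial codeword}).
    heap = [(f, i, {w: ""}) for i, (w, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, m1 = heapq.heappop(heap)
        f2, i2, m2 = heapq.heappop(heap)
        merged = {w: "0" + c for w, c in m1.items()}
        merged.update({w: "1" + c for w, c in m2.items()})
        heapq.heappush(heap, (f1 + f2, i2, merged))
    return heap[0][2]                           # word -> prefix-free codeword

gamma = huffman_coding("to be or not to be , that is the question")
print(sorted(gamma.items(), key=lambda kv: len(kv[1])))
```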
{
"text": "Below, we will discuss how to incorporate various types of codings in NLP tasks. Firstly, we code each word independently. Then, the word embedding (Mikolov et al., 2013) is trained on code-and word-based sentences separately. After that, we treat this new form of sentence representation and its written text form as two source inputs to the encoder and feed their combination into a baseline NN-NLP system. Thus, our coding is realized as a portable module that provides inputs to any NN architecture. We introduce three different combination methods to implement the interface of our coding module to various NN architectures.",
"cite_spans": [
{
"start": 148,
"end": 170,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coding Combination",
"sec_num": "3"
},
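As a sketch of the embedding step (not the authors' code), the word-based and code-based sentences can be embedded separately with gensim, the toolkit cited later for the POS experiments; the codeword strings below are hypothetical:

```python
# Sketch: train separate embeddings for text tokens and codeword tokens.
# gensim 4.x API (vector_size); the codeword strings are made-up examples.
from gensim.models import Word2Vec

text_sents = [["the", "firefighters", "were", "brilliant"]]
code_sents = [["T000", "F623", "W600", "B654"]]   # hypothetical phonetic codes

emb_text = Word2Vec(sentences=text_sents, vector_size=100, min_count=1)
emb_code = Word2Vec(sentences=code_sents, vector_size=100, min_count=1)
print(emb_text.wv["firefighters"].shape, emb_code.wv["F623"].shape)  # (100,) each
```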
{
"text": "We implement the combination of the text and the code forms in three ways: (1) concatenation (see Figure 3a) ; (2) linear interpolation (see Figure 3b) , where the dark color boxes have the operation of \"+\"; (3) multi-source encoding on Bi-LSTM (see Figure 3b ), as well as on Transformer (see Figure 3c ). It is worth noting that there is no additional data or information needed except for our coding algorithms themselves.",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 108,
"text": "Figure 3a)",
"ref_id": "FIGREF1"
},
{
"start": 141,
"end": 151,
"text": "Figure 3b)",
"ref_id": "FIGREF1"
},
{
"start": 250,
"end": 259,
"text": "Figure 3b",
"ref_id": "FIGREF1"
},
{
"start": 294,
"end": 303,
"text": "Figure 3c",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Coding Combination",
"sec_num": "3"
},
{
"text": "Applying a coding function in Equation 1 on each word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concatenation",
"sec_num": "3.1"
},
{
"text": "x 1 , x 2 , x 3 , \u2022 \u2022 \u2022 , x i , \u2022 \u2022 \u2022 , x I in an input sen- tence one-by-one generates a sequence of code- words \u03b3(x 1 ), \u03b3(x 2 ), \u03b3(x 3 ), \u2022 \u2022 \u2022 , \u03b3(x i ), \u2022 \u2022 \u2022 , \u03b3(x I )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concatenation",
"sec_num": "3.1"
},
{
"text": "in the same length I . Note that we use the term \"word\" loosely here, which can mean a word or a subword, or even a character.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concatenation",
"sec_num": "3.1"
},
{
"text": "The first combination method is concatenating two input sources. We apply the Byte-Pair-Encoding (Sennrich et al., 2015) As shown in Figure 3a , the input to the NLP system is the embedded words of a sentence,",
"cite_spans": [
{
"start": 97,
"end": 120,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 133,
"end": 142,
"text": "Figure 3a",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Concatenation",
"sec_num": "3.1"
},
{
"text": "(x 1 ),\u02dc (x 2 ),\u02dc (x 3 ), \u2022 \u2022 \u2022 ,\u02dc (x i ), \u2022 \u2022 \u2022 ,\u02dc (x I ), wher\u1ebd (x i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concatenation",
"sec_num": "3.1"
},
{
"text": "is the concatenation of the embedded word (x i ) and its codeword \u03b3 (\u03b3(x i )):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concatenation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(x i ) = [ (x i ); \u03b3 (\u03b3(x i ))]",
"eq_num": "(2)"
}
],
"section": "Concatenation",
"sec_num": "3.1"
},
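A minimal PyTorch-style sketch of Equation 2 (our own illustration, assuming the torch library; vocabulary sizes, dimensions, and indices are arbitrary, and e / e_gamma follow the notation above):

```python
# Sketch of Equation 2: the input for position i concatenates the word
# embedding e(x_i) with the embedding of its codeword e_gamma(gamma(x_i)).
import torch
import torch.nn as nn

word_vocab, code_vocab, d_word, d_code = 1000, 500, 512, 128
e = nn.Embedding(word_vocab, d_word)         # e(.): word embedding
e_gamma = nn.Embedding(code_vocab, d_code)   # e_gamma(.): codeword embedding

word_ids = torch.tensor([[3, 17, 42]])       # x_1 .. x_I (toy indices)
code_ids = torch.tensor([[7, 7, 99]])        # gamma(x_1) .. gamma(x_I)

e_tilde = torch.cat([e(word_ids), e_gamma(code_ids)], dim=-1)   # Eq. (2)
print(e_tilde.shape)                         # (1, 3, 640) = d_word + d_code
```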
{
"text": "The concatenation method merges two input sources and train one encoder for both. However, it may be beneficial to have textual and codeword embeddings and encoders trained separately, because they have different vocabularies. Then, those two encoders are combined linearly, a widely applied model combination technique. The input to the linear combiner is the encoded sentence, represented by a sequence of hidden",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "3.2"
},
{
"text": "statesh 1 ( (x I )), \u2022 \u2022 \u2022 ,h j ( (x I )), \u2022 \u2022 \u2022 ,h J ( (x I ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "3.2"
},
{
"text": "of the last position I in each of the encoder layer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "3.2"
},
{
"text": "j \u2208 [1, 2, \u2022 \u2022 \u2022 , J]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "3.2"
},
{
"text": ". J is the number of nodes at each decoder layer. Recall that each hidden state is a real vector R d , and that is why we can use the vector space operations such as addition on it. For convenience, we denote the last hidden state of the j-th encoder layer that we take as the input to the decoder,h j ( (x I )), byh j I , the last hidden state of the j-th encoder layer of the original textual sentence h j ( (x I )) by h j I , and the last hidden state of the j-th encoder layer of the code-based sentence h j ( \u03b3 (\u03b3(x I ))) by h \u03b3 j I . The combined encoder hidden stateh j is a linear interpolation of the hidden states of the textural input and its codeword input:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h j = (1 \u2212 \u03b1)h j I + \u03b1h \u03b3 j I",
"eq_num": "(3)"
}
],
"section": "Linear Combination",
"sec_num": "3.2"
},
{
"text": "As shown in Figure (3b) , the combined last hidden state in each layer is fed into the baseline decoder. The black blocks contains only the operator of +, as shown in the gray ellipse. \u03b1 is the encoder weight of the coded sentence, and here, \u03b1 = 0.5.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 23,
"text": "Figure (3b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Linear Combination",
"sec_num": "3.2"
},
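The interpolation of Equation 3 amounts to one line per encoder layer; a minimal sketch with α = 0.5 as above (shapes are illustrative only):

```python
# Sketch of Equation 3: per layer j, the decoder input is a linear interpolation
# of the text encoder's and the code encoder's last hidden states.
import torch

alpha = 0.5                                    # weight of the coded-sentence encoder
J, d = 6, 512                                  # number of encoder layers, hidden size
h_text = [torch.randn(d) for _ in range(J)]    # h_j^I         (text encoder)
h_code = [torch.randn(d) for _ in range(J)]    # h_{gamma,j}^I (code encoder)

h_combined = [(1 - alpha) * ht + alpha * hc    # h-tilde_j, Eq. (3)
              for ht, hc in zip(h_text, h_code)]
print(len(h_combined), h_combined[0].shape)
```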
{
"text": "In the linear combination method, the weight \u03b1 is shared among all states in one encoder. To allow different weights for each state, we implement variations of multi-source encoding by Zoph and Knight (2016) for the POS tagging model (Joshi, 2018 ) (see Figure 3b) . The combined hidden stat\u1ebd h j in a layer j is a non-linear transformation of the concatenation of word-based and code-based hidden states of the last position I in layer j multiplied by the weight W c",
"cite_spans": [
{
"start": 185,
"end": 207,
"text": "Zoph and Knight (2016)",
"ref_id": "BIBREF31"
},
{
"start": 234,
"end": 246,
"text": "(Joshi, 2018",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 254,
"end": 264,
"text": "Figure 3b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Multi-Source Encoding.",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h j = tanh(W c [h j I ; h \u03b3 j I ]).",
"eq_num": "(4)"
}
],
"section": "Multi-Source Encoding.",
"sec_num": "3.3"
},
{
"text": "Bi-LSTM In Bi-LSTM decoder, the cell state c of an encoder is a concatenation of the forward and backward cell states. The combined cell statec is the sum of the word-based c and code-based c \u03b3 encoder's cell states",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Source Encoding.",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c = c + c\u03b3.",
"eq_num": "(5)"
}
],
"section": "Multi-Source Encoding.",
"sec_num": "3.3"
},
{
"text": "Single-head Attention The attention model looks at both word-based and code-based encoders simultaneously. A context vector from each source encoder c t and c \u03b3t is created instead of the just c t in the single-source attention model. Hidden states from the top decoder layer looks back at previous hidden statesh t\u22121 and the context vectors of the encoders:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Source Encoding.",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = tanh(W c [h t\u22121 ; c t ; c \u03b3t ])",
"eq_num": "(6)"
}
],
"section": "Multi-Source Encoding.",
"sec_num": "3.3"
},
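A minimal sketch of the non-linear combinations in Equations 4 to 6 (our own illustration; W_c is a learned projection and all dimensions are arbitrary):

```python
# Sketch of Equations 4-6: non-linear multi-source combinations.
import torch
import torch.nn as nn

d = 512
W_c4 = nn.Linear(2 * d, d, bias=False)    # W_c in Eq. (4)
W_c6 = nn.Linear(3 * d, d, bias=False)    # W_c in Eq. (6)

h_text, h_code = torch.randn(d), torch.randn(d)
h_tilde = torch.tanh(W_c4(torch.cat([h_text, h_code])))          # Eq. (4)

c_text, c_code = torch.randn(d), torch.randn(d)                  # Bi-LSTM cell states
c_tilde = c_text + c_code                                        # Eq. (5)

h_prev, ctx_text, ctx_code = torch.randn(d), torch.randn(d), torch.randn(d)
h_t = torch.tanh(W_c6(torch.cat([h_prev, ctx_text, ctx_code])))  # Eq. (6)
print(h_tilde.shape, c_tilde.shape, h_t.shape)
```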
{
"text": "Multi-head Attention Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. We apply the Fairseq (Ott et al., 2019) implementation of Multilingual Translation in Transformer (Vaswani et al., 2017 ) treating text and codewords as two language inputs. The multilingual transformer trains on two encoders in turn iteratively. For example, in the first epoch it trains the textual encoder then trains the codeword encoder; in the second epoch, it trains again the textual then the codeword encoder, and so on.",
"cite_spans": [
{
"start": 177,
"end": 195,
"text": "(Ott et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 254,
"end": 275,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Source Encoding.",
"sec_num": "3.3"
},
{
"text": "NMT We improve over two state-of-the-art Neural Machine Translation (NMT) baselines: the Convolutional Sequence to Sequence Learning (ConvS2S) by Gehring et al. (2017) and the Transformer by Vaswani et al. (2017) . On ConvS2S, we concatenate (+) the input sentence with its coded sentence using the method in \u00a7 3.1 illustrated in Figure 3a . On the Transformer baseline, we combine the input sentence with the encoded sentence using \"multi-head attention\" as described in \u00a7 3.3 and illustrated in Figure 3c .",
"cite_spans": [
{
"start": 146,
"end": 167,
"text": "Gehring et al. (2017)",
"ref_id": "BIBREF8"
},
{
"start": 191,
"end": 212,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 330,
"end": 339,
"text": "Figure 3a",
"ref_id": "FIGREF1"
},
{
"start": 497,
"end": 506,
"text": "Figure 3c",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Combination Methods",
"sec_num": "4.1"
},
{
"text": "LM For Neural Language Modeling, we treat the text sentence as one language and the coded sentence as another language and combine them with the cross-lingual Language model (XLM; Lample and Conneau, 2019) using the toolkit introduced in Ott et al. (2019) . The combination method is in \u00a7 3.1 and Figure 3a .",
"cite_spans": [
{
"start": 238,
"end": 255,
"text": "Ott et al. (2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 297,
"end": 306,
"text": "Figure 3a",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Combination Methods",
"sec_num": "4.1"
},
{
"text": "POS tagging We implement linear combination illustrated in Figure 3b (with the gray area) and nonlinear multi-encoders that are described in Equations (4) to (6) and Figure 3b (without the gray area). The input to the multi-encoder is the text and coded sentences, and its output is directly fed into the POS tagger. For the linear combined encoder, we element-wise linearly interpolate the text encoding vector and the code coding vector, each trained separately. For example, the subscript \"0.5\" indicates an interpolation with equal weights. WMT'14 and WMT'18 We conduct experiments on WMT'14 News English-German dataset, which contains around 4.6 million sentences before pre-processing. We also conduct experiments News and WMT'18 Bio task. BPE operations: 32k. Baseline is (Gehring et al., 2017) on words. In this paper, we denote baselines for all experiments on all tasks with their names, referring to standard textual word inputs. Systems by adding the codeword inputs on baselines are denoted as \"+..\".",
"cite_spans": [
{
"start": 779,
"end": 801,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 59,
"end": 68,
"text": "Figure 3b",
"ref_id": "FIGREF1"
},
{
"start": 166,
"end": 175,
"text": "Figure 3b",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Combination Methods",
"sec_num": "4.1"
},
{
"text": "on English-French dataset forn WMT'18 Biomedical Task that contains around 2.8 million sentences. Table 1 shows vocabulary statistics on the source/target of the training data before and after applying codings. We use Moses tokenizer and restrict 250 characters per sentence and 1.5 length ratio between source and target sentences as a filter in pre-processing. The Byte-pair encoding model is jointly trained on the source textual word inputs, codeword inputs, and target outputs for French and German systems, and separately trained on the source and target for Chinese systems. We applied concatenation for ConvS2S baselines and multi-source encoding for transformer baselines in all tasks, respectively. For ConvS2S we set the embedding dimension as 512, the learning rate as 0.25, the gradient clipping as 0.1, the dropout ratio as 0.2, and the optimizer as NAG. For transformer, we set the embedding dimension as 512, the learning rate as 0.0005, the minimum learning rate as 10 \u22129 , the warmup learning rate as 10 \u22127 , the optimizer batas as 0.9 and 0.98 for adam optimizer, the dropout ratio as 0.3, the weight decay as 0.0001, the shared decoders and shared decoder embedding as true. The training is terminated until the validation loss does not decrease for five consecutive epochs. We compute the BLEU score using sacrebleu. As shown in Figure 4 , on WMT'18 we achieve an improvement of +0.7 BLEU points for English-German and +0.8 BLEU points for French-English, respectively. Some phonetic coding may be more suitable for certain languages than others. Metaphone works best for English because it handles spelling variations and inconsistencies. According to its orthography, the German spelling is largely phonetic (unlike English spelling), thus adding phonetics does not help much in DE-EN NMT. IWSLT In IWSLT'17 task, we achieved +5.2 BLEU point on EN-FR and +1.9 BLEU point on FR-EN. We also add Pinyin for Chinese-English translation on IWSLT'17 (IWSLT, 2017) as a supplementary experiment. Adding Wubi also enhances the baseline performance. On the Transformer baseline, we use the codewords as the input source test set during decoding. Note that all experiments are conducted on the real datasets, without using/verifying on any artificial noise anywhere.",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 105,
"text": "Table 1",
"ref_id": "TABREF3"
},
{
"start": 1350,
"end": 1358,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Application 1: Machine Translation",
"sec_num": "4.2"
},
{
"text": "Model Complexity. We tune the dropout parameters for conducting the following experiments: Words and W+Metaphone on IWSLT'17 EN-FR. The drop out value is set by default to 0.2, and the beam-size to 12. Figure 8 shows how translation accuracy changes by varying the dropout value. The highest BLEU score is at a dropout of 0.2 for the baseline and 0.2 and 0.3 in our approach. A higher optimal value of dropout means fewer nodes in the Neural Networks are needed to opt NMT quality. This implies that adding auxiliary inputs will reduce the model complexity. Model parameter size. Table 3 shows the change of the parameter size when applying our approaches. Our parameters include weights and biases of neural network models. The parameter size reduces when we concatenate the original inputs with our codewords because the vocabulary size reduces (although the BPE operations stay the same as the baseline). The parameter size increases when we use the multi-source encoding because we added more encoder for the codeword input. Training Speed. Table 5 shows the system training time (with BPE 32k operations). The total time (in minutes) is listed in the first column, and the number of epochs is in the second. Combining codewords reduces the model complexity. Therefore, the training becomes more efficient and needs Chinese-English task. Baselines are (Gehring et al., 2017) and (Vaswani et al., 2017) on words. Systems by adding the codeword inputs on baselines are denoted as \"+..\". a smaller number of epochs to converge. The total training time of our approaches is comparable to that of baselines, sometimes even less. Output example. Table 6 shows a translation example. Combining phonetic coding helps to include more subwords that cannot be obtained from text.",
"cite_spans": [
{
"start": 1356,
"end": 1378,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 1383,
"end": 1405,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 202,
"end": 210,
"text": "Figure 8",
"ref_id": "FIGREF6"
},
{
"start": 580,
"end": 587,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 1045,
"end": 1052,
"text": "Table 5",
"ref_id": "TABREF10"
},
{
"start": 1644,
"end": 1651,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Application 1: Machine Translation",
"sec_num": "4.2"
},
{
"text": "The firefighters were brilliant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "Die Feuerwehrleute waren gro\u00dfartig.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference",
"sec_num": null
},
{
"text": "Die Feuerwehr war brillant. +MetaPhone Die Feuerwehrleute waren brilliant . Table 6 : An MT WMT'14 EN-DE output example: +Meta-Phone coding generates new subwords \"fire\" and \"fighter\" that improves the translation over the baseline ConvS2S.",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 83,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "ConvS2S",
"sec_num": null
},
{
"text": "Task and result. We train and evaluate the English part of EN-FR IWSLT'17 dataset and also on English part of EN-DE WMT'14 News dataset. We use 256 embedding dimensions, six layers, and eight heads for efficiency. We set dropouts to 0.1, the learning rate to 0.0001, and BPE operations to 32k. We used Adam optimizer with betas of 0.9 0.999. As shown in Table 7 , adding Metaphone significantly reduces PPL of the baseline system, i.e., 20.1% relatively. \"+NYSIIS WA\" indicates the system with NYSIIS but adding word alignments between English and its coded form; see Table 7 .",
"cite_spans": [],
"ref_spans": [
{
"start": 354,
"end": 361,
"text": "Table 7",
"ref_id": "TABREF12"
},
{
"start": 568,
"end": 575,
"text": "Table 7",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Application 2: Language Modeling (LM)",
"sec_num": "4.3"
},
{
"text": "Task and result We evaluate our approach in POS Tagging on Brown Corpus (Francis and Kucera, 1979) . Brown corpus is a well-known English dataset for POS and contains 57 341 samples. We uniform randomly sample 64% data as the training set, 16% as the validation set, and 20% as the test set. Our baseline is a Keras (Chollet, 2015) implementation (Joshi, 2018) of Bi-LSTM POS Tagger (Wang et al., 2015) . We train word embedding (Mikolov et al., 2013) implemented by\u0158eh\u016f\u0159ek and Sojka (2010) with 100 dimensions. Each of the forward and the backward LSTM has 64 dimensions. We use a categorical cross-entropy loss and RMSProp optimizer. We also use early stopping based on validation loss. As in Table 8 , the linear multi-encoder with \u03b1 = 0.9 brings the best results, i.e. -15.79% relative improvement over the baseline. ",
"cite_spans": [
{
"start": 72,
"end": 98,
"text": "(Francis and Kucera, 1979)",
"ref_id": "BIBREF6"
},
{
"start": 316,
"end": 331,
"text": "(Chollet, 2015)",
"ref_id": null
},
{
"start": 347,
"end": 360,
"text": "(Joshi, 2018)",
"ref_id": "BIBREF13"
},
{
"start": 383,
"end": 402,
"text": "(Wang et al., 2015)",
"ref_id": "BIBREF15"
},
{
"start": 429,
"end": 451,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 695,
"end": 702,
"text": "Table 8",
"ref_id": "TABREF14"
}
],
"eq_spans": [],
"section": "Application 3: POS Tagging",
"sec_num": "4.4"
},
{
"text": "Previous important work investigated the role of auxiliary information to NLP tasks, such as polysemous word embedding structures by Arora et al. (2016) , factored models by Garc\u00eda-Mart\u00ednez et al. (2016) , and feature compilation by Sennrich and Haddow (2016) . We emphasize that we do not use any additional information besides our algorithms. Hayes (1996) ; Johnson et al. (2015) applied explicit phonological rules or constraints to tasks such as word segmentation. In neural networks, we can implicitly learn from phonetic data and leave the networks to discover hidden phonetic features through end-to-end training opt specific NLP tasks, instead of applying hand-coded constraints.",
"cite_spans": [
{
"start": 133,
"end": 152,
"text": "Arora et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 174,
"end": 203,
"text": "Garc\u00eda-Mart\u00ednez et al. (2016)",
"ref_id": "BIBREF7"
},
{
"start": 233,
"end": 259,
"text": "Sennrich and Haddow (2016)",
"ref_id": "BIBREF24"
},
{
"start": 345,
"end": 357,
"text": "Hayes (1996)",
"ref_id": "BIBREF9"
},
{
"start": 360,
"end": 381,
"text": "Johnson et al. (2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Closely related, but independent to our work, is the character-based MT, such as the work of Ling et al. (2015) and Chung et al. (2016) , among many others. We go beyond text level representations and look for novel representations for decompositions, sometimes even smaller than characters.",
"cite_spans": [
{
"start": 93,
"end": 111,
"text": "Ling et al. (2015)",
"ref_id": "BIBREF15"
},
{
"start": 116,
"end": 135,
"text": "Chung et al. (2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Different from the inspiring work that uses Pinyin (Du and Way, 2017) , skip-ngram (Bojanowski et al., 2017) , and Huffman on source/target (Chitnis and DeNero, 2015) , our study aims to improve NN-NLP including NMT overall rather than only eliminating unknown words, introducing six new codings into NLP in addition to Pinyin and text. Importantly, our artificial codings apply on all languages. Moreover, we achieve experimental improvements overall. Liu et al. (2018) added Pinyin embedding to robustify NMT against homophone noises. They described that it was unknown why Pinyin also improved predictions on the clean test. This is a very interesting work, and we explain this phenomenon through our theory that the multi-channel coding offers an ensemble of the code words and the text, making the communication more reliable.",
"cite_spans": [
{
"start": 51,
"end": 69,
"text": "(Du and Way, 2017)",
"ref_id": "BIBREF5"
},
{
"start": 83,
"end": 108,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 140,
"end": 166,
"text": "(Chitnis and DeNero, 2015)",
"ref_id": "BIBREF2"
},
{
"start": 453,
"end": 470,
"text": "Liu et al. (2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this paper, we conduct a comprehensive study on how to code textual inputs from multiple linguistically-motivated perspectives and how to integrate alternative language representations into NN-NLP systems. We propose to use Soundex, NYSIIS, MetaPhone, logogram, fixed-outputlength, and Huffman codings into NLP and describe how to combine them in state-of-the-art NN architectures, such as Transformer, ConvS2S, Bi-LSTM with attentions. Our paradigm is general for any language and adaptable to various models. We conduct extensive experiments on five languages over six tasks. Our approach appears to be very useful and achieves up to 20.77%, 20%, and 15.79% relative improvements on state-of-the-art models of MT, LM, and POS, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "We appreciate the National Science Foundation (NSF) Award No. 1747728 and the National Science Foundation of China (NSFC) Award No. 61672524 to fund this research. We also appreciate the JSALT workshop to support us in continuing this work. In particular, we thank all feedback provided by the colleagues there. We also thank the comments of Periklis Papakonstantinou. Finally, we appreciate the support of the Google Cloud Research Program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Linear Algebraic structure of word senses",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yuanzhi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Risteski",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. Linear Algebraic struc- ture of word senses, with Applications to Polysemy. Transactions of the Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Variablelength word encodings for neural translation models",
"authors": [
{
"first": "Rohan",
"middle": [],
"last": "Chitnis",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohan Chitnis and John DeNero. 2015. Variable- length word encodings for neural translation models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A character-level decoder without explicit segmentation for neural machine translation",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Kyunghyun Cho, and Yoshua Ben- gio. 2016. A character-level decoder without ex- plicit segmentation for neural machine translation. In Proceedings of Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Pinyin as subword unit for Chinese-sourced neural machine translation",
"authors": [
{
"first": "Jinhua",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of Irish Conference on Artificial Intelligence and Cognitive Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhua Du and Andy Way. 2017. Pinyin as subword unit for Chinese-sourced neural machine translation. In Proceedings of Irish Conference on Artificial In- telligence and Cognitive Science.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Brown corpus manual",
"authors": [
{
"first": "W",
"middle": [
"N"
],
"last": "Francis",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Kucera",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. N. Francis and H. Kucera. 1979. Brown corpus manual. Technical report, Department of Linguis- tics, Brown University, Providence, Rhode Island, US.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Factored neural machine translation",
"authors": [
{
"first": "Mercedes",
"middle": [],
"last": "Garc\u00eda-Mart\u00ednez",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mercedes Garc\u00eda-Mart\u00ednez, Lo\u00efc Barrault, and Fethi Bougares. 2016. Factored neural machine transla- tion. Computing Research Repository.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of International Conference on Machine Learning.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Phonetically driven phonology: The role of optimality theory and inductive grounding. rutgers optimality archive",
"authors": [
{
"first": "Bruce",
"middle": [],
"last": "Hayes",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of Conference on Formalism and Functionalism in Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruce Hayes. 1996. Phonetically driven phonology: The role of optimality theory and inductive ground- ing. rutgers optimality archive. In Proceedings of Conference on Formalism and Functionalism in Lin- guistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A method for the construction of minimum-redundancy codes",
"authors": [
{
"first": "A",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Huffman",
"suffix": ""
}
],
"year": 1952,
"venue": "Proceedings of the Institute of Radio Engineers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David A Huffman. 1952. A method for the construc- tion of minimum-redundancy codes. Proceedings of the Institute of Radio Engineers.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Homepage of International Workshop on Spoken Language Translation",
"authors": [],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "IWSLT. 2017. Homepage of International Work- shop on Spoken Language Translation 2017. http://workshop2017.iwslt.org/.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sign constraints on feature weights improve a joint model of word segmentation and phonology",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Pater",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Staubs",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson, Joe Pater, Robert Staubs, and Em- manuel Dupoux. 2015. Sign constraints on feature weights improve a joint model of word segmentation and phonology. In Proceedings of North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "LSTM POS tagger",
"authors": [
{
"first": "Aneesh",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aneesh Joshi. 2018. LSTM POS tagger.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Crosslingual language model pretraining",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. Advances in Neural Information Processing Systems.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Character-based neural machine translation",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics -Short Papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. 2015. Character-based neural machine trans- lation. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics - Short Papers.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Robust neural machine translation with joint textual and phonetic embedding",
"authors": [
{
"first": "Hairong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hairong Liu, Mingbo Ma, Liang Huang, Hao Xiong, and Zhongjun He. 2018. Robust neural machine translation with joint textual and phonetic embed- ding. In Proceedings of Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their composition- ality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems. Curran Associates, Inc.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Hanging on the metaphone",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Philips",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence Philips. 1990. Hanging on the metaphone. Computer Language.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Adaptation and application of daitch-mokotoff Soundex algorithm on serbian names",
"authors": [
{
"first": "P",
"middle": [],
"last": "Rajkovic",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jankovic",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of Conference on Applied Mathematics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P Rajkovic and D Jankovic. 2007. Adaptation and ap- plication of daitch-mokotoff Soundex algorithm on serbian names. In Proceedings of Conference on Ap- plied Mathematics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "Petr",
"middle": [],
"last": "Radim\u0159eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Conference on Language Resources and Evaluation 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Frame- work for Topic Modelling with Large Corpora. In Proceedings of the Conference on Language Re- sources and Evaluation 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Val- letta, Malta. ELRA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A method of phonetic indexing",
"authors": [
{
"first": "Robert",
"middle": [
"C"
],
"last": "Russel",
"suffix": ""
}
],
"year": 1918,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert C. Russel. 1918. A method of phonetic index- ing. Patent no. 1,261,167.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Course in General Linguistics. Duckworth, London. (trans. Roy Harris). ISBN 9780231527958",
"authors": [
{
"first": "Ferdinand",
"middle": [],
"last": "de Saussure",
"suffix": ""
}
],
"year": 1916,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ferdinand de Saussure. 1916. Course in General Lin- guistics. Duckworth, London. (trans. Roy Harris). ISBN 9780231527958, 0231527950.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Linguistic input features improve neural machine translation",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich and Barry Haddow. 2016. Linguistic in- put features improve neural machine translation. In Proceedings of Conference on Machine Translation.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Part-of-speech tagging with bidirectional long short-term memory recurrent neural network",
"authors": [
{
"first": "Peilu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Frank",
"middle": [
"K"
],
"last": "Soong",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peilu Wang, Yao Qian, Frank K. Soong, Lei He, and Hai Zhao. 2015. Part-of-speech tagging with bidi- rectional long short-term memory recurrent neural network. Computing Research Repository.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Homepage of Workshop on Statistical Machine Translation",
"authors": [
{
"first": "",
"middle": [],
"last": "Wmt",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "WMT. 2014. Homepage of Workshop on Statistical Machine Translation 2014. http://www.statmt.org/wmt14/.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Homepage of Workshop on Statistical Machine Translation",
"authors": [],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "WMT. 2018. Homepage of Workshop on Sta- tistical Machine Translation 2018: Biomedical task. http://www.statmt.org/wmt18/biomedical- translation-task.html.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The Psychobiology of Language: An Introduction to Dynamic Philology",
"authors": [
{
"first": "George",
"middle": [],
"last": "Zipf",
"suffix": ""
}
],
"year": 1935,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Zipf. 1935. The Psychobiology of Language: An Introduction to Dynamic Philology. M.I.T. Press, Cambridge, Mass.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Multi-source neural translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1004"
]
},
"num": null,
"urls": [],
"raw_text": "Barret Zoph and Kevin Knight. 2016. Multi-source neural translation. In Proceedings of North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Workflow on how to apply discrete coding in NN-NLP by decomposing (phonetic, logogram, fixoutput-length, or Huffman coding) and recombining (BPE) words.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Combination methods for different NN architectures: (a) concatenation for ConvS2S and XLM; (b) linear interpolation and multi-source encoding for Bi-LSTM with attention; (c) multi-source encoding for Transformer.word embeddings implemented by\u0158eh\u016f\u0159ek and Sojka (2010) on each word (x) and its codeword \u03b3 (\u03b3(x)). We separately train word embedding on code-and textual sentences. Thus, \u03b3 (\u2022) and (\u2022) are different functions.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Translation results in BLEU[%] on WMT'14",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Translation results in BLEU[%] on small task IWSLT'17. FR-EN & EN-FR. BPE: 16k. Baseline is (Vaswani et al., 2017) on words. Dev: test2013-2015; Test: test2017.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Translation results in BLEU[%] on small task IWSLT'17. ZH-EN. BPE: 16k. Baselines are(Gehring et al., 2017;Vaswani et al., 2017) on words. Dev: test2010-2015; Test: test2017.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "Translation results in BLEU[%] on small task IWSLT'17. DE-EN, EN-DE. BPE: 16k. Baselines are (Gehring et al., 2017) on words. Dev: test2010-2015; Test: test2017.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"text": "Dropout optimum. x-axis: the dropout value.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Figure 2: Examples on different coding schemes. In</td></tr><tr><td>contrast to Pinyin only applies to Chinese, the lo-</td></tr><tr><td>gogram coding Wubi and its variant apply to Japanese</td></tr><tr><td>Kanji and Chinese. Furthermore, phonetic codings, in-</td></tr><tr><td>cluding MetaPhone, Soundex, and NYSIIS, cover most</td></tr><tr><td>western languages. Finally, the artificial codings, i.e.,</td></tr><tr><td>the fixed-output-length and Huffman coding, can be ap-</td></tr><tr><td>plied to any language. Phonetic and logogram codings</td></tr><tr><td>are many-to-one mappings, while fixed-output-length</td></tr><tr><td>and Huffman coding are one-to-one mappings.</td></tr></table>",
"html": null,
"text": ""
},
"TABREF3": {
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"8\">: Number of Sentences (Sents.) and Running Word</td></tr><tr><td colspan=\"8\">(R.W.) as well as Vocabulary size (Voc.) [M] of WMT'14</td></tr><tr><td colspan=\"6\">News (EN-DE) and WMT'18 Bio (EN-FR)</td><td/></tr><tr><td/><td/><td colspan=\"2\">Before BPE</td><td/><td colspan=\"2\">After BPE</td></tr><tr><td>Task</td><td colspan=\"2\">WMT'14</td><td colspan=\"5\">WMT'18 WMT'14 WMT'18</td></tr><tr><td>Coding</td><td>EN</td><td>DE</td><td colspan=\"5\">FR EN EN DE FR EN</td></tr><tr><td>Baseline</td><td colspan=\"7\">711 1500 366 338 33 35 29 24</td></tr><tr><td>+Soundex</td><td colspan=\"2\">717 1500</td><td>-</td><td>-</td><td>33 33</td><td>-</td><td>-</td></tr><tr><td colspan=\"8\">+Metaphone 904 1500 480 338 34 30 30 21</td></tr><tr><td>+NYSIIS</td><td colspan=\"7\">981 1500 523 338 34 30 30 20</td></tr><tr><td>+EL 9</td><td colspan=\"7\">1400 1500 732 338 34 25 30 18</td></tr><tr><td>+Huffman 9</td><td colspan=\"7\">1400 1500 732 338 34 25 30 16</td></tr></table>",
"html": null,
"text": ""
},
"TABREF4": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Vocabulary size [K] of WMT'14 News (EN-DE) and</td></tr><tr><td>WMT'18 Bio (EN-FR) before and after applying BPE with</td></tr><tr><td>different codings.</td></tr></table>",
"html": null,
"text": ""
},
"TABREF6": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Number of model parameters [M] on WMT'14</td></tr><tr><td>News, WMT'18 Bio, and IWSLT'17 tasks. Baselines are</td></tr><tr><td>ConvS2S and Transformer on word input. Systems by adding</td></tr><tr><td>the codeword inputs on baselines are denoted as \"+..\".</td></tr></table>",
"html": null,
"text": ""
},
"TABREF8": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Number of model parameters [M] on IWSLT'17"
},
"TABREF10": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Training time (in minutes) per epoch/ epoch number."
},
"TABREF12": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "LM PPL improvements on the English part of a subset of WMT'14 News EN-DE and IWSLT'17 EN-FR."
},
"TABREF14": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "POS with phonetic codings Brown corpus."
}
}
}
}