Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1725–1735 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1725 Efficient and Robust Question Answering from Minimal Context over Documents Sewon Min1∗, Victor Zhong2, Richard Socher2, Caiming Xiong2 Seoul National University1, Salesforce Research2 [email protected], {vzhong, rsocher, cxiong}@salesforce.com Abstract Neural models for question answering (QA) over documents have achieved significant performance improvements. Although effective, these models do not scale to large corpora due to their complex modeling of interactions between the document and the question. Moreover, recent work has shown that such models are sensitive to adversarial inputs. In this paper, we study the minimal context required to answer the question, and find that most questions in existing datasets can be answered with a small set of sentences. Inspired by this observation, we propose a simple sentence selector to select the minimal set of sentences to feed into the QA model. Our overall system achieves significant reductions in training (up to 15 times) and inference times (up to 13 times), with accuracy comparable to or better than the state-of-the-art on SQuAD, NewsQA, TriviaQA and SQuAD-Open. Furthermore, our experimental results and analyses show that our approach is more robust to adversarial inputs. 1 Introduction The task of textual question answering (QA), in which a machine reads a document and answers a question, is an important and challenging problem in natural language processing. Recent progress in performance of QA models has been largely due to the variety of available QA datasets (Richardson et al., 2013; Hermann et al., 2015; Rajpurkar et al., 2016; Trischler et al., 2016; Joshi et al., 2017; Koˇcisk`y et al., 2017). ∗All work was done while the author was an intern at Salesforce Research. Many neural QA models have been proposed for these datasets, the most successful of which tend to leverage coattention or bidirectional attention mechanisms that build codependent representations of the document and the question (Xiong et al., 2018; Seo et al., 2017). Yet, learning the full context over the document is challenging and inefficient. In particular, when the model is given a long document, or multiple documents, learning the full context is intractably slow and hence difficult to scale to large corpora. In addition, Jia and Liang (2017) show that, given adversarial inputs, such models tend to focus on wrong parts of the context and produce incorrect answers. In this paper, we aim to develop a QA system that is scalable to large documents as well as robust to adversarial inputs. First, we study the context required to answer the question by sampling examples in the dataset and carefully analyzing them. We find that most questions can be answered using a few sentences, without the consideration of context over entire document. In particular, we observe that on the SQuAD dataset (Rajpurkar et al., 2016), 92% of answerable questions can be answered using a single sentence. Second, inspired by this observation, we propose a sentence selector to select the minimal set of sentences to give to the QA model in order to answer the question. Since the minimum number of sentences depends on the question, our sentence selector chooses a different number of sentences for each question, in contrast with previous models that select a fixed number of sentences. 
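To make the idea of question-dependent selection concrete, the sketch below contrasts a fixed Top-k rule with a threshold-based rule that returns a different number of sentences per question. It is a minimal illustration in Python: the per-sentence scores, the threshold value, and the fallback to the single best sentence are assumptions made for the example, not the authors' implementation (the actual selector is described in Section 3.2).

def select_top_k(scores, k):
    # Always return the indices of the k highest-scoring sentences.
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]

def select_dyn(scores, threshold):
    # Return every sentence whose score clears the threshold, falling back
    # to the single best sentence if none does (assumption for illustration).
    selected = [i for i, s in enumerate(scores) if s >= threshold]
    if not selected:
        selected = [max(range(len(scores)), key=lambda i: scores[i])]
    return sorted(selected)

# Example: four sentence scores from a hypothetical selector.
scores = [0.91, 0.08, 0.55, 0.02]
print(select_top_k(scores, k=2))          # [0, 2] -- always exactly two sentences
print(select_dyn(scores, threshold=0.5))  # [0, 2] here, but could be one or three elsewhere

Lowering the threshold admits more sentences, which is how the amount of context given to the QA model can be controlled at inference time without retraining.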
Our sentence selector leverages three simple techniques — weight transfer, data modification and score normalization, which we show to be highly effective on the task of sentence selection. We compare the standard QA model given the full document (FULL) and the QA model given the 1726 N % on % on Document Question sent SQuAD TriviaQA 1 90 56 In 1873, Tesla returned to his birthtown, Smiljan. Shortly after he arrived, (...) Where did Tesla return to in 1873? 2 6 28 After leaving Edison’s company Tesla partnered with two businessmen in 1886, What did Tesla Electric Light & Manufacturing Robert Lane and Benjamin Vail, who agreed to finance an electric lighting do? company in Tesla’s name, Tesla Electric Light & Manufacturing. The company installed electrical arc light based illumination systems designed by Tesla and also had designs for dynamo electric machine commutators, (...) 3↑ 2 4 Kenneth Swezey, a journalist whom Tesla had befriended, confirmed that Tesla Who did Tesla call in the middle of the night? rarely slept . Swezey recalled one morning when Tesla called him at 3 a.m. : ”I was sleeping in my room (...) Suddenly, the telephone ring awakened me ... N/A 2 12 Writers whose papers are in the library are as diverse as Charles Dickens and The papers of which famous English Victorian Beatrix Potter. Illuminated manuscripts in the library dating from (...) author are collected in the library? Table 1: Human analysis of the context required to answer questions on SQuAD and TriviaQA. 50 examples from each dataset are sampled randomly. ‘N sent’ indicates the number of sentences required to answer the question, and ‘N/A’ indicates the question is not answerable even given all sentences in the document. ‘Document’ and ‘Question’ are from the representative example from each category on SQuAD. Examples on TriviaQA are shown in Appendix B. The groundtruth answer span is in red text, and the oracle sentence (the sentence containing the grountruth answer span) is in bold text. No. Description % Sentence Question 0 Correct (Not exactly same 58 Gothic architecture is represented in the majestic churches but also at the burgher What type of architecture is represented as grountruth) houses and fortifications. in the majestic churches? 1 Fail to select precise span 6 Brownlee argues that disobedience in opposition to the decisions of non-governmental Brownlee argues disobedience can be agencies such as trade unions, banks, and private universities can be justified if it justified toward what institutions? reflects ‘a larger challenge to the legal system that permits those decisions to be taken;. 2 Complex semantics in 34 Newton was limited by Denver’s defense, which sacked him seven times and forced him How many times did the Denver defense sentence/question into three turnovers, including a fumble which they recovered for a touchdown. force Newton into turnovers? 3 Not answerable even with 2 He encourages a distinction between lawful protest demonstration, nonviolent civil What type of civil disobedience is full paragraph disobedience, and violent civil disobedience. accompanied by aggression? Table 2: Error cases (on exact match (EM)) of DCN+ given oracle sentence on SQuAD. 50 examples are sampled randomly. Grountruth span is in underlined text, and model’s prediction is in bold text. minimal set of sentences (MINIMAL) on five different QA tasks with varying sizes of documents. 
On SQuAD, NewsQA, TriviaQA(Wikipedia) and SQuAD-Open, MINIMAL achieves significant reductions in training and inference times (up to 15× and 13×, respectively), with accuracy comparable to or better than FULL. On three of those datasets, this improvements leads to the new stateof-the-art. In addition, our experimental results and analyses show that our approach is more robust to adversarial inputs. On the development set of SQuAD-Adversarial (Jia and Liang, 2017), MINIMAL outperforms the previous state-of-theart model by up to 13%. 2 Task analyses Existing QA models focus on learning the context over different parts in the full document. Although effective, learning the context within the full document is challenging and inefficient. Consequently, we study the minimal context in the document required to answer the question. 2.1 Human studies First, we randomly sample 50 examples from the SQuAD development set, and analyze the minimum number of sentences required to answer the question, as shown in Table 1. We observed that 98% of questions are answerable given the document. The remaining 2% of questions are not answerable even given the entire document. For instance, in the last example in Table 1, the question requires the background knowledge that Charles Dickens is an English Victorian author. Among the answerable examples, 92% are answerable with a single sentence, 6% with two sentences, and 2% with three or more sentences. We perform a similar analysis on the TriviaQA (Wikipedia) development (verified) set. Finding the sentences to answer the question on TriviaQA is more challenging than on SQuAD, since TriviaQA documents are much longer than SQuAD documents (488 vs 5 sentences per document). Nevertheless, we find that most examples are answerable with one or two sentences — among the 88% of examples that are answerable given the full document, 95% can be answered with one or two sentences. 1727 2.2 Analyses on existing QA model Given that the majority of examples are answerable with a single oracle sentence on SQuAD, we analyze the performance of an existing, competitive QA model when it is given the oracle sentence. We train DCN+ (Xiong et al., 2018), one of the state-of-the-art models on SQuAD (details in Section 3.1), on the oracle sentence. The model achieves 83.1 F1 when trained and evaluated using the full document and 85.1 F1 when trained and evaluated using the oracle sentence. We analyze 50 randomly sampled examples in which the model fails on exact match (EM) despite using the oracle sentence. We classify these errors into 4 categories, as shown in Table 2. In these examples, we observed that 40% of questions are answerable given the oracle sentence but the model unexpectedly fails to find the answer. 58% are those in which the model’s prediction is correct but does not lexically match the groundtruth answer, as shown in the first example in Table 2. 2% are those in which the question is not answerable even given the full document. In addition, we compare predictions by the model trained using the full document (FULL) with the model trained on the oracle sentence (ORACLE). Figure 1 shows the Venn diagram of the questions answered correctly by FULL and ORACLE on SQuAD and NewsQA. ORACLE is able to answer 93% and 86% of the questions correctly answered by FULL on SQuAD and NewsQA, respectively. These experiments and analyses indicate that if the model can accurately predict the oracle sentence, the model should be able to achieve comparable performance on overall QA task. 
Therefore, we aim to create an effective, efficient and robust QA system which only requires a single or a few sentences to answer the question. 3 Method Our overall architecture (Figure 2) consists of a sentence selector and a QA model. The sentence selector computes a selection score for each sentence in parallel. We give to the QA model a reduced set of sentences with high selection scores to answer the question. 3.1 Neural Question Answering Model We study two neural QA models that obtain close to state-of-the-art performance on SQuAD. DCN+ (Xiong et al., 2018) is one of the startFull Oracle Full Oracle 5% 66% 9% 7% 44% 15% 20% 34% SQuAD NewsQA Figure 1: Venn diagram of the questions answered correctly (on exact match (EM)) by the model given a full document (FULL) and the model given an oracle sentence (ORACLE) on SQuAD (left) and NewsQA (right). of-the-art QA models, achieving 83.1 F1 on the SQuAD development set. It features a deep residual coattention encoder, a dynamic pointing decoder, and a mixed objective that combines cross entropy loss with self-critical policy learning. SReader is another competitive QA model that is simpler and faster than DCN+, with 79.9 F1 on the SQuAD development set. It is a simplified version of the reader in DrQA (Chen et al., 2017), which obtains 78.8 F1 on the SQuAD development set. Model details and training procedures are shown in Appendix A. 3.2 Sentence Selector Our sentence selector scores each sentence with respect to the question in parallel. The score indicates whether the question is answerable with this sentence. The model architecture is divided into the encoder module and the decoder module. The encoder is a shared module with S-Reader, which computes sentence encodings and question encodings from the sentence and the question as inputs. First, the encoder computes sentence embeddings D ∈Rhd×Ld, question embeddings Q ∈Rhd×Lq, and question-aware sentence embeddings Dq ∈ Rhd×Ld, where hd is the dimension of word embeddings, and Ld and Lq are the sequence length of the document and the question, respectively. Specifically, question-aware sentence embeddings are obtained as follows. αi = softmax(DT i W1Q) ∈RLq (1) Dq i = Lq X j=1 (αi,jQj) ∈Rhd (2) Here, Di ∈Rhd is the hidden state of sentence embedding for the ith word and W1 ∈Rhd×hd is 1728 Embed Matrix Embed Matrix BiLSTM BiLSTM … LD Sentence … LQ D … … Question Q 𝐷𝑄 … … … Denc Qenc … … Denc Qenc Linear+Softmax … scorestart BiLinear scoreend … Denc … BiLinear … … Denc Qenc Linear+Softmax … Max score BiLinear above threshold? merge Answer Encoder Sentence Selector Decoder QA Model Decoder Linear (a) (b) (c) (d) Figure 2: Our model architecture. (a) Overall pipeline, consisting of sentence selector and QA model. Selection score of each sentence is obtained in parallel, then sentences with selection score above the threshold are merged and fed into QA model. (b) Shared encoder of sentence selector and S-Reader (QA Model), which takes document and the question as inputs and outputs the document encodings Denc and question encodings Qenc. (c) Decoder of S-Reader (QA Model), which takes Denc and Qenc as inputs and outputs the scores for start and end positions. (d) Decoder of sentence selector, which takes Denc and Qenc for each sentence and outputs the score indicating if the question is answerable given the sentence. a trainable weight matrix. After this, sentence encodings and question encodings are obtained using an LSTM (Hochreiter and Schmidhuber, 1997). 
Denc = BiLSTM([Di; Dq i ]) ∈Rh×Ld (3) Qenc = BiLSTM(Qj) ∈Rh×Lq (4) Here, ‘;’ denotes the concatenation of two vectors, and h is a hyperparameter of the hidden dimension. Next, the decoder is a task-specific module which computes the score for the sentence by calculating bilinear similarities between sentence encodings and question encodings as follows. β = softmax(wT Qenc) ∈RLq (5) ˜ qenc = Lq X j=1 (βjQenc j ) ∈Rh (6) ˜hi = (Denc i W2 ˜ qenc) ∈Rh (7) ˜h = max( ˜h1, ˜h2, · · · , ˜ hLd) (8) score = W T 3 ˜h ∈R2 (9) Here, w ∈Rh, W2 ∈Rh×h×h, W3 ∈Rh×2, are trainable weight matrices. Each dimension in score means the question is answerable or nonanswerable given the sentence. We introduce 3 techniques to train the model. (i) As the encoder module of our model is identical to that of S-Reader, we transfer the weights to the encoder module from the QA model trained on the single oracle sentence (ORACLE). (ii) We modify the training data by treating a sentence as a wrong sentence if the QA model gets 0 F1, even if the sentence is the oracle sentence. (iii) After we 1729 Dataset Domain N word N sent N doc Supervision SQuAD Wikipedia 155 5 Span NewsQA News Articles 803 20 Span TriviaQA (Wikipedia) Wikipedia 11202 488 2 Distant SQuAD-Open Wikipedia 120734 4488 10 Distant SQuAD-Adversarial-AddSent Wikipedia 169 6 Span SQuAD-Adversarial-AddOneSent Wikipedia 165 6 Span Table 3: Dataset used for experiments. ‘N word’, ‘N sent’ and ‘N doc’ refer to the average number of words, sentences and documents, respectively. All statistics are calculated on the development set. For SQuAD-Open, since the task is in open-domain, we calculated the statistics based on top 10 documents from Document Retriever in DrQA (Chen et al., 2017). obtain the score for each sentence, we normalize scores across sentences from the same paragraph, similar to Clark and Gardner (2017). All of these three techniques give substantial improvements in sentence selection accuracy, as shown in Table 4. More details including hyperparameters and training procedures are shown in Appendix A. Because the minimal set of sentences required to answer the question depends on the question, we select the set of sentences by thresholding the sentence scores, where the threshold is a hyperparameter (details in Appendix A). This method allows the model to select a variable number of sentences for each question, as opposed to a fixed number of sentences for all questions. Also, by controlling the threshold, the number of sentences can be dynamically controlled during the inference. We define Dyn (for Dynamic) as this method, and define Top k as the method which simply selects the top-k sentences for each question. 4 Experiments 4.1 Dataset and Evaluation Metrics We train and evaluate our model on five different datasets as shown in Table 3. SQuAD (Rajpurkar et al., 2016) is a wellstudied QA dataset on Wikipedia articles that requires each question to be answered from a paragraph. NewsQA (Trischler et al., 2016) is a dataset on news articles that also provides a paragraph for each question, but the paragraphs are longer than those in SQuAD. TriviaQA (Joshi et al., 2017) is a dataset on a large set of documents from the Wikipedia domain and Web domain. Here, we only use the Wikipedia domain. Each question is given a much longer context in the form of multiple documents. SQuAD-Open (Chen et al., 2017) is an opendomain question answering dataset based on SQuAD. In SQuAD-Open, only the question and the answer are given. 
The model is responsible for identifying the relevant context from all English Wikipedia articles. SQuAD-Adversarial (Jia and Liang, 2017) is a variant of SQuAD. It shares the same training set as SQuAD, but an adversarial sentence is added to each paragraph in a subset of the development set. We use accuracy (Acc) and mean average precision (MAP) to evaluate sentence selection. We also measure the average number of selected sentences (N sent) to compare the efficiency of our Dyn method and the Top k method. To evaluate the performance in the task of question answering, we measure F1 and EM (Exact Match), both being standard metrics for evaluating span-based QA. In addition, we measure training speed (Train Sp) and inference speed (Infer Sp) relative to the speed of standard QA model (FULL). The speed is measured using a single GPU (Tesla K80), and includes the training and inference time for the sentence selector. 4.2 SQuAD and NewsQA For each QA model, we experiment with three types of inputs. First, we use the full document (FULL). Next, we give the model the oracle sentence containing the groundtruth answer span (ORACLE). Finally, we select sentences using our sentence selector (MINIMAL), using both Top k and Dyn. We also compare this last method with TF-IDF method for sentence selection, which selects sentences using n-gram TF-IDF distance between each sentence and the question. 1730 Model SQuAD NewsQA Top 1 MAP Top 1 Top 3 MAP TF-IDF 81.2 89.0 49.8 72.1 63.7 Our selector 85.8 91.6 63.2 85.1 75.5 Our selector (T) 90.0 94.3 67.1 87.9 78.5 Our selector (T+M, T+M+N) 91.2 95.0 70.9 89.7 81.1 Tan et al. (2018) 92.1 Selection method SQuAD NewsQA N sent Acc N sent Acc Top k (T+M)a 1 91.2 1 70.9 Top k (T+M)a 2 97.2 3 89.7 Top k (T+M)a 3 98.9 4 92.5 Dyn (T+M) 1.5 94.7 2.9 84.9 Dyn (T+M) 1.9 96.5 3.9 89.4 Dyn (T+M+N) 1.5 98.3 2.9 91.8 Dyn (T+M+N) 1.9 99.3 3.9 94.6 Table 4: Results of sentence selection on the dev set of SQuAD and NewsQA. (Top) We compare different models and training methods. We report Top 1 accuracy (Top 1) and Mean Average Precision (MAP). Our selector outperforms the previous state-of-the-art (Tan et al., 2018). (Bottom) We compare different selection methods. We report the number of selected sentences (N sent) and the accuracy of sentence selection (Acc). ‘T’, ‘M’ and ‘N’ are training techniques described in Section 3.2 (weight transfer, data modification and score normalization, respectively). a‘N’ does not change the result on Top k, since Top k depends on the relative scores across the sentences from same paragraph. Figure 3: The distributions of number of sentences that our selector selects using Dyn method on the dev set of SQuAD (left) and NewsQA (right). Results Table 4 shows results in the task of sentence selection on SQuAD and NewsQA. First, our selector outperforms TF-IDF method and the previous state-of-the-art by large margin (up to 2.9% MAP). Second, our three training techniques – weight transfer, data modification and score normalization – improve performance by up to 5.6% MAP. Finally, our Dyn method achieves higher accuracy with less sentences than the Top k method. 
For example, on SQuAD, Top 2 achieves 97.2 accuracy, whereas Dyn achieves 99.3 accuracy with SQuAD (with S-Reader) F1 EM Train Sp Infer Sp FULL 79.9 71.0 x1.0 x1.0 ORACLE 84.3 74.9 x6.7 x5.1 MINIMAL(Top k) 78.7 69.9 x6.7 x5.1 MINIMAL(Dyn) 79.8 70.9 x6.7 x3.6 SQuAD (with DCN+) FULL 83.1 74.5 x1.0 x1.0 ORACLE 85.1 76.0 x3.0 x5.1 MINIMAL(Top k) 79.2 70.7 x3.0 x5.1 MINIMAL(Dyn) 80.6 72.0 x3.0 x3.7 GNR 75.0a 66.6a FastQA 78.5 70.3 FusionNet 83.6 75.3 NewsQA (with S-Reader) F1 EM Train Sp Infer Sp FULL 63.8 50.7 x1.0 x1.0 ORACLE 75.5 59.2 x18.8 x21.7 MINIMAL(Top k) 62.3 49.3 x15.0 x6.9 MINIMAL(Dyn) 63.2 50.1 x15.0 x5.3 FastQA 56.1 43.7 Table 5: Results on the dev set of SQuAD (First two) and NewsQA (Last). For Top k, we use k = 1 and k = 3 for SQuAD and NewsQA, respectively. We compare with GNR (Raiman and Miller, 2017), FusionNet (Huang et al., 2018) and FastQA (Weissenborn et al., 2017), which are the model leveraging sentence selection for question answering, and the published state-of-the-art models on SQuAD and NewsQA, respectively. aNumbers on the test set. 1.9 sentences per example. On NewsQA, Top 4 achieves 92.5 accuracy, whereas Dyn achieves 94.6 accuracy with 3.9 sentences per example. Figure 3 shows that the number of sentences selected by Dyn method vary substantially on both SQuAD and NewsQA. This shows that Dyn chooses a different number of sentences depending on the question, which reflects our intuition. Table 5 shows results in the task of QA on SQuAD and NewsQA. MINIMAL is more efficient in training and inference than FULL. On SQuAD, S-Reader achieves 6.7× training and 3.6× inference speedup on SQuAD, and 15.0× training and 6.9× inference speedup on NewsQA. In addition to the speedup, MINIMAL achieves comparable result to FULL (using S-Reader, 79.9 vs 79.8 F1 on SQuAD and 63.8 vs 63.2 F1 on NewsQA). We compare the predictions from FULL and MINIMAL in Table 6. In the first two examples, our sentence selector chooses the oracle sentence, 1731 The initial LM model weighed approximately 33,3000 pounds, and allowed surface stays up to around 34 hours. . . . An Extended Lunar Module weighed over 36,200 pounds, and allowed surface stays of over 3 days. For about how long would the extended LM allow a surface stay on the moon? Approximately 1,000 British soldiers were killed or injured. . . . The remaining 500 British troops, led by George Washington, retreated to Virginia. How many casualties did British get? This book, which influenced the thought of Charles Darwin, successfully promoted the doctrine of uniformitarianism. This theory states that slow geological processes have occurred throughout the Earth’s history and are still occurring today. In contrast, catastrophism is the theory that Earth’s features formed in single, catastrophic events and remained unchanged thereafter. Which theory states that slow geological processes are still occuring today, and have occurred throughout Earth’s history? Table 6: Examples on SQuAD. Grountruth span (underlined text), the prediction from FULL (blue text) and MINIMAL (red text). Sentences selected by our selector is denoted with . In the above two examples, MINIMAL correctly answer the question by selecting the oracle sentence. In the last example, MINIMAL fails to answer the question, since the inference over first and second sentences is required to answer the question. selected sentence However, in 1883-84 Germany began to build a colonial empire in Africa and the South Pacific, before losing interest in imperialism. 
The establishment of the German colonial empire proceeded smoothly, starting with German New Guinea in 1884. When did Germany found their first settlement? 1883-84 1884 1884 In the late 1920s, Tesla also befriended George Sylvester Viereck, a poet, writer, mystic, and later, a Nazi propagandist. In middle age, Tesla became a close friend of Mark Twain; they spent a lot of time together in his lab and elsewhere. When did Tesla become friends with Viereck? late 1920s middle age late 1920s Table 7: An example on SQuAD, where the sentences are ordered by the score from our selector. Grountruth span (underlined text), the predictions from Top 1 (blue text), Top 2 (green text) and Dyn (red text). Sentences selected by Top 1, Top 2 and Dyn are denoted with , and , respectively. and the QA model correctly answers the question. In the last example, our sentence selector fails to choose the oracle sentence, so the QA model cannot predict the correct answer. In this case, our selector chooses the second and the third sentences instead of the oracle sentence because the former contains more information relevant to question. In fact, the context over the first and the second sentences is required to correctly answer the question. Table 7 shows an example on SQuAD, which MINIMAL with Dyn correctly answers the question, and MINIMAL with Top k sometimes does not. Top 1 selects one sentence in the first example, thus fails to choose the oracle sentence. Top 2 selects two sentences in the second example, which is inefficient as well as leads to the wrong answer. In both examples, Dyn selects the oracle sentence with minimum number of sentences, and subsequently predicts the answer. More analyses are shown in Appendix B. 4.3 TriviaQA and SQuAD-Open TriviaQA and SQuAD-Open are QA tasks that reason over multiple documents. They do not provide the answer span and only provide the question-answer pairs. For each QA model, we experiment with two types of inputs. First, since TriviaQA and SQuAD-Open have many documents for each question, we first filter paragraphs based on the TF-IDF similarities between the question and the paragraph, and then feed the full paragraphs to the QA model (FULL). On TriviaQA, we choose the top 10 paragraphs for training and inference. On SQuAD-Open, we choose the top 20 paragraphs for training and the top 40 for inferences. Next, we use our sentence selector with Dyn (MINIMAL). We select 5-20 sentences using our sentence selector, from 200 sentences based on TF-IDF. For training the sentence selector, we use two techniques described in Section 3.2, weight transfer and score normalization, but we do not use data modification technique, since there are too many sentences to feed each of them to the QA model. For training the QA model, we transfer the weights from the QA model trained on SQuAD, then finetune. 1732 TriviaQA (Wikipedia) SQuAD-Open n sent Acc Sp F1 EM n sent Acc Sp F1 EM FULL 69 95.9 x1.0 59.6 53.5 124 76.9 x1.0 41.0 33.1 MINIMAL TF-IDF 5 73.0 x13.8 51.9 45.8 5 46.1 x12.4 36.6 29.6 10 79.9 x6.9 57.2 51.5 10 54.3 x6.2 39.8 32.5 Our 5.0 84.9 x13.8 59.5 54.0 5.3 58.9 x11.7 42.3 34.6 Selector 10.5 90.9 x6.6 60.5 54.9 10.7 64.0 x5.8 42.5 34.7 Rank 1 56.0a 51.6a 2376a 77.8 29.8 Rank 2 55.1a 48.6a 37.5 29.1 Rank 3 52.9b 46.9a 2376a 77.8 28.4 Table 8: Results on the dev-full set of TriviaQA (Wikipedia) and the dev set of SQuAD-Open. Full results (including the dev-verified set on TriviaQA) are in Appendix C. 
For training FULL and MINIMAL on TriviaQA, we use 10 paragraphs and 20 sentences, respectively. For training FULL and MINIMAL on SQuAD-Open, we use 20 paragraphs and 20 sentences, respectively. For evaluating FULL and MINIMAL, we use 40 paragraphs and 5-20 sentences, respectively. ‘n sent’ indicates the number of sentences used during inference. ‘Acc’ indicates accuracy of whether answer text is contained in selected context. ‘Sp’ indicates inference speed. We compare with the results from the sentences selected by TF-IDF method and our selector (Dyn). We also compare with published Rank1-3 models. For TriviaQA(Wikipedia), they are Neural Casecades (Swayamdipta et al., 2018), Reading Twice for Natural Language Understanding (Weissenborn, 2017) and Mnemonic Reader (Hu et al., 2017). For SQuAD-Open, they are DrQA (Chen et al., 2017) (Multitask), R3 (Wang et al., 2018) and DrQA (Plain). aApproximated based on there are 475.2 sentences per document, and they use 5 documents per question bNumbers on the test set. Results Table 8 shows results on TriviaQA (Wikipedia) and SQuAD-Open. First, MINIMAL obtains higher F1 and EM over FULL, with the inference speedup of up to 13.8×. Second, the model with our sentence selector with Dyn achieves higher F1 and EM over the model with TF-IDF selector. For example, on the development-full set, with 5 sentences per question on average, the model with Dyn achieves 59.5 F1 while the model with TF-IDF method achieves 51.9 F1. Third, we outperforms the published state-of-the-art on both dataset. 4.4 SQuAD-Adversarial We use the same settings as Section 4.2. We use the model trained on SQuAD, which is exactly same as the model used for Table 5. For MINIMAL, we select top 1 sentence from our sentence selector to the QA model. Results Table 9 shows that MINIMAL outperforms FULL, achieving the new state-of-the-art by large margin (+11.1 and +11.5 F1 on AddSent and AddOneSent, respectively). Figure 10 compares the predictions by DCN+ FULL (blue) and MINIMAL (red). While FULL selects the answer from the adversarial sentence, MINIMAL first chooses the oracle sentence, and SQuAD-Adversarial AddSent AddOneSent F1 EM Sp F1 EM Sp DCN+ FULL 52.6 46.2 x0.7 63.5 56.8 x0.7 ORACLE 84.2 75.3 x4.3 84.5 75.8 x4.3 MINIMAL 59.7 52.2 x4.3 67.5 60.1 x4.3 S-Reader FULL 57.7 51.1 x1.0 66.5 59.7 x1.0 ORACLE 82.5 74.1 x6.0 82.9 74.6 x6.0 MINIMAL 58.5 51.5 x6.0 66.5 59.5 x6.0 RaSOR 39.5 49.5 ReasoNet 39.4 50.3 Mnemonic Reader 46.6 56.0 Table 9: Results on the dev set of SQuADAdversarial. We compare with RaSOR (Lee et al., 2016), ReasoNet (Shen et al., 2017) and Mnemonic Reader (Hu et al., 2017), the previous state-of-the-art on SQuAD-Adversarial, where the numbers are from Jia and Liang (2017). subsequently predicts the correct answer. These experimental results and analyses show that our approach is effective in filtering adversarial sentences and preventing wrong predictions caused by adversarial sentences. 5 Related Work Question Answering over Documents There has been rapid progress in the task of question answering (QA) over documents along with vari1733 San Francisco mayor Ed Lee said of the highly visible homeless presence in this area ”they are going to have to leave”. Jeff Dean was the mayor of Diego Diego during Champ Bowl 40. Who was the mayor of San Francisco during Super Bowl 50? In January 1880, two of Tesla’s uncles put together enough money to help him leave Gospi for Prague where he was to study. Tadakatsu moved to the city of Chicago in 1881. 
What city did Tesla move to in 1880? Table 10: Examples on SQuAD-Adversarial. Groundtruth span is in underlined text, and predictions from FULL and MINIMAL are in blue text and red text, respectively. ous datasets and competitive approaches. Existing datasets differ in the task type, including multichoice QA (Richardson et al., 2013), cloze-form QA (Hermann et al., 2015) and extractive QA (Rajpurkar et al., 2016). In addition, they cover different domains, including Wikipedia (Rajpurkar et al., 2016; Joshi et al., 2017), news (Hermann et al., 2015; Trischler et al., 2016), fictional stories (Richardson et al., 2013; Koˇcisk`y et al., 2017), and textbooks (Lai et al., 2017; Xie et al., 2017). Many neural QA models have successfully addressed these tasks by leveraging coattention or bidirectional attention mechanisms (Xiong et al., 2018; Seo et al., 2017) to model the codependent context over the document and the question. However, Jia and Liang (2017) find that many QA models are sensitive to adversarial inputs. Recently, researchers have developed largescale QA datasets, which requires answering the question over a large set of documents in a closed (Joshi et al., 2017) or open-domain (Dunn et al., 2017; Berant et al., 2013; Chen et al., 2017; Dhingra et al., 2017). Many models for these datasets either retrieve documents/paragraphs relevant to the question (Chen et al., 2017; Clark and Gardner, 2017; Wang et al., 2018), or leverage simple non-recurrent architectures to make training and inference tractable over large corpora (Swayamdipta et al., 2018; Yu et al., 2018). Sentence selection The task of selecting sentences that can answer to the question has been studied across several QA datasets (Yang et al., 2015), by modeling relevance between a sentence and the question (Yin et al., 2016; Miller et al., 2016; Min et al., 2017). Several recent works also study joint sentence selection and question answering. Choi et al. (2017) propose a framework that identifies the sentences relevant to the question (property) using simple bag-ofwords representation, then generates the answer from those sentences using recurrent neural networks. Raiman and Miller (2017) cast the task of extractive question answering as a search problem by iteratively selecting the sentences, start position and end position. They are different from our work in that (i) we study of the minimal context required to answer the question, (ii) we choose the minimal context by selecting variable number of sentences for each question, while they use a fixed size of number as a hyperparameter, (iii) our framework is flexible in that it does not require end-to-end training and can be combined with existing QA models, and (iv) they do not show robustness to adversarial inputs. 6 Conclusion We proposed an efficient and robust QA system that is scalable to large documents and robust to adversarial inputs. First, we studied the minimal context required to answer the question in existing datasets and found that most questions can be answered using a small set of sentences. Second, inspired by this observation, we proposed a sentence selector which selects a minimal set of sentences to answer the question to give to the QA model. We demonstrated the efficiency and effectiveness of our method across five different datasets with varying sizes of source documents. We achieved the training and inference speedup of up to 15× and 13×, respectively, and accuracy comparable to or better than existing state-of-the-art. 
In addition, we showed that our approach is more robust to adversarial inputs. Acknowledgments We thank the anonymous reviewers and the Salesforce Research team members for their thoughtful comments and discussions. References Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In EMNLP. 1734 Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In ACL. Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Berant. 2017. Coarse-to-fine question answering for long documents. In ACL. Christopher Clark and Matt Gardner. 2017. Simple and effective multi-paragraph reading comprehension. arXiv preprint arXiv:1710.10723 . Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. 2017. Quasar: Datasets for question answering by search and reading. arXiv preprint arXiv:1707.03904 . Matthew Dunn, Levent Sagun, Mike Higgins, Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179 . Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple nlp tasks. In EMNLP. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation . Minghao Hu, Yuxing Peng, and Xipeng Qiu. 2017. Mnemonic reader for machine comprehension. arXiv preprint arXiv:1705.02798 . Hsin-Yuan Huang, Chenguang Zhu, Yelong Shen, and Weizhu Chen. 2018. Fusionnet: Fusing via fullyaware attention with application to machine comprehension. In ICLR. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In EMNLP. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In ACL. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1706.02596v2 . Tom´aˇs Koˇcisk`y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G´abor Melis, and Edward Grefenstette. 2017. The narrativeqa reading comprehension challenge. arXiv preprint arXiv:1712.07040 . Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. In EMNLP. Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, and Jonathan Berant. 2016. Learning recurrent span representations for extractive question answering. arXiv preprint arXiv:1611.01436 . Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In ACL. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In NIPS. Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In EMNLP. Sewon Min, Minjoon Seo, and Hannaneh Hajishirzi. 2017. Question answering through transfer learning from large fine-grained supervision data. In ACL. 
Boyuan Pan, Hao Li, Zhou Zhao, Bin Cao, Deng Cai, and Xiaofei He. 2017. Memen: Multi-layer embedding with memory networks for machine comprehension. arXiv preprint arXiv:1707.09098 . Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Jonathan Raiman and John Miller. 2017. Globally normalized reader. In EMNLP. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In EMNLP. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2017. Reasonet: Learning to stop reading in machine comprehension. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of machine learning research . 1735 Swabha Swayamdipta, Ankur P Parikh, and Tom Kwiatkowski. 2018. Multi-mention learning for reading comprehension with neural cascades. In ICLR. Chuanqi Tan, Furu Wei, Qingyu Zhou, Nan Yang, Bowen Du, Weifeng Lv, and Ming Zhou. 2018. Context-aware answer sentence selection with hierarchical gated recurrent neural networks. IEEE/ACM Transactions on Audio, Speech, and Language Processing . Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. Newsqa: A machine comprehension dataset. arXiv preprint arXiv:1611.09830 . Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2018. R3: Reinforced reader-ranker for open-domain question answering. In AAAI. Dirk Weissenborn. 2017. Reading twice for natural language understanding. CoRR abs/1706.02596. Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making neural qa as simple as possible but not simpler. In CoNLL. Qizhe Xie, Guokun Lai, Zihang Dai, and Eduard Hovy. 2017. Large-scale cloze test dataset designed by teachers. arXiv preprint arXiv:1711.03225 . Caiming Xiong, Victor Zhong, and Richard Socher. 2018. Dcn+: Mixed objective and deep residual coattention for question answering. In ICLR. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In EMNLP. Wenpeng Yin, Hinrich Schtze, Bing Xiang, and Bowen Zhou. 2016. Abcnn: Attention-based convolutional neural network for modeling sentence pairs. TACL . Adams Wei Yu, David Dohan, Quoc Le, Thang Luong, Rui Zhao, and Kai Chen. 2018. Fast and accurate reading comprehension by combining self-attention and convolution. In ICLR.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1736–1745 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1736 Denoising Distantly Supervised Open-Domain Question Answering Yankai Lin, Haozhe Ji, Zhiyuan Liu∗, Maosong Sun State Key Lab on Intelligent Technology and Systems, Department of Computer Science and Technology, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China {linyk14,jhz16}@mails.tsinghua.edu.cn, {liuzy,sms}@tsinghua.edu.cn Abstract Distantly supervised open-domain question answering (DS-QA) aims to find answers in collections of unlabeled text. Existing DS-QA models usually retrieve related paragraphs from a large-scale corpus and apply reading comprehension technique to extract answers from the most relevant paragraph. They ignore the rich information contained in other paragraphs. Moreover, distant supervision data inevitably accompanies with the wrong labeling problem, and these noisy data will substantially degrade the performance of DS-QA. To address these issues, we propose a novel DS-QA model which employs a paragraph selector to filter out those noisy paragraphs and a paragraph reader to extract the correct answer from those denoised paragraphs. Experimental results on real-world datasets show that our model can capture useful information from noisy data and achieve significant improvements on DS-QA as compared to all baselines. The source code and data of this paper can be obtained from https: //github.com/thunlp/OpenQA 1 Introduction Reading comprehension, which aims to answer questions about a document, has recently become a major focus of NLP research. Many reading comprehension systems (Chen et al., 2016; Dhingra et al., 2017a; Cui et al., 2017; Shen et al., 2017; Wang et al., 2017) have been proposed and achieved promising results since their multilayer architectures and attention mechanisms allow them to reason for the question. To some ex∗Corresponding author: Zhiyuan Liu tent, reading comprehension has shown the ability of recent neural models for reading, processing, and comprehending natural language text. Despite their success, existing reading comprehension systems rely on pre-identified relevant texts, which do not always exist in real-world question answering (QA) scenarios. Hence, reading comprehension technique cannot be directly applied to the task of open domain QA. In recent years, researchers attempt to answer opendomain questions with a large-scale unlabeled corpus. Chen et al. (2017) propose a distantly supervised open-domain question answering (DS-QA) system which uses information retrieval technique to obtain relevant text from Wikipedia, and then applies reading comprehension technique to extract the answer. Although DS-QA proposes an effective strategy to collect relevant texts automatically, it always suffers from the noise issue. For example, for the question “Which country’s capital is Dublin?”, we may encounter that: (1) The retrieved paragraph “Dublin is the largest city of Ireland ...” does not actually answer the question; (2) The second “Dublin” in the retrieved paragraph ‘Dublin is the capital of Ireland. Besides, Dublin is one of the famous tourist cities in Ireland and ...” is not the correct token of the answer. These noisy paragraphs and tokens are regarded as valid instances in DS-QA. To address this issue, Choi et al. 
(2017) separate the answer generation in DS-QA into two modules including selecting a target paragraph in document and extracting the correct answer from the target paragraph by reading comprehension. Further, Wang et al. (2018a) use reinforcement learning to train target paragraph selection and answer extraction jointly. These methods only extract the answer according to the most related paragraph, which will lose a large amount of rich information contained in 1737 p1: As the capital of Ireland, Dublin is … p3: Dublin is the capital of Ireland. Besides, Dublin is one of famous tourist cities in Ireland and ... p1: As the capital of Ireland, Dublin is … p3: Dublin is the capital of Ireland. Besides, Ottawa is one of famous tourist cities in Ireland and ... p1: As the capital of Ireland, Dublin is … p2: Ireland is an island in the North Atlantic… p3: Dublin is the capital of Ireland. Besides, Ottawa is one of famous tourist cities in Ireland and ... Question: What's the capital of Ireland? Answer: Dublin Paragraph Selector Paragraph Reader Figure 1: An overview of our model. For the question ‘What’s the capital of Dublin?”, our paragraph selector selects two paragraphs p1 and p3 which actually correspond to the question from all retrieved paragraphs. And then our paragraph reader extracts the correct answer “Dublin” (in red color) from all selected paragraphs. Finally, our system aggregates the extracted results and obtains the final answer. those neglected paragraphs. In fact, the correct answer is often mentioned in multiple paragraphs, and different aspects of the question may be answered in several paragraphs. Therefore, Wang et al. (2018b) propose to further explicitly aggregate evidence from across different paragraphs to re-rank extracted answers. However, the reranking approach still relies on the answers obtained by existing DS-QA systems, and fails to solve the noise problem of DS-QA substantially. To address these issues, we propose a coarseto-fine denoising model for DS-QA. As illustrated in Fig. 1, our system first retrieves paragraphs according to the question from a large-scale corpus via information retrieval. After that, to utilize all informative paragraphs, we adopt a fast paragraph selector to skim all retrieved paragraphs and filter out those noisy ones. And then we apply a precise paragraph reader to perform careful reading in each selected paragraph for extracting the answer. Finally, we aggregate the derived results of all chosen paragraphs to obtain the final answer. The fast skimming of our paragraph selector and intensive reading of our paragraph reader in our method enables DS-QA to denoise noisy paragraphs as well as maintaining efficiency. The experimental results on real-world datasets including Quasar-T, SearchQA and TriviaQA show that our system achieves significant and consistent improvement as compared to all baseline methods by aggregating extracted answers of all informative paragraphs. In particular, we show that our model can achieve comparable performance by selecting a few informative paragraphs, which greatly speeds up the whole DS-QA system. We will publish all source codes and datasets of this work on Github for further research explorations. 2 Related Work Question answering is one of the most important tasks in NLP. Many efforts have been invested in QA, especially in open-domain QA. Open-domain QA has been first proposed by (Green Jr et al., 1961). 
The task aims to answer open-domain questions using external resources such as collections of documents (Voorhees et al., 1999), webpages (Kwok et al., 2001; Chen and Van Durme, 2017), structured knowledge graphs (Berant et al., 2013a; Bordes et al., 2015) or automatically extracted relational triples (Fader et al., 2014). Recently, with the development of machine reading comprehension technique (Chen et al., 2016; Dhingra et al., 2017a; Cui et al., 2017; Shen et al., 2017; Wang et al., 2017), researchers attempt to answer open-domain questions via performing reading comprehension on plain texts. Chen et al. (2017) propose a DS-QA system, which retrieves relevant texts of the question from a large-scale corpus and then extracts answers from these texts using reading comprehension models. However, the retrieved texts in DS-QA are always noisy which may hurt the performance of DS-QA. Hence, Choi et al. (2017) and Wang et al. (2018a) attempt to solve the noise problem in DS-QA via separating the question answering into paragraph selection and answer extraction and they both only select the most relevant paragraph among all retrieved paragraphs to extract answers. They lose a large amount of rich information contained in those neglected paragraphs. Hence, Wang et al. (2018b) propose strength-base 1738 and coverage-based re-ranking approaches, which can aggregate the results extracted from each paragraph by existing DS-QA system to better determine the answer. However, the method relies on the pre-extracted answers of existing DS-QA models and still suffers from the noise issue in distant supervision data because it considers all retrieved paragraphs indiscriminately. Different from these methods, our model employs a paragraph selector to filter out those noisy paragraphs and keep those informative paragraphs, which can make full use of the noisy DS-QA data. Our work is also inspired by the idea of coarseto-fine models in NLP. Cheng and Lapata (2016) and Choi et al. (2017) propose a coarse-to-fine model, which first selects essential sentences and then performs text summarization or reading comprehension on the chosen sentences respectively. Lin et al. (2016) utilize selective attention to aggregate the information of all sentences to extract relational facts. Yang et al. (2016) propose a hierarchical attention network which has two levels of attentions applied at the word and sentence level for document classification. Our model also employs a coarse-to-fine model to handle the noise issue in DS-QA, which first selects informative retrieved paragraphs and then extracts answers from those selected paragraphs. 3 Methodology In this section, we will introduce our model in details. Our model aims to extract the answer to a given question in the large-scale unlabeled corpus. We first retrieve paragraphs corresponding to the question from the open-domain corpus using information retrieval technique, and then extract the answer from these retrieved paragraphs. Formally, given a question q = (q1, q2, · · · , q|q|), we retrieve m paragraphs which are defined as P = {p1, p2, · · · , pm} where pi = (p1 i , p2 i , · · · , p|pi| i ) is the i-th retrieved paragraph. Our model measures the probability of extracting answer a given question q and corresponding paragraph set P. As illustrated in Fig. 1, our model contains two parts: 1. Paragraph Selector. 
Given the question q and the retrieved paragraph P, the paragraph selector measures the probability distribution Pr(pi|q, P) over all retrieved paragraphs, which is used to select the paragraph that really contains the answer of question q. 2. Paragraph Reader. Given the question q and a paragraph pi, the paragraph reader calculates the probability Pr(a|q, pi) of extracting answer a through a multi-layer long short-term memory network. Overall, the probability Pr(a|q, P) of extracting answer a given question q can be calculated as: Pr(a|q, P) = X pi∈P Pr(a|q, pi) Pr(pi|q, P). (1) 3.1 Paragraph Selector Since the wrong labeling problem inevitably occurs in DS-QA data, we need to filter out those noisy paragraphs when exploiting the information of all retrieved paragraphs. It is straightforward that we need to estimate the confidence of each paragraph. Hence, we employ a paragraph selector to measure the probability of each paragraph containing the answer among all paragraphs. Paragraph Encoding. We first represent each word pj i in the paragraph pi as a word vector pj i, and then feed each word vector into a neural network to obtain the hidden representation ˆpj i. Here, we adopt two types of neural networks including: 1. Multi-Layer Perceptron (MLP) ˆpj i = MLP(pj i), (2) 2. Recurrent Neural Network (RNN) {ˆp1 i , ˆp2 i , · · · , ˆp|pi| i } = RNN({p1 i , p2 i , · · · , p|pi| i }), (3) where ˆpj i is expected to encode semantic information of word pj i and its surrounding words. For RNN, we select a single-layer bidirectional long short-term memory network (LSTM) as our RNN unit, and concatenate the hidden states of all layers to obtain ˆpj i. Question Encoding. Similar to paragraph encoding, we also represent each word qi in the question as its word vector qi, and then fed them into a MLP: ˆqj i = MLP(qj i), (4) or a RNN: {ˆq1, ˆq2, · · · , ˆq|q|} = RNN({q1, q2, · · · , q|q|}). (5) where ˆqj is the hidden representation of the word qj and is expected to encode the context information of it. After that, we apply a self attention operation on the hidden representations to obtain the 1739 final representation q of the question q: ˆq = X j αjˆqj, (6) where αj encodes the importance of each question word and is calculated as: αi = exp(wbqi) P j exp(wbqj), (7) where w is a learned weight vector. Next, we calculate the probability of each paragraph via a max-pooling layer and a softmax layer: Pr(pi|q, P) = softmax max j (ˆpj iWq)  , (8) where W is a weight matrix to be learned. 3.2 Paragraph Reader The paragraph reader aims to extract answers from a paragraph pi. Similar to paragraph reader, we first encode each paragraph pi as {¯p1 i , ¯p2 i , · · · , ¯p|pi| i } through a multi-layers bidirectional LSTM . And we also obtain the question embedding ¯q via a self-attention multi-layers bidirectional LSTM. The paragraph reader aims to extract the span of tokens which is most likely the correct answer. And we divide it into predicting the start and end position of the answer span. Hence, the probability of extracting answer a of the question q from the given the paragraph pi can be calculated as: Pr(a|q, pi) = Ps(as)Pe(ae), (9) where as and ae indicate the start and end positions of answer a in the paragraph, Ps(as) and Pe(ae) are the probabilities of as and ae being start and end words respectively, which is calculated by: Ps(j) = softmax(¯pj iWs¯q), (10) Pe(j) = softmax(¯pj iWe¯q), (11) where Ws and We are two weight matrices to be learned. 
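The selector and reader distributions above can be written down in a few lines. The sketch below (Python with NumPy) assumes the paragraph and question encodings have already been produced by the encoders; array shapes and variable names are illustrative rather than taken from the released code.

import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def selector_probs(P_hat, q_hat, W):
    # Eq. (8): bilinear score for every word, max-pool over words,
    # then softmax across the m retrieved paragraphs.
    # P_hat: list of (L_i, h) word representations, one array per paragraph
    # q_hat: (h,) self-attended question representation; W: (h, h)
    scores = np.array([(p @ W @ q_hat).max() for p in P_hat])
    return softmax(scores)            # Pr(p_i | q, P)

def span_probs(p_bar, q_bar, Ws, We):
    # Eqs. (10)-(11): independent start and end distributions over positions.
    # p_bar: (L, h) paragraph encoding, q_bar: (h,) question encoding
    Ps = softmax(p_bar @ Ws @ q_bar)
    Pe = softmax(p_bar @ We @ q_bar)
    return Ps, Pe                     # Pr_s(j), Pr_e(j)

def answer_prob(reader_probs, sel_probs):
    # Eq. (1): marginalize the per-paragraph answer probability
    # Pr(a | q, p_i) over the selector distribution Pr(p_i | q, P).
    return float(np.dot(reader_probs, sel_probs))

Treating the selector output as a distribution over paragraphs is what lets the model combine evidence from several paragraphs instead of committing to a single most relevant one.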
In DS-QA, since we didn’t label the position of the answer manually, we may have several tokens matched to the correct answer in a paragraph. Let {(a1 s, a1 e), (a2 s, a2 e), · · · , (a|a| s , a|a| e )} be the set of the start and end positions of the tokens matched to answer a in the paragraph pi. The equation (9) is further defined using two ways: (1) Max. That is, we assume that only one token in the paragraph indicates the correct answer. In this way, the probability of extracting the answer a can defined by maximizing the probability of all candidate tokens: Pr(a|q, pi) = max j Pr s (aj s) Pr e (aj e) (12) (2) Sum. In this way, we regard all tokens matched to the correct answer equally. And we define the answer extraction probability as: Pr(a|q, pi) = X j Pr s (aj s) Pr e (aj e). (13) Our paragraph reader model is inspired by a previous machine reading comprehension model, Attentive Reader described in (Chen et al., 2016). In fact, other reading comprehension models can also be easily adopted as our paragraph reader. Due to the space limit, in this paper, we only explore the effectiveness of Attentive Reader. 3.3 Learning and Prediction For the learning objective, we define a loss function L using maximum likelihood estimation: L(θ) = − X (¯a,q,P)∈T log Pr(a|q, P) −αR(P), (14) where θ indicates the parameters of our model, a indicates the correct answer, T is the whole training set and R(P) is a regularization term over the paragraph selector to avoid its overfitting. Here, R(P) is defined as the KL divergence between Pr(pi|q, P) and a probability distribution X where Xi = 1 cP (cP is the number of paragraphs containing correct answer in P) if the paragraph contains correct answer, otherwise 0. Specifically, R(P) is defined as: R(P) = X pi∈P Xi log Xi Pr(pi|q, P). (15) To solve the optimization problem, we adopt Adamax to minimize the objective function as described in (Kingma and Ba, 2015). During testing, we extract the answer ˆa with the highest probability as below: ˆa = arg max a Pr(a|q, P) = arg max a X pi∈P Pr(a|q, pi) Pr(pi|q, P).(16) 1740 Here, the paragraph selector can be viewed as a fast skimming over all paragraphs, which determines the probability distribution of containing the answer for each paragraph. Hence, we can simply aggregate the predicting results from those paragraphs with higher probabilities for acceleration. 4 Experiments 4.1 Datasets and Evaluation Metrics We evaluate our model on five public open-domain question answering datasets. Quasar-T1 (Dhingra et al., 2017b) consists of 43, 000 open-domain trivia question, and their answers are extracted from ClueWeb09 data source, and the paragraphs are obtained by retrieving 50 sentences for each question from the ClueWeb09 data source using LUCENE. SearchQA2 (Dunn et al., 2017) is a large-scale open domain question answering dataset, which consists of question-answer pairs crawled from J! Archive, and the paragraphs are obtained by retrieving 50 webpages for each question from Google Search API. TriviaQA3 (Joshi et al., 2017) includes 95, 000 question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, and utilizes Bing Web search API to collect 50 webpages related to the questions. CuratedTREC4 (Voorhees et al., 1999) is based on the benchmark from the TREC QA tasks, which contains 2, 180 questions extracted from the datasets from TREC1999, 2000, 2001 and 2002. 
WebQuestions5 (Berant et al., 2013b) is designed for answering questions from the Freebase knowledge base, which is built by crawling questions through the Google Suggest API and the paragraphs are retrieved from the English Wikipedia using . For Quasar-T, SearchQA and TriviaQA datasets, we use the retrieved paragraphs provided by (Wang et al., 2018a). For CuratedTREC and WebQuestions datasets, We use the 2016-12-21 1https://github.com/bdhingra/quasar 2https://github.com/nyu-dl/SearchQA 3http://nlp.cs.washington.edu/ triviaqa/ 4https://github.com/brmson/ dataset-factoid-curated/tree/master/trec 5https://github.com/brmson/ dataset-factoid-webquestions dump of English Wikipedia as our knowledge source used to answer the question and then build a Lucene index system on it. After that, we take each input question as a query to retrieve top-50 paragraphs. The statistics of these datasets are shown in Table 1. Dataset #Train #Dev #Test Quasar-T 28,496 3,000 3,000 SearchQA 99,811 13,893 27,247 TriviaQA 66,828 11,313 10,832 CuratedTREC 1,486 694 WebQuestions 3,778 2,032 Table 1: Statistics of the dataset. Following (Chen et al., 2017), we adopt two metrics including ExactMatch (EM) and F1 scores to evaluate our model. EM measures the percentage of predictions that match one of the ground truth answers exactly and F1 score is a metric that loosely measures the average overlap between the prediction and ground truth answer. 4.2 Baselines For comparison, we select several public models as baselines including: (1) GA (Dhingra et al., 2017a), a reading comprehension model which performs multiple hops over the paragraph with gated attention mechanism; (2) BiDAF (Seo et al., 2017), a reading comprehension model with a bi-directional attention flow network. (3) AQA (Buck et al., 2017), a reinforced system learning to re-write questions and aggregate the answers generated by the re-written questions; (4) R3 (Wang et al., 2018a), a reinforced model making use of a ranker for selecting most confident paragraph to train the reading comprehension model. And we also compare our model with its naive version, which regards each paragraph equally and sets a uniform distribution to the paragraph selection. We name our model as “Our+FULL” and its naive version “Our+AVG”. 4.3 Experimental Settings In this paper, we tune our model on the development set and use a grid search to determine the optimal parameters. We select the hidden size of LSTM n ∈{32, 64, 128, · · · , 512}, the number of LSTM layers for document and question encoder among {1, 2, 3, 4}, regularization weight α among {0.1, 0.5, 1.0, 2.0} and the batch size among {4, 8, 16, 32, 64, 128}. The optimal parameters are highlighted with bold faces. For other 1741 parameters, since they have little effect on the results, we simply follow the settings used in (Chen et al., 2017). For training, our Our+FULL model is first initialized by pre-training using Our+AVG model, and we set the iteration number over all the training data as 10. For pre-trained word embeddings, we use the 300-dimensional GloVe6 (Pennington et al., 2014) word embeddings learned from 840B Web crawl data. 4.4 Effect of Different Paragraph Selectors As our model incorporates different types of neural networks including MLP and RNN as our paragraph selector, we investigate the effect of different paragraph selector on the Quasar-T and SearchQA development set. As shown in Table 3, our RNN paragraph selector leads to statistically significant improvements on both Quasar-T and SearchQA. 
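As a side note on the metrics adopted in Section 4.1: EM and F1 follow the token-overlap definitions used by Chen et al. (2017) for open-domain QA. The sketch below is our own; the normalization step (lowercasing, stripping punctuation and articles) is the usual convention and an assumption on our part, and the official evaluation scripts may differ in details.

import re
import string
from collections import Counter

def normalize(text):
    # Lowercase, remove punctuation and articles, collapse whitespace (SQuAD-style convention).
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, ground_truths):
    return float(any(normalize(prediction) == normalize(gt) for gt in ground_truths))

def f1_score(prediction, ground_truth):
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def max_f1(prediction, ground_truths):
    # When several reference answers exist, the maximum F1 over them is reported.
    return max(f1_score(prediction, gt) for gt in ground_truths)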
Note that Our+FULL which uses MLP paragraph selector even performs worse on Quasar-T dataset as compared to Our+AVG. It indicates that MLP paragraph selector is insufficient to distinguish whether a paragraph answers the question. As RNN paragraph selector consistently improves all evaluation metrics, we use it as the default paragraph selector in the following experiments. 4.5 Effect of Different Paragraph Readers Here, we compare the performance of different types of paragraph readers and the results are shown in Table 4. From the table, we can see that all models with Sum or Max paragraph readers have comparable performance in most cases, but Our+AVG with Max reader has about 3% increment as compared to the one with Sum reader on the SearchQA dataset. It indicates that the Sum reader is more susceptible to noisy data since it regards all tokens matching to the answer as ground truth. In the following experiments, we select the Max reader as our paragraph reader since it is more stable. 4.6 Overall Results In this part, we will show the performance of different models on five DS-QA datasets and offer some further analysis. The performance of our models are shown in Table 2. From the results, we can observe that: 6http://nlp.stanford.edu/data/glove. 840B.300d.zip (1) Both our models including Our+AVG and Our+FULL achieve better results on most of the datasets as compared to other baselines. The reason is that our models can make full use of the information of all retrieved paragraphs to answer the question, while other baseline models only consider the most relevant paragraph. It verifies our claim that incorporating the rich information of all retrieved paragraphs could help us better extract the answer to the question. (2) On all datasets, Our+FULL model outperforms Our+AVG model significantly and consistently. It indicates that our paragraph selector could effectively filter out those meaningless retrieved paragraphs and alleviate the wrong labeling problem in DS-QA. (3) On TriviaQA dataset, our+AVG model has worse performance as compared to R3 model. After observing the TriviaQA dataset, we find that in this dataset only one or two retrieved paragraphs actually contain the correct answer. Therefore, simply using all retrieved paragraphs equally to extract answer may bring in much noise. On the contrary, Our+FULL model still has a slight improvement by considering the confidence of each retrieved paragraph. (4) On CuratedTREC and WebQuestions datasets, our model only has a slight improvement as compared to R3 model. The reason is that the size of these two datasets is tiny and the performance of these DS-QA systems is heavily influenced by the gap with the dataset used to pre-trained. 4.7 Paragraph Selector Performance Analysis To demonstrate the effectiveness of our paragraph selector in filtering out those noisy retrieved paragraphs, we compare our paragraph selector with traditional information retrieval7 (IR) in this part. We also compare our model with a new baseline named Our+INDEP which trains the paragraph reader and the paragraph selector independently. To train the paragraph selector, we regard all the paragraph containing the correct answer as ground truth and learns it with Eq. 14. First, we show the performance in selecting informative paragraphs. Since distantly supervised data doesn’t have the labeled ground-truth to tell 7The information retrieval model ranks the paragraph with BM25 which is implemented by Lucene. 
1742 Datasets Quasar-T SearchQA TriviaQA CuratedTREC WebQuestions Models EM F1 EM F1 EM F1 REM EM F1 GA (Dhingra et al., 2017a) 26.4 26.4 BiDAF (Seo et al., 2017) 25.9 28.5 28.6 34.6 AQA (Buck et al., 2017) 40.5 47.4 R3 (Wang et al., 2018a) 35.3 41.7 49.0 55.3 47.3 53.7 28.4 17.1 24.6 Our + AVG 38.5 45.7 55.6 61.0 42.6 48.2 28.6 17.8 24.5 + FULL 42.2 49.3 58.8 64.5 48.7 56.3 29.1 18.5 25.6 Table 2: Experimental results on four open-domain QA test datasets: Quasar-T, SearchQA, TriviaQA, CuratedTREC and WebQuestions. TriviaQA, CuratedTREC and WebQuestions do not provide the leader board under the open-domain setting. Therefore, there is no public baselines in this setting and we only report the result of the DrQA and R3 baseline. CuratedTREC dataset is evaluated by regular expression matching (REM). Datasets Quasar-T SearchQA Models Selector EM F1 EM F1 Our + AVG 38.6 45.8 57.3 62.7 + FULL MLP 37.1 43.5 59.9 65.1 + FULL RNN 41.7 49.1 62.3 67.9 Table 3: Effect of Different Paragraph Selector on the Quasar-T and SearchQA development set. Datasets Quasar-T SearchQA Models Reader EM F1 EM F1 Our + AVG Max 38.6 45.8 57.3 62.7 + FULL 41.7 49.1 62.3 67.9 Our + AVG Sum 39.1 46.3 54.0 59.4 + FULL 42.3 49.4 61.9 67.4 Table 4: Effect of Different Paragraph Reader on the Quasar-T and SearchQA development set. The paragraph selector used in Our+FULL is RNN. which paragraphs actually answer the question, we adopt a held-out evaluation instead. It evaluates our model by comparing the selected paragraph with pseudo labels: we regard a paragraph as ground-truth if it contains a token matched to the correct answer. We use Hit@N which indicates the proportion of proper paragraphs being ranked in top-N as evaluation metrics. The result is shown in Table 5. From the table, we can observe that: (1) Both Our+INDEP and Our+FULL outperform traditional IR model significantly in selecting informative paragraphs. It indicates that our proposed paragraph selector is capable of catching the semantic correlation between question and paragraphs. (2) Our+FULL has similar performance as compare with Our+SINGLE from Hits@1 to Hits@5 to select valid paragraphs. The reason is that the way of our evaluation of paragraph selection is consistent with the training objective of the ranker in Our+SINGLE. In fact, this way of evaluation may be not enough to distinguish the performance of different paragraph selector. Therefore, we further report the overall answer extraction performance of Our+FULL and Our+INDEP. From the table, we can see that Our+FULL performs better in answer extraction as compared to Our+SINGLE although they have similar performance in paragraph selection. It demonstrates that our paragraph selector can better determine which tokens matched to the answer are actually answering the question by joint training with paragraph reader. 4.8 Performance with different numbers of paragraphs Our paragraph selector can be viewed as a fast skimming step before carefully reading the paragraphs. To show how much our paragraph selector can accelerate the DS-QA system, we compare the performance of our model with top paragraphs selected by our paragraph selector (Our+FULL) or traditional IR model. The results are shown in Fig. 2. There is no doubt that with the number of paragraphs increasing, the performance of our+IR and our+FULL model will increase significantly. 
From the figure, we can see that on both the Quasar-T and SearchQA datasets, Our+FULL can use only half of the retrieved paragraphs for answer extraction without performance deterioration, while Our+IR suffers a significant performance drop as the number of paragraphs decreases. This demonstrates that our model can extract the answer from only a few informative paragraphs chosen by the paragraph selector, which speeds up the whole DS-QA system.

[Figure 2: Performance with different numbers of top paragraphs on Quasar-T (top) and SearchQA (bottom): Exact Match (%) against the number of retrieved paragraphs (0-50) for Our+IR and Our+FULL.]

Quasar-T (Paragraph Selection: Hits@1 / Hits@3 / Hits@5; Overall: EM / F1)
  IR           6.3 / 10.9 / 15.2     Overall: --
  Our + INDEP  26.8 / 36.3 / 41.9    Overall: 40.6 / 46.9
  Our + FULL   27.7 / 36.8 / 42.6    Overall: 41.1 / 48.0
SearchQA (Paragraph Selection: Hits@1 / Hits@3 / Hits@5; Overall: EM / F1)
  IR           13.7 / 24.1 / 32.7    Overall: --
  Our + INDEP  59.2 / 70.0 / 75.7    Overall: 57.0 / 62.3
  Our + FULL   58.9 / 69.8 / 75.5    Overall: 58.8 / 64.5
Table 5: Comparison of our paragraph selector and the traditional information retrieval model in paragraph selection. The Our+AVG and Our+FULL models used on the WebQuestions dataset are pre-trained on the Quasar-T dataset.

Question: Who directed the 1946 'It's A Wonderful Life'?  Ground Truth: Frank Capra
  Paragraph 1: It's a Wonderful Life (1946): directed by Frank Capra, starred by James Stewart, Donna Reed ...
  Paragraph 2: It's a Wonderful Life, the 1946 film produced and directed by Frank Capra and starring ...
  Paragraph 3: It's a Wonderful Life Guajara in other languages: Spanish, Deutsch, French, Italian ...
Question: What famous artist could write with both his left and right hand at the same time?  Ground Truth: Leonardo Da Vinci
  Paragraph 1: Leonardo Da Vinci was and is best known as an artist, ...
  Paragraph 2: ... the reason Leonardo da Vinci used his left hand exclusively was that his right hand was paralyzed.
  Paragraph 3: ... forced me to use my right-hand, ... beat my left-hand fingers with ... so that i use the right hand.
Table 6: Examples of the answers to the given questions extracted by our model. The tokens in bold are the extracted answers in each paragraph. The paragraphs are sorted according to the probabilities output by our paragraph selector.

4.9 Potential improvement
To show the potential improvement from aggregating extracted answers with answer re-ranking models in our DS-QA system, we provide a statistical analysis of the upper bound of our system performance on the development set. Here, we compare our model with the R3 model by evaluating the F1/EM scores among the top-k extracted answers. This top-k performance can be viewed as the upper bound of our system when re-ranking the top-k extracted answers.

Model        TOP-k   Quasar-T (EM / F1)   SearchQA (EM / F1)
R3           1       35.3 / 41.6          51.2 / 57.3
             3       46.2 / 53.5          63.9 / 68.9
             5       51.0 / 58.9          69.1 / 73.9
             10      56.1 / 64.8          75.5 / 79.6
Our + FULL   1       42.2 / 49.3          58.8 / 67.4
             3       53.1 / 62.0          72.9 / 77.4
             5       56.4 / 66.4          76.9 / 81.0
             10      60.7 / 71.3          81.2 / 85.1
Table 7: Potential improvement in DS-QA performance by answer re-ranking. The performance is based on the Quasar-T and SearchQA development sets.

From Table 7, we can see that: (1) There is a clear gap (10-20%) between the top-3/5 and the top-1 DS-QA performance. It indicates that our DS-QA model is still far from this upper bound and is therefore likely to benefit from answer re-ranking.
(2) The Our+FULL model outperforms R3 model in top-1, top-3 and top-5 on both Quasar-T and SearchQA datasets by 5% to 7%. It indicates that aggregating the information from all informative paragraphs can effectively enhance our model in DS-QA, which is more potential using answer re-ranking. 1744 4.10 Case Study Table 6 shows two examples of our models, which illustrates that our model can make full use of informative paragraphs. From the table we find that: (1) For the question “Who directed the 1946 ‘It’s A Wonderful Life’?”, our model extracts the answer “Frank Capra” from both top-2 paragraphs ranked by our paragraph selector. (2) For the question “What famous artist could write with both his left and right hand at the same time?”, our model identifies that “Leonardo Da Vinci” is an artist from the first paragraph and could write with both his left and right hand at the same time from the second paragraph. 5 Conclusion and Future Work In this paper, we propose a denoising distantly supervised open-domain question answering system which contains a paragraph selector to skim over paragraphs and a paragraph reader to perform an intensive reading on the selected paragraphs. Our model can make full use of all informative paragraphs and alleviate the wrong labeling problem in DS-QA. In the experiments, we show that our models significantly and consistently outperforms state-of-the-art DS-QA models. In particular, we demonstrate that the performance of our model is hardly compromised when only using a few topselected paragraphs. In the future, we will explore the following directions: (1) An additional answer re-ranking step can further improve our model. We will explore how to effectively re-rank our extracted answers to further enhance the performance. (2) Background knowledge such as factual knowledge, common sense knowledge can effectively help us in paragraph selection and answer extraction. We will incorporate external knowledge bases into our DS-QA model to improve its performance. Acknowledgments This work is supported by the National Natural Science Foundation of China (NSFC No. 61572273, 61661146007 and 61572273). This paper is also partially funded by Microsoft Research Asia FY17-RES-THEME-017. References Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013a. Semantic parsing on freebase from question-answer pairs. In Proceedings of EMNLP. pages 1533–1544. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013b. Semantic parsing on Freebase from question-answer pairs. In Proceedings of EMNLP. pages 1533–1544. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075 . Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Andrea Gesmundo, Neil Houlsby, Wojciech Gajewski, and Wei Wang. 2017. Ask the right questions: Active question reformulation with reinforcement learning. arXiv preprint arXiv:1705.07830 . Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of ACL. pages 2358–2367. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In Proceedings of the ACL. pages 1870–1879. Tongfei Chen and Benjamin Van Durme. 2017. Discriminative information retrieval for question answering sentence selection. In Proceedings of EACL. pages 719–725. Jianpeng Cheng and Mirella Lapata. 2016. 
Neural summarization by extracting sentences and words. In Proceedings of ACL. pages 484–494. Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Berant. 2017. Coarse-to-fine question answering for long documents. In Proceedings of ACL. pages 209– 220. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2017. Attention-overattention neural networks for reading comprehension. In Proceedings of ACL. pages 593–602. Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2017a. Gatedattention readers for text comprehension. In Proceedings of ACL. pages 1832–1846. Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. 2017b. Quasar: Datasets for question answering by search and reading. arXiv preprint arXiv:1707.03904 . Matthew Dunn, Levent Sagun, Mike Higgins, Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179 . 1745 Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of SIGKDD. pages 1156–1165. Bert F Green Jr, Alice K Wolf, Carol Chomsky, and Kenneth Laughery. 1961. Baseball: an automatic question-answerer. In Proceedings of IRE-AIEEACM. pages 219–224. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of ACL. pages 1601–1611. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR. Cody Kwok, Oren Etzioni, and Daniel S Weld. 2001. Scaling question answering to the web. TOIS pages 242–262. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of ACL. pages 2124–2133. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP. pages 1532–1543. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In Proceedings of ICLR. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2017. Reasonet: Learning to stop reading in machine comprehension. In Proceedings of SIGKDD. ACM, pages 1047–1055. Ellen M Voorhees et al. 1999. The trec-8 question answering track report. In Proceedings of TREC. pages 77–82. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2018a. R3: Reinforced ranker-reader for open-domain question answering. In Proceedings of AAAI. Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. 2018b. Evidence aggregation for answer re-ranking in open-domain question answering. In Proceedings of ICLR. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of ACL. pages 189–198. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of NAACL. pages 1480–1489.
2018
161
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1746–1755 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1746 Question Condensing Networks for Answer Selection in Community Question Answering Wei Wu1, Xu Sun1, Houfeng Wang1,2 1MOE Key Lab of Computational Linguistics, Peking University, Beijing, 100871, China 2Collaborative Innovation Center for Language Ability, Xuzhou, Jiangsu, 221009, China {wu.wei, xusun, wanghf}@pku.edu.cn Abstract Answer selection is an important subtask of community question answering (CQA). In a real-world CQA forum, a question is often represented as two parts: a subject that summarizes the main points of the question, and a body that elaborates on the subject in detail. Previous researches on answer selection usually ignored the difference between these two parts and concatenated them as the question representation. In this paper, we propose the Question Condensing Networks (QCN) to make use of the subject-body relationship of community questions. In this model, the question subject is the primary part of the question representation, and the question body information is aggregated based on similarity and disparity with the question subject. Experimental results show that QCN outperforms all existing models on two CQA datasets. 1 Introduction Community question answering (CQA) has seen a spectacular increase in popularity in recent years. With the advent of sites like Stack Overflow1 and Quora2, more and more people can freely ask any question and expect a variety of answers. With the influx of new questions and the varied quality of provided answers, it is very time-consuming for a user to inspect them all. Therefore, developing automated tools to identify good answers for a question is of practical importance. A typical example for CQA is shown in Table 1. In this example, Answer 1 is a good answer, because it provides helpful information, e.g., “check 1https://stackoverflow.com/ 2https://www.quora.com/ it to the traffic dept”. Although Answer 2 is relevant to the question, it does not contain any useful information so that it should be regarded as a bad answer. From this example, we can observe two characteristics of CQA that ordinary QA does not possess. First, a question includes both a subject that gives a brief summary of the question and a body that describes the question in detail. The questioners usually convey their main concern and key information in the question subject. Then, they provide more extensive details about the subject, seek help, or express gratitude in the question body. Second, the problem of redundancy and noise is prevalent in CQA (Zhang et al., 2017). Both questions and answers contain auxiliary sentences that do not provide meaningful information. Previous researches (Tran et al., 2015; Joty et al., 2016) usually treat each word equally in the question and answer representation. However, due to the redundancy and noise problem, only part of text from questions and answers is useful to determine the answer quality. To make things worse, they ignored the difference between question subject and body, and simply concatenated them as the question representation. Due to the subject-body relationship described above, this simple concatenation can aggravate the redundancy problem in the question. In this paper, we propose the Question Condensing Networks (QCN) to address these problems. 
In order to utilize the subject-body relationship in community questions, we propose to treat the question subject as the primary part of the question, and aggregate the question body information based on similarity and disparity with the question subject. The similarity part corresponds to the information that exists in both question subject and body, and the disparity part corresponds to the additional information provided by the ques1747 Question Subject Checking the history of the car. Question body How can one check the history of the car like maintenance, accident or service history. In every advertisement of the car, people used to write “Accident Free", but in most cases, car have at least one or two accident, which is not easily detectable through Car Inspection Company. Share your opinion in this regard. Answer1 Depends on the owner of the car.. if she/he reported the accident/s i believe u can check it to the traffic dept.. but some owners are not doing that especially if its only a small accident.. try ur luck and go to the traffic dept.. Answer2 How about those who claim a low mileage by tampering with the car fuse box? In my sense if you’re not able to detect traces of an accident then it is probably not worth mentioning... For best results buy a new car :) Table 1: An example question and its related answers in CQA. The text is shown in its original form, which may contain errors in typing. tion body. Both information can be important for question representation. In our model, they are processed separately and the results are combined to form the final question representation. In order to reduce the impact of redundancy and noise in both questions and answers, we propose to align the question-answer pairs using the multi-dimensional attention mechanism. Different from previous attention mechanisms that compute a scalar score for each token pair, multidimensional attention, first proposed in Shen et al. (2018), computes one attention score for each dimension of the token embedding. Therefore, it can select the features that can best describe the word’s specific meaning in the given context. Therefore, we can learn the interaction between questions and answers more accurately. The main contributions of our work can be summarized as follows: • We propose to treat the question subject and the question body separately in community question answering. We treat the question subject as the primary part of the question, and aggregate the question body information based on similarity and disparity with the question subject. • We introduce a new method that uses the multi-dimensional attention mechanism to align question-answer pair. With this attention mechanism, the interaction between questions and answers can be learned more accurately. • Our proposed Question Condensing Networks (QCN) achieves the state-of-the-art performance on two SemEval CQA datasets, outperforming all exisiting SOTA models by a large margin, which demonstrates the effectiveness of our model.3 2 Task Description A community question answering consists of four parts, which can be formally defined as a tuple of four elements (S, B, C, y). S = [s1, s2, ..., sl] denotes the subject of a question whose length is l, where each si is a one-hot vector whose dimension equals the size of the vocabulary. Similarly, B = [b1, b2, ..., bm] denotes the body of a question whose length is m. C = [c1, c2, ..., cn] denotes an answer corresponding to that question whose length is n. 
y ∈Y is the label representing the degree to which it can answer that question. Y = {Good, PotentiallyUseful, Bad} where Good indicates the answer can answer that question well, PotentiallyUseful indicates the answer is potentially useful to the user, and Bad indicates the answer is just bad or useless. Given {S, B, C}, the task of CQA is to assign a label to each answer based on the conditional probability Pr(y|S, B, C). 3 Proposed Model In this paper, we propose Question Condensing Networks (QCN) which is composed of the following modules. The overall architecture of our model is illustrated in Figure 1. 3An implementation of our model is available at https: //github.com/pku-wuwei/QCN. 1748 MLP 𝑆𝑒𝑚𝑏 𝐵𝑒𝑚𝑏 𝐶𝑒𝑚𝑏 𝐶𝑟𝑒𝑝 𝑆𝑝𝑎𝑟𝑎 𝑆r𝑒𝑝 𝑆𝑜𝑟𝑡ℎ 𝐶𝑎𝑡𝑡 𝑆𝑎𝑡𝑡 𝒔𝑠𝑢𝑚 𝒄𝑠𝑢𝑚 𝑦 𝑐𝑜𝑛𝑐𝑎𝑡 𝑝𝑟𝑜𝑗𝑒𝑐𝑡 𝑐𝑜𝑛𝑐𝑎𝑡 Figure 1: Architecture for Question Condensing Network (QCN). Each block represents a vector. 3.1 Word-Level Embedding Word-level embeddings are composed of two components: GloVe (Pennington et al., 2014) word vectors trained on the domain-specific unannotated corpus provided by the task 4, and convolutional neural network-based character embeddings which are similar to (Kim et al., 2016). Web text in CQA forums differs largely from normalized text in terms of spelling and grammar, so specifically trained GloVe vectors can model word interactions more precisely. Character embedding has proven to be very useful for out-of-vocabulary (OOV) words, so it is especially suitable for noisy web text in CQA. We concatenate these two embedding vectors for every word to generate word-level embeddings Semb ∈Rd×l, Bemb ∈Rd×m, Cemb ∈Rd×n, where d is the word-level embedding size. 3.2 Question Condensing In this section, we condense the question representation using subject-body relationship. In most cases, the question subject can be seen as a summary containing key points of the question, the question body is relatively lengthy in that it needs to explain the key points and add more details about the posted question. We propose to cheat the question subject as the primary part of the question representation, and aggregate question body information from two perspectives: similarity and disparity with the question subject. To achieve this goal, we use an orthogonal decomposition strategy, which is first proposed by Wang et al. (2016), to decompose each question body embedding into a parallel component and an orthogonal compo4http://alt.qcri.org/semeval2015/ task3/index.php?id=data-and-tools nent based on every question subject embedding: bi,j para = bj emb · si emb si emb · si emb si emb (1) bi,j orth = bj emb −bi,j para (2) All vectors in the above equations are of length d. Next we describe the process of aggregating the question body information based on the parallel component in detail. The same process can be applied to the orthogonal component, so at the end of the fusion gate we can obtain Sorth and Sorth respectively. The decomposed components are passed through a fully connected layer to compute the multi-dimensional attention weights. Here we use the scaled tanh activation, which is similar to Shen et al. (2018), to prevent large difference among scores while it still has a range large enough for output: ai,j para = c · tanh  Wp1bi,j para + bp1  /c  (3) where Wp1 ∈Rd×d and bp1 ∈Rd are parameters to be learned, and c is a hyper-parameter to be tuned. 
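Before the normalization step described next, the orthogonal decomposition of Eqs. (1)-(2) and the multi-dimensional scoring of Eq. (3) can be sketched as follows for a single question with subject length l, body length m and embedding size d. This is a simplified PyTorch-style illustration under our own naming; the default value of c is an assumption, since the paper only states that c is a tuned hyper-parameter, and the authors' released implementation (footnote 3) remains the authoritative reference.

import torch
import torch.nn as nn

def decompose(subject, body):
    # subject: (l, d), body: (m, d); returns parallel and orthogonal components of shape (l, m, d).
    s = subject.unsqueeze(1)                      # (l, 1, d)
    b = body.unsqueeze(0)                         # (1, m, d)
    coeff = (b * s).sum(-1, keepdim=True) / (s * s).sum(-1, keepdim=True)  # (l, m, 1), Eq. (1)
    parallel = coeff * s                          # projection of each body word onto each subject word
    orthogonal = b - parallel                     # the remaining, dissimilar information, Eq. (2)
    return parallel, orthogonal

class MultiDimScore(nn.Module):
    # One attention score per embedding dimension with the scaled tanh activation, Eq. (3).
    def __init__(self, d, c=5.0):                 # c is assumed here; the paper tunes it
        super().__init__()
        self.proj = nn.Linear(d, d)
        self.c = c

    def forward(self, decomposed):                # (l, m, d) -> (l, m, d)
        return self.c * torch.tanh(self.proj(decomposed) / self.c)

The scores produced by MultiDimScore correspond to the alignment tensor that is normalized over the question-body dimension in the next step.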
The obtained word-level alignment tensor Apara ∈Rd×l×m is then normalized along the third dimension to produce the attention weights over the question body for each word in the question subject. The output of this attention mechanism is a weighted sum of the question body embeddings for each word in the question subject: wi,j para = exp  ai,j para  Pm j=1 exp  ai,j para  (4) si ap = m X j=1 wi,j para ⊙bj emb (5) 1749 where ⊙means point-wise product. This multidimensional attention mechanism has the advantage of selecting features of a word that can best describe the word’s specific meaning in the given context. In order to determine the importance between the original word in the question subject and the aggregated information from the question body with respect to this word, a fusion gate is utilized to combine these two representations: Fpara = σ (Wp2Semb + Wp3Sap + bp2) (6) Spara = Fpara ⊙Semb + (1 −Fpara) ⊙Sap (7) where Wp2, Wp3 ∈ Rd×d, and bp2 ∈ Rd are learnable parameters of the fusion gate, and Fpara, Semb, Sap, Spara ∈Rd×l. The final question representation Srep ∈R2d×l is obtained by concatenating Spara and Sorth along the first dimension. 3.3 Answer Preprocessing This module has two purposes. First, we try to map each answer word from embedding space Cemb ∈ Rd×n to the same interaction space Crep ∈R2d×n as the question. Second, similar to Wang and Jiang (2017), a gate is utilized to control the importance of different answer words in determining the question-answer relation: Crep =σ (Wc1Cemb + bc1) ⊙ tanh (Wc2Cemb + bc2) (8) where Wc1, Wc2 ∈Rd×2d and bc1, bc2 ∈R2d are parameters to be learned. 3.4 Question Answer Alignment We apply the multi-dimensional attention mechanism to the question and answer representation Srep and Crep to obtain word-level alignment tensor Aalign ∈R2d×l×n. Similar to the multi-dimensional attention mechanism described above, we can compute attention weights and weighted sum for both the question representation and the answer representation : ˜ai,j align = Wa1si rep + Wa2cj rep + ba (9) ai,j align = c · tanh  ˜ai,j align/c  (10) si ai = n X j=1 exp  ai,j align  Pn j=1 exp  ai,j align  ⊙cj rep (11) cj ai = l X i=1 exp  ai,j align  Pl i=1 exp  ai,j align  ⊙si rep (12) where Wa1, Wa2 ∈R2d×2d and ba ∈R2d are parameters to be learned. To attenuate the effect of incorrect attendance, input and output of this attention mechanism are concatenated and fed to the subsequent layer. Finally, we obtain the question and answer representation Satt ∈R4d×l = [Srep; Sai], Catt ∈R4d×n = [Crep; Cai]. 3.5 Interaction Summarization In this layer, the multi-dimensional self-attention mechanism is employed to summarize two sequences of vectors (Satt and Catt) into two fixedlength vectors ssum ∈R4d and csum ∈R4d. As = Ws2tanh (Ws1Satt + bs1) + bs2 (13) ssum = n X i=1 exp ai s  Pn i=1 exp (ais) ⊙si att (14) where Ws1, Ws2 ∈R4d×4d and bs1, bs2 ∈R4d are parameters to be learned. The same process can be applied to Catt and obtain csum. 3.6 Prediction In this component, ssum and csum are concatenated and fed into a two-layer feed-forward neural network. At the end of the last layer, the softmax function is applied to obtain the conditional probability distribution Pr(y|S, B, C). 4 Experimental Setup 4.1 Datasets We use two community question answering datasets from SemEval (Nakov et al., 2015, 2017) to evaluate our model. The statistics of these datasets are listed in Table 2. The corpora contain data from the QatarLiving forum 5, and are publicly available on the task website. 
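To round out the model sketch before turning to the experimental setup, the fusion gate of Eqs. (6)-(7) and the multi-dimensional self-attentive summarization of Eqs. (13)-(14) can be illustrated as below. As before, this is a simplified PyTorch-style sketch with our own variable names and shape conventions, not the authors' code.

import torch
import torch.nn as nn

class FusionGate(nn.Module):
    # Blends each subject word embedding with the body information aggregated for it, Eqs. (6)-(7).
    def __init__(self, d):
        super().__init__()
        self.w_s = nn.Linear(d, d)
        self.w_a = nn.Linear(d, d)

    def forward(self, s_emb, s_agg):              # both (l, d)
        gate = torch.sigmoid(self.w_s(s_emb) + self.w_a(s_agg))
        return gate * s_emb + (1.0 - gate) * s_agg

class SelfAttentiveSummary(nn.Module):
    # Collapses a sequence of vectors into one fixed-length vector, Eqs. (13)-(14).
    def __init__(self, d):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(d, d), nn.Tanh(), nn.Linear(d, d))

    def forward(self, seq):                       # (n, d) -> (d,)
        weights = torch.softmax(self.score(seq), dim=0)   # multi-dimensional: one weight per dimension
        return (weights * seq).sum(dim=0)

The two summary vectors produced this way for the question and the answer are what the final feed-forward classifier consumes.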
Each dataset consists of questions and a list of answers for each question, and each question consists of a short title and a more detailed description. There are also some metadata associated with them, e.g., user ID, date of posting, and the question category. We do not use the metadata because it failed to boost performance in our model. Since the SemEval 2017 dataset is an updated version of SemEval 2016 [6] and shares the same evaluation metrics with SemEval 2016, we choose to use the SemEval 2017 dataset for evaluation.

[5] http://www.qatarliving.com/forum
[6] The SemEval 2017 dataset provides all the data from 2016 for training, and fresh data for testing, but it does not include a development set. Following previous work (Filice et al., 2017), we use the 2016 official test set as the development set.

Statistics                  SemEval 2015 (Train / Dev / Test)   SemEval 2017 (Train / Dev / Test)
Number of questions         2376 / 266 / 300                    5124 / 327 / 293
Number of answers           15013 / 1447 / 1793                 38638 / 3270 / 2930
Average length of subject   6.36 / 6.08 / 6.24                  6.38 / 6.16 / 5.76
Average length of body      39.26 / 39.47 / 39.53               43.01 / 47.98 / 54.06
Average length of answer    35.82 / 33.90 / 37.33               37.67 / 37.30 / 39.50
Table 2: Statistics of the two CQA datasets. We can see from the statistics that the question body is much lengthier than the question subject. Thus, it is necessary to condense the question representation.

4.2 Evaluation Metrics
In order to facilitate comparison, we adopt the evaluation metrics used in the official task or in prior work. For the SemEval 2015 dataset, the official scores are macro-averaged F1 and accuracy over three categories. However, much recent research (Barrón-Cedeño et al., 2015; Joty et al., 2015, 2016) has switched to a binary classification setting, i.e., identifying Good vs. Bad answers, because binary classification is much closer to a real-world CQA application. Besides, the PotentiallyUseful class is both the smallest and the noisiest class, making it the hardest to predict. To make matters worse, its impact is magnified by the macro-averaged F1. Therefore, we adopt the F1 score and accuracy on two categories for evaluation. SemEval 2017 regards answer selection as a ranking task, which is closer to the application scenario. As a result, mean average precision (MAP) is used as an evaluation measure. For a perfect ranking, a system has to place all Good answers above the PotentiallyUseful and Bad answers; the latter two are not actually distinguished and are both considered Bad in terms of evaluation. Additionally, standard classification measures like accuracy and F1 score are also reported.

4.3 Implementation Details
We use the tokenizer from NLTK (Bird, 2006) to preprocess each sentence. All word embeddings in the sentence encoder layer are initialized with the 300-dimensional GloVe (Pennington et al., 2014) word vectors trained on the domain-specific unannotated corpus, and embeddings for out-of-vocabulary words are set to zero. We use the Adam optimizer (Kingma and Ba, 2014) with a first momentum coefficient of 0.9 and a second momentum coefficient of 0.999. We perform a small grid search over combinations of the initial learning rate [1 × 10^-6, 3 × 10^-6, 1 × 10^-5], the L2 regularization parameter [1 × 10^-7, 3 × 10^-7, 1 × 10^-6], and the batch size [8, 16, 32]. We take the best configuration based on performance on the development set, and only evaluate that configuration on the test set.
In order to mitigate the class imbalance problem, median frequency balancing Eigen and Fergus (2015) is used to reweight each class in the cross-entropy loss. Therefore, the rarer a class is in the training set, the larger weight it will get in the cross entropy loss. Early stopping is applied to mitigate the problem of overfitting. For the SemEval 2017 dataset, the conditional probability over the Good class is used to rank all the candidate answers. 5 Experimental Results In this section, we evaluate our QCN model on two community question answering datasets from SemEval shared tasks. 5.1 SemEval 2015 Results Table 3 compares our model with the following baselines: 1751 Methods F1 Acc (1) JAIST 78.96 79.10 (2) HITSZ-ICRC 76.52 76.11 (3) Graph-cut 80.55 79.80 (4) FCCRF 81.50 80.50 (5) BGMN 77.23 78.40 (6) CNN-LSTM-CRF 82.22 82.24 (7) QCN 83.91 85.65 Table 3: Comparisons on the SemEval 2015 dataset. • JAIST (Tran et al., 2015): It used an SVM classifier to incorporate various kinds of features , including topic model based features and word vector representations. • HITSZ-ICRC (Hou et al., 2015): It proposed ensemble learning and hierarchical classification method to classify answers. • Graph-cut (Joty et al., 2015): It modeled the relationship between pairs of answers at any distance in the same question thread, based on the idea that similar answers should have similar labels. • FCCRF (Joty et al., 2016): It used locally learned classifiers to predict the label for each individual node, and applied fully connected CRF to make global inference. • CNN-LSTM-CRF (Xiang et al., 2016): The question and its answers are linearly connected in a sequence and encoded by CNN. An attention-based LSTM with a CRF layer is then applied on the encoded sequence. • BGMN (Wu et al., 2017b): It used the memory mechanism to iteratively aggregate more relevant information which is useful to identify the relationship between questions and answers. Baselines include top systems from SemEval 2015 (1, 2), systems relying on thread level information to make global inference (3, 4), and neural network based systems (5, 6). We observe that our proposed QCN can achieve the state-of-the-art performance on this dataset, outperforming previous best model (6) by 1.7% in terms of F1 and 3.4% in terms of accuracy. Methods MAP F1 Acc (1) KeLP 88.43 69.87 73.89 (2) Beihang-MSRA 88.24 68.40 51.98 (3) ECNU 86.72 77.67 78.43 (4) LSTM 86.32 74.41 75.69 (5) LSTM-subject-body 87.11 74.50 77.28 (6) QCN 88.51 78.11 80.71 Table 4: Comparisons on the SemEval 2017 dataset. Notably, Systems (1, 2, 3, 4) have heavy feature engineering, while QCN only uses automaticallylearned feature vectors, demonstrating that our QCN model is concise as well as effective. Furthermore, our model can outperform systems relying on thread level information to make global inference (3, 4), showing that modeling interaction between the question-answer pair is useful enough for answer selection task. Finally, neural network based systems (5, 6) used attention mechanism in sentence representation but ignored the subjectbody relationship in community questions. QCN can outperform them by a large margin, showing that condensing question representation helps in the answer selection task. 
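Returning briefly to the class reweighting mentioned in Section 4.3: median frequency balancing (Eigen and Fergus, 2015) assigns each class a weight equal to the median class frequency divided by that class's own frequency, so rarer classes receive larger weights in the cross-entropy loss. The small sketch below is our own illustration; the exact frequency definition used by the authors is not specified in the paper.

from collections import Counter
import statistics

def median_frequency_weights(labels):
    # labels: list of class labels over the training set, e.g. ["Good", "Bad", "PotentiallyUseful", ...]
    counts = Counter(labels)
    freqs = {c: n / len(labels) for c, n in counts.items()}
    median_freq = statistics.median(freqs.values())
    # Rare classes get weights above 1, frequent classes get weights below 1.
    return {c: median_freq / f for c, f in freqs.items()}

# The resulting per-class weights can then be supplied to a weighted cross-entropy loss,
# e.g. torch.nn.CrossEntropyLoss(weight=...) with the classes listed in a fixed order.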
5.2 SemEval 2017 Results Table 4 compares our model with the following baselines: • KeLP (Filice et al., 2017): It used syntactic tree kernels with relational links between questions and answers, together with some standard text similarity measures linearly combined with the tree kernel. • Beihang-MSRA (Feng et al., 2017): It used gradient boosted regression trees to combine traditional NLP features and neural networkbased matching features. • ECNU (Wu et al., 2017a): It combined a supervised model using traditional features and a convolutional neural network to represent the question-answer pair. • LSTM: It is a simple neural network based baseline that we implemented. In this model, the question subject and the question body are concatenated, and an LSTM is used to obtain the question and answer representation. 1752 • LSTM-subject-body: It is another neural network based baseline that we implemented. LSTM is applied on the question subject and body respectively, and the results are concatenated to form question representation. Baselines include top systems from the SemEval 2017 CQA task (1, 2, 3) and two neural network based baselines (4, 5) that we implemented. (5) can outperform (4), showing that treating question subject and body differently can indeed boot model performance. Comparing (6) with (5), we can draw the conclusion that orthogonal decomposition is more effective than simple concatenation, because it can flexibly aggregate related information from the question body with respect to the main subject. In the example listed in Table 1, attention heatmap of Aorth indicates that QCN can effectively find additional information like “maintenance, accident or service history”, while (5) fails to do so. QCN has a great advantage in terms of accuracy. We hypothesize that QCN focuses on modeling interaction between questions and answers, i.e., whether an answer can match the corresponding question. Many pieces of previous work focus on modeling relationship between answers in a question thread, i.e., which answer is more suitable in consideration of all other answers. As a consequence, their models have a greater advantage in ranking while QCN has a greater advantage in classification. Despite all this, QCN can still obtain better ranking performance. 5.3 Ablation Study For thorough comparison, besides the preceding models, we implement nine extra baselines on the SemEval 2017 dataset to analyze the improvements contributed by each part of our QCN model: • w/o task-specific word embeddings where word embeddings are initialized with the 300-dimensional GloVe word vectors trained on Wikipedia 2014 and Gigaword 5. • w/o character embeddings where wordlevel embeddings are only composed of 600dimensional GloVe word vectors trained on the domain-specific unannotated corpus. • subject-body alignment where we use the same attention mechanism as Question Answer Alignment to obtain weighted sum of Model Acc (1) w/o task-specific word embeddings 78.81 (2) w/o character embeddings 78.05 (3) subject-body alignment 77.38 (4) subject-body concatenation 76.06 (5) w/o multi-dimensional attention 78.33 (6) subject only 74.02 (7) body only 75.57 (8) similarity only 79.11 (9) disparity only 78.24 (10) QCN 80.71 Table 5: Ablation studies on the SemEval 2017 dataset. the question body for each question subject word, and then the result is concatenated with Semb to obtain question representation Srep. 
• subject-body concatenation where we concatenate question subject and body text, and use the preprocessing step described in section 3.3 to obtain Srep. • w/o multi-dimensional attention where the multi-dimensional attention mechanism is replaced by vanilla attention in all modules, i.e., attention score for each token pair is a scalar instead of a vector. • subject only where only question subject is used as question representation. • body only where only question body is used as question representation. • similarity only where the parallel component alone is used in subject-body interaction. • disparity only where the orthogonal component alone is used in subject-body interaction. The results are listed in Table 5. We can see that using task-specific embeddings and character embeddings both contribute to model performance. This is because CQA text is non-standard. There are quantities of informal language usage, such as abbreviations, typos, emoticons, and grammatical mistakes. Using task-specific embeddings and character embeddings can help to attenuate the OOV problem. Using orthogonal decomposition (10) instead of subject-body alignment (3) can bring about significant performance gain. This is because not only 1753 Any recommendation for Travel agencies in Doha to arrange packages Which airline offers the best fares or low cost to Kuala Lumpur (a) Apara Any recommendation for Travel agencies in Doha to arrange packages Which airline offers the best fares or low cost to Kuala Lumpur (b) Aorth Gulf air was the cheapest It transits via Bahrain Which airline offers the best fares or low cost to Kuala Lumpur (c) Aalign Figure 2: Attention probabilities in Apara, Aorth and Aalign. In order to visualize the multi-dimensional attention vector, we use the L2 norm of the attenion vector for representation. the similar part of the question body to the question subject is useful for the question representation, the disparity part can also provide additional information. In the example listed in Table 1, additional information like “maintenance, accident or service history” is also important to determine answer quality. QCN outperforms (4) by a great margin, demonstrating that subject-body relationship in community questions helps to condense question representation. Therefore, QCN can identify the meaningful part of the question representation that helps to determine answer quality. Using the multi-dimensional attention can further boost model performance, showing that the multi-dimensional attention can model the interaction between questions and answers more precisely. Comparing QCN with (6) and (7), we can conclude that both the subject and the body are indispensable for question representation. (8) outperforms (9), demonstrating the parallel component is more useful in subject-body interaction. 6 Qualitative Study To gain a closer view of what dependencies are captured in the subject-body pair and the questionanswer pair, we visualize the attention probabilities Apara, Aorth and Aalign by heatmap. A training example from SemEval 2015 is selected for illustration. In Figure 2, we can draw the following conclusions. First, orthogonal decomposition helps to divide the labor of identifying similar parts in the parallel component and collecting related information in the question body in the orthogonal component. 
For instance, for the word “Kuala” in the question subject, its parallel alignment score focuses more on “Doha” and “Travel”, while its orthogonal alignment score focuses on “arrange” and “package”, which is the purpose of the travel and therefore is also indispensable for sentence representation. Second, semantically important words such as “airline” and “fares” dominate the attention weights, showing that our QCN model can effectively select words that are most representative for the meaning of the whole sentence. Lastly, words that are useful to determine answer quality stand out in the question-answer interaction matrix, demonstrating that question-answer relationship can be well modeled. For example, “best” and “low” are the words that are more important in the question-answer relationship, they are emphasized in the question-answer alignment matrix. 7 Related Work One main task in community question answering is answer selection, i.e., to rate the answers according to their quality. The SemEval CQA tasks (Nakov et al., 2015, 2016, 2017) provide universal benchmark datasets for evaluating researches on this problem. Earlier work of answer selection in CQA relied heavily on feature engineering, linguistic tools, and external resource. Nakov et al. (2016) investigated a wide range of feature types including similarity features, content features, thread level/meta features, and automatically generated features for SemEval CQA models. Tran et al. (2015) studied the use of topic model based features and word vector representation based features in the answer re-ranking task. Filice et al. (2016) designed various heuristic features and thread-based features 1754 that can signal a good answer. Although achieving good performance, these methods rely heavily on feature engineering, which requires a large amount of manual work and domain expertise. Since answer selection is inherently a ranking task, a few recent researches proposed to use local features to make global ranking decision. BarrónCedeño et al. (2015) was the first work that applies structured prediction model on CQA answer selection task. Joty et al. (2016) approached the task with a global inference process to exploit the information of all answers in the question-thread in the form of a fully connected graph. To avoid feature engineering, many deep learning models have been proposed for answer selection. Among them, Zhang et al. (2017) proposed a novel interactive attention mechanism to address the problem of noise and redundancy prevalent in CQA. Tay et al. (2017) introduced temporal gates for sequence pairs so that questions and answers are aware of what each other is remembering or forgetting. Simple as their model are, they did not consider the relationship between question subject and body, which is useful for question condensing. 8 Conclusion and Future Work We propose Question Condensing Networks (QCN), an attention-based model that can utilize the subject-body relationship in community questions to condense question representation. By orthogonal decomposition, the labor of identifying similar parts and collecting related information in the question body can be well divided in two different alignment matrices. To better capture the interaction between the subject-body pair and the question-answer pair, the multi-dimensional attention mechanism is adopted. Empirical results on two community question answering datasets in SemEval demonstrate the effectiveness of our model. 
In future work, we will try to incorporate more hand-crafted features in our model. Furthermore, since thread-level features have been explored in previous work (Barrón-Cedeño et al., 2015; Joty et al., 2015, 2016), we will verify their effectiveness in our architecture. 9 Acknowledgments We would like to thank anonymous reviewers for their insightful comments. Our work is supported by National Natural Science Foundation of China under Grant No.61433015 and the National Key Research and Development Program of China under Grant No.2017YFB1002101. The corresponding author of this paper is Houfeng Wang. References Alberto Barrón-Cedeño, Simone Filice, Giovanni Da San Martino, Shafiq R. Joty, Lluís Màrquez, Preslav Nakov, and Alessandro Moschitti. 2015. Thread-level information for comment classification in community question answering. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 2: Short Papers. pages 687–693. Steven Bird. 2006. NLTK: the natural language toolkit. In ACL 2006, 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, Sydney, Australia, 17-21 July 2006. David Eigen and Rob Fergus. 2015. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015. pages 2650–2658. Wenzheng Feng, Yu Wu, Wei Wu, Zhoujun Li, and Ming Zhou. 2017. Beihang-msra at semeval-2017 task 3: A ranking system with neural matching features for community question answering. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval@ACL 2017, Vancouver, Canada, August 3-4, 2017. pages 280–286. Simone Filice, Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2016. Kelp at semeval-2016 task 3: Learning semantic relations between questions and answers. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016, San Diego, CA, USA, June 16-17, 2016. pages 1116–1123. Simone Filice, Giovanni Da San Martino, and Alessandro Moschitti. 2017. Kelp at semeval-2017 task 3: Learning pairwise patterns in community question answering. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval@ACL 2017, Vancouver, Canada, August 3-4, 2017. pages 326–333. Yongshuai Hou, Cong Tan, Xiaolong Wang, Yaoyun Zhang, Jun Xu, and Qingcai Chen. 2015. HITSZICRC: exploiting classification approach for answer selection in community question answering. In Proceedings of the 9th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2015, Denver, Colorado, USA, June 4-5, 2015. pages 196– 202. 1755 Shafiq R. Joty, Alberto Barrón-Cedeño, Giovanni Da San Martino, Simone Filice, Lluís Màrquez, Alessandro Moschitti, and Preslav Nakov. 2015. Global thread-level inference for comment classification in community question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015. pages 573–578. Shafiq R. Joty, Lluís Màrquez, and Preslav Nakov. 2016. Joint learning with global inference for comment classification in community question answering. 
In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016. pages 703–713. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. Character-aware neural language models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 1217, 2016, Phoenix, Arizona, USA.. pages 2741– 2749. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. Preslav Nakov, Doris Hoogeveen, Lluís Màrquez, Alessandro Moschitti, Hamdy Mubarak, Timothy Baldwin, and Karin Verspoor. 2017. Semeval-2017 task 3: Community question answering. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval@ACL 2017, Vancouver, Canada, August 3-4, 2017. pages 27–48. Preslav Nakov, Lluís Màrquez, Walid Magdy, Alessandro Moschitti, Jim Glass, and Bilal Randeree. 2015. Semeval-2015 task 3: Answer selection in community question answering. In Proceedings of the 9th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2015, Denver, Colorado, USA, June 4-5, 2015. pages 269–281. Preslav Nakov, Lluís Màrquez, Alessandro Moschitti, Walid Magdy, Hamdy Mubarak, Abed Alhakim Freihat, Jim Glass, and Bilal Randeree. 2016. Semeval-2016 task 3: Community question answering. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACLHLT 2016, San Diego, CA, USA, June 16-17, 2016. pages 525–545. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1532–1543. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018. Disan: Directional self-attention network for rnn/cnn-free language understanding. In AAAI Conference on Artificial Intelligence. Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2017. Cross temporal recurrent networks for ranking question answer pairs. CoRR abs/1711.07656. Quan Hung Tran, Vu Tran, Tu Vu, Minh Nguyen, and Son Bao Pham. 2015. JAIST: combining multiple features for answer selection in community question answering. In Proceedings of the 9th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2015, Denver, Colorado, USA, June 4-5, 2015. pages 215–219. Shuohang Wang and Jing Jiang. 2017. A compareaggregate model for matching text sequences. In Proceedings of the International Conference on Learning Representations (ICLR). Zhiguo Wang, Haitao Mi, and Abraham Ittycheriah. 2016. Sentence similarity learning by lexical decomposition and composition. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. GuoShun Wu, Yixuan Sheng, Man Lan, and Yuanbin Wu. 2017a. ECNU at semeval-2017 task 3: Using traditional and deep learning methods to address community question answering task. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval@ACL 2017, Vancouver, Canada, August 3-4, 2017. pages 365–369. Wei Wu, Houfeng Wang, and Sujian Li. 2017b. Bidirectional gated memory networks for answer selection. In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data. LNAI 10565, Springer. pages 251–262. 
Yang Xiang, Xiaoqiang Zhou, Qingcai Chen, Zhihui Zheng, Buzhou Tang, Xiaolong Wang, and Yang Qin. 2016. Incorporating label dependency for answer quality tagging in community question answering via CNN-LSTM-CRF. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan. pages 1231–1241. Xiaodong Zhang, Sujian Li, Lei Sha, and Houfeng Wang. 2017. Attentive interactive neural networks for answer selection in community question answering. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA.. pages 3525–3531.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1756–1766 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1756 Towards Robust Neural Machine Translation Yong Cheng⋆, Zhaopeng Tu⋆, Fandong Meng⋆, Junjie Zhai⋆and Yang Liu† ⋆Tencent AI Lab, China †State Key Laboratory of Intelligent Technology and Systems Beijing National Research Center for Information Science and Technology Department of Computer Science and Technology, Tsinghua University, Beijing, China Beijing Advanced Innovation Center for Language Resources [email protected] {zptu, fandongmeng, jasonzhai}@tencent.com [email protected] Abstract Small perturbations in the input can severely distort intermediate representations and thus impact translation quality of neural machine translation (NMT) models. In this paper, we propose to improve the robustness of NMT models with adversarial stability training. The basic idea is to make both the encoder and decoder in NMT models robust against input perturbations by enabling them to behave similarly for the original input and its perturbed counterpart. Experimental results on Chinese-English, English-German and English-French translation tasks show that our approaches can not only achieve significant improvements over strong NMT systems but also improve the robustness of NMT models. 1 Introduction Neural machine translation (NMT) models have advanced the state of the art by building a single neural network that can better learn representations (Cho et al., 2014; Sutskever et al., 2014). The neural network consists of two components: an encoder network that encodes the input sentence into a sequence of distributed representations, based on which a decoder network generates the translation with an attention model (Bahdanau et al., 2015; Luong et al., 2015). A variety of NMT models derived from this encoder-decoder framework have further improved the performance of machine translation systems (Gehring et al., 2017; Vaswani et al., 2017). NMT is capable of generalizing better to unseen text by exploiting word similarities in embeddings and capturing long-distance reordering by conditioning on larger contexts in a continuous way. Input tamen bupa kunnan zuochu weiqi AI. Output They are not afraid of difficulties to make Go AI. Input tamen buwei kunnan zuochu weiqi AI. Output They are not afraid to make Go AI. Table 1: The non-robustness problem of neural machine translation. Replacing a Chinese word with its synonym (i.e., “bupa” →“buwei”) leads to significant erroneous changes in the English translation. Both “bupa” and “buwei” can be translated to the English phrase “be not afraid of.” However, studies reveal that very small changes to the input can fool state-of-the-art neural networks with high probability (Goodfellow et al., 2015; Szegedy et al., 2014). Belinkov and Bisk (2018) confirm this finding by pointing out that NMT models are very brittle and easily falter when presented with noisy input. In NMT, due to the introduction of RNN and attention, each contextual word can influence the model prediction in a global context, which is analogous to the “butterfly effect.” As shown in Table 1, although we only replace a source word with its synonym, the generated translation has been completely distorted. We investigate severe variations of translations caused by small input perturbations by replacing one word in each sentence of a test set with its synonym. 
We observe that 69.74% of translations have changed and that the BLEU score between the translations of the original inputs and the translations of the perturbed inputs is only 79.01, suggesting that NMT models are very sensitive to small perturbations in the input.

The vulnerability and instability of NMT models limit their applicability to a broader range of tasks, which require robust performance on noisy inputs. For example, simultaneous translation systems use automatic speech recognition (ASR) to transcribe input speech into a sequence of hypothesized words, which are subsequently fed to a translation system. In this pipeline, ASR errors are presented as sentences with noisy perturbations (the same pronunciation but incorrect words), which is a significant challenge for current NMT models. Moreover, instability makes NMT models sensitive to misspellings and typos in text translation.

In this paper, we address this challenge with adversarial stability training for neural machine translation. The basic idea is to improve the robustness of two important components in NMT: the encoder and the decoder. To this end, we propose two approaches to constructing noisy inputs with small perturbations so that NMT models learn to resist them. Since the intermediate representations produced by the encoder directly determine the accuracy of the final translations, we introduce adversarial learning to make the behavior of the encoder consistent for an input and its perturbed counterpart. To improve the stability of the decoder, our method jointly maximizes the likelihoods of the original and perturbed data. Adversarial stability training has the following advantages:

1. Improving both robustness and translation performance: Our adversarial stability training is capable of not only improving the robustness of NMT models but also achieving better translation performance.

2. Applicable to arbitrary noisy perturbations: In this paper, we propose two approaches to constructing noisy perturbations for inputs. However, our training framework can be easily extended to arbitrary noisy perturbations. In particular, we can design task-specific perturbation methods.

3. Transparent to network architectures: Our adversarial stability training does not depend on specific NMT architectures. It can be applied to arbitrary NMT systems.

Experiments on Chinese-English, English-French and English-German translation tasks show that adversarial stability training achieves significant improvements across different language pairs. Our NMT system outperforms the state-of-the-art RNN-based NMT system (GNMT) (Wu et al., 2016) and obtains comparable performance with the CNN-based NMT system (Gehring et al., 2017). Related experimental analyses validate that our training approach can improve the robustness of NMT models.

2 Background

NMT is an end-to-end framework which directly optimizes the translation probability of a target sentence y = y1, ..., yN given its corresponding source sentence x = x1, ..., xM:

P(y|x; \theta) = \prod_{n=1}^{N} P(y_n \mid y_{<n}, x; \theta)   (1)

where θ is the set of model parameters and y<n is a partial translation. P(y|x; θ) is defined on a holistic neural network which mainly includes two core components: an encoder that encodes a source sentence x into a sequence of hidden representations Hx = H1, ..., HM, and a decoder that generates the n-th target word based on this sequence of hidden representations:

P(y_n \mid y_{<n}, x; \theta) \propto \exp\{g(y_{n-1}, s_n, H_x; \theta)\}   (2)

where sn is the n-th hidden state on the target side.
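To make the factorization above concrete, the following minimal Python sketch computes the per-sentence negative log-likelihood that the standard training objective (Equation 3, below) sums over the corpus. The decoder scores are random stand-ins for the exp{g(·)} terms of Equation 2, not the paper's actual network, so this only illustrates the bookkeeping.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sentence_nll(logits_per_step, target_ids):
    """Negative log-likelihood of one target sentence under the factorization
    P(y|x) = prod_n P(y_n | y_<n, x)  (Equation 1).

    logits_per_step: list of unnormalized score vectors over the target
    vocabulary, one per target position (stand-ins for the exp{g(.)} scores
    of Equation 2, produced here by a hypothetical decoder).
    target_ids: list of gold target word indices y_1..y_N.
    """
    nll = 0.0
    for logits, y_n in zip(logits_per_step, target_ids):
        p = softmax(logits)      # normalize the scores into P(y_n | y_<n, x)
        nll -= np.log(p[y_n])    # accumulate -log P(y_n | y_<n, x)
    return nll

# Toy usage: a 5-word target vocabulary and a 3-word target sentence.
rng = np.random.default_rng(0)
logits = [rng.normal(size=5) for _ in range(3)]  # stand-in decoder outputs
print(sentence_nll(logits, target_ids=[2, 0, 4]))
```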
Thus the model parameters of NMT include the parameter sets of the encoder θenc and the decoder θdec: θ = {θenc, θdec}. The standard training objective is to minimize the negative log-likelihood of the training corpus S = {⟨x^(s), y^(s)⟩}_{s=1}^{|S|}:

\hat{\theta} = \arg\min_{\theta} L(x, y; \theta) = \arg\min_{\theta} \Big\{ \sum_{\langle x, y \rangle \in S} -\log P(y|x; \theta) \Big\}   (3)

Due to the vulnerability and instability of deep neural networks, NMT models usually suffer from a drawback: small perturbations in the input can dramatically deteriorate their translation results. Belinkov and Bisk (2018) point out that character-based NMT models are very brittle and easily falter when presented with noisy input. We find that word-based and subword-based NMT models also suffer from this shortcoming, as shown in Table 1. We argue that the distributed representations should fulfill this stability expectation, which is the underlying idea of the proposed approach. Recent work has shown that adversarially trained models can be made robust to such perturbations (Zheng et al., 2016; Madry et al., 2018). Inspired by this, in this work we improve the robustness of encoder representations against noisy perturbations with adversarial learning (Goodfellow et al., 2014).

Figure 1: The architecture of NMT with adversarial stability training. The dark solid arrow lines represent the forward-pass information flow for the input sentence x, while the red dashed arrow lines represent the flow for the noisy input sentence x′, which is transformed from x by adding small perturbations.

3 Approach

3.1 Overview

The goal of this work is to propose a general approach to making NMT models more robust to input perturbations. Our basic idea is to maintain the consistency of the model's behavior for a source sentence x and its perturbed counterpart x′. As aforementioned, the NMT model contains two procedures for projecting a source sentence x to its target sentence y: the encoder is responsible for encoding x as a sequence of representations Hx, while the decoder outputs y with Hx as input. We aim at learning a perturbation-invariant encoder and decoder.

Figure 1 illustrates the architecture of our approach. Given a source sentence x, we construct a set of perturbed sentences N(x), in which each sentence x′ is constructed by adding small perturbations to x. We require that x′ is a subtle variation of x and that the two have similar semantics. Given the input pair (x, x′), we have two expectations: (1) the encoded representation Hx′ should be close to Hx; and (2) given Hx′, the decoder should still be able to generate the correct output y. To this end, we introduce two additional objectives to improve the robustness of the encoder and decoder:

• Linv(x, x′) encourages the encoder to output similar intermediate representations Hx and Hx′ for x and x′, i.e. to be an invariant encoder, which helps produce the same translations. We cast this objective in the adversarial learning framework.

• Lnoisy(x′, y) guides the decoder to generate the output y given the noisy input x′, and is modeled as −log P(y|x′). It can also be defined as the KL divergence between P(y|x) and P(y|x′), which amounts to using P(y|x) to teach P(y|x′).

Together, the two introduced objectives encourage an NMT model whose target outputs do not vary sharply under small perturbations of the input.
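Schematically, the training signal combines these two auxiliary terms with the original likelihood; the formal objective is given in Equation 4 in the next section. The skeleton below leaves the three loss terms and the perturbation step as placeholder callables (hypothetical names, not from the paper) and shows only how the pieces are weighted and summed; in practice the paper samples a single perturbed neighbour per sentence.

```python
def stability_objective(batch, l_true, l_inv, l_noisy, perturb,
                        alpha=1.0, beta=1.0):
    """Skeletal composition of the adversarial stability objective:
    the original likelihood term plus the two auxiliary terms, weighted
    by alpha and beta (formalized as Equation 4 in the next section).

    l_true(x, y), l_inv(x, x_prime) and l_noisy(x_prime, y) are placeholder
    callables standing in for the model's actual loss computations;
    perturb(x) returns one perturbed neighbour of x.
    """
    total = 0.0
    for x, y in batch:
        x_prime = perturb(x)
        total += (l_true(x, y)
                  + alpha * l_inv(x, x_prime)
                  + beta * l_noisy(x_prime, y))
    return total

# Toy usage with trivial stand-in losses over "sentences" represented as lists.
toy_batch = [([1, 2, 3], [4, 5]), ([6, 7], [8])]
print(stability_objective(
    toy_batch,
    l_true=lambda x, y: float(len(x) + len(y)),
    l_inv=lambda x, xp: float(sum(a != b for a, b in zip(x, xp))),
    l_noisy=lambda xp, y: float(len(y)),
    perturb=lambda x: list(reversed(x)),
))
```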
It is also natural to include the original training objective L(x, y) on x and y, which guarantees good translation performance while maintaining the stability of the NMT model. Formally, given a training corpus S, the adversarial stability training objective is

J(\theta) = \sum_{\langle x, y \rangle \in S} \Big\{ L_{true}(x, y; \theta_{enc}, \theta_{dec}) + \alpha \sum_{x' \in N(x)} L_{inv}(x, x'; \theta_{enc}, \theta_{dis}) + \beta \sum_{x' \in N(x)} L_{noisy}(x', y; \theta_{enc}, \theta_{dec}) \Big\}   (4)

where Ltrue(x, y) and Lnoisy(x′, y) are calculated using Equation 3, and Linv(x, x′) is the adversarial loss described in Section 3.3. α and β control the balance between the original translation task and the stability of the NMT model. θ = {θenc, θdec, θdis} are the trainable parameters of the encoder, the decoder, and the newly introduced discriminator used in adversarial learning. As seen, the parameters of the encoder θenc and the decoder θdec are trained to minimize both the translation loss Ltrue(x, y) and the stability losses Lnoisy(x′, y) and Linv(x, x′). Since Lnoisy(x′, y) evaluates the translation loss on the perturbed neighbour x′ and its corresponding target sentence y, we effectively augment the training data with perturbed neighbours, which can potentially improve translation performance. In this way, our approach not only makes the output of NMT models more robust, but also improves performance on the original translation task.

In the following sections, we first describe how to construct perturbed inputs with different strategies that fulfill different goals (Section 3.2), followed by the proposed adversarial learning mechanism for the perturbation-invariant encoder (Section 3.3). We conclude this section with the training strategy (Section 3.4).

3.2 Constructing Perturbed Inputs

At each training step, we need to generate a perturbed neighbour set N(x) for each source sentence x for adversarial stability training. In this paper, we propose two strategies that construct perturbed inputs at different levels of representation.

The first approach generates perturbed neighbours at the lexical level. Given an input sentence x, we randomly sample some word positions to be modified. Then we replace the words at these positions with other words from the vocabulary according to the following distribution:

P(x|x_i) = \frac{\exp\{\cos(E[x_i], E[x])\}}{\sum_{x \in V_x \setminus x_i} \exp\{\cos(E[x_i], E[x])\}}   (5)

where E[xi] is the word embedding of word xi, Vx\xi is the source vocabulary excluding the word xi, and cos(E[xi], E[x]) measures the similarity between words xi and x. In this way, a word is changed to another word with similar semantics.

One potential problem of the above strategy is that it is hard to enumerate all possible positions and perturbation types when generating perturbed neighbours. Therefore, we propose a more general approach that modifies the sentence at the feature level. Given a sentence, we obtain the word embedding of each word and add Gaussian noise to it to simulate possible types of perturbations. That is,

E[x'_i] = E[x_i] + \epsilon, \quad \epsilon \sim N(0, \sigma^2 I)   (6)

where the noise vector ε is sampled from a Gaussian distribution with variance σ², and σ is a hyper-parameter. We simply add Gaussian noise to all of the word embeddings in x.

The proposed scheme is a general framework in which one can freely define strategies to construct perturbed inputs; we present just two possible examples here. The first strategy is potentially useful when the training data contains noisy words, while the latter is a more general strategy for improving the robustness of common NMT models.
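As a concrete illustration of the two construction strategies, here is a small numpy sketch of the lexical-level replacement of Equation 5 and the feature-level Gaussian noise of Equation 6. The embedding matrix is a toy stand-in, and the replacement count max(0.2|x|, 1) and σ = 0.01 simply mirror the hyper-parameters reported in Section 4.1; this is a sketch of the sampling logic, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def lexical_perturb(sentence_ids, emb, ratio=0.2):
    """Lexical-level perturbation (Equation 5): replace max(ratio*|x|, 1) words
    with words sampled in proportion to exp{cos(E[x_i], E[x])}."""
    x = list(sentence_ids)
    n_replace = max(int(ratio * len(x)), 1)
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    for pos in rng.choice(len(x), size=n_replace, replace=False):
        i = x[pos]
        sims = unit @ unit[i]        # cos(E[x_i], E[x]) for every vocabulary word x
        sims[i] = -np.inf            # exclude x_i itself (the sum runs over V_x \ x_i)
        probs = np.exp(sims - np.max(sims))
        probs /= probs.sum()
        x[pos] = int(rng.choice(len(emb), p=probs))
    return x

def feature_perturb(emb_seq, sigma=0.01):
    """Feature-level perturbation (Equation 6): add Gaussian noise with scale
    sigma to every word embedding of the sentence."""
    return emb_seq + rng.normal(0.0, sigma, size=emb_seq.shape)

# Toy usage: a vocabulary of 10 words with 4-dimensional embeddings.
E = rng.normal(size=(10, 4))
print(lexical_perturb([3, 7, 1, 5], E))
print(feature_perturb(E[[3, 7, 1, 5]]).shape)
```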
In practice, one can design specific strategies for particular tasks. For example, we can replace correct words with their homonyms (same pronunciation but different meanings) to improve NMT models for simultaneous translation systems.

3.3 Adversarial Learning for the Perturbation-invariant Encoder

The goal of the perturbation-invariant encoder is to make the representations produced by the encoder indistinguishable when it is fed a correct sentence x and its perturbed counterpart x′, which directly benefits the output robustness of the decoder. We cast the problem in the adversarial learning framework (Goodfellow et al., 2014). The encoder serves as the generator G, which defines the policy that generates a sequence of hidden representations Hx given an input sentence x. We introduce an additional discriminator D to distinguish the representation Hx′ of the perturbed input from the representation Hx of the original input. The goal of the generator G (i.e., the encoder) is to produce similar representations for x and x′ that can fool the discriminator, while the discriminator D tries to correctly distinguish the two representations. Formally, the adversarial learning objective is

L_{inv}(x, x'; \theta_{enc}, \theta_{dis}) = \mathbb{E}_{x \sim S}[-\log D(G(x))] + \mathbb{E}_{x' \sim N(x)}[-\log(1 - D(G(x')))]   (7)

The discriminator outputs a classification score for an input representation and tries to push D(G(x)) towards 1 and D(G(x′)) towards 0. The objective encourages the encoder to output similar representations for x and x′, so that the discriminator fails to distinguish them. The training procedure can be regarded as a min-max two-player game: the encoder parameters θenc are trained to maximize the loss function in order to fool the discriminator, while the discriminator parameters θdis are optimized to minimize this loss to improve its discriminating ability. For efficiency, we update both the encoder and the discriminator simultaneously at each iteration, rather than using the periodical training strategy that is commonly used in adversarial learning. Lamb et al. (2016) propose a similar idea, Professor Forcing, to make the behaviors of RNNs indistinguishable between training and sampling.

3.4 Training

As shown in Figure 1, our training objective involves three sets of model parameters for three modules. We use mini-batch stochastic gradient descent to optimize our model. In the forward pass, besides a mini-batch of x and y, we also construct a mini-batch consisting of the perturbed neighbour x′ and y, and propagate the information to calculate the three loss functions. Gradients are then collected to update the three sets of model parameters. Except that the gradients of Linv with respect to θenc are multiplied by −1, all gradients are backpropagated normally. Note that we update θdis and θenc simultaneously for training efficiency.

4 Experiments

4.1 Setup

We evaluated our adversarial stability training on translation tasks for several language pairs, and report the 4-gram BLEU (Papineni et al., 2002) score as calculated by the multi-bleu.perl script.

Chinese-English We used the LDC corpus consisting of 1.25M sentence pairs with 27.9M Chinese words and 34.5M English words respectively. We selected the best model using the NIST 2006 set as the validation set (hyper-parameter optimization and model selection). The NIST 2002, 2003, 2004, 2005, and 2008 datasets are used as test sets.

English-German We used the WMT 14 corpus containing 4.5M sentence pairs with 118M English words and 111M German words.
The validation set is newstest2013, and the test set is newstest2014. English-French We used the IWSLT corpus which contains 0.22M sentence pairs with 4.03M English words and 4.12M French words. The IWLST corpus is very dissimilar from the NIST and WMT corpora. As they are collected from TED talks and inclined to spoken language, we want to verify our approaches on the nonnormative text. The IWSLT 14 test set is taken as the validation set and 15 test set is used as the test set. For English-German and English-French, we tokenize both English, German and French words using tokenize.perl script. We follow Sennrich et al. (2016b) to split words into subword units. The numbers of merge operations in byte pair encoding (BPE) are set to 30K, 40K and 30K respectively for Chinese-English, English-German, and English-French. We report the case-sensitive tokenized BLEU score for English-German and English-French and the caseinsensitive tokenized BLEU score for ChineseEnglish. Our baseline system is an in-house NMT system. Following Bahdanau et al. (2015), we implement an RNN-based NMT in which both the encoder and decoder are two-layer RNNs with residual connections between layers (He et al., 2016b). The gating mechanism of RNNs is gated recurrent unit (GRUs) (Cho et al., 2014). We apply layer normalization (Ba et al., 2016) and dropout (Hinton et al., 2012) to the hidden states of GRUs. Dropout is also added to the source and target word embeddings. We share the same matrix between the target word embeedings and the pre-softmax linear transformation (Vaswani et al., 2017). We update the set of model parameters using Adam SGD (Kingma and Ba, 2015). Its learning rate is initially set to 0.05 and varies according to the formula in Vaswani et al. (2017). Our adversarial stability training initializes the model based on the parameters trained by maximum likelihood estimation (MLE). We denote adversarial stability training based on lexical-level perturbations and feature-level perturbations respectively as ASTlexical and ASTfeature. We only sample one perturbed neighbour x′ ∈N(x) for training efficiency. For the discriminator used in Linv, we adopt the CNN discriminator proposed by Kim (2014) to address the variable-length problem of the sequence generated by the encoder. In the CNN discriminator, the filter windows are set to 3, 4, 5 and rectified linear units are applied after convolution operations. We tune the hyperparameters on the validation set through a grid search. We find that both the optimal values of α and β are set to 1.0. The standard variance in Gaussian noise used in the formula (6) is set to 0.01. The number of words that are replaced in the sentence x during lexical-level perturbations is taken as max(0.2|x|, 1) in which |x| is the length of x. The default beam size for decoding is 10. 4.2 Translation Results 4.2.1 NIST Chinese-English Translation Table 2 shows the results on Chinese-English translation. Our strong baseline system significantly outperforms previously reported results on 1761 System Training MT06 MT02 MT03 MT04 MT05 MT08 Shen et al. (2016) MRT 37.34 40.36 40.93 41.37 38.81 29.23 Wang et al. (2017) MLE 37.29 – 39.35 41.15 38.07 – Zhang et al. (2018) MLE 38.38 – 40.02 42.32 38.84 – this work MLE 41.38 43.52 41.50 43.64 41.58 31.60 ASTlexical 43.57 44.82 42.95 45.05 43.45 34.85 ASTfeature 44.44 46.10 44.07 45.61 44.06 34.94 Table 2: Case-insensitive BLEU scores on Chinese-English translation. System Architecture Training BLEU Shen et al. 
(2016) Gated RNN with 1 layer MRT 20.45 Luong et al. (2015) LSTM with 4 layers MLE 20.90 Kalchbrenner et al. (2017) ByteNet with 30 layers MLE 23.75 Wang et al. (2017) DeepLAU with 4 layers MLE 23.80 Wu et al. (2016) LSTM with 8 layers RL 24.60 Gehring et al. (2017) CNN with 15 layers MLE 25.16 Vaswani et al. (2017) Self-attention with 6 layers MLE 28.40 this work Gated RNN with 2 layers MLE 24.06 ASTlexical 25.17 ASTfeature 25.26 Table 3: Case-sensitive BLEU scores on WMT 14 English-German translation. Training tst2014 tst2015 MLE 36.92 36.90 ASTlexical 37.35 37.03 ASTfeature 38.03 37.64 Table 4: Case-sensitive BLEU scores on IWSLT English-French translation. Chinese-English NIST datasets trained on RNNbased NMT. Shen et al. (2016) propose minimum risk training (MRT) for NMT, which directly optimizes model parameters with respect to BLEU scores. Wang et al. (2017) address the issue of severe gradient diffusion with linear associative units (LAU). Their system is deep with an encoder of 4 layers and a decoder of 4 layers. Zhang et al. (2018) propose to exploit both left-to-right and right-to-left decoding strategies for NMT to capture bidirectional dependencies. Compared with them, our NMT system trained by MLE outperforms their best models by around 3 BLEU points. We hope that the strong baseline systems used in this work make the evaluation convincing. We find that introducing adversarial stability training into NMT can bring substantial improvements over previous work (up to +3.16 BLEU points over Shen et al. (2016), up to +3.51 BLEU points over Wang et al. (2017) and up to +2.74 BLEU points over Zhang et al. (2018)) and our system trained with MLE across all the datasets. Compared with our baseline system, ASTlexical achieves +1.75 BLEU improvement on average. ASTfeature performs better, which can obtain +2.59 BLEU points on average and up to +3.34 BLEU points on NIST08. 4.2.2 WMT 14 English-German Translation In Table 3, we list existing NMT systems as comparisons. All these systems use the same WMT 14 English-German corpus. Except that Shen et al. (2016) and Wu et al. (2016) respectively adopt MRT and reinforcement learning (RL), other systems all use MLE as training criterion. All the systems except for Shen et al. (2016) are deep NMT models with no less than four layers. Google’s neural machine translation (GNMT) (Wu et al., 2016) represents a strong RNN-based NMT system. Compared with other RNN-based NMT systems except for GNMT, our baseline system with two layers can achieve better performance than theirs. When training our NMT system with ASTleixcal, significant improvement (+1.11 1762 Synthetic Type Training 0 Op. 1 Op. 2 Op. 3 Op. 4 Op. 5 Op. Swap MLE 41.38 38.86 37.23 35.97 34.61 32.96 ASTlexical 43.57 41.18 39.88 37.95 37.02 36.16 ASTfeature 44.44 42.08 40.20 38.67 36.89 35.81 Replacement MLE 41.38 37.21 31.40 27.43 23.94 21.03 ASTlexical 43.57 40.53 37.59 35.19 32.56 30.42 ASTfeature 44.44 40.04 35.00 30.54 27.42 24.57 Deletion MLE 41.38 38.45 36.15 33.28 31.17 28.65 ASTlexical 43.57 41.89 38.56 36.14 34.09 31.77 ASTfeature 44.44 41.75 39.06 36.16 33.49 30.90 Table 5: Translation results of synthetic perturbations on the validation set in Chinese-English translation. “1 Op.” denotes that we conduct one operation (swap, replacement or deletion) on the original sentence. 
Source zhongguo dianzi yinhang yewu guanli xingui jiangyu sanyue yiri qi shixing Reference china’s new management rules for e-banking operations to take effect on march 1 MLE china’s electronic bank rules to be implemented on march 1 ASTlexical new rules for business administration of china ’s electronic banking industry will come into effect on march 1 . ASTfeature new rules for business management of china ’s electronic banking industry to come into effect on march 1 Perturbed Source zhongfang dianzi yinhang yewu guanli xingui jiangyu sanyue yiri qi shixing MLE china to implement new regulations on business management ASTlexical the new regulations for the business administrations of the chinese electronics bank will come into effect on march 1 . ASTfeature new rules for business management of china’s electronic banking industry to come into effect on march 1 Table 6: Example translations of a source sentence and its perturbed counterpart by replacing a Chinese word “zhongguo” with its synonym “zhongfang.” BLEU points) can be observed. ASTfeature can obtain slightly better performance. Our NMT system outperforms the state-of-the-art RNN-based NMT system, GNMT, with +0.66 BLEU point and performs comparably with Gehring et al. (2017) which is based on CNN with 15 layers. Given that our approach can be applied to any NMT systems, we expect that the adversarial stability training mechanism can further improve performance upon the advanced NMT architectures. We leave this for future work. 4.2.3 IWSLT English-French Translation Table 4 shows the results on IWSLT EnglishFrench Translation. Compared with our strong baseline system trained by MLE, we observe that our models consistently improve translation performance in all datasets. ASTfeature can achieve significant improvements on the tst2015 although ASTlexical obtains comparable results. These demonstrate that our approach maintains good performance on the non-normative text. 4.3 Results on Synthetic Perturbed Data In order to investigate the ability of our training approaches to deal with perturbations, we experiment with three types of synthetic perturbations: • Swap: We randomly choose N positions from a sentence and then swap the chosen words with their right neighbours. • Replacement: We randomly replace sampled words in the sentence with other words. • Deletion: We randomly delete N words from each sentence in the dataset. As shown in Table 5, we can find that our training approaches, ASTlexical and ASTfeature, consistently outperform MLE against perturbations on all the numbers of operations. This means that our 1763 Ltrue Lnoisy Ladv BLEU √ × × 41.38 √ × √ 41.91 × √ × 42.20 √ √ × 42.93 √ √ √ 43.57 Table 7: Ablation study of adversarial stability training ASTlexical on Chinese-English translation. “√” means the loss function is included in the training objective while “×” means it is not. approaches have the capability of resisting perturbations. Along with the number of operations increasing, the performance on MLE drops quickly. Although the performance of our approaches also drops, we can see that our approaches consistently surpass MLE. In ASTlexical, with 0 operation, the difference is +2.19 (43.57 Vs. 41.38) for all synthetic types, but the differences are enlarged to +3.20, +9.39, and +3.12 respectively for the three types with 5 operations. In the Swap and Deletion types, ASTlexical and ASTfeature perform comparably after more than four operations. 
Interestingly, ASTlexical performs significantly better than both MLE and ASTfeature after more than one operation in the Replacement type. This is because ASTlexical trains the model specifically on perturbed data constructed by replacing words, which matches the Replacement type. Overall, ASTlexical resists perturbations better than ASTfeature after multiple operations; we speculate that this is because the perturbation method of ASTlexical and the synthetic perturbation types are both discrete and therefore more consistent with each other. Table 6 shows example translations of a Chinese sentence and its perturbed counterpart.

These findings indicate that we can construct specific perturbations for a particular task. For example, in simultaneous translation, an automatic speech recognition system usually produces wrong words with the same pronunciation as the correct words, which dramatically affects the quality of the machine translation system. We can therefore design specific perturbations aimed at this task.

4.4 Analysis

4.4.1 Ablation Study

Our training objective function, Eq. (4), contains three loss functions. We perform an ablation study on Chinese-English translation to understand the importance of these loss functions, choosing ASTlexical as an example. As Table 7 shows, if we remove Ladv, the translation performance decreases by 0.64 BLEU points. However, when Lnoisy is excluded from the training objective, the result is a significant drop of 1.66 BLEU points. Surprisingly, using only Lnoisy already leads to an increase of 0.88 BLEU points.

Figure 2: BLEU scores of ASTlexical and ASTfeature over iterations on the Chinese-English validation set.

Figure 3: Learning curves of the three loss functions, Ltrue, Linv and Lnoisy, over iterations on the Chinese-English validation set.

4.4.2 BLEU Scores over Iterations

Figure 2 shows the changes of BLEU scores over iterations for ASTlexical and ASTfeature. The two behave very similarly. Initialized with the model trained by MLE, their performance first drops rapidly and then quickly goes up again. Compared with the starting point, the maximal drop reaches about 7.0 BLEU points. Overall, the curves oscillate; we think that introducing random perturbations and adversarial learning makes training less stable than MLE.

4.4.3 Learning Curves of Loss Functions

Figure 3 shows the learning curves of the three loss functions, Ltrue, Linv and Lnoisy. Their values do not decrease steadily; similar to Figure 2, the learning curves still oscillate, although less sharply. We find that Linv converges to around 0.68 after about 100K iterations, which indicates that the discriminator outputs a probability of 0.5 for both positive and negative samples and cannot distinguish them. Thus the encoder behaves nearly identically for x and its perturbed neighbour x′.

5 Related Work

Our work is inspired by two lines of research: (1) adversarial learning and (2) data augmentation.

Adversarial Learning Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) and their derivatives have been widely applied in computer vision (Radford et al., 2015; Salimans et al., 2016) and natural language processing (Li et al., 2017; Yang et al., 2018).
Previous work has constructed adversarial examples to attack trained networks and make networks resist them, which has proved to improve the robustness of networks (Goodfellow et al., 2015; Miyato et al., 2016; Zheng et al., 2016). Belinkov and Bisk (2018) introduce adversarial examples to training data for character-based NMT models. In contrast to theirs, adversarial stability training aims to stabilize both the encoder and decoder in NMT models. We adopt adversarial learning to learn the perturbation-invariant encoder. Data Augmentation Data augmentation has the capability to improve the robustness of NMT models. In NMT, there is a number of work that augments the training data with monolingual corpora (Sennrich et al., 2016a; Cheng et al., 2016; He et al., 2016a; Zhang and Zong, 2016). They all leverage complex models such as inverse NMT models to generate translation equivalents for monolingual corpora. Then they augment the parallel corpora with these pseudo corpora to improve NMT models. Some authors have recently endeavored to achieve zero-shot NMT through transferring knowledge from bilingual corpora of other language pairs (Chen et al., 2017; Zheng et al., 2017; Cheng et al., 2017) or monolingual corpora (Lample et al., 2018; Artetxe et al., 2018). Our work significantly differs from these work. We do not resort to any complicated models to generate perturbed data and do not depend on extra monolingual or bilingual corpora. The way we exploit is more convenient and easy to implement. We focus more on improving the robustness of NMT models. 6 Conclusion We have proposed adversarial stability training to improve the robustness of NMT models. The basic idea is to train both the encoder and decoder robust to input perturbations by enabling them to behave similarly for the original input and its perturbed counterpart. We propose two approaches to construct perturbed data to adversarially train the encoder and stabilize the decoder. Experiments on Chinese-English, English-German and English-French translation tasks show that the proposed approach can improve both the robustness and translation performance. As our training framework is not limited to specific perturbation types, it is interesting to evaluate our approach in natural noise existing in practical applications, such as homonym in the simultaneous translation system. It is also necessary to further validate our approach on more advanced NMT architectures, such as CNN-based NMT (Gehring et al., 2017) and Transformer (Vaswani et al., 2017). Acknowledgments We thank the anonymous reviewers for their insightful comments and suggestions. We also thank Xiaoling Li for analyzing experimental results and providing valuable examples. Yang Liu is supported by the National Key R&D Program of China (No. 2017YFB0202204), National Natural Science Foundation of China (No. 61761166008, No. 61522204), Beijing Advanced Innovation Center for Language Resources, and the NExT++ project supported by the National Research Foundation, Prime Ministers Office, Singapore under its IRC@Singapore Funding Initiative. 1765 References Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In Proceedings of ICLR. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. 
Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In Proceedings of ICLR. Yun Chen, Yang Liu, Yong Cheng, and Victor OK Li. 2017. A teacher-student framework for zeroresource neural machine translation. In Proceedings of ACL. Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semisupervised learning for neural machine translation. In Proceedings of ACL. Yong Cheng, Qian Yang, Yang Liu, Maosong Sun, and Wei Xu. 2017. Joint training for pivot-based neural machine translation. In Proceedings of IJCAI. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of EMNLP. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of ICML. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Proceedings of NIPS. Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In Proceedings of ICLR. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016a. Dual learning for machine translation. In Proceedings of NIPS. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016b. Deep residual learning for image recognition. In Proceedings of CVPR. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing coadaptation of feature detectors. arXiv preprint arXiv:1207.0580. Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2017. Neural machine translation in linear time. In Proceedings of ICML. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR. Alex M Lamb, Anirudh Goyal ALIAS PARTH GOYAL, Ying Zhang, Saizheng Zhang, Aaron C Courville, and Yoshua Bengio. 2016. Professor forcing: A new algorithm for training recurrent networks. In Proceedings of NIPS. Guillaume Lample, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In Proceedings of ICLR. Jiwei Li, Will Monroe, Tianlin Shi, S´ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of EMNLP. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of EMNLP. Aleksander Madry, Makelov Aleksandar, Schmidt Ludwig, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In Proceedings of ICLR. Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. 2016. Distributional smoothing with virtual adversarial training. In Proceedings of ICLR. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a methof for automatic evaluation of machine translation. In Proceedings of ACL. Alec Radford, Luke Metz, and Soumith Chintala. 2015. Unsupervised representation learning with deep convolutional generative adversarial networks. 
arXiv preprint arXiv:1511.06434. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. 2016. Improved techniques for training gans. In Proceedings of NIPS. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving nerual machine translation models with monolingual data. In Proceedings of ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of ACL. 1766 Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of ACL. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceddings of NIPS. Christian Szegedy, Wojciech Zaremba, Sutskever Ilya, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In Proceedings of ICML. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS. Mingxuan Wang, Zhengdong Lu, Jie Zhou, and Qun Liu. 2017. Deep neural machine translation with linear associative unit. In Proceedings of ACL. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Z. Yang, W. Chen, F. Wang, and B. Xu. 2018. Improving Neural Machine Translation with Conditional Sequence Generative Adversarial Nets. In Proceedings of NAACL. Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Proceedings of EMNLP. Xiangwen Zhang, Jinsong Su, Yue Qin, Yang Liu, Rongrong Ji, and Hongji Wang. 2018. Asynchronous Bidirectional Decoding for Neural Machine Translation. In Proeedings of AAAI. Hao Zheng, Yong Cheng, and Yang Liu. 2017. Maximum expected likelihood estimation for zeroresource neural machine translation. In Proceedings of IJCAI. Stephan Zheng, Yang Song, Thomas Leung, and Ian Goodfellow. 2016. Improving the robustness of deep neural networks via stability training. In Proceedings of CVPR.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1767–1776 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1767 Attention Focusing for Neural Machine Translation by Bridging Source and Target Embeddings Shaohui Kuang1 Junhui Li1 Ant´onio Branco2 Weihua Luo3 Deyi Xiong1∗ 1School of Computer Science and Technology, Soochow University, Suzhou, China [email protected], {lijunhui, dyxiong}@suda.edu.cn 2University of Lisbon, NLX-Natural Language and Speech Group, Department of Informatics Faculdade de Ciˆencias, Campo Grande, 1749-016 Lisboa, Portuga [email protected] 3Alibaba Group, Hangzhou, China [email protected] Abstract In neural machine translation, a source sequence of words is encoded into a vector from which a target sequence is generated in the decoding phase. Differently from statistical machine translation, the associations between source words and their possible target counterparts are not explicitly stored. Source and target words are at the two ends of a long information processing procedure, mediated by hidden states at both the source encoding and the target decoding phases. This makes it possible that a source word is incorrectly translated into a target word that is not any of its admissible equivalent counterparts in the target language. In this paper, we seek to somewhat shorten the distance between source and target words in that procedure, and thus strengthen their association, by means of a method we term bridging source and target word embeddings. We experiment with three strategies: (1) a source-side bridging model, where source word embeddings are moved one step closer to the output target sequence; (2) a target-side bridging model, which explores the more relevant source word embeddings for the prediction of the target sequence; and (3) a direct bridging model, which directly connects source and target word embeddings seeking to minimize errors in the translation of ones by the others. Experiments and analysis presented in this paper demonstrate that the proposed bridging models are able to significantly ∗Corresponding author ݏ௧ ݕ௧ ܿ௧ ݄ଵ … ݄௝ ்݄ … ݔଵ … ݔ௝ ݔ் … source target VKRUWHQHGGLVWDQFH Figure 1: Schematic representation of seq2seq NMT, where x1, . . . , xT and h1, . . . , hT represent source-side word embeddings and hidden states respectively, ct represents a source-side context vector, st a target-side decoder RNN hidden state, and yt a predicted word. Seeking to shorten the distance between source and target word embeddings, in what we term bridging, is the key insight for the advances presented in this paper. improve quality of both sentence translation, in general, and alignment and translation of individual source words with target words, in particular. 1 Introduction Neural machine translation (NMT) is an endto-end approach to machine translation that has achieved competitive results vis-a-vis statistical machine translation (SMT) on various language pairs (Bahdanau et al., 2015; Cho et al., 2014; Sutskever et al., 2014; Luong and Manning, 2015). In NMT, the sequence-to-sequence (seq2seq) model learns word embeddings for both source and target words synchronously. However, as illustrated in Figure 1, source and target word embeddings are at the two ends of a long information processing procedure. The individual associations between them will gradually become loose due to the separation of source-side hidden states (represented by h1, . . . , hT in Fig. 
1) and a target1768 ৲࣐↻⯮Ӫ ߜྕՊ(winter olympics) Ⲵ⌅ഭ䘀ࣘઈ䖭䂹(honors) 䘄എᐤ哾eos the french athletes , who have participated in the disabled , have returned to paris . eos french athletes participating in special winter olympics returned to paris with honors Source Reference Baseline ᯟ䟼ޠ঑Ӕᡈৼᯩ਼᜿ᵜ(this) ᴸ(month) лᰜ(late) ൘ᰕ޵⬖䈸ࡔeos sir lanka UNK to hold talks in geneva eos Reference two warring sides in sri lanka agreed to hold talks in geneva late this month Baseline Source (a) (b) Figure 2: Examples of NMT output with incorrect alignments of source and target words that cannot be the translation of each other in any possible context. side hidden state (represented by st in Fig. 1). As a result, in the absence of a more tight interaction between source and target word pairs, the seq2seq model in NMT produces tentative translations that contain incorrect alignments of source words with target counterparts that are non-admissible equivalents in any possible translation context. Differently from SMT, in NMT an attention model is adopted to help align output with input words. The attention model is based on the estimation of a probability distribution over all input words for each target word. Word alignments with attention weights can then be easily deduced from such distributions and support the translation. Nevertheless, sometimes one finds translations by NMT that contain surprisingly wrong word alignments, that would unlikely occur in SMT. For instance, Figure 2 shows two Chineseto-English translation examples by NMT. In the top example, the NMT seq2seq model incorrectly aligns the target side end of sentence mark eos to 下旬/late with a high attention weight (0.80 in this example) due to the failure of appropriately capturing the similarity, or the lack of it, between the source word 下旬/late and the target eos. It is also worth noting that, as 本/this and 月/month end up not being translated in this example, inappropriate alignment of target side eos is likely the responsible factor for under translation in NMT as the decoding process ends once a target eos is generated. Statistics on our development data show that as much as 50% of target side eos do not properly align to source side eos. The second example in Figure 2 shows another case where source words are translated into target items that are not their possible translations in that or in any other context. In particular, 冬奥 会/winter olympics is incorrectly translated into a target comma “,” and 载誉/honors into have. In this paper, to address the problem illustrated above, we seek to shorten the distance within the seq2seq NMT information processing procedure between source and target word embeddings. This is a method we term as bridging, and can be conceived as strengthening the focus of the attention mechanism into more translation-plausible source and target word alignments. In doing so, we hope that the seq2seq model is able to learn more appropriate word alignments between source and target words. We propose three simple yet effective strategies to bridge between word embeddings. The inspiring insight in all these three models is to move source word embeddings closer to target word embeddings along the seq2seq NMT information processing procedure. We categorize these strategies in terms of how close the source and target word embeddings are along that procedure, schematically depicted in Fig. 1. 
(1) Source-side bridging model: Our first strategy for bridging, which we call source-side bridging, is to move source word embeddings just one step closer to the target end. Each source word embedding is concatenated with the respective source hidden state at the same position, so that the attention model can more directly exploit source word embeddings to produce word alignments.

(2) Target-side bridging model: In a second, bolder strategy, we seek to incorporate relevant source word embeddings more closely into the prediction of the next target hidden state. In particular, the most appropriate source words are selected according to their attention weights and are made to interact more closely with target hidden states.

(3) Direct bridging model: The third model consists of directly bridging between source and target word embeddings. The training objective is optimized towards minimizing the distance between target word embeddings and their most relevant source word embeddings, selected according to the attention model.

Experiments on Chinese-English translation with extensive analysis demonstrate that directly bridging word embeddings at the two ends can produce better word alignments and thus achieve better translation.

Figure 3: Architecture of the source-side bridging model.

Figure 4: Architecture of the target-side bridging model.

Figure 5: Architecture of the direct bridging model.

2 Bridging Models

As suggested by Figure 1, there may exist different ways to bridge between x and yt. We concentrate on the following three bridging models.

2.1 Source-side Bridging Model

Figure 3 illustrates the source-side bridging model. The encoder reads a word sequence equipped with word embeddings and generates a word annotation vector for each position. Then we simply concatenate the word annotation vector with its corresponding word embedding as the final annotation vector. For example, the final annotation vector hj for the word xj in Figure 3 is [−→hj; ←−hj; xj], where the first two sub-items [−→hj; ←−hj] are the source-side forward and backward hidden states and xj is the corresponding word embedding. In this way, the word embeddings not only make a stronger contribution to the computation of attention weights, but are also part of the annotation vectors that form the weighted source context vector, and consequently have a stronger impact on the prediction of target words.

2.2 Target-side Bridging Model

While the above source-side bridging method uses the embeddings of all source words for every target word, the target-side method explores only the more relevant source word embeddings for bridging, rather than all of them. This is partially inspired by the word alignments from SMT, where words from the two ends are paired as possible translational equivalents of each other, and those pairs are explicitly recorded and enter into the system's inner workings. In particular, for a given target word, we explicitly determine the most likely source word aligned to it and use the word embedding of this source word to support the prediction of the target hidden state of the next target word to be generated.
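To make the two operations just described concrete, here is a toy numpy sketch of the source-side concatenation and of the target-side selection of the highest-attention source embedding. The dimensions and states are arbitrary stand-ins; in the paper the annotation vectors come from a bidirectional RNN encoder and the attention weights from the attention model, so this illustrates only the data flow.

```python
import numpy as np

def source_side_bridge(fwd_states, bwd_states, src_embeddings):
    """Source-side bridging (Section 2.1): the annotation vector for position j
    becomes [h_fwd_j; h_bwd_j; x_j], so word embeddings feed the attention
    weights and the weighted source context vector directly."""
    return np.concatenate([fwd_states, bwd_states, src_embeddings], axis=-1)

def target_side_bridge(attention_weights, src_embeddings):
    """Target-side bridging (Section 2.2): pick the source word with the
    highest attention weight at the current target step and return its
    embedding x_{t*}, which is fed into the decoder state update."""
    t_star = int(np.argmax(attention_weights))
    return src_embeddings[t_star]

# Toy usage: a 4-word source sentence, 3-dim hidden states, 2-dim embeddings.
T, d_h, d_e = 4, 3, 2
rng = np.random.default_rng(0)
h_fwd, h_bwd = rng.normal(size=(T, d_h)), rng.normal(size=(T, d_h))
x_emb = rng.normal(size=(T, d_e))
annotations = source_side_bridge(h_fwd, h_bwd, x_emb)   # shape (4, 8)
alpha_t = np.array([0.1, 0.6, 0.2, 0.1])                # attention weights at step t
x_t_star = target_side_bridge(alpha_t, x_emb)
print(annotations.shape, x_t_star)
```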
Figure 4 schematically illustrates the target-side bridging method, where the input for computing the hidden state st of the decoder is augmented by xt∗, as follows: st = f(st−1, yt−1, ct, xt∗) (1) where xt∗is the word embedding of the selected source word which has the highest attention weight: t∗= arg maxj(αtj) (2) where αtj is the attention weight of each hidden state hj computed by the attention model 2.3 Direct Bridging Model Further to the above two bridging methods, which use source word embeddings to predict target words, we seek to bridge the word embeddings of the two ends in a more direct way. This is done by resorting to an auxiliary objective function to narrow the discrepancy between word embeddings of the two sides. Figure 5 is a schematic representation of our direct bridging method, with an auxiliary objective function. More specifically, the goal is to let the learned word embeddings on the two ends be transformable, i.e. if a target word ei aligns with a source word fj, a transformation matrix W is learned with the hope that the discrepancy of Wxi and yj tends to be zero. Accordingly, we update 1770 the objective function of training for a single sentence with its following extended formulation: L(θ) = − Ty X t=1 (log p(yt|y<t, x) −∥Wxt∗−yt∥2) (3) where log p(yt|y<t, x) is the original objective function of the NMT model, and the term ∥Wxt∗−yt∥2 measures and penalizes the difference between target word yt and its aligned source word xt∗, i.e. the one with the highest attention weight, as computed in Equation 2. Similar to Mi et al. (2016), we view the two parts of the loss in Equation 3 as equally important. At this juncture, it is worth noting the following: • Our direct bridging model is an extension of the source-side bridging model, where the source word embeddings are part of the final annotation vector of the encoder. We have also tried to place the auxiliary object function directly on the NMT baseline model. However, our empirical study showed that the combined objective consistently worsens the translation quality. We blame this on that the learned word embeddings on two sides by the baseline model are too heterogeneous to be constrained. • Rather than using a concrete source word embedding xt∗in Equation 3, we could also use a weighted sum of source word embeddings, i.e. P j αtjhj. However, our preliminary experiments showed that the performance gap between these two methods is very small. Therefore, we use xt∗to calculate the new training objective as shown in Equation 3 in all experiments. 3 Experiments As we have presented above three different methods to bridge between source and target word embeddings, in the present section we report on a series of experiments on Chinese to English translation that are undertaken to assess the effectiveness of those bridging methods. 3.1 Experimental Settings We resorted to Chinese-English bilingual corpora that contain 1.25M sentence pairs extracted from LDC corpora, with 27.9M Chinese words and 34.5M English words respectively.1 We chose the NIST06 dataset as our development set, and the NIST02, NIST03, NIST04, NIST08 datasets as our test sets. We used the case-insensitive 4-gram NIST BLEU score as our evaluation metric (Papineni et al., 2002) and the script ‘mteval-v11b.pl’ to compute BLEU scores. We also report TER scores on our dataset (Snover et al., 2006). 
For the efficient training of the neural networks, we limited the source (Chinese) and target (English) vocabularies to the most frequent 30k words, covering approximately 97.7% and 99.3% of the two corpora respectively. All the out-ofvocabulary words were mapped to the special token UNK. The dimension of word embedding was 620 and the size of the hidden layer was 1000. All other settings were the same as in Bahdanau et al. (2015). The maximum length of sentences that we used to train the NMT model in our experiments was set to 50, for both the Chinese and English sides. Additionally, during decoding, we used the beam-search algorithm and set the beam size to 10. The model parameters were selected according to the maximum BLEU points on the development set. We compared our proposed models against the following two systems: • cdec (Dyer et al., 2010): this is an open source hierarchical phrase-based SMT system (Chiang, 2007) with default configuration and a 4-gram language model trained on the target side of the training data. • RNNSearch*: this is an attention-based NMT system, taken from the dl4mt tutorial with slight changes. It improves the attention model by feeding the lastly generated word. For the activation function f of an RNN, we use the gated recurrent unit (GRU) (Chung et al., 2014). Dropout was applied only on the output layer and the dropout (Hinton et al., 2012) rate was set to 0.5. We used the stochastic gradient descent algorithm with mini-batch and Adadelta (Zeiler, 2012) to train the NMT models. The minibatch was set to 80 sentences and decay rates 1 The corpora include LDC2002E18, LDC2003E07, LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06. 1771 Model NIST06 NIST02 NIST03 NIST04 NIST08 Avg BLEU cdec (SMT) 34.00 35.81 34.70 37.15 25.28 33.23 RNNSearch* 35.92 37.88 36.21 38.83 26.30 34.81 Source bridging 36.79‡ 38.71‡ 37.24‡ 40.28‡ 27.40‡ 35.91 Target bridging 36.69 39.04‡ 37.63‡ 40.41‡ 27.98‡ 36.27 Direct bridging 36.97‡ 39.77‡ 38.02‡ 40.83‡ 27.85‡ 36.62 TER cdec (SMT) 58.29 59.65 59.28 58.12 61.54 59.64 RNNSearch* 59.56 57.79 59.25 57.88 64.22 59.78 Source bridging 58.13 56.25 57.33 56.32 62.13 58.01 Target bridging 58.01 56.27 57.76 56.33 62.12 58.12 Direct bridging 57.20 56.68 57.29 55.62 62.49 58.02 Table 1: BLEU and TER scores on the NIST Chinese-English translation tasks. The BLEU scores are case-insensitive. Avg means the average scores on all test sets. “‡”: statistically better than RNNSearch* (p < 0.01). Higher BLEU (or lower TER) scores indicate better translation quality. ρ and ε of Adadelta were set to 0.95 and 10−6. For our NMT system with the direct bridging model, we use a simple pre-training strategy to train our model. We first train a regular attentionbased NMT model, then use this trained model to initialize the parameters of the NMT system equipped with the direct bridging model and randomly initialize the additional parameters of the direct bridging model in this NMT system. The reason of using pre-training strategy is that the embedding loss requires well-trained word alignment as a starting point. 3.2 Experimental Results Table 1 displays the translation performance measured in terms of BLEU and TER scores. Clearly, every one of the three NMT models we proposed, with some bridging method, improve the translation accuracy over all test sets in comparison to the SMT (cdec) and NMT (RNNSearch*) baseline systems. Parameters The three proposed models introduce new parameters in different ways. 
The source-side bridging model augments the source hidden states from a dimension of 2,000 to 2,620, requiring 3.7M additional parameters to accommodate the appended embeddings. The target-side bridging model introduces 1.8M additional parameters for incorporating x_{t^*} into the computation of the target-side state, as in Equation 1. Based on the source-side bridging model, the direct bridging model requires an extra 0.4M parameters (the 620 × 620 transformation matrix W in Equation 3), resulting in 4.1M additional parameters over the baseline. Given that the baseline model has 74.8M parameters, the number of extra parameters in our proposed models is comparatively small.

Comparison with the baseline systems. The results in Table 1 indicate that all NMT systems outperform the SMT system on both evaluation metrics considered, BLEU and TER. This is consistent with other studies on Chinese-to-English machine translation (Mi et al., 2016; Tu et al., 2016; Li et al., 2017). Additionally, all three NMT models with the bridging mechanisms we proposed outperform the baseline NMT model RNNSearch*. With respect to BLEU scores, we observe a consistent trend that the target-side bridging model works better than the source-side bridging model, while the direct bridging model achieves the best accuracy over all test sets, with the only exception of NIST08. Averaged over all test sets, the direct bridging model outperforms the baseline RNNSearch* by 1.81 BLEU points and outperforms the other two bridging-enhanced NMT models by 0.4-0.6 BLEU points. Although none of the models is tuned on TER, our three models perform favorably, with a similar average improvement of about 1.70 TER points below the baseline model.

4 Analysis

As the proposed direct bridging system achieves the best performance, we take a closer look at it and at the RNNSearch* baseline system to gain further insight into how bridging may help in translation. Our approach presents superior results along all the dimensions assessed.

4.1 Analysis of Word Alignment

Since our improved model strengthens the focus of attention between pairs of translation equivalents by explicitly bridging source and target word embeddings, we expect to observe improved word alignment quality. The quality of the word alignment is examined from the following three aspects.

Better eos translation. As a special symbol marking the end of a sentence, the target-side eos has a critical impact on controlling the length of the generated translation. A target eos is a correct translation when it is aligned with the source eos.
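As an illustration of how this percentage could be computed, the sketch below checks, for each sentence, whether the target eos attends most strongly to the source eos. It assumes that forced decoding exposes the attention matrices; all names are hypothetical and this is not the authors' evaluation code.

```python
# Minimal sketch of the eos check behind Table 2: the target eos is counted as
# "translated from" the source eos if it attends to it most strongly.
import numpy as np

def eos_aligned_ratio(attention_matrices):
    """attention_matrices: list of (Ty, Tx) arrays, one per sentence,
    with eos as the last token on both the target and the source side."""
    hits = 0
    for alpha in attention_matrices:
        if int(np.argmax(alpha[-1])) == alpha.shape[1] - 1:
            hits += 1
    return 100.0 * hits / len(attention_matrices)
```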
Table 2 displays the percentage of target-side eos tokens that are translations of the source-side eos.

System           Percentage (%)
RNNSearch*       49.82
Direct bridging  81.30

Table 2: Percentage of target-side eos translated from the source-side eos on the development set.

It indicates that our model, improved with bridging, achieves substantially better translation of the source eos.

Better word translation. To gain further insight into the quality of word translation, we group generated words by their part-of-speech (POS) tags and examine the POS of their aligned source words.2 Table 3 is a confusion matrix for translations by POS. For example, under RNNSearch*, 64.95% of target verbs originate from verbs on the source side. This is enhanced to 66.29% in our direct bridging model. From the data in that table, one observes that in general more target words align to source words with the same POS tags in our improved system than in the baseline system.

2 We used the Stanford POS tagger (Toutanova et al., 2003) to obtain POS tags for the words in the source sentences and their translations.

System           Target POS   Two most frequent source POS (%)
RNNSearch*       V            64.95   12.09
                 N            7.31    39.24
                 CD           33.37   53.40
                 JJ           26.79   14.67
Direct bridging  V            66.29   10.94
                 N            7.19    39.71
                 CD           32.25   56.29
                 JJ           26.12   15.22

Table 3: Confusion matrix for translation by POS, in percentage. To cope with fine-grained differences among verbs (e.g., VV, VC and VE in Chinese, and VB, VBD, VBP, etc. in English), we merge all verbs into V. Similarly, we merge all nouns into N. CD stands for cardinal numbers, JJ for adjectives or modifiers, AD for adverbs. These POS tags exist in both Chinese and English. For the sake of simplicity, for each target POS tag, we present only the two source POS tags that are most frequently aligned with it.

Better word alignment. Next we report on the quality of word alignment using a manually aligned dataset from Liu and Sun (2015), which contains 900 manually aligned Chinese-English sentence pairs. We forced the decoder to output the reference translations in order to obtain automatic alignments between input sentences and their reference translations as yielded by the translation systems. To evaluate alignment performance, we measured the alignment error rate (AER) (Och and Ney, 2003) and the soft AER (SAER) (Tu et al., 2016), which are reported in Table 4.

System           SAER    AER
RNNSearch*       62.68   47.61
Direct bridging  59.72   44.71

Table 4: Alignment error rate (AER) and soft AER (SAER). A lower score indicates better alignment quality.

The data in Table 4 indicate that, as expected, bridging improves the alignment quality as a consequence of its favoring a stronger relationship between the source and target word embeddings of translational equivalents.

4.2 Analysis of Long Sentence Translation

Following Bahdanau et al. (2015), we partition sentences by their length and compute the respective BLEU scores, which are presented in Figure 6.

[Figure 6: BLEU scores for the translation of sentences with different lengths, plotted over source-length buckets (0,10] to (50,100] for cdec, RNNSearch* and the direct bridging model.]

These results indicate that our improved system outperforms RNNSearch* for all sentence lengths. They also reveal that performance drops substantially when the length of the input sentence increases. This trend is consistent with the findings in (Cho et al., 2014; Tu et al., 2016; Li et al., 2017). One also observes that the NMT systems perform very badly on sentences of length over 50 when compared to the performance of the baseline SMT system (cdec). We think that the degradation of NMT performance on long sentences is due to the following reasons: (1) during training, the maximum source sentence length is set to 50, so the learned models are not prepared to cope well with sentences over this maximum length limit; (2) for long input sentences, NMT systems tend to stop early in the generation of the translation.
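A small sketch of the length-bucket evaluation used here is given below, under the assumption that sentence-level hypotheses, references and a corpus-level scoring function (e.g. BLEU) are available. The bucket boundaries follow Figure 6; everything else, including the function names, is illustrative rather than taken from the original setup.

```python
# Group translations by source length and score each bucket separately,
# as in the length analysis behind Figure 6.
BUCKETS = [(0, 10), (10, 20), (20, 30), (30, 40), (40, 50), (50, 100)]

def score_by_length(sources, hypotheses, references, corpus_score):
    """corpus_score: placeholder callable (hyps, refs) -> float, e.g. corpus BLEU."""
    results = {}
    for lo, hi in BUCKETS:
        idx = [i for i, src in enumerate(sources) if lo < len(src.split()) <= hi]
        if not idx:
            continue
        hyps = [hypotheses[i] for i in idx]
        refs = [references[i] for i in idx]
        results[f"({lo},{hi}]"] = corpus_score(hyps, refs)
    return results
```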
4.3 Analysis of Over and Under Translation

To assess the expectation that improved translation of eos improves the appropriate termination of the translations generated by the decoder, we analyze the performance of our best model with respect to over translation and under translation, which are both notoriously hard problems for NMT.

To estimate the over translation generated by an NMT system, we follow Li et al. (2017) and report the ratio of over translation (ROT),3 which is computed as the total number of times words in a word set (e.g., all nouns in the source part of the test set) are over-translated, divided by the number of words in the word set. Table 5 displays ROTs of words grouped by some prominent POS tags.

3 Please refer to (Li et al., 2017) for more details of ROT.

System           POS   ROT (%)
RNNSearch*       NN    8.63
                 NR    12.92
                 DT    4.01
                 CD    7.05
                 ALL   5.28
Direct bridging  NN    7.56
                 NR    10.88
                 DT    2.37
                 CD    4.79
                 ALL   4.49

Table 5: Ratios of over translation (ROT) on the test sets. NN stands for nouns excluding proper nouns and temporal nouns, NR for proper nouns, DT for determiners, and CD for cardinal numbers.

These data indicate that both systems have higher over translation for proper nouns (NR) and other nouns (NN) than for other POS tags, which is consistent with the results in (Li et al., 2017). The likely reason is that these two POS tags usually contain more unknown words, which are words that tend to be over-translated. Importantly, these data also show that our direct bridging model alleviates the over translation issue by 15%, as ROT drops from 5.28% to 4.49%.

While it is hard to obtain an accurate estimation of under translation, we simply report the 1-gram BLEU score, which measures how many words in the translation output appear in the reference translation, roughly indicating the proportion of source words that are translated. Table 6 presents the average 1-gram BLEU scores on our test datasets. These data indicate that our improved system has a higher score than RNNSearch*, suggesting that it is less prone to under translation. It is also worth noting that the SMT baseline (cdec) presents the highest 1-gram BLEU score, as expected, given that under translation is known to be less of an issue for SMT.

System           1-gram BLEU
cdec (SMT)       77.09
RNNSearch*       72.70
Direct bridging  74.22

Table 6: 1-gram BLEU scores averaged over the test sets, supporting the assessment of under translation. A larger score indicates less under translation.

4.4 Analysis of Learned Word Embeddings

In the direct bridging model, we introduced a transformation matrix to convert a source-side word embedding into its counterpart on the target side. We now seek to assess the contribution of this transformation. Given a source word x_i, we obtain its closest target word y^* via:

y^* = \arg\min_y(\|W x_i - y\|)   (4)

Table 7 lists the 10 most frequent source words and their corresponding closest target words. For the sake of comparison, it also displays their most likely translations from the lexical translation table in SMT.

Src    Transformation  Lexical Table
是     is              is
和     and             and
及     and             and
将     will            will
会     will            will
国     countries       countries
发展   development     development
经济   economic        economic
问题   question        issue
人民   people          people

Table 7: The 10 most frequent source words and their closest translations obtained, respectively, by embedding transformation in NMT and from the lexical translation table in SMT.

These results suggest that the closest target words obtained via the transformation matrix of our direct bridging method are very consistent with those obtained from the SMT lexical table, containing only admissible translation pairs. These data thus suggest that our improved model has a good capability of capturing the translation equivalence between source and target word embeddings.
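The lookup of Equation 4 can be sketched as follows; the embedding matrices, the vocabulary list and the function name are assumptions made for illustration, not the authors' code.

```python
# Minimal sketch of Equation 4: map a source embedding through W and return the
# target word whose embedding is closest in Euclidean distance.
import numpy as np

def closest_target_word(x_i, W, trg_embeddings, trg_vocab):
    """x_i: (d,) source word embedding; W: (d, d) learned transformation;
    trg_embeddings: (V, d) target embedding matrix; trg_vocab: list of V words."""
    distances = np.linalg.norm(trg_embeddings - W @ x_i, axis=1)
    return trg_vocab[int(np.argmin(distances))]
```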
5 Related Work

Since the pioneering work of Bahdanau et al. (2015) on jointly learning alignment and translation in NMT, many effective approaches have been proposed to further improve the alignment quality. The attention model plays a crucial role in alignment quality, and its enhancement has therefore continuously attracted further efforts. To obtain better attention focuses, Luong et al. (2015) propose global and local attention models; and Cohn et al. (2016) extend the attentional model to include structural biases from word-based alignment models, including positional bias, Markov conditioning, fertility and agreement over translation directions. In contrast, we did not delve into the attention model or seek to redesign it in our new bridging proposal. Yet we achieve enhanced alignment quality by inducing the NMT model to capture more favorable pairs of words that are translation equivalents of each other, under the effect of the bridging mechanism.

Recently there have also been studies on leveraging word alignments from SMT models. Mi et al. (2016) and Liu et al. (2016) use pre-obtained word alignments to guide the NMT attention model in the learning of favorable word pairs. Arthur et al. (2016) leverage a pre-obtained word dictionary to constrain the prediction of target words. Although these approaches have a somewhat similar motivation of using pairs of translation equivalents to benefit NMT translation, in our new bridging approach we do not use extra resources in the NMT model, but let the model itself learn the similarity of word pairs from the training data.4

4 Although the pre-obtained word alignments or word dictionaries can be learned from the MT training data in an unsupervised fashion, they are still extra knowledge with respect to the NMT models.

Besides, there also exist studies on the learning of cross-lingual embeddings for machine translation. Mikolov et al. (2013) propose to first learn distributed representations of words from large monolingual data, and then learn a linear mapping between the vector spaces of the two languages. Gehring et al. (2017) introduce source word embeddings to predict target words. These approaches are somewhat similar to our source-side bridging model. However, inspired by the insight of shortening the distance between source and target embeddings in the seq2seq processing chain, in the present paper we propose further strategies to bridge source and target word embeddings, and with better results.

6 Conclusion

We have presented three models to bridge source and target word embeddings for NMT. The three models seek to shorten the distance between source and target word embeddings along the extensive information flow in the encoder-decoder neural network. Experiments on Chinese-to-English translation show that the proposed models can significantly improve the translation quality. Further in-depth analysis demonstrates that our models are able (1) to learn better word alignments than the baseline NMT, (2) to alleviate the notorious problems of over and under translation in NMT, and (3) to learn direct mappings between source and target words. In future work, we will explore further strategies to bridge the source and target side for sequence-to-sequence and tree-based NMT. Additionally, we also intend to apply these methods to other sequence-to-sequence tasks, including natural language conversation.

Acknowledgment

The present research was partly supported by the National Natural Science Foundation of China (Grant No.
61622209), the CNPTDeepMT grant of the Portugal-China Bilateral Exchange Program (ANI/3279/2016) and the Infrastructure for the Science and Technology of the Portuguese Language (PORTULAN / CLARIN). References Philip Arthur, Graham Neubig, and Satoshi Nakamura. 2016. Incorporating discrete translation lexicons into neural machine translation. In Proceddings of EMNLP 2016. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR 2015. David Chiang. 2007. Hierarchical phrase-based translation. computational linguistics, 33(2):201–228. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of EMNLP 2014, pages 1724–1734. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In Proceedings of Deep Learning and Representation Learning Workshop in NIPS 2014. Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment biases into an attentional neural translation model. In Proceedings of NAACL 2016, pages 876–885. Chris Dyer, Jonathan Weese, Hendra Setiawan, Adam Lopez, Ferhan Ture, Vladimir Eidelman, Juri Ganitkevitch, Phil Blunsom, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In Proceedings of the ACL 2010 System Demonstrations, pages 7–12. Association for Computational Linguistics. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing coadaptation of feature detectors. arXiv preprint arXiv:1207.0580. Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, and Guodong Zhou. 2017. Modeling source syntax for neural machine translation. In Proceedings of ACL 2017. Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. In In Proceddings of COLING 2016. Yang Liu and Maosong Sun. 2015. Contrastive unsupervised word alignment with non-local features. In Proceedings of AAAI 2015, pages 2295–2301. Minh-Thang Luong and Christopher D Manning. 2015. Stanford neural machine translation systems for spoken language domains. In Proceedings of the International Workshop on Spoken Language Translation. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of EMNLP 2015. Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Supervised attentions for neural machine translation. In Proceddings of EMNLP 2016. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational linguistics, 29(1):19–51. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL 2002, pages 311–318. 
Association for Computational Linguistics. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of AMTA 2006. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS 2014, pages 3104–3112. Kristina Toutanova, Dan Klein, Christopher Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of HLT-NAACL 2003, pages 252–259. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of ACL 2016, pages 76–85. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1777–1788 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1777 Reliability and Learnability of Human Bandit Feedback for Sequence-to-Sequence Reinforcement Learning Julia Kreutzer1 and Joshua Uyheng3∗and Stefan Riezler1,2 1Computational Linguistics & 2IWR, Heidelberg University, Germany {kreutzer,riezler}@cl.uni-heidelberg.de 3Departments of Psychology & Mathematics, Ateneo de Manila University, Philippines [email protected] Abstract We present a study on reinforcement learning (RL) from human bandit feedback for sequence-to-sequence learning, exemplified by the task of bandit neural machine translation (NMT). We investigate the reliability of human bandit feedback, and analyze the influence of reliability on the learnability of a reward estimator, and the effect of the quality of reward estimates on the overall RL task. Our analysis of cardinal (5-point ratings) and ordinal (pairwise preferences) feedback shows that their intra- and inter-annotator αagreement is comparable. Best reliability is obtained for standardized cardinal feedback, and cardinal feedback is also easiest to learn and generalize from. Finally, improvements of over 1 BLEU can be obtained by integrating a regressionbased reward estimator trained on cardinal feedback for 800 translations into RL for NMT. This shows that RL is possible even from small amounts of fairly reliable human feedback, pointing to a great potential for applications at larger scale. 1 Introduction Recent work has received high attention by successfully scaling reinforcement learning (RL) to games with large state-action spaces, achieving human-level (Mnih et al., 2015) or even superhuman performance (Silver et al., 2016). This success and the ability of RL to circumvent the data annotation bottleneck in supervised learning has led to renewed interest in RL in sequenceto-sequence learning problems with exponential ∗The work for this paper was done while the second author was an intern in Heidelberg. output spaces. A typical approach is to combine REINFORCE (Williams, 1992) with policies based on deep sequence-to-sequence learning (Bahdanau et al., 2015), for example, in machine translation (Bahdanau et al., 2017), semantic parsing (Liang et al., 2017), or summarization (Paulus et al., 2017). These RL approaches focus on improving performance in automatic evaluation by simulating reward signals by evaluation metrics such as BLEU, F1-score, or ROUGE, computed against gold standards. Despite coming from different fields of application, RL in games and sequence-to-sequence learning share firstly the existence of a clearly specified reward function, e.g., defined by winning or losing a game, or by computing an automatic sequence-level evaluation metric. Secondly, both RL applications rely on a sufficient exploration of the action space, e.g., by evaluating multiple game moves for the same game state, or various sequence predictions for the same input. The goal of this paper is to advance the stateof-the-art of sequence-to-sequence RL, exemplified by bandit learning for neural machine translation (NMT). Our aim is to show that successful learning from simulated bandit feedback (Sokolov et al., 2016b; Kreutzer et al., 2017; Nguyen et al., 2017; Lawrence et al., 2017) does in fact carry over to learning from actual human bandit feedback. 
The promise of bandit NMT is that human feedback on the quality of translations is easier to obtain in large amounts than human references, thus compensating the weaker nature of the signals by their quantity. However, the human factor entails several differences to the above sketched simulation scenarios of RL. Firstly, human rewards are not well-defined functions, but complex and inconsistent signals. For example, in general every input sentence has a multitude of correct translations, each of which humans may judge differ1778 ently, depending on many contextual and personal factors. Secondly, exploration of the space of possible translations is restricted in real-world scenarios where a user judges one displayed translation, but cannot be expected to rate an alternative translation, let alone large amounts of alternatives. In this paper we will show that despite the fact that human feedback is ambiguous and partial in nature, a catalyst for successful learning from human reinforcements is the reliability of the feedback signals. The first deployment of bandit NMT in an e-commerce translation scenario conjectured lacking reliability of user judgments as the reason for disappointing results when learning from 148k user-generated 5-star ratings for around 70k product title translations (Kreutzer et al., 2018). We thus raise the question of how human feedback can be gathered in the most reliable way, and what effect reliability will have in downstream tasks. In order to answer these questions, we measure intra- and inter-annotator agreement for two feedback tasks for bandit NMT, using cardinal feedback (on a 5-point scale) and ordinal feedback (by pairwise preferences) for 800 translations, conducted by 16 and 14 human raters, respectively. Perhaps surprisingly, while relative feedback is often considered easier for humans to provide (Thurstone, 1927), our investigation shows that α-reliability (Krippendorff, 2013) for intra- and inter-rater agreement is similar for both tasks, with highest inter-rater reliability for standardized 5-point ratings. In a next step, we address the issue of machine learnability of human rewards. We use deep learning models to train reward estimators by regression against cardinal feedback, and by fitting a Bradley-Terry model (Bradley and Terry, 1952) to ordinal feedback. Learnability is understood by a slight misuse of the machine learning notion of learnability (Shalev-Shwartz et al., 2010) as the question how well reward estimates can approximate human rewards. Our experiments reveal that rank correlation of reward estimates with TER against human references is higher for regression models trained on standardized cardinal rewards than for Bradley-Terry models trained on pairwise preferences. This emphasizes the influence of the reliability of human feedback signals on the quality of reward estimates learned from them. Lastly, we investigate machine learnability of the overall NMT task, in the sense of Green et al. (2014) who posed the question of how well an MT system can be tuned on post-edits. We use an RL approach for tuning, where a crucial difference of our work to previous work on RL from human rewards (Knox and Stone, 2009; Christiano et al., 2017) is that our RL scenario is not interactive, but rewards are collected in an offline log. RL then can proceed either by off-policy learning using logged single-shot human rewards directly, or by using estimated rewards. 
An expected advantage of estimating rewards is to tackle a simpler problem first — learning a reward estimator instead of a full RL task for improving NMT — and then to deploy unlimited feedback from the reward estimator for off-policy RL. Our results show that significant improvements can be achieved by training NMT from both estimated and logged human rewards, with best results for integrating a regression-based reward estimator into RL. This completes the argumentation that high reliability influences quality of reward estimates, which in turn affects the quality of the overall NMT task. Since the size of our training data is tiny in machine translation proportions, this result points towards a great potential for larger-scaler applications of RL from human feedback. 2 Related Work Function approximation to learn a “critic” instead of using rewards directly has been embraced in the RL literature under the name of “actor-critic” methods (see Konda and Tsitsiklis (2000), Sutton et al. (2000), Kakade (2001), Schulman et al. (2015), Mnih et al. (2016), inter alia). In difference to our approach, actor-critic methods learn online while our approach estimates rewards in an offline fashion. Offline methods in RL, with and without function approximation, have been presented under the name of “off-policy” or “counterfactual” learning (see Precup et al. (2000), Precup et al. (2001), Bottou et al. (2013), Swaminathan and Joachims (2015a), Swaminathan and Joachims (2015b), Jiang and Li (2016), Thomas and Brunskill (2016), inter alia). Online actorcritic methods have been applied to sequenceto-sequence RL by Bahdanau et al. (2017) and Nguyen et al. (2017). An approach to off-policy RL under deterministic logging has been presented by Lawrence et al. (2017). However, all these approaches have been restricted to simulated rewards. 1779 RL from human feedback is a growing area. Knox and Stone (2009) and Christiano et al. (2017) learn a reward function from human feedback and use that function to train an RL system. The actor-critic framework has been adapted to interactive RL from human feedback by Pilarski et al. (2011) and MacGlashan et al. (2017). These approaches either update the reward function from human feedback intermittently or perform learning only in rounds where human feedback is provided. A framework that interpolates a human critique objective into RL has been presented by Judah et al. (2019). None of these works systematically investigates the reliability of the feedback and its impact of the down-stream task. Kreutzer et al. (2018) have presented the first application of off-policy RL for learning from noisy human feedback obtained for deterministic logs of e-commerce product title translations. While learning from explicit feedback in the form of 5-star ratings fails, Kreutzer et al. (2018) propose to leverage implicit feedback embedded in a search task instead. In simulation experiments on the same domain, the methods proposed by Lawrence et al. (2017) succeeded also for neural models, allowing to pinpoint the lack of reliability in the human feedback signal as the reason for the underwhelming results when learning from human 5-star ratings. The goal of showing the effect of highly reliable human bandit feedback in downstream RL tasks was one of the main motivations for our work. For the task of machine translation, estimating human feedback, i.e. quality ratings, is related to the task of sentence-level quality estimation (sQE). 
However, there are crucial differences between sQE and the reward estimation in our work: sQE usually has more training data, often from more than one machine translation model. Its gold labels are inferred from post-edits, i.e. corrections of the machine translation output, while we learn from weaker bandit feedback. Although this would in principle be possible, sQE predictions have not (yet) been used to directly reinforce predictions of MT systems, mostly because their primary purpose is to predict post-editing effort, i.e. to give guidance on how to further process a translation. State-of-the-art models for sQE such as (Martins et al., 2017) and (Kim et al., 2017) are unsuitable for direct use in this task since they rely on linguistic input features, stacked architectures, or post-edit or word-level supervision. Similar to approaches for generative adversarial NMT (Yu et al., 2017; Wu et al., 2017), we prefer a simpler convolutional architecture based on word embeddings for the human reward estimation.

Figure 1: Rating interface for 5-point ratings.
Figure 2: Rating interface for pairwise ratings.

3 Human MT Rating Task

3.1 Data

We translate a subset of the TED corpus with a general-domain and a domain-adapted NMT model (see §6.2 for NMT and data), post-process the translations (replacing special characters, restoring capitalization) and filter out identical out-of-domain and in-domain translations. In order to compose a homogeneous data set, we first select translations with references of length 20 to 40, then sort the translation pairs by difference in character n-gram F-score (chrF, β = 3) (Popović, 2015) and length, and pick the top 400 translation pairs with the highest difference in chrF but lowest difference in length. This yields translation pairs of similar length, but different quality.

3.2 Rating Task

The pairs were treated as 800 separate translations for a 5-point rating task. From the original 400 translation pairs, 100 pairs (or 200 individual translations) were randomly selected for repetition. This produced a total of 1,000 individual translations, with 600 occurring once, and 200 occurring twice. The translations were shuffled and separated into five sections of 200 translations, each with 120 translations from the unrepeated pool and 80 translations from the repeated pool, ensuring that a single translation does not occur more than once in each section.

Type                   Inter-rater α   Intra-rater mean α   Intra-rater stdev. α
5-point                0.2308          0.4014               0.1907
5-point norm.          0.2820          –                    –
5-point norm. part.    0.5059          0.5527               0.0470
5-point norm. trans.   0.3236          0.3845               0.1545
Pairwise               0.2385          0.5085               0.2096
Pairwise filt. part.   0.3912          0.7264               0.0533
Pairwise filt. trans.  0.3519          0.5718               0.2591

Table 1: Inter- and intra-reliability measured by Krippendorff's α for 5-point and pairwise ratings of 1,000 translations, of which 200 translations are repeated twice. The filtered variants are restricted to either a subset of participants (part.) or a subset of translations (trans.).

For the pairwise task, the same 100 pairs were repeated from the original 400 translation pairs. This produced a total of 500 translation pairs. The translations were also shuffled and separated into five sections of 100 translation pairs, each with 60 translation pairs from the unrepeated pool and 40 translation pairs from the repeated pool. None of the pairs were repeated within each section. We recruited 14 participants for the pairwise rating task and 16 for the 5-point rating task.
The participants were university students with fluent or native language skills in German and English. The rating interface is shown in Figures 1 and 2. Rating instructions are given in the supplementary material. Note that no reference translations were presented since the objective is to model a realistic scenario for bandit learning.1 4 Reliability of Human MT Ratings 4.1 Inter-rater and Intra-rater Reliability In the following, we report inter- and intra-rater reliability of the cardinal and ordinal feedback tasks described in §3 with respect to Krippendorff’s α 1The collection of ratings can be downloaded from http://www.cl.uni-heidelberg.de/ statnlpgroup/humanmt/. (Krippendorff, 2013) evaluated at interval and ordinal scale, respectively. As shown in Table 1, measures of inter-rater reliability show small differences between the 5point and pairwise task. The inter-rater reliability in the 5-point task (α = 0.2308) is roughly the same as that of the pairwise task (α = 0.2385). Normalization of ratings per participant (by standardization to Z-scores), however, shows a marked improvement of overall inter-rater reliability for the 5-point task (α = 0.2820). A one-way analysis of variance taken over inter-rater reliabilities between pairs of participants suggests statistically significant differences across tasks (F (2, 328) = 6.399, p < 0.01), however, a post hoc Tukey’s (Larsen and Marx, 2012) honest significance test attributes statistically significant differences solely between the 5-point tasks with and without normalization. These scores indicate that the overall agreement between human ratings is roughly the same, regardless of whether participants are being asked to provide cardinal or ordinal ratings. Improvement in inter-rater reliability via participant-level normalization suggests that participants may indeed have individual biases toward certain regions of the 5-point scale, which the normalization process corrects. In terms of intra-rater reliability, a better mean was observed among participants in the pairwise task (α = 0.5085) versus the 5-point task (α = 0.4014). This suggests that, on average, human raters provide more consistent ratings with themselves in comparing between two translations versus rating single translations in isolation. This may be attributed to the fact that seeing multiple translations provides raters with more cues with which to make consistent judgments. However, at the current sample size, a Welch twosample t-test (Larsen and Marx, 2012) between 5-point and pairwise intra-rater reliabilities shows no significant difference between the two tasks (t (26.92) = 1.4362, p = 0.1625). Thus, it remains difficult to infer whether one task is definitively superior to the other in eliciting more consistent responses. Intra-rater reliability is the same for the 5-point task with and without normalization, as participants are still compared against themselves. 1781 Figure 3: Improvements in inter-rater reliability using intra-rater consistency filter. Figure 4: Improvements in inter-rater reliability using item variance filter. 4.2 Rater and Item Variance The succeeding analysis is based on two assumptions: first, that human raters vary in that they do not provide equally good judgments of translation quality, and second, rating items vary in that some translations may be more difficult to judge than others. 
This allows to investigate the influence of rater variance and item variance on inter-rater reliability by an ablation analysis where low-quality judges and difficult translations are filtered out. Using intra-rater reliability as an index of how well human raters judge translation quality, Figure 3 shows a filtering process whereby human raters with α scores lower than a moving threshold are dropped from the analysis. As the reliability threshold is increased from 0 to 1, overall inter-rater reliability is measured. Figure 4 shows a similar filtering process implemented using variance in translation scores. Item variances are normalized on a scale from 0 to 1 and subtracted from 1 to produce an item variance threshold. As the threshold increases, overall inter-rater reliability is likewise measured as high-variance items are progressively dropped from the analysis. As the plots demonstrate, inter-rater reliability generally increases with consistency and variance filtering. For consistency filtering, Figure 3 shows how the inter-rater reliability of the 5-point task experiences greater increases than the pairwise task with lower filtering thresholds, especially in the normalized case. This may be attributed to the fact that more participants in the 5-point task had low intra-rater reliability. Pairwise tasks, on the other hand, require higher thresholds before large gains are observed in overall inter-rater reliability. This is because more participants in the pairwise task had relatively high intra-rater reliability. In the normalized 5-point task, selecting a threshold of 0.49 as a cutoff for intra-rater reliability retains 8 participants with an inter-rater reliability of 0.5059. For the pairwise task, a threshold of 0.66 leaves 5 participants with an inter-rater reliability of 0.3912. The opposite phenomenon is observed in the case of variance filtering. As seen in Figure 4, the overall inter-rater reliability of the pairwise task quickly overtakes that of the 5-point task, with and without normalization. This may be attributed to how, in the pairwise setup, more items can be a source of disagreement among human judges. Ambiguous cases, that will be discussed in §4.3, may result in higher item variance. This problem is not as pronounced in the 5-point task, where judges must simply judge individual translations. It may be surmised that this item variance accounts for why, on average, judges in the pairwise task demonstrate higher intra-rater reliability than those in the 5-point task, yet the overall inter-rater reliability of the pairwise task is lower. By selecting a variance threshold such that at least 70% of items are retained in the analysis, the improved inter-rater reliabilities were 0.3236 for the 5-point task and 0.3519 for the pairwise task. 4.3 Qualitative Analysis On completion of the rating task, we asked the participants for a subjective judgment of difficulty on a scale from 1 (very difficult) to 10 (very easy). On average, the pairwise rating task (mean 5.69) was perceived slightly easier than the 5-point rating task (mean 4.8). They also had to state which as1782 pects of the tasks they found difficult: The biggest challenge for 5-point ratings seemed to be the weighing of different error types and the rating of long sentences with very few, but essential errors. For pairwise ratings, difficulties lie in distinguishing between similar, or similarly bad translations. Both tasks showed difficulties with ungrammatical or incomprehensible sources. 
Comparing items with high and low agreement across raters allows conclusions about objective difficulty. We assume that high inter-rater agreement indicates an ease of judgment, while difficulties in judgment are manifested in low agreement. A list of examples is given in the supplementary material. For 5-point ratings, difficulties arise with ungrammatical sources and omissions, whereas obvious mistakes in the target, such as over-literal translations, make judgment easier. Preference judgments tend to be harder when both translations contain errors and are similar. When there is a tie, the pairwise rating framework does not allow raters to indicate whether both translations are of high or low quality. Since there is no normalization strategy for pairwise ratings, individual biases or rating schemes can hence have a larger negative impact on the inter-rater agreement.

5 Learnability of a Reward Estimator from MT Ratings

5.1 Learning a Reward Estimator

The number of ratings that can be obtained directly from human raters in a reasonable amount of time is tiny compared to the millions of sentences used for standard NMT training. By learning a reward estimator on the collection of human ratings, we seek to generalize to unseen translations. The model for this reward estimator should ideally work without time-consuming feature extraction so that it can be deployed in direct interaction with a learning NMT system, estimating rewards on the fly, and, most importantly, it should generalize well so that it can guide the NMT system towards good local optima.

Learning from Cardinal Feedback. The inputs to the reward estimation model are sources x and their translations y. Given cardinal judgments for these inputs, a regression model with parameters ψ is trained to minimize the mean squared error (MSE) for a set of n predicted rewards r̂ and judgments r:

L_MSE(\psi) = \frac{1}{n} \sum_{i=1}^{n} (r(y_i) - \hat{r}_\psi(y_i))^2

In simulation experiments, where all translations can be compared to existing references, r may be computed by sentence-BLEU (sBLEU). For our human 5-point judgments, we first normalize the judgments per rater as described in §4, then average the judgments across raters and finally scale them linearly to the interval [0.0, 1.0].

Learning from Pairwise Preference Feedback. When pairwise preferences are given instead of cardinal judgments, the Bradley-Terry model allows us to train an estimator of r. Following Christiano et al. (2017), let P̂_ψ[y^1 ≻ y^2] be the probability that any translation y^1 is preferred over any other translation y^2 by the reward estimator:

\hat{P}_\psi[y^1 \succ y^2] = \frac{\exp \hat{r}_\psi(y^1)}{\exp \hat{r}_\psi(y^1) + \exp \hat{r}_\psi(y^2)}

Let Q[y^1 ≻ y^2] be the probability that translation y^1 is preferred over translation y^2 by a gold standard, e.g. the human raters or a comparison to a reference translation. With this supervision signal we formulate a pairwise (PW) training loss for the reward estimation model with parameters ψ:

L_PW(\psi) = -\frac{1}{n} \sum_{i=1}^{n} \left( Q[y^1_i \succ y^2_i] \log \hat{P}_\psi[y^1_i \succ y^2_i] + Q[y^2_i \succ y^1_i] \log \hat{P}_\psi[y^2_i \succ y^1_i] \right)

For simulation experiments, where we lack genuine supervision for preferences, we compute Q by comparing the sBLEU scores of both translations, i.e. translation preferences are modeled according to their difference in sBLEU:

Q[y^1 \succ y^2] = \frac{\exp \mathrm{sBLEU}(y^1)}{\exp \mathrm{sBLEU}(y^1) + \exp \mathrm{sBLEU}(y^2)}

When obtaining preference judgments directly from raters, Q[y^1 ≻ y^2] is simply the relative frequency of y^1 being preferred over y^2 by a rater.
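To make the two objectives concrete, the following sketch implements them for plain arrays of reward estimates. It is illustrative only; the names (mse_loss, pairwise_loss, q_pref) are assumptions rather than the authors' implementation, and the reward estimates would in practice come from the estimator network.

```python
# Illustrative NumPy versions of the two training losses in Section 5.1.
import numpy as np

def mse_loss(r, r_hat):
    """Cardinal feedback: r, r_hat are arrays of n (human) rewards and estimates."""
    return np.mean((r - r_hat) ** 2)

def pairwise_loss(q_pref, r_hat1, r_hat2):
    """Ordinal feedback (Bradley-Terry): q_pref[i] = Q[y1_i > y2_i], the observed
    preference probability; r_hat1, r_hat2 are the estimated rewards of the pair."""
    p1 = np.exp(r_hat1) / (np.exp(r_hat1) + np.exp(r_hat2))
    return -np.mean(q_pref * np.log(p1) + (1.0 - q_pref) * np.log(1.0 - p1))
```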
5.2 Experiments

Data. The 1,000 ratings collected as described in §3 are leveraged to train regression models and pairwise preference models. In addition, we train models on simulated rewards (sBLEU) for a comparison with arguably "clean" feedback for the same set of translations. In order to augment this very small collection of ratings, we leverage the available out-of-domain bitext as auxiliary training data. We sample translations for a subset of the out-of-domain sources and store sBLEU scores as rewards, collecting 90k out-of-domain training samples in total (see the supplementary material for details). During training, each mini-batch is sampled from the auxiliary data with probability p_aux, and from the original training data with probability 1 - p_aux. Adding this auxiliary data as a regularization through multi-task learning prevents the model from overfitting to the small set of human ratings. In the experiments, p_aux was tuned to 0.8.

Model   Feedback     ρ
MSE     Simulated    -0.2571
PW      Simulated    -0.1307
MSE     Human        -0.2193
PW      Human        -0.1310
MSE     Human filt.  -0.2341
PW      Human filt.  -0.1255

Table 2: Spearman's rank correlation ρ between estimated rewards and TER for models trained with simulated rewards and human rewards (also filtered subsets).

Architecture. We choose the following neural architecture for the reward estimation (for details see the supplementary material): inputs are padded source and target subword embeddings, which are each processed with a biLSTM (Hochreiter and Schmidhuber, 1997). Their outputs are concatenated for each time step, fed to a 1D convolution with max-over-time pooling, and subsequently to a leaky ReLU (Maas et al., 2013) output layer. This architecture can be seen as a biLSTM-enhanced bilingual extension of the convolutional model for sentence classification proposed by Kim (2014). It has the advantage of not requiring any feature extraction while still modeling n-gram features on an abstract level.

Evaluation Method. The quality of the reward estimation models is tested by measuring Spearman's ρ with TER on a held-out test set of 1,314 translations, following the standard in sQE evaluations. Hyperparameters are tuned on another 1,200 TED translations.

Results. Table 2 reports the results of reward estimators trained on simulated and human rewards. When trained on cardinal rewards, the model of simulated scores performs slightly better than the model of human ratings. This advantage is lost when moving to preference judgments, which might be explained by the fact that the softmax over sBLEUs with respect to a single reference is just not as expressive as the preference probabilities obtained from several raters. Filtering by participants (retaining 8 participants for cardinal rewards and 5 for preference judgments, see Section 4) improves the correlation further for cardinal rewards, but slightly hurts for preference judgments. The overall correlation scores are relatively low, especially for the PW models, which we suspect is due to overfitting to the small set of training data. From these experiments we conclude that when it comes to estimating translation quality, cardinal human judgments are more useful than pairwise preference judgments.
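A rough PyTorch sketch of a reward estimator along the lines described above is given below. The layer sizes, the shared embedding table and the padding scheme are assumptions for illustration; the actual hyperparameters are only specified in the paper's supplementary material.

```python
# Sketch of a biLSTM + 1D-convolution reward estimator over source/target subwords.
import torch
import torch.nn as nn

class RewardEstimator(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=64, filters=32, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.src_rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.trg_rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(4 * hidden, filters, kernel_size=kernel, padding=1)
        self.activation = nn.LeakyReLU()
        self.out = nn.Linear(filters, 1)

    def forward(self, src_ids, trg_ids):
        # src_ids, trg_ids: (batch, T) index tensors, padded to the same length T
        src, _ = self.src_rnn(self.embed(src_ids))        # (batch, T, 2*hidden)
        trg, _ = self.trg_rnn(self.embed(trg_ids))        # (batch, T, 2*hidden)
        features = torch.cat([src, trg], dim=-1)          # concatenate per time step
        features = self.conv(features.transpose(1, 2))    # (batch, filters, T)
        pooled, _ = features.max(dim=2)                   # max-over-time pooling
        return self.out(self.activation(pooled)).squeeze(-1)  # one reward per pair
```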
6 Reinforcement Learning from Direct and Estimated Rewards in MT

6.1 NMT Objectives

Supervised Learning. Most commonly, NMT models are trained with Maximum Likelihood Estimation (MLE) on a parallel corpus of source and target sequences D = {(x^{(s)}, y^{(s)})}_{s=1}^{S}:

L_MLE(\theta) = \sum_{s=1}^{S} \log p_\theta(y^{(s)} \mid x^{(s)})

The MLE objective requires reference translations and is agnostic to rewards. In the experiments it is used to train the out-of-domain baseline model as a warm start for reinforcement learning from in-domain rewards.

Reinforcement Learning from Estimated or Simulated Direct Rewards. Deploying NMT in a reinforcement learning scenario, the goal is to maximize the expectation of a reward r over all source and target sequences (Wu et al., 2016), leading to the following REINFORCE (Williams, 1992) objective:

R_RL(\theta) = E_{p(x) p_\theta(y|x)}[r(y)]   (1)
             \approx \sum_{s=1}^{S} \sum_{i=1}^{k} p^\tau_\theta(\tilde{y}^{(s)}_i \mid x^{(s)}) \, r(\tilde{y}_i)   (2)

The reward r can either come from a reward estimation model (estimated reward) or be computed with respect to a reference in a simulation setting (simulated direct reward). In order to counteract high variance in the gradient updates, the running average of rewards is subtracted from r for learning. In practice, Equation 1 is approximated with k samples from p_θ(y|x) (see Equation 2). When k = 1, this is equivalent to the expected loss minimization in Sokolov et al. (2016a,b) and Kreutzer et al. (2017), where the system interactively learns from online bandit feedback. For k > 1 it is similar to the minimum-risk training for NMT proposed by Shen et al. (2016). Adding a temperature hyper-parameter τ ∈ (0.0, ∞] to the softmax over the model output o allows us to control the sharpness of the sampling distribution p^τ_θ(y|x) = softmax(o/τ), i.e. the amount of exploration during training. With temperature τ < 1, the model's entropy decreases and samples closer to the one-best output are drawn. We seek to keep the exploration low to prevent the NMT model from producing samples that lie far outside the training domain of the reward estimator.

Off-Policy Learning from Direct Rewards. When rewards cannot be obtained for samples from a learning system, but were collected for a static deterministic system (e.g. in a production environment), we are in an off-policy learning scenario. The challenge is to improve the MT system from a log L = {(x^{(h)}, y^{(h)}, r(y^{(h)}))}_{h=1}^{H} of rewarded translations. Following Lawrence et al. (2017) we define the following off-policy learning (OPL) objective to learn from logged rewards:

R_OPL(\theta) = \frac{1}{H} \sum_{h=1}^{H} r(y^{(h)}) \, \bar{p}_\theta(y^{(h)} \mid x^{(h)}),

with reweighting over the current mini-batch B:2

\bar{p}_\theta(y^{(h)} \mid x^{(h)}) = \frac{p_\theta(y^{(h)} \mid x^{(h)})}{\sum_{b=1}^{B} p_\theta(y^{(b)} \mid x^{(b)})}

In contrast to the RL objective, only logged translations are reinforced, i.e. there is no exploration in learning.

2 Lawrence et al. (2017) propose reweighting over the whole log, but this is infeasible for NMT. Here B ≪ H.
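The following sketch illustrates how the two objectives could be turned into batch losses for a gradient-based toolkit. It uses the standard score-function (REINFORCE) surrogate for the RL objective and a per-mini-batch version of the OPL reweighting; the model interface (sample_with_temperature, log_prob) is an assumption, and this is not the Neural Monkey implementation used in the experiments.

```python
# Hedged sketches of the RL and OPL objectives from Section 6.1 as loss functions.
import torch

def rl_loss(model, x, reward_fn, k=5, tau=0.5, baseline=0.0):
    # REINFORCE-style surrogate for Equations 1-2: sample k translations, subtract
    # a running-average baseline from the reward, and weight each sample's
    # log-probability by its advantage.
    total = 0.0
    for _ in range(k):
        y = model.sample_with_temperature(x, tau)         # assumed sampling routine
        advantage = reward_fn(y) - baseline               # centred (estimated or simulated) reward
        total = total - advantage * model.log_prob(x, y)  # assumed log p_theta(y|x)
    return total / k

def opl_loss(model, logged_batch):
    # Off-policy objective over one mini-batch of logged triples (x, y, r):
    # probabilities are renormalized within the batch and the reweighted
    # rewards are maximized (hence the leading minus sign).
    rewards = torch.tensor([r for (_, _, r) in logged_batch])
    probs = torch.stack([model.log_prob(x, y).exp() for (x, y, _) in logged_batch])
    reweighted = probs / probs.sum()
    return -(rewards * reweighted).mean()
```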
6.2 Experiments

Data. We use the WMT 2017 data3 for training a general-domain (here: out-of-domain) model for translations from German to English. The training data contains 5.9M sentence pairs, the development data 2,999 sentences (the WMT 2016 test set) and the test data 3,004 sentences. For in-domain data, we choose the translations of TED talks4 as used in the IWSLT evaluation campaigns. The training data contains 153k, the development data 6,969, and the test data 6,750 parallel sentences.

3 Pre-processed data available at http://www.statmt.org/wmt17/translation-task.html.
4 Pre-processing and data splits as described in https://github.com/rizar/actor-critic-public/tree/master/exp/ted.

Architecture. Our NMT model is a standard subword-based encoder-decoder architecture with attention (Bahdanau et al., 2015). An encoder Recurrent Neural Network (RNN) reads in the source sentence and a decoder RNN generates the target sentence conditioned on the encoded source. We implemented the RL and OPL objectives in Neural Monkey (Helcl and Libovický, 2017).5 The NMT has a bidirectional encoder and a single-layer decoder with 1,024 GRUs each, and subword embeddings of size 500 for a shared vocabulary of subwords obtained from 30k byte-pair merges (Sennrich et al., 2016). For model selection we use greedy decoding, for test set evaluation beam search with a beam of width 10. We sample k = 5 translations for RL models and set the softmax temperature τ = 0.5. Further hyperparameters are given in the supplementary material.

5 The code is available in the Neural Monkey fork https://github.com/juliakreutzer/bandit-neuralmonkey/tree/acl2018.

Evaluation Method. Trained models are evaluated with respect to BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2011) using MULTEVAL (Clark et al., 2011), and with BEER (Stanojević and Sima'an, 2014), to cover a diverse set of automatic measures of translation quality.6 We test for statistical significance with approximate randomization (Noreen, 1989).

6 Since tendencies of improvement turn out to be consistent across metrics, we only discuss BLEU in the text.

The out-of-domain model is trained with MLE on WMT. The task is now to improve the generalization of this model to the TED domain. Table 3 compares the out-of-domain baseline with domain-adapted models that were further trained on TED in a fully supervised manner (supervised fine-tuning as introduced by Freitag and Al-Onaizan (2016) and Luong and Manning (2015)). The supervised domain-adapted model serves as an upper bound for domain adaptation with human rewards: if we had references, we could improve by up to 7 BLEU. What if references are not available, but we can obtain rewards for sample translations?

           WMT                       TED
Model      BLEU   METEOR   BEER      BLEU   METEOR   BEER
WMT        27.2   31.8     60.08     27.0   30.7     59.48
TED        26.3   31.3     59.49     34.3   34.6     64.94

Table 3: Results on test data for in- and out-of-domain fully-supervised models. Both are trained with MLE; the TED model is obtained by fine-tuning the WMT model on TED data.

Model    Rewards   BLEU           METEOR         BEER
Baseline           27.0           30.7           59.48
RL       D S       32.5⋆ ±0.01    33.7⋆ ±0.01    63.47⋆ ±0.10
OPL      D S       27.5⋆          30.9⋆          59.62⋆
RL+MSE   E S       28.2⋆ ±0.09    31.6⋆ ±0.04    60.23⋆ ±0.14
RL+PW    E S       27.8⋆ ±0.01    31.2⋆ ±0.01    59.83⋆ ±0.04
OPL      D H       27.5⋆          30.9⋆          59.72⋆
RL+MSE   E H       28.1⋆ ±0.01    31.5⋆ ±0.01    60.21⋆ ±0.12
RL+PW    E H       27.8⋆ ±0.09    31.3⋆ ±0.09    59.88⋆ ±0.23
RL+MSE   E F       28.1⋆ ±0.20    31.6⋆ ±0.10    60.29⋆ ±0.13

Table 4: Results on TED test data for training with estimated (E) and direct (D) rewards from simulation (S), humans (H) and filtered (F) human ratings. Significant (p ≤ 0.05) differences to the baseline are marked with ⋆. For RL experiments we report the mean and standard deviation over three runs with different random seeds.

Results for RL from Simulated Rewards. First we simulate "clean" and deterministic rewards by comparing sample translations to references, using GLEU (Wu et al., 2016) for RL and smoothed sBLEU for estimated rewards and OPL. Table 4 lists the results for this simulation experiment in rows 2-5 (S). If unlimited clean feedback was given (RL with direct simulated rewards), improvements of over 5 BLEU can be achieved. When limiting the amount of feedback to a log of 800 translations, the improvements over the baseline are only marginal (OPL). When replacing the direct reward by the simulated reward estimators from §5, i.e.
having unlimited amounts of approximately clean rewards, however, improvements of 1.2 BLEU for MSE estimators (RL+MSE) and 0.8 BLEU for pairwise estimators (RL+PW) are found. This suggests that the reward estimation model helps to tackle the challenge of generalization over a small set of ratings. Results for RL from Human Rewards. Knowing what to expect in an ideal setting with nonnoisy feedback, we now move to the experiments with human feedback. OPL is trained with the logged normalized, averaged and re-scaled human reward (see §5). RL is trained with the direct reward provided by the reward estimators trained on human rewards from §5. Table 4 shows the results for training with human rewards in rows 68: The improvements for OPL are very similar to OPL with simulated rewards, both suffering from overfitting. For RL we observe that the MSEbased reward estimator (RL+MSE) leads to significantly higher improvements as a the pairwise reward estimator (RL+PW) — the same trend as for simulated ratings. Finally, the improvement of 1.1 BLEU over the baseline showcases that we are able to improve NMT with only a small number of human rewards. Learning from estimated filtered 5-point ratings, does not significantly improve over these results, since the improvement of the reward estimator is only marginal (see § 5). 7 Conclusion In this work, we sought to find answers to the questions of how cardinal and ordinal feedback differ in terms of reliability, learnability and effectiveness for RL training of NMT, with the goal of improving NMT with human bandit feedback. Our rating study, comparing 5-point and preference ratings, showed that their reliability is comparable, whilst cardinal ratings are easier to learn and to generalize from, and also more suitable for RL in our experiments. Our work reports improvements of NMT leveraging actual human bandit feedback for RL, leaving the safe harbor of simulations. Our experiments show that improvements of over 1 BLEU are achievable by learning from a dataset that is tiny in machine translation proportions. Since this type of feedback, in contrast to post-edits and references, is fast and cheap to elicit from nonprofessionals, our results bear a great potential for future applications on larger scale. Acknowledgments. This work was supported in part by DFG Research Grant RI 2221/4-1, and by an internship program of the IWR at Heidelberg University. 1786 References Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. In Proceedings of the International Conference on Learning Representations (ICLR). Toulon, France. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR). San Diego, CA, USA. L´eon Bottou, Jonas Peters, Joaquin Qui˜noneroCandela, Denis X. Charles, D. Max Chickering, Elon Portugaly, Dipanakar Ray, Patrice Simard, and Ed Snelson. 2013. Counterfactual reasoning and learning systems: The example of computational advertising. Journal of Machine Learning Research 14:3207–3260. Ralph Allan Bradley and Milton E. Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika 39(3-4):324– 345. Paul F. Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. 
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1789–1798 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1789 Accelerating Neural Transformer via an Average Attention Network Biao Zhang1,2, Deyi Xiong3 and Jinsong Su1,2∗ Xiamen University, Xiamen, China 3610051 Beijing Advanced Innovation Center for Language Resources2 Soochow University, Suzhou, China 2150063 [email protected], [email protected], [email protected] Abstract With parallelizable attention networks, the neural Transformer is very fast to train. However, due to the auto-regressive architecture and self-attention in the decoder, the decoding procedure becomes slow. To alleviate this issue, we propose an average attention network as an alternative to the self-attention network in the decoder of the neural Transformer. The average attention network consists of two layers, with an average layer that models dependencies on previous positions and a gating layer that is stacked over the average layer to enhance the expressiveness of the proposed attention network. We apply this network on the decoder part of the neural Transformer to replace the original target-side self-attention model. With masking tricks and dynamic programming, our model enables the neural Transformer to decode sentences over four times faster than its original version with almost no loss in training time and translation performance. We conduct a series of experiments on WMT17 translation tasks, where on 6 different language pairs, we obtain robust and consistent speed-ups in decoding.1 1 Introduction The past few years have witnessed the rapid development of neural machine translation (NMT), which translates a source sentence into the target language with an encoder-attention-decoder framework (Sutskever et al., 2014; Bahdanau et al., 2015). Under this framework, various advanced neural architectures have been explored ∗Corresponding author. 1Source code is available at https://github.com/bzhangXMU/transformer-aan. 1 RNN 2 CNN 3 Transformer Figure 1: Illustration of the decoding procedure under different neural architectures. We show which previous target words are required to predict the current target word yj in different NMT architectures. k indicates the filter size of the convolution layer. as the backbone network for translation, ranging from recurrent neural networks (RNN) (Sutskever et al., 2014; Luong et al., 2015), convolutional neural networks (CNN) (Gehring et al., 2017a,b) to full attention networks without recurrence and convolution (Vaswani et al., 2017). Particularly, the neural Transformer, relying solely on attention networks, has refreshed state-of-the-art performance on several language pairs (Vaswani et al., 2017). Most interestingly, the neural Transformer is capable of being fully parallelized at the training phase and modeling intra-/inter-dependencies of source and target sentences within a short path. The parallelization property enables training NMT very quickly, while the dependency modeling property endows the Transformer with strong ability in inducing sentence semantics as well as translation correspondences. However, the decoding of the Transformer cannot enjoy the speed strength of parallelization due to the auto-regressive generation schema in the decoder. And the self-attention 1790 Input Layer Average Layer Gating Layer Figure 2: Visualization of the proposed model. For clarity, we show an example with only four words. 
network in the decoder even further slows it. We explain this using Figure 1, where we provide a comparison to RNN- and CNN-based NMT systems. To capture dependencies from previously predicted target words, the self-attention in the neural Transformer requires to calculate adaptive attention weights on all these words (Figure 1 (3)). By contrast, CNN only requires previous k target words (Figure 1 (2)), while RNN merely 1 (Figure 1 (1)). Due to the auto-regressive generation schema, decoding inevitably follows a sequential manner in the Transformer. Therefore the decoding procedure cannot be parallelized. Furthermore, the more target words are generated, the more time the self-attention in the decoder will take to model dependencies. Therefore, preserving the training efficiency of the Transformer on the one hand and accelerating its decoding on the other hand becomes a new and serious challenge. In this paper, we propose an average attention network (AAN) to handle this challenge. We show the architecture of AAN in Figure 2, which consists of two layers: an average layer and gating layer. The average layer summarizes history information via a cumulative average operation over previous positions. This is equivalent to a simple attention network where original adaptively computed attention weights are replaced with averaged weights. Upon this layer, we stack a feed forward gating layer to improve the model’s expressiveness in describing its inputs. We use AAN to replace the self-attention part of the neural Transformer’s decoder. Considering the characteristic of the cumulative average operation, we develop a masking method to enable parallel computation just like the original selfattention network in the training. In this way, the whole AAN model can be trained totally in parallel so that the training efficiency is ensured. As for the decoding, we can substantially accelerate it by feeding only the previous hidden state to the Transformer decoder just like RNN does. This is achieved with a dynamic programming method. In spite of its simplicity, our model is capable of modeling complex dependencies. This is because AAN regards each previous word as an equal contributor to current word representation. Therefore, no matter how long the input is, our model can always build up connection signals with previous inputs, which we argue is very crucial for inducing long-range dependencies for machine translation. We examine our model on WMT17 translation tasks. On 6 different language pairs, our model achieves a speed-up of over 4 times with almost no loss in both translation quality and training speed. In-depth analyses further demonstrate the convergency and advantages of translating long sentences of the proposed AAN. 2 Related Work GRU (Chung et al., 2014) or LSTM (Hochreiter and Schmidhuber, 1997) RNNs are widely used for neural machine translation to deal with longrange dependencies as well as the gradient vanishing issue. A major weakness of RNNs lies at its sequential architecture that completely disables parallel computation. To cope with this problem, Gehring et al. (2017a) propose to use CNN-based encoder as an alternative to RNN, and Gehring et al. (2017b) further develop a completely CNNbased NMT system. However, shallow CNN can only capture local dependencies. Hence, CNNbased NMT normally develops deep archictures to model long-distance dependencies. Different from these studies, Vaswani et al. 
(2017) propose the Transformer, a neural architecture that abandons recurrence and convolution. It fully relies on attention networks to model translation. The properties of parallelization and short dependency paths significantly improve the training speed as well as the model performance of the Transformer. Unfortunately, as we have mentioned in Section 1, it suffers from decoding inefficiency.

The attention mechanism was originally proposed to induce translation-relevant source information for predicting the next target word in NMT, and it has contributed substantially to making NMT outperform SMT. Recently, a variety of efforts have been made to further improve its accuracy and capability. Luong et al. (2015) explore several attention formulations and distinguish local attention from global attention. Zhang et al. (2016) treat the RNN as an alternative to attention to improve the model's capability in dealing with long-range dependencies. Yang et al. (2017) introduce a recurrent cycle on the attention layer to enhance the model's memorization of previously translated source words. Zhang et al. (2017a) observe the weak discrimination ability of the attention-generated context vectors and propose a GRU-gated attention network. Kim et al. (2017) further model intrinsic structures inside the attention through graphical models. Shen et al. (2017) introduce a directional structure into a self-attention network to integrate both long-range dependencies and temporal order information. Mi et al. (2016) and Liu et al. (2016) employ standard word alignments to supervise the automatically generated attention weights. Our work also focuses on the evolution of attention networks, but unlike previous work, we seek to simplify the self-attention network so as to accelerate the decoding procedure. The design of our model is partially inspired by the highway network (Srivastava et al., 2015) and the residual network (He et al., 2015).

With respect to speeding up the decoding of the neural Transformer, Gu et al. (2018) change the auto-regressive architecture to speed up translation by directly generating target words without relying on any previous predictions. However, compared with our work, their model achieves the improvement in decoding speed at the cost of a drop in translation quality. Our model, instead, not only achieves a remarkable gain in terms of decoding speed, but also preserves the translation performance. Developing a fast and efficient attention module for the Transformer has, to the best of our knowledge, not been investigated before.

3 The Average Attention Network

Given an input layer y = {y_1, y_2, ..., y_m}, AAN first employs a cumulative-average operation to generate a context-sensitive representation for each input embedding as follows (Figure 2, Average Layer):

g_j = \mathrm{FFN}\left(\frac{1}{j} \sum_{k=1}^{j} y_k\right)   (1)

where FFN(·) denotes the position-wise feed-forward network proposed by Vaswani et al. (2017), and both y_k and g_j have a dimensionality of d. Intuitively, AAN replaces the attention weights that the self-attention network in the decoder of the neural Transformer computes dynamically with simple, fixed average weights (1/j). In spite of its simplicity, the cumulative-average operation is crucial for AAN because it builds up dependencies with previous input embeddings so that the generated representations are not independent of each other.
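As a quick illustration of the claim that the fixed weights 1/j play the role of attention scores, the following NumPy check (our sketch, not the authors' code) confirms that the cumulative average over the prefix y_1..y_j is exactly an attention step with uniform weights, so no score function has to be evaluated for it.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 4, 8                        # toy sequence length and model dimension
Y = rng.normal(size=(m, d))        # input embeddings y_1..y_m

for j in range(1, m + 1):
    cum_avg = Y[:j].mean(axis=0)                      # (1/j) * sum_{k<=j} y_k, as in Eq. (1)
    uniform_attention = np.full(j, 1.0 / j) @ Y[:j]   # attention with fixed, uniform weights
    assert np.allclose(cum_avg, uniform_attention)
```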
Another benefit of the cumulative-average operation is that no matter how long the input is, the connection strength with each previous input embedding is invariant, which ensures the capability of AAN in modeling long-range dependencies.

We treat g_j as a contextual representation for the j-th input, and apply a feed-forward gating layer upon it as well as y_j to enrich the non-linear expressiveness of AAN:

i_j, f_j = \sigma(W[y_j; g_j])
\tilde{h}_j = i_j \odot y_j + f_j \odot g_j   (2)

where [·; ·] denotes the concatenation operation and \odot indicates element-wise multiplication. i_j and f_j are the input and forget gate, respectively. Via this gating layer, AAN can control how much past information is preserved from the previous context g_j and how much new information is captured from the current input y_j. This helps our model to detect correlations inside the input embeddings. Following the architecture design of the neural Transformer (Vaswani et al., 2017), we employ a residual connection between the input layer and the gating layer, followed by layer normalization to stabilize the scale of both output and gradient:

h_j = \mathrm{LayerNorm}(y_j + \tilde{h}_j)   (3)

We refer to the whole procedure formulated in Eq. (1)–(3) as the original AAN(·) in the following sections.

3.1 Parallelization in Training

A computational bottleneck of the original AAN described above is that the cumulative-average operation in Eq. (1) can only be performed sequentially; that is, this operation cannot be parallelized. Fortunately, as the average is not a complex computation, we can use a masking trick to enable full parallelization of this operation. We show the masking trick in Figure 3, where input embeddings are directly converted into their corresponding cumulative-averaged outputs through a masking matrix. In this way, all the components inside AAN(·) can enjoy full parallelization, assuring its computational efficiency. We refer to this AAN as the masked AAN.

[Figure 3: Visualization of the parallel implementation of the cumulative-average operation enabled by a mask matrix; y_1, y_2, y_3, y_4 are the input embeddings.]

3.2 Model Analysis

In this section, we provide a thorough analysis of AAN in comparison to the original self-attention model used by Vaswani et al. (2017). Unlike our AAN, the self-attention model leverages a scaled dot-product function rather than the average operation to compute attention weights:

Q, K, V = f(Y)
\mathrm{SelfAttention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^\top}{\sqrt{d}}\right) V   (4)

where Y ∈ R^{n×d} is the input matrix, f(·) is a mapping function and Q, K, V ∈ R^{n×d} are the corresponding queries, keys and values. Following Vaswani et al. (2017), we compare both models in terms of computational complexity, the minimum number of sequential operations required, and the maximum path length that a dependency signal between any two positions has to traverse in the network. Table 1 summarizes the comparison results.

Table 1: Model complexity, minimum number of sequential operations and maximum path length for different models. n is the sentence length and d is the representation dimension.

  Model           | Complexity           | Sequential Operations | Maximum Path Length
  Self-attention  | O(n^2·d + n·d^2)     | O(1)                  | O(1)
  Original AAN    | O(n·d^2)             | O(n)                  | O(1)
  Masked AAN      | O(n^2·d + n·d^2)     | O(1)                  | O(1)

Our AAN has a maximum path length of O(1), because it can directly capture dependencies between any two input embeddings. For the original AAN, the nature of its sequential computation enlarges its minimum number of sequential operations to O(n).
However, due to its lack of a position-wise masked projection, it only consumes a computational complexity of O(n·d^2). By contrast, both self-attention and the masked AAN have a computational complexity of O(n^2·d + n·d^2), and require only O(1) sequential operations. Theoretically, our masked AAN performs very similarly to self-attention according to Table 1. We therefore use the masked version of AAN during training throughout all our experiments.

3.3 Decoding Acceleration

Differing noticeably from the self-attention in the Transformer, our AAN can be accelerated in the decoding phase via dynamic programming thanks to the simple average calculation. In particular, we can decompose Eq. (1) into the following two steps:

\tilde{g}_j = \tilde{g}_{j-1} + y_j   (5)
g_j = \mathrm{FFN}\left(\frac{\tilde{g}_j}{j}\right)   (6)

where \tilde{g}_0 = 0. In doing so, our model can compute the j-th input representation based on only one previous state \tilde{g}_{j-1}, instead of relying on all previous states as self-attention does. In this way, our model can be substantially accelerated during the decoding phase.

4 Neural Transformer with AAN

The neural Transformer models translation through an encoder-decoder framework, with each layer involving an attention network followed by a feed-forward network (Vaswani et al., 2017). We apply our masked AAN to replace the self-attention network in its decoder part, and illustrate the overall architecture in Figure 4.

[Figure 4: The new Transformer architecture with the proposed average attention network.]

Given a source sentence x = {x_1, x_2, ..., x_n}, the Transformer leverages its encoder to induce source-side semantics and dependencies so as to enable its decoder to recover the encoded information in the target language. The encoder is composed of a stack of N = 6 identical layers, each of which has two sub-layers:

\tilde{h}^l = \mathrm{LayerNorm}(h^{l-1} + \mathrm{MHAtt}(h^{l-1}, h^{l-1}))
h^l = \mathrm{LayerNorm}(\tilde{h}^l + \mathrm{FFN}(\tilde{h}^l))   (7)

where the superscript l indicates the layer depth, and MHAtt denotes the multi-head attention mechanism proposed by Vaswani et al. (2017). Based on the encoded source representation h^N, the Transformer relies on its decoder to generate the corresponding target translation y = {y_1, y_2, ..., y_m}. Similar to the encoder, the decoder also consists of a stack of N = 6 identical layers. For each layer in our architecture, the first sub-layer is our proposed average attention network, aiming at capturing target-side dependencies on previously predicted words:

\tilde{s}^l = \mathrm{AAN}(s^{l-1})   (8)

Carrying these dependencies, the decoder stacks another two sub-layers to seek translation-relevant source semantics for bridging the gap between the source and target language:

s^l_c = \mathrm{LayerNorm}(\tilde{s}^l + \mathrm{MHAtt}(\tilde{s}^l, h^N))
s^l = \mathrm{LayerNorm}(s^l_c + \mathrm{FFN}(s^l_c))   (9)

We use the subscript c to denote the source-informed target representation. Upon the top layer of this decoder, translation is performed by applying a linear transformation and a softmax activation to compute the probability of the next token based on s^N. To memorize position information, the Transformer augments its input layers h^0 = x and s^0 = y with frequency-based positional encodings. The whole model is a large, single neural network that can be trained on a large-scale bilingual corpus with a maximum likelihood objective.
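To make the masked-AAN computation of Section 3.1 and the incremental update of Eqs. (5)–(6) concrete, the following is a minimal NumPy sketch (not the authors' released code; the FFN and gating weights are random stand-ins rather than trained parameters). It builds the mask matrix whose j-th row holds the averaging weights 1/j, runs the whole AAN layer in parallel as done during training, and then verifies that the O(1)-state decoding recursion produces the same outputs position by position.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 5, 8                                    # toy target length and model dimension
Y = rng.normal(size=(m, d))                    # decoder inputs y_1..y_m
Wg = rng.normal(size=(2 * d, 2 * d)) * 0.1     # gating weights W of Eq. (2), random stand-in
W1 = rng.normal(size=(d, d)) * 0.1             # toy stand-in for the position-wise FFN of Eq. (1)
W2 = rng.normal(size=(d, d)) * 0.1

def ffn(x):
    return np.maximum(0.0, x @ W1) @ W2

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer_norm(x, eps=1e-6):                   # Eq. (3), learned gain/bias omitted for brevity
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def aan_parallel(Y):
    """Masked AAN used in training: all positions at once via a mask matrix (Sec. 3.1)."""
    m = Y.shape[0]
    mask = np.tril(np.ones((m, m))) / np.arange(1, m + 1)[:, None]  # row j holds weights 1/j
    G = ffn(mask @ Y)                                               # Eq. (1) for every position
    gates = sigmoid(np.concatenate([Y, G], axis=-1) @ Wg.T)         # Eq. (2)
    I, F = gates[:, :d], gates[:, d:]
    return layer_norm(Y + I * Y + F * G)                            # Eq. (3)

def aan_step(y_j, g_acc, j):
    """Decoding: O(1) state update of Eqs. (5)-(6) instead of touching all previous inputs."""
    g_acc = g_acc + y_j                        # Eq. (5)
    g_j = ffn(g_acc / j)                       # Eq. (6)
    gates = sigmoid(Wg @ np.concatenate([y_j, g_j]))
    i_j, f_j = gates[:d], gates[d:]
    h_j = layer_norm(y_j + i_j * y_j + f_j * g_j)
    return h_j, g_acc

H_train = aan_parallel(Y)                      # parallel, as in training
g_acc = np.zeros(d)
for j in range(1, m + 1):                      # sequential, as in decoding
    h_j, g_acc = aan_step(Y[j - 1], g_acc, j)
    assert np.allclose(h_j, H_train[j - 1])    # both paths agree
```

In an actual implementation the mask matrix is precomputed per batch, so training keeps the Transformer's full parallelism, while decoding only has to carry the running sum \tilde{g} forward from step to step.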
We refer readers to (Vaswani et al., 2017) for more details. 5 Experiments 5.1 WMT14 English-German Translation We examine various aspects of our AAN on this translation task. The training data consist of 4.5M sentence pairs, involving about 116M English words and 110M German words. We used newstest2013 as the development set for model selection, and newstest2014 as the test set. We evaluated translation quality via case-sensitive BLEU metric (Papineni et al., 2002). 5.1.1 Model Settings We applied byte pair encoding algorithm (Sennrich et al., 2016) to encode all sentences and limited the vocabulary size to 32K. All out-ofvocabulary words were mapped to an unique token “unk”. We set the dimensionality d of all input and output layers to 512, and that of innerFFN layer to 2048. We employed 8 parallel attention heads in both encoder and decoder layers. We batched sentence pairs together so that they were approximately of the same length, and each batch had roughly 25000 source and target tokens. During training, we used label smoothing with value ϵls = 0.1, attention dropout and residual dropout with a rate of p = 0.1. During decoding, we employed beam search algorithm and set the beam size to 4. Adam optimizer (Kingma and Ba, 2015) with β1 = 0.9, β2 = 0.98 and ϵ = 10−9 was used to tune model parameters, and the learning rate was varied under a warm-up strategy with warmup steps = 4000 (Vaswani et al., 2017). 1794 Model BLEU Transformer 26.37 Our Model 26.31 Our Model w/o FFN 26.05 Our Model w/o Gate 25.91 Table 2: Case-sensitive tokenized BLEU score on WMT14 English-German translation. BLEU scores are calculated using multi-bleu.perl. The maximum number of training steps was set to 100K. Weights of target-side embedding and output weight matrix were tied for all models. We implemented our model with masking tricks based on the open-sourced thumt (Zhang et al., 2017b)2, and trained and evaluated all models on a single NVIDIA GeForce GTX 1080 GPU. For evaluation, we averaged last five models saved with an interval of 1500 training steps. 5.1.2 Translation Performance Table 2 reports the translation results. On the same dataset, the Transformer yields a BLEU score of 26.37, while our model achieves 26.31. Both results are almost the same with no significant difference. Clearly, our model is capable of capturing complex translation correspondences so as to generate high-quality translations as effective as the Transformer. We also show an ablation study in terms of the FFN(·) network in Eq. (1) and the gating layer in Eq. (2). Table 2 shows that without the FFN network, the performance of our model drops 0.26 BLEU points. This degeneration is enlarged to 0.40 BLEU points when the gating layer is not available. In order to reach comparable performance with the original Transformer, integrating both components is desired. 5.1.3 Analysis on Convergency Different neural architectures might require different number of training steps to converge. In this section, we testify whether our AAN would reveal different characteristics with respect to convergency. We show the loss curve of both the Transformer and our model in Figure 5. Surprisingly, both model show highly similar tendency, and successfully converge in the end. To train a high-quality translation system, our model consumes almost the same number of training steps as the Transformer. 
This strongly suggests 2https://github.com/thumt/THUMT 0 1 2 3 4 5 6 7 8 9 1 101 201 301 401 501 601 701 801 901 Loss Training Steps Transformer Our Model Figure 5: Convergence visualization. The horizontal axis denotes training steps scaled by 102, and the vertical axis indicates training loss. Roughly, our model converges similarly to the Transformer. Transformer Our Model △r Training 0.2474 0.2464 1.00 Decoding beam=4 0.1804 0.0488 3.70 beam=8 0.3576 0.0881 4.06 beam=12 0.5503 0.1291 4.26 beam=16 0.7323 0.1700 4.31 beam=20 0.9172 0.2122 4.32 Table 3: Time required for training and decoding. Training denotes the number of global training steps processed per second; Decoding indicates the amount of time in seconds required for translating one sentence, which is averaged over the whole newstest2014 dataset. △r shows the ratio between the Transformer and our model. that replacing the self-attention network with our AAN does not have negative impact on the convergency of the entire model. 5.1.4 Analysis on Speed In Section 3, we demonstrate in theory that our AAN is as efficient as the self-attention during training, but can be substantially accelerated during decoding. In this section, we provide quantitative evidences to examine this point. We show the training and decoding speed of both the Transformer and our model in Table 3. During training, our model performs approximately 0.2464 training steps per second, while the Transformer processes around 0.2474. This indicates that our model shares similar computational strengths with the Transformer during training, which resonates with the computational analysis in Section 3. When it comes to decoding procedure, the time of our model required to translate one sentence 1795 (0, 8) [16, 24) [32, 40) [48, 56) [64, 72) 20 25 30 35 40 Sentence Length BLEU Scores Transformer Our Model 1 (0, 8) [16, 24) [32, 40) [48, 56) [64, 72) 20 40 60 Sentence Length Average Length of Translation Transformer Our Model 1 Figure 6: Translation statistics on WMT14 English-German test set (newstest14) with respect to the length of source sentences. The top figure shows tokenized BLEU score, and the bottom one shows the average length of translations, both visa-vis sentence length is only a quarter of that of the Transformer, with beam size ranging from 4 to 20. Another noticeable feature is that as the beam size increases, the ratio of required decoding time between the Transformer and our model is consistently enlarged. This demonstrates empirically that our model, enhanced with the dynamic decoding acceleration algorithm (Section 3.3), can significantly improve the decoding speed of the Transformer. 5.1.5 Effects on Sentence Length A serious common challenge for NMT is to translate long source sentences as handling longdistance dependencies and under-translation issues becomes more difficult for longer sentences. Our proposed AAN uses simple cumulativeaverage operations to deal with long-range depen(0, 8) [16, 24) [32, 40) [48, 56) [64, 72) 0 0.2 0.4 0.6 0.8 1 Sentence Length Decoding Time per Sentence (seconds) Transformer Our Model 1 Figure 7: Average time required for translating one source sentence vs. the length of the source sentence. With the increase of sentence length, our model shows more clear and significant advantage over the Transformer in terms of the decoding speed. dencies. We want to examine the effectiveness of these operations on long sentence translation. For this, we provide the translation results along sentence length in Figure 6. 
We find that both the Transformer and our model generate very similar translations in terms of BLEU score and translation length, and obtain rather promising performance on long source sentences. More specifically, our model yields relatively shorter translation length on the longest source sentences but significantly better translation quality. This suggests that in spite of the simplicity of the cumulative-average operations, our AAN can indeed capture long-range dependences desired for translating long source sentences. Generally, the decoder takes more time for translating longer sentences. When it comes to the Transformer, this time issue of translating long sentences becomes notably severe as all previous predicted words must be included for estimating both self-attention weights and word prediction. We show the average time required for translating a source sentence with respect to its sentence length in Figure 7. Obviously, the decoding time of the Transformer grows dramatically with the increase of sentence length, while that of our model rises rather slowly. We contribute this great decoding advantage of our model over the Transformer to the average attention architecture which enables 1796 Case-sensitive BLEU Case-insensitive BLEU winner Transformer Our Model △d winner Transformer Our Model △d En→De 28.3 27.33 27.22 -0.11 28.9 27.92 27.80 -0.12 De→En 35.1 32.63 32.73 +0.10 36.5 34.06 34.13 +0.07 En→Fi 20.7 21.00 20.87 -0.13 21.1 21.54 21.47 -0.07 Fi→En 20.5 25.19 24.78 -0.41 21.4 26.22 25.74 -0.48 En→Lv 21.1 16.83 16.63 -0.20 21.6 17.42 17.23 -0.19 Lv→En 21.9 17.57 17.51 -0.06 22.9 18.48 18.30 -0.18 En→Ru 29.8 27.82 27.73 -0.09 29.8 27.83 27.74 -0.09 Ru→En 34.7 31.51 31.36 -0.15 35.6 32.59 32.36 -0.23 En→Tr 18.1 12.11 11.59 -0.52 18.4 12.56 12.03 -0.53 Tr→En 20.1 16.19 15.84 -0.35 20.9 16.93 16.57 -0.36 En→Cs 23.5 21.53 21.12 -0.41 24.1 22.07 21.66 -0.41 Cs→En 30.9 27.49 27.45 -0.04 31.9 28.41 28.33 -0.08 Table 4: Detokenized BLEU scores for WMT17 translation tasks. Results are reported with multi-bleudetok.perl. “winner” denotes the translation results generated by the WMT17 winning systems. △d indicates the difference between our model and the Transformer. our model to perform next-word prediction by calculating information just from the previous hidden state, rather than considering all previous inputs like the self-attention in the Transformer’s decoder. 5.2 WMT17 Translation Tasks We further demonstrate the effectiveness of our model on six WMT17 translation tasks in both directions (12 translation directions in total). These tasks contain the following language pairs: • En-De: The English-German language pair. This training corpus consists of 5.85M sentence pairs, with 141M English words and 135M German words. We used the concatenation of newstest2014, newstest2015 and newstest2016 as the development set, and the newstest2017 as the test set. • En-Fi: The English-Finnish language pair. This training corpus consists of 2.63M sentence pairs, with 63M English words and 45M Finnish words. We used the concatenation of newstest2015, newsdev2015, newstest2016 and newstestB2016 as the development set, and the newstest2017 as the test set. • En-Lv: The English-Latvian language pair. This training corpus consists of 4.46M sentence pairs, with 63M English words and 52M Latvian words. We used the newsdev2017 as the development set, and the newstest2017 as the test set. • En-Ru: The English-Russian language pair. 
This training corpus consists of 25M sentence pairs, with 601M English words and 567M Russian words. We used the concatenation of newstest2014, newstest2015 and newstest2016 as the development set, and the newstest2017 as the test set. • En-Tr: The English-Turkish language pair. This training corpus consists of 0.21M sentence pairs, with 5.2M English words and 4.6M Turkish words. We used the concatenation of newsdev2016 and newstest2016 as the development set, and newstest2017 as the test set. • En-Cs: The English-Czech language pair. This training corpus consists of 52M sentence pairs, with 674M English words and 571M Czech words. We used the concatenation of newstest2014, newstest2015 and newstest2016 as the development set, and the newstest2017 as the test set. Interestingly, these translation tasks involves training corpora with different scales (ranging from 0.21M to 52M sentence pairs). This help us thoroughly examine the ability of our model on different sizes of training data. All these preprocessed datasets are publicly available, and can be downloaded from WMT17 official website.3 We used the same modeling settings as in the WMT14 English-German translation task except for the number of training steps for En-Fi and EnTr, which we set to 60K and 10K respectively. In addition, to compare with official results, we reported both case-sensitive and case-insensitive detokenized BLEU scores. 3http://data.statmt.org/wmt17/translationtask/preprocessed/ 1797 Transformer Our Model △r En→De 0.1411 0.02871 4.91 De→En 0.1255 0.02422 5.18 En→Fi 0.1289 0.02423 5.32 Fi→En 0.1285 0.02336 5.50 En→Lv 0.1850 0.03167 5.84 Lv→En 0.1980 0.03123 6.34 En→Ru 0.1821 0.03140 5.80 Ru→En 0.1595 0.02778 5.74 En→Tr 0.2078 0.02968 7.00 Tr→En 0.1886 0.03027 6.23 En→Cs 0.1150 0.02425 4.74 Cs→En 0.1178 0.02659 4.43 Table 5: Average seconds required for decoding one source sentence on WMT17 translation tasks. 5.2.1 Translation Results Table 4 shows the overall results on 12 translation directions. We also provide the results from WMT17 winning systems4. Notice that unlike the Transformer and our model, these winner systems typically use model ensemble, system combination and large-scale monolingual corpus. Although different languages have different linguistic and syntactic structures, our model consistently yields rather competitive results against the Transformer on all language pairs in both directions. Particularly, on the De→En translation task, our model achieves a slight improvement of 0.10/0.07 case-sensitive/case-insensitive BLEU points over the Transformer. The largest performance gap between our model and the Transformer occurs on the En→Tr translation task, where our model is lower than the Transformer by 0.52/0.53 case-sensitive/case-insensitive BLEU points. We conjecture that this difference may be due to the small training corpus of the En-Tr task. In all, these results suggest that our AAN is able to perform comparably to Transformer on different language pairs with different scales of training data. We also show the decoding speed of both the Transformer and our model in Table 5. On all languages in both directions, our model yields significant and consistent improvements over the Transformer in terms of decoding speed. Our model decodes more than 4 times faster than the Transformer. Surprisingly, our model just consumes 0.02968 seconds to translate one source sentence on the En→Tr language pair, only a seventh of the decoding time of the Transformer. 
These results show that the benefit of decoding accelera4http://matrix.statmt.org/matrix tion from the proposed average attention structure is language-invariant, and can be easily adapted to other translation tasks. 6 Conclusion and Future Work In this paper, we have described the average attention network that considerably alleviates the decoding bottleneck of the neural Transformer. Our model employs a cumulative average operation to capture important contextual clues from previous target words, and a feed forward gating layer to enrich the expressiveness of learned hidden representations. The model is further enhanced with a masking trick and a dynamic programming method to accelerate the Transformer’s decoder. Extensive experiments on one WMT14 and six WMT17 language pairs demonstrate that the proposed average attention network is able to speed up the Transformer’s decoder by over 4 times. In the future, we plan to apply our model on other sequence to sequence learning tasks. We will also attempt to improve our model to enhance its modeling ability so as to consistently outperform the original neural Transformer. 7 Acknowledgments The authors were supported by Beijing Advanced Innovation Center for Language Resources, National Natural Science Foundation of China (Nos. 61672440 and 61622209), the Fundamental Research Funds for the Central Universities (Grant No. ZK1024), and Scientific Research Project of National Language Committee of China (Grant No. YB135-49). Biao Zhang greatly acknowledges the support of the Baidu Scholarship. We also thank the reviewers for their insightful comments. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR. Junyoung Chung, C¸ aglar G¨ulc¸ehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR. Jonas Gehring, Michael Auli, David Grangier, and Yann N. Dauphin. 2017a. A convolutional encoder model for neural machine translation. In Proc. of ACL, pages 123–135. 1798 Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017b. Convolutional sequence to sequence learning. Proc. of ICML. Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. Proc. of ICLR. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. CoRR, abs/1512.03385. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9:1735–1780. Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. 2017. Structured attention networks. Proc. of ICLR, abs/1702.00887. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. Proc. of ICLR. Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. In Proc. of COLING 2016, pages 3093–3102, Osaka, Japan. The COLING 2016 Organizing Committee. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proc. of EMNLP, pages 1412–1421. Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Supervised attentions for neural machine translation. In Proc. of EMNLP, pages 2283–2288, Austin, Texas. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. 
Bleu: a method for automatic evaluation of machine translation. In Proc. of ACL, pages 311–318. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proc. of ACL, pages 1715–1725. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2017. Disan: Directional self-attention network for rnn/cnn-free language understanding. CoRR, abs/1709.04696. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. CoRR, abs/1505.00387. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Zichao Yang, Zhiting Hu, Yuntian Deng, Chris Dyer, and Alex Smola. 2017. Neural machine translation with recurrent attention modeling. In Proc. of EACL, pages 383–387, Valencia, Spain. Association for Computational Linguistics. Biao Zhang, Deyi Xiong, and Jinsong Su. 2016. Recurrent neural machine translation. CoRR, abs/1607.08725. Biao Zhang, Deyi Xiong, and Jinsong Su. 2017a. A gru-gated attention model for neural machine translation. CoRR, abs/1704.08430. Jiacheng Zhang, Yanzhuo Ding, Shiqi Shen, Yong Cheng, Maosong Sun, Huan-Bo Luan, and Yang Liu. 2017b. THUMT: an open source toolkit for neural machine translation. CoRR, abs/1706.06415.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1799–1808 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1799 How Much Attention Do You Need? A Granular Analysis of Neural Machine Translation Architectures Tobias Domhan Amazon Berlin, Germany [email protected] Abstract With recent advances in network architectures for Neural Machine Translation (NMT) recurrent models have effectively been replaced by either convolutional or self-attentional approaches, such as in the Transformer. While the main innovation of the Transformer architecture is its use of self-attentional layers, there are several other aspects, such as attention with multiple heads and the use of many attention layers, that distinguish the model from previous baselines. In this work we take a fine-grained look at the different architectures for NMT. We introduce an Architecture Definition Language (ADL) allowing for a flexible combination of common building blocks. Making use of this language, we show in experiments that one can bring recurrent and convolutional models very close to the Transformer performance by borrowing concepts from the Transformer architecture, but not using self-attention. Additionally, we find that self-attention is much more important for the encoder side than for the decoder side, where it can be replaced by a RNN or CNN without a loss in performance in most settings. Surprisingly, even a model without any target side self-attention performs well. 1 Introduction Since the introduction of attention mechanisms (Bahdanau et al., 2014; Luong et al., 2015) Neural Machine Translation (NMT) (Sutskever et al., 2014) has shown some impressive results. Initially, approaches to NMT mainly relied on Recurrent Neural Networks (RNNs) (Kalchbrenner and Blunsom, 2013; Bahdanau et al., 2014; Luong et al., 2015; Wu et al., 2016) such as Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) or the Gated Rectified Unit (GRU) (Cho et al., 2014). Recently, other approaches relying on convolutional networks (Kalchbrenner et al., 2016; Gehring et al., 2017) and self-attention (Vaswani et al., 2017) have been introduced. These approaches remove the dependency between source language time steps, leading to considerable speed-ups in training time and improvements in quality. The Transformer, however, contains other differences besides self-attention, including layer normalization across the entire model, multiple source attention mechanisms, a multi-head dot attention mechanism, and the use of residual feedforward layers. This raises the question of how much each of these components matters. To answer this question we first introduce a flexible Architecture Definition Language (ADL) (§2). In this language we standardize existing components in a consistent way making it easier to compare structural differences of architectures. Additionally, it allows us to efficiently perform a granular analysis of architectures, where we can evaluate the impact of individual components, rather than comparing entire architectures as a whole. This ability leads us to the following observations: • Source attention on lower encoder layers brings no additional benefit (§4.2). • Multiple source attention layers and residual feed-forward layers are key (§4.3). • Self-attention is more important for the source than for the target side (§4.4). 
2 Flexible Neural Machine Translation Architecture Combination

In order to experiment easily with different architecture variations we define a domain-specific NMT Architecture Definition Language (ADL), consisting of combinable and nestable building blocks.

2.1 Neural Machine Translation

NMT is formulated as a sequence-to-sequence prediction task in which a source sentence X = x_1, ..., x_n is translated auto-regressively into a target sentence Y = y_1, ..., y_m one token at a time as

p(y_t | Y_{1:t-1}, X; \theta) = \mathrm{softmax}(W_o z_L + b_o),   (1)

where b_o is a bias vector, W_o projects a model-dependent hidden vector z_L of the L-th decoder layer to the dimension of the target vocabulary V_trg, and \theta denotes the model parameters. Typically, during training Y_{1:t-1} consists of the reference sequence tokens, rather than the predictions produced by the model, which is known as teacher-forcing. Training is done by minimizing the cross-entropy loss between the predicted and the reference sequence.

2.2 Architecture Definition Language

In the following we specify the ADL, which can be used to define any standard NMT architecture and combinations thereof.

Layers The basic building block of the ADL is a layer l. Layers can be nested, meaning that a layer can consist of several sublayers. Layers optionally take a set of named arguments l(k_1=v_1, k_2=v_2, ...) with names k_1, k_2, ... and values v_1, v_2, ..., or positional arguments l(v_1, v_2, ...).

Layer definitions For each layer we have a corresponding layer definition based on the hidden states of the previous layer and any additional arguments. Specifically, each layer takes T hidden states h^i_1, ..., h^i_T, which in matrix form are H^i ∈ R^{T×d_i}, and produces a new set of hidden states h^{i+1}_1, ..., h^{i+1}_T, or H^{i+1}. While each layer can have a different number of hidden units d_i, in the following we assume them to stay constant across layers and refer to the model dimensionality as d_model. We distinguish the hidden states on the source side U^0, ..., U^{L_s} from the hidden states on the target side Z^0, ..., Z^L. These are produced by the source and target embeddings and L_s source layers and L target layers. Source attention layers play a special role in that their definition additionally makes use of any of the source hidden states U^0, ..., U^{L_s}.

Layer chaining Layers can be chained, feeding the output of one layer as the input to the next. We denote this as l_1 → l_2 → ... → l_L. This is equivalent to writing l_L(... l_2(l_1(H^0))) if none of the layers is a source attention layer. In layer chains, layers may also contain layers that themselves take arguments. As an example, l_1(k=v) → l_2 → ... → l_L is equivalent to l_L(... l_2(l_1(H^0, k=v))). Note that unlike in the layer definition, hidden states are not explicitly stated in the layer chain, but rather implicitly defined through the preceding layers.

Encoder/Decoder structure An NMT model is fully defined through two layer chains, namely one describing the encoder and another describing the decoder. The first-layer hidden states on the source, U^0, are defined through the source embedding as

u^0_t = E_src x_t,   (2)

where x_t ∈ {0, 1}^{|V_src|} is the one-hot representation of x_t and E_src ∈ R^{e×|V_src|} is an embedding matrix with embedding dimensionality e. Similarly, Z^0 is defined through the target embedding matrix E_tgt. Given the final decoder hidden state Z^L, the next-word predictions are done according to Equation 1.
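To make the chaining notation concrete, here is a small Python sketch (ours, not the parser used in the paper's implementation) that treats each layer as a callable on the hidden-state matrix and builds a chain l_1 → ... → l_L by left-to-right composition. It also anticipates the repetition layer introduced next by giving each repeated sub-chain its own weights; the toy linear layer and all names are illustrative choices, not part of the ADL itself.

```python
from typing import Callable, List
import numpy as np

# A layer maps hidden states H^i -> H^{i+1}; a chain l1 -> l2 -> ... -> lL is
# simply left-to-right function composition, i.e. lL(... l2(l1(H0))).
Layer = Callable[[np.ndarray], np.ndarray]

def chain(layers: List[Layer]) -> Layer:
    def chained(H: np.ndarray) -> np.ndarray:
        for layer in layers:
            H = layer(H)
        return H
    return chained

def repeat(n: int, make_subchain: Callable[[], Layer]) -> Layer:
    """repeat(n, l): n instantiations of a sub-chain, each with separate weights."""
    return chain([make_subchain() for _ in range(n)])

def make_linear(d: int) -> Layer:
    """Toy stand-in layer: a plain linear projection with its own weight matrix."""
    W = np.random.default_rng().normal(size=(d, d)) * 0.1
    return lambda H: H @ W

# Example "encoder" chain: linear -> repeat(2, linear), applied to 3 source positions.
encoder = chain([make_linear(4), repeat(2, lambda: make_linear(4))])
U_top = encoder(np.ones((3, 4)))    # final source hidden states, shape (3, 4)
```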
Layer repetition Networks often consist of substructures that are repeated several times. In order to support this we define a repetition layer as repeat(n, l) = l_1 → l_2 → ... → l_n, where l represents a layer chain and each one of l_1, ..., l_n is an instantiation of that layer chain with a separate set of weights.

2.3 Layer Definitions

In this section we introduce the concrete layers and their definitions, which are available for composing NMT architectures. They are based on building blocks common to many current NMT models.

Dropout A dropout (Srivastava et al., 2014) layer, denoted as dropout(h_t), can be applied to hidden states as a form of regularization.

Fixed positional embeddings Fixed positional embeddings (Vaswani et al., 2017) add information about the position in the sequence to the hidden states. With h_t ∈ R^d the positional embedding layer is defined as

pos(h_t) = dropout(\sqrt{d} · h_t + p_t)
p_{t,2j} = \sin(t / 10000^{2j/d})
p_{t,2j+1} = \cos(t / 10000^{2j/d}).

Linear We define a linear projection layer as linear(h_t, d_o) = W h_t + b, where W ∈ R^{d_o×d_in}.

Feed-forward Making use of the linear projection layer, a feed-forward layer with ReLU activation and dropout is defined as ff(h_t, d_o) = dropout(max(0, linear(h_t, d_o))), and a version which temporarily upscales the number of hidden units, as done by Vaswani et al. (2017), can be defined as ffl(h_t) = ff(4·d_in) → linear(d_in), where h_t ∈ R^{d_in}.

Convolution Convolutions run a small feed-forward network on a sliding window over the input. Formally, on the encoder side this is defined as

cnn(H, v, k) = v(W[h_{i−⌊k/2⌋}; ...; h_{i+⌊k/2⌋}] + b),

where k is the kernel size and v is a non-linearity. The input is padded so that the number of hidden states does not change. To preserve the auto-regressive property of the decoder we need to make sure to never take future decoder time steps into account, which can be achieved by adding k − 1 padding vectors h_{−k+1} = 0, ..., h_{−1} = 0, such that the decoder convolution is given as

cnn(H, v, k) = v(W[h_{t−k+1}; ...; h_t] + b).

The non-linearity v can either be a ReLU or a Gated Linear Unit (GLU) (Dauphin et al., 2016). With the GLU we set d_i = 2d such that we can split h = [h_A; h_B] ∈ R^{2d} and compute the non-linearity as glu([h_A; h_B]) = h_A ⊗ σ(h_B).

Identity We define an identity layer as id(h_t) = h_t.

Concatenation To concatenate the output of p layer chains we define concat(h_t, l_1, ..., l_p) = [l_1(h_t); ...; l_p(h_t)].

Recurrent Neural Network An RNN layer is defined as

rnn(h_t) = f_{rnn_o}(h_t, s_{t−1})
s_t = f_{rnn_h}(h_t, s_{t−1}),

where f_{rnn_o} and f_{rnn_h} could be defined through either a GRU (Cho et al., 2014) or an LSTM (Hochreiter and Schmidhuber, 1997) cell. In addition, a bidirectional RNN layer birnn is available, which runs one rnn in forward and another in reverse direction and concatenates both results.
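The two definitions that are easiest to get subtly wrong in practice are the fixed positional embeddings and the causally padded decoder convolution, so here is a short NumPy sketch of both (our illustration, with random stand-in convolution weights; dropout is omitted). The left-padding with k − 1 zero vectors is what guarantees that the output at step t only depends on h_{t−k+1}, ..., h_t.

```python
import numpy as np

def positional_embedding(T: int, d: int) -> np.ndarray:
    """Fixed sinusoidal positions p_t (d assumed even here for brevity)."""
    P = np.zeros((T, d))
    pos = np.arange(T)[:, None]
    div = np.power(10000.0, 2.0 * np.arange(d // 2) / d)
    P[:, 0::2] = np.sin(pos / div)     # p_{t,2j}
    P[:, 1::2] = np.cos(pos / div)     # p_{t,2j+1}
    return P

def pos_layer(H: np.ndarray) -> np.ndarray:
    T, d = H.shape
    return np.sqrt(d) * H + positional_embedding(T, d)   # dropout omitted

def causal_cnn(H: np.ndarray, W: np.ndarray, b: np.ndarray, k: int) -> np.ndarray:
    """Decoder-side convolution: left-pad with k-1 zero vectors so that the output
    at position t only sees h_{t-k+1}..h_t (auto-regressive property preserved)."""
    T, d = H.shape
    padded = np.concatenate([np.zeros((k - 1, d)), H], axis=0)
    windows = np.stack([padded[t:t + k].reshape(-1) for t in range(T)])  # (T, k*d)
    return np.maximum(0.0, windows @ W.T + b)            # ReLU as the non-linearity v

# Toy usage with random stand-in weights (illustrative only):
rng = np.random.default_rng(0)
T, d, k = 6, 8, 3
H = pos_layer(rng.normal(size=(T, d)))
W, b = rng.normal(size=(d, k * d)) * 0.1, np.zeros(d)
out = causal_cnn(H, W, b, k)    # shape (T, d); changing H[t+1:] never changes out[t]
```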
Attention All attention mechanisms take a set of query vectors q_0, ..., q_M, key vectors k_0, ..., k_N and value vectors v_0, ..., v_N in order to produce one context vector per query, which is a linear combination of the value vectors. We define Q ∈ R^{M×d}, K ∈ R^{N×d} and V ∈ R^{N×d} as the concatenation of these vectors. What is used as the query, key and value vectors depends on the attention type and is defined below.

Dot product attention The scaled dot product attention (Vaswani et al., 2017) is defined as

dot_att(Q, K, V, s) = \mathrm{softmax}\left(\frac{QK^\top}{\sqrt{s}}\right) V,

where the scaling factor s is implicitly set to d unless noted otherwise. Adding a projection to the queries, keys and values, we get the projected dot attention as

proj_dot_att(Q, K, V, d_p, s) = dot_att(QW_Q, KW_K, VW_V, s),

where d_p is the dimensionality of the projected vectors such that W_Q ∈ R^{d_q×d_p}, W_K ∈ R^{d_k×d_p} and W_V ∈ R^{d_v×d_p}. Vaswani et al. (2017) further introduce a multi-head attention, which applies multiple attentions at a reduced dimensionality. With h heads, multi-head attention is computed as

mh_dot_att(Q, K, V, h, s) = [C_1; ...; C_h],
C_i = proj_dot_att(Q, K, V, d/h, s).

Note that with h = 1 we recover the projected dot attention.

MLP attention The MLP attention (Bahdanau et al., 2014) computes the scores with a one-layer neural network as

mlp_att(Q, K, V) = \mathrm{softmax}(S) V,
S_{ij} = w_o^\top \tanh(W_q q_i + W_k k_j).

Source attention Using the source hidden vectors U, the source attentions are computed as

mh_dot_src_att(H, U, h, s) = mh_dot_att(H, U, U, h, s),
mlp_src_att(H, U) = mlp_att(H, U, U),
dot_src_att(H, U, s) = mh_dot_att(H, U, U, 1, s).

Self-attention Self-attention (Vaswani et al., 2017) uses the hidden states as queries, keys and values, such that

mh_dot_self_att(H, s) = mh_dot_att(H, H, H, s).

Please note that on the target side one needs to preserve the auto-regressive property by only attending to hidden states at the current or past steps, which is achieved by masking the attention mechanism.
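The following NumPy sketch (our illustration, not the paper's implementation) shows the scaled dot attention and the multi-head variant as composed above, with random stand-in projection matrices, and includes the target-side causal mask mentioned in the self-attention definition. For brevity each head here scales by its projected dimensionality, whereas the ADL's default scaling factor s is d.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dot_att(Q, K, V, s=None, mask=None):
    """dot_att: softmax(Q K^T / sqrt(s)) V, optionally masked (e.g. causally)."""
    s = Q.shape[-1] if s is None else s
    scores = Q @ K.T / np.sqrt(s)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)   # disallowed positions get ~zero weight
    return softmax(scores) @ V

def mh_dot_att(Q, K, V, W_q, W_k, W_v, h, mask=None):
    """Multi-head attention: h projected dot attentions of size d/h, concatenated."""
    d = Q.shape[-1]
    heads = []
    for i in range(h):
        sl = slice(i * d // h, (i + 1) * d // h)
        heads.append(dot_att(Q @ W_q[:, sl], K @ W_k[:, sl], V @ W_v[:, sl], mask=mask))
    return np.concatenate(heads, axis=-1)

# Toy target-side self-attention with a causal mask (random stand-in projections):
rng = np.random.default_rng(0)
T, d, h = 5, 8, 2
H = rng.normal(size=(T, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
causal = np.tril(np.ones((T, T), dtype=bool))    # step t attends only to steps <= t
out = mh_dot_att(H, H, H, W_q, W_k, W_v, h, mask=causal)   # shape (T, d)
```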
Transformer — The Transformer (Vaswani et al., 2017) makes use of self-attention, instead of RNNs or Convolutional Neural Networks (CNNs), as the basic computational block. Note that we use a slightly updated residual structure, as implemented by tensor2tensor,¹ compared to the originally proposed one: layer normalization is applied to the input of the residual block instead of between blocks. The Transformer uses a combination of self-attention and feed-forward layers on the encoder, and additionally source attention layers on the decoder side. Defining the Transformer encoder block as
t_enc = res_nd(mh_dot_self_att) → res_nd(ffl)
and the decoder block as
t_dec = res_nd(mh_dot_self_att) → res_nd(mh_dot_src_att) → res_nd(ffl),
the Transformer encoder is given as
U^L_s = pos → repeat(n, t_enc) → norm
and the decoder as
Z^L = pos → repeat(n, t_dec) → norm.

¹ https://github.com/tensorflow/tensor2tensor

3 Related Work
The dot attention mechanism, now heavily used in the Transformer models, was introduced by Luong et al. (2015) as part of an exploration of different attention mechanisms for RNN based NMT models. Britz et al. (2017) performed an extensive exploration of hyperparameters of RNN based NMT models. The variations explored include different attention mechanisms, RNN cell types and model depth. Similar to our work, Schrimpf et al. (2017) define a language for exploring architectures. In this case the architectures are defined for RNN cells and not for the higher level model architecture. Using the language they perform an automatic search of RNN cell architectures. For the application of image classification there have been several recent successful efforts of automatically searching for successful architectures (Zoph and Le, 2016; Negrinho and Gordon, 2017; Liu et al., 2017).

4 Experiments
What follows is an extensive empirical analysis of current NMT architectures and how certain sub-layers as defined through our ADL affect performance.

4.1 Setup
All experiments were run with an adapted version of SOCKEYE (Hieber et al., 2017), which can parse arbitrary model definitions that are expressed in the language described in Section 2.3. The code and configuration are available at https://github.com/awslabs/sockeye/tree/acl18, allowing researchers to easily replicate the experiments and to quickly try new NMT architectures by either making use of existing building blocks in novel ways or adding new ones.

In order to get data points on corpora of different sizes we ran experiments on both WMT and IWSLT data sets. For WMT we ran the majority of our experiments on the most recent WMT'17 data, consisting of roughly 5.9 million training sentences for English-German (EN→DE) and 4.5 million sentences for Latvian-English (LV→EN). We used newstest2016 as validation data and report metrics calculated on newstest2017. For the smaller IWSLT'16 English-German corpus, which consists of roughly 200 thousand training sentences, we used TED.tst2013 as validation data and report numbers for TED.tst2014. For both WMT'17 and IWSLT'16 we preprocessed all data using the Moses² tokenizer and apply Byte Pair Encoding (BPE) (Sennrich et al., 2015) with 32,000 merge operations. Unless noted otherwise we run each experiment three times with different random seeds and report the mean and standard deviation of the BLEU and METEOR (Lavie and Denkowski, 2009) scores across runs. Evaluation scores are based on tokenized sequences and calculated with MultEval (Clark et al., 2011).

² https://github.com/moses-smt/mosesdecoder/
In order to compare to previous work, we also ran an additional experiment on WMT'14 using the same data as Vaswani et al. (2017), as provided in preprocessed form through tensor2tensor.³ This data set consists of WMT'16 training data, which has been tokenized and byte pair encoded with 32,000 merge operations. Evaluation is done on tokenized and compound-split newstest2014 data using multi-bleu.perl in order to get scores comparable to Vaswani et al. (2017). As seen in Table 1, our Transformer implementation achieves a score equivalent to the originally reported numbers.

Table 1: BLEU scores on WMT'14 EN→DE.
  Model                        | WMT'14
  Vaswani et al. (2017)        | 27.3
  Our Transformer-base impl.   | 27.5

³ https://github.com/tensorflow/tensor2tensor/blob/765d33bb/tensor2tensor/data_generators/translate_ende.py

On the smaller IWSLT data we use d_model = 512 and on WMT d_model = 256 for all models. Models are trained with 6 encoder and 6 decoder blocks, where in the Transformer model a layer refers to a full encoder or decoder block. All convolutional layers use a kernel of size 3 and a ReLU activation, unless noted otherwise. RNNs use LSTM cells. For training we use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.0002. The learning rate is decayed by a factor of 0.7 whenever the validation perplexity does not improve for 8 consecutive checkpoints, where a checkpoint is created every 4,000 updates on WMT and 1,000 updates on IWSLT. All models use label smoothing (Szegedy et al., 2016) with ε_ls = 0.1.

4.2 What to attend to?
Source attention is typically based on the top encoder block. With multiple source attention layers one could hypothesize that it might be beneficial to allow attention on encoder blocks other than the top encoder block. It might, for example, be beneficial for lower decoder blocks to use encoder blocks from the same level, as they represent the same level of abstraction. Inversely, assuming that the translation is done in a coarse-to-fine manner, it might help to first use the uppermost encoder block and then gradually lower level representations.

Table 2: BLEU scores when varying the encoder block used in the source attention mechanism of a Transformer on the EN→DE IWSLT and WMT'17 datasets.
  Encoder block | IWSLT      | WMT'17
  upper         | 25.4 ± 0.2 | 27.6 ± 0.0
  increasing    | 25.4 ± 0.1 | 27.3 ± 0.1
  decreasing    | 25.3 ± 0.2 | 27.1 ± 0.1

The result of modifying the source attention mechanism to use different encoder blocks is shown in Table 2. The variations include using the result of the encoder Transformer block at the same level as the decoder Transformer block (increasing) and using the upper encoder Transformer block in the first decoder block and then gradually using the lower blocks (decreasing). We can see that attention on the upper encoder block performs best and no gains can be observed by attending to different encoder layers in the source attention mechanism.
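The three schemes of Table 2 differ only in which encoder block each decoder block attends to. A minimal sketch of that selection rule (our own illustrative code, with 1-based block indices):

```python
def source_attention_block(decoder_layer, n_layers, scheme="upper"):
    """Pick the encoder block that the source attention of a given decoder
    block attends to, for the three schemes compared in Table 2."""
    if scheme == "upper":        # always the top encoder block
        return n_layers
    if scheme == "increasing":   # same level as the decoder block
        return decoder_layer
    if scheme == "decreasing":   # top block first, then gradually lower ones
        return n_layers - decoder_layer + 1
    raise ValueError(f"unknown scheme: {scheme}")

# with 6 blocks: upper -> [6,6,6,6,6,6], increasing -> [1..6], decreasing -> [6..1]
print([source_attention_block(i, 6, "decreasing") for i in range(1, 7)])
```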
4.3 Network Structure
The Transformer sets itself apart from both standard RNN models and convolutional models by more than just the multi-head self-attention blocks.

RNN to Transformer — The differences to the RNN include the multiple source attention layers, multi-head attention, layer normalization and the residual upscaling feed-forward layers. Additionally, RNN models typically use single-head MLP attention instead of the dot attention. This raises the question of which aspect contributes most to the performance of the Transformer. Table 3 shows the result of taking an RNN and step by step changing the architecture to be similar to the Transformer architecture.

We start with a standard RNN architecture with MLP attention similar to Luong et al. (2015), as described in Section 2.4, with and without input feeding, denoted as RNMT. Next, we take a model with a residual connection around the encoder bi-RNN, such that the encoder is defined as
dropout → res_d(birnn) → repeat(5, res_d(rnn)).
The decoder uses a residual single-head dot attention and no input feeding and is defined as
dropout → repeat(6, res_d(rnn)) → res_d(dot_src_att) → res_d(ffl).
We denote this model as RNN in Table 3. This model is then changed to use multi-head attention (mh), positional embeddings (pos), layer normalization on the inputs of the residual blocks (norm), an attention mechanism in a residual block after every RNN layer with multiple heads (multi-att) and with a single head (multi-att-1h), and finally a residual upscaling feed-forward layer added after each attention block (ff). The final architecture of the encoder after applying these variations is
pos → res_nd(birnn) → res_nd(ffl) → repeat(5, res_nd(rnn) → res_nd(ffl)) → norm
and of the decoder
pos → repeat(6, res_nd(rnn) → res_nd(mh_dot_src_att) → res_nd(ffl)) → norm.
Comparing this to the Transformer as defined in Section 2.4, we note that the model is identical to the Transformer, except that each self-attention has been replaced by an RNN or bi-RNN.

Table 3: Transforming an RNN into a Transformer style architecture. "+" shows the incrementally added variation; "/" denotes an alternative variation to which the subsequent "+" is relative.
  Model           | IWSLT EN→DE BLEU | WMT'17 EN→DE BLEU | WMT'17 EN→DE METEOR | WMT'17 LV→EN BLEU | WMT'17 LV→EN METEOR
  Transformer     | 25.4 ± 0.1 | 27.6 ± 0.0 | 47.2 ± 0.1 | 18.5 ± 0.0 | 51.3 ± 0.1
  RNMT            | 23.2 ± 0.2 | 25.5 ± 0.2 | 45.1 ± 0.1 |            |
  - input feeding | 23.1 ± 0.2 | 24.6 ± 0.1 | 43.8 ± 0.2 |            |
  RNN             | 22.8 ± 0.2 | 23.8 ± 0.1 | 43.3 ± 0.1 | 15.2 ± 0.1 | 45.9 ± 0.1
  + mh            | 23.7 ± 0.4 | 24.4 ± 0.1 | 43.9 ± 0.1 | 16.0 ± 0.1 | 47.1 ± 0.1
  + pos           | 23.9 ± 0.2 | 24.1 ± 0.1 | 43.5 ± 0.2 |            |
  + norm          | 23.7 ± 0.1 | 24.0 ± 0.2 | 43.2 ± 0.1 | 15.2 ± 0.1 | 46.3 ± 0.2
  + multi-att-1h  | 24.5 ± 0.0 | 25.2 ± 0.1 | 44.9 ± 0.1 | 16.6 ± 0.2 | 49.1 ± 0.2
  / multi-att     | 24.4 ± 0.3 | 25.5 ± 0.0 | 45.3 ± 0.0 | 17.0 ± 0.2 | 49.4 ± 0.1
  + ff            | 25.1 ± 0.1 | 26.7 ± 0.1 | 46.4 ± 0.2 | 17.8 ± 0.1 | 50.5 ± 0.1

Table 3 shows that not using input feeding has a negative effect on the result, which however can be compensated by the explored model variations. With just a single attention mechanism the model benefits from multiple attention heads. The gains are even larger when an attention mechanism is added to every layer. With multiple source attention mechanisms the benefit of multiple heads decreases. Layer normalization on the inputs of the residual blocks has a small negative effect in all settings and metrics. As RNNs can learn to encode positional information, positional embeddings are not strictly necessary; indeed, we observe no gains but rather even a small drop in BLEU and METEOR for WMT'17 EN→DE when using them. Adding feed-forward layers leads to a large and consistent performance boost. While the final model, which is a Transformer model where each self-attention has been replaced by an RNN, is able to make up for a large amount of the difference between the baseline and the Transformer, it is still outperformed by the Transformer. The largest gains come from multiple attention mechanisms and residual feed-forward layers.

CNN to Transformer — While the convolutional models have much more in common with the Transformer than the RNN based models, there are still some notable differences.
Like the Transformer, convolutional models have no dependency between decoder time steps during training, use multiple source attention mechanisms and use a slightly different residual structure, as seen in Section 2.4. The Transformer uses a multi-head scaled dot attention while the ConvS2S model uses an unscaled single-head dot attention. Other differences include the use of layer normalization as well as residual feed-forward blocks in the Transformer. The result of making a CNN based architecture more and more similar to the Transformer can be seen in Table 4.

As a baseline we use a simple residual CNN structure with a residual single-head dot attention, denoted as CNN in Table 4. On the encoder side we have
pos → repeat(6, res_d(cnn))
and for the decoder
pos → repeat(6, res_d(cnn) → res_d(dot_src_att)).
This is similar to, but slightly simpler than, the ConvS2S model described in Section 2.4. In the experiments we explore both the GLU and the ReLU as non-linearities for the CNN. Adding layer normalization (norm), multi-head attention (mh) and upsampling residual feed-forward layers (ff), we arrive at a model that is identical to a Transformer where the self-attention layers have been replaced by CNNs. This means that we have the following architecture on the encoder:
pos → repeat(6, res_nd(cnn) → res_nd(ffl)) → norm,
whereas for the decoder we have
pos → repeat(6, res_nd(cnn) → res_nd(mh_dot_src_att) → res_nd(ffl)) → norm.

Table 4: Transforming a CNN based model into a Transformer style architecture.
  Model       | IWSLT EN→DE BLEU | WMT'17 EN→DE BLEU | WMT'17 EN→DE METEOR | WMT'17 LV→EN BLEU | WMT'17 LV→EN METEOR
  Transformer | 25.4 ± 0.1 | 27.6 ± 0.0 | 47.2 ± 0.1 | 18.5 ± 0.0 | 51.3 ± 0.1
  CNN GLU     | 24.3 ± 0.4 | 25.0 ± 0.3 | 44.4 ± 0.2 | 16.0 ± 0.5 | 47.4 ± 0.4
  + norm      | 24.1 ± 0.1 |            |            |            |
  + mh        | 24.2 ± 0.2 | 25.4 ± 0.1 | 44.8 ± 0.1 | 16.1 ± 0.1 | 47.6 ± 0.2
  + ff        | 25.3 ± 0.1 | 26.8 ± 0.1 | 46.0 ± 0.1 | 16.4 ± 0.2 | 47.9 ± 0.2
  CNN ReLU    | 23.6 ± 0.3 | 23.9 ± 0.1 | 43.4 ± 0.1 | 15.4 ± 0.1 | 46.4 ± 0.3
  + norm      | 24.3 ± 0.1 | 24.3 ± 0.2 | 43.6 ± 0.1 | 16.0 ± 0.2 | 47.1 ± 0.5
  + mh        | 24.2 ± 0.2 | 24.9 ± 0.1 | 44.4 ± 0.1 | 16.1 ± 0.1 | 47.5 ± 0.2
  + ff        | 25.3 ± 0.3 | 26.9 ± 0.1 | 46.1 ± 0.0 | 16.4 ± 0.2 | 47.9 ± 0.1

While in the baseline the GLU activation works better than the ReLU activation, when layer normalization, multi-head attention and residual feed-forward layers are added the performance is similar. Except for IWSLT, multi-head attention gives consistent gains over single-head attention. The largest gains can however be observed from the addition of residual feed-forward layers. The performance of the final model, which is very similar to a Transformer where each self-attention has been replaced by a CNN, matches the performance of the Transformer on IWSLT EN→DE but is still 0.7 BLEU points worse on WMT'17 EN→DE and two BLEU points worse on WMT'17 LV→EN.
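Since the decoder-side CNN is central to this comparison, here is a small numpy sketch of the causal convolution with GLU from Section 2.3, i.e., left-padding with k−1 zero vectors so that position t never sees future time steps; the weight shapes and names are illustrative, not the toolkit's.

```python
import numpy as np

def glu(h):
    # glu([hA; hB]) = hA * sigmoid(hB), splitting the channels in half
    hA, hB = np.split(h, 2, axis=-1)
    return hA * (1.0 / (1.0 + np.exp(-hB)))

def causal_cnn(H, W, b, k=3):
    """Decoder-side convolution: left-pad with k-1 zero vectors so that
    position t only sees h_{t-k+1}, ..., h_t (auto-regressive property)."""
    T, d = H.shape
    padded = np.concatenate([np.zeros((k - 1, d)), H], axis=0)
    windows = np.stack([padded[t:t + k].reshape(-1) for t in range(T)])  # (T, k*d)
    return glu(windows @ W + b)

# toy check: with d=4, the GLU needs 2*d output channels, so W maps k*d -> 2*d
T, d, k = 5, 4, 3
rng = np.random.default_rng(0)
H = rng.normal(size=(T, d))
W, b = rng.normal(size=(k * d, 2 * d)), np.zeros(2 * d)
print(causal_cnn(H, W, b, k).shape)  # (5, 4)
```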
4.4 Self-attention variations
At the core of the Transformer are self-attentional layers, which take the role previously occupied by RNNs and CNNs. Self-attention has the advantage that any two positions are directly connected and that, similar to CNNs, there are no dependencies between consecutive time steps, so that the computation can be fully parallelized across time. One disadvantage is that relative positional information is not directly represented and one needs to rely on the different heads to make up for this. In a CNN, information is constrained to a local window which grows linearly with depth; relative positions are therefore taken into account. While an RNN keeps an internal state which can be used in future time steps, it is unclear how well this works for very long range dependencies (Koehn and Knowles, 2017; Bentivogli et al., 2016). Additionally, having a dependency on the previous hidden state inhibits any parallelization across time.

Given the different advantages and disadvantages, we selectively replace self-attention on the encoder and decoder side in order to see where the model benefits most from self-attention. We take the encoder and decoder block defined in Section 2.4 and try out different layers in place of the self-attention. Concretely, we have
t_enc = res_nd(x_enc) → res_nd(ffl)
on the encoder side and
t_dec = res_nd(x_dec) → res_nd(mh_dot_src_att) → res_nd(ffl)
on the decoder side. Table 5 shows the result of replacing x_enc and x_dec with either self-attention, a CNN with ReLU activation, or an RNN. Notice that with self-attention used for both x_enc and x_dec we recover the Transformer model. Additionally, we remove the residual block on the decoder side entirely (none). This results in a decoder block which only has information about the previous target word y_t through the word embedding that is fed as the input to the first layer. The decoder block is reduced to
t_dec = res_nd(mh_dot_src_att) → res_nd(ffl).
In addition, we try a combination where the first and fourth block use self-attention, the second and fifth an RNN, and the third and sixth a CNN (combined).

Table 5: Different variations of the encoder and decoder self-attention layer.
  Encoder  | Decoder  | IWSLT EN→DE BLEU | WMT'17 EN→DE BLEU | WMT'17 EN→DE METEOR | WMT'17 LV→EN BLEU | WMT'17 LV→EN METEOR
  self-att | self-att | 25.4 ± 0.2 | 27.6 ± 0.0 | 47.2 ± 0.1 | 18.3 ± 0.0 | 51.1 ± 0.1
  self-att | RNN      | 25.1 ± 0.1 | 27.4 ± 0.1 | 47.0 ± 0.1 | 18.4 ± 0.2 | 51.1 ± 0.1
  self-att | CNN      | 25.4 ± 0.4 | 27.6 ± 0.2 | 46.7 ± 0.1 | 18.0 ± 0.3 | 50.3 ± 0.3
  RNN      | self-att | 25.8 ± 0.1 | 27.2 ± 0.1 | 46.7 ± 0.1 | 17.8 ± 0.1 | 50.6 ± 0.1
  CNN      | self-att | 25.7 ± 0.1 | 26.6 ± 0.3 | 46.3 ± 0.1 | 16.8 ± 0.4 | 49.4 ± 0.4
  RNN      | RNN      | 25.1 ± 0.1 | 26.7 ± 0.1 | 46.4 ± 0.2 | 17.8 ± 0.1 | 50.5 ± 0.1
  CNN      | CNN      | 25.3 ± 0.3 | 26.9 ± 0.1 | 46.1 ± 0.0 | 16.4 ± 0.2 | 47.9 ± 0.2
  self-att | combined | 25.1 ± 0.2 | 27.6 ± 0.2 | 47.2 ± 0.2 | 18.3 ± 0.1 | 51.1 ± 0.1
  self-att | none     | 23.7 ± 0.2 | 25.3 ± 0.2 | 43.1 ± 0.1 | 15.9 ± 0.1 | 45.1 ± 0.2

Replacing the self-attention on both the encoder and the decoder side with an RNN or a CNN results in a degradation of performance. In most settings, such as WMT'17 EN→DE for both variations and WMT'17 LV→EN for the RNN, the performance is comparable when replacing the decoder side self-attention. For the encoder, however, except for IWSLT, we see a drop in performance of up to 1.5 BLEU points when not using self-attention. Therefore, self-attention seems to be more important on the encoder side than on the decoder side. Despite the disadvantage of having a limited context window, the CNN performs as well as self-attention on the decoder side on IWSLT and WMT'17 EN→DE in terms of BLEU and only slightly worse in terms of METEOR. The combination of the three mechanisms (combined) on the decoder side performs almost identically to the full Transformer model, except for IWSLT where it is slightly worse. It is surprising how well the model works without any self-attention, as the decoder essentially loses any information about the history of generated words. Translations are entirely based on the previous word, provided through the target side word embedding, and the current position, provided through the positional embedding.
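For completeness, the masking that preserves the auto-regressive property of decoder self-attention, as discussed above, can be sketched as follows (single-head, unprojected, numpy; an illustration rather than the toolkit's implementation).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_self_att(H, s=None):
    """Decoder self-attention: position t may only attend to positions <= t,
    enforced by setting the scores of future positions to -inf before softmax."""
    T, d = H.shape
    s = d if s is None else s
    scores = H @ H.T / np.sqrt(s)                       # (T, T)
    future = np.triu(np.ones((T, T), dtype=bool), k=1)  # True above the diagonal
    scores = np.where(future, -np.inf, scores)
    return softmax(scores, axis=-1) @ H

H = np.random.default_rng(0).normal(size=(4, 8))
print(masked_self_att(H).shape)  # (4, 8); row t is a mixture of H[0..t] only
```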
5 Conclusion We described an ADL for specifying NMT architectures based on composable building blocks. Instead of committing to a single architecture, the language allows for combining architectures on a granular level. Using this language we explored how specific aspects of the Transformer architecture can successfully be applied to RNNs and CNNs. We performed an extensive evaluation on IWSLT EN→DE, WMT’17 EN→DE and LV→EN, reporting both BLEU and METEOR over multiple runs in each setting. We found that RNN based models benefit from multiple source attention mechanisms and residual feed-forward blocks. CNN based models on the other hand can be improved through layer normalization and also feed-forward blocks. These variations bring the RNN and CNN based models close to the Transformer. Furthermore, we showed that one can successfully combine architectures. We found that self-attention is much more important on the encoder side than it is on the decoder side, where even a model without self-attention performed surprisingly well. For the data sets we evaluated on, models with self-attention on the encoder side and either an RNN or CNN on the decoder side performed competitively to the Transformer model in most cases. We make our implementation available so that it can be used for exploring novel architecture variations. References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Luisa Bentivogli, Arianna Bisazza, Mauro Cettolo, and Marcello Federico. 2016. Neural versus phrase1808 based machine translation quality: a case study. arXiv preprint arXiv:1608.04631. Denny Britz, Anna Goldie, Thang Luong, and Quoc Le. 2017. Massive exploration of neural machine translation architectures. arXiv preprint arXiv:1703.03906. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Jonathan H Clark, Chris Dyer, Alon Lavie, and Noah A Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 176–181. Association for Computational Linguistics. Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2016. Language modeling with gated convolutional networks. arXiv preprint arXiv:1612.08083. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122. Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A Toolkit for Neural Machine Translation. ArXiv preprint arXiv:1712.05690. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700–1709. Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. 
Neural machine translation in linear time. arXiv preprint arXiv:1610.10099. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. arXiv preprint arXiv:1706.03872. Alon Lavie and Michael J Denkowski. 2009. The meteor metric for automatic evaluation of machine translation. Machine translation, 23(2-3):105–115. Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. 2017. Hierarchical representations for efficient architecture search. arXiv preprint arXiv:1711.00436. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Renato Negrinho and Geoff Gordon. 2017. Deeparchitect: Automatically designing and training deep architectures. arXiv preprint arXiv:1704.08792. Martin Schrimpf, Stephen Merity, James Bradbury, and Richard Socher. 2017. A flexible approach to automated rnn architecture generation. arXiv preprint arXiv:1712.07316. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Barret Zoph and Quoc V Le. 2016. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578.
Weakly Supervised Semantic Parsing with Abstract Examples
Omer Goldman*, Veronica Latcinnik*, Udi Naveh*, Amir Globerson, Jonathan Berant
Tel-Aviv University
{omergoldman@mail,veronical@mail,ehudnave@mail,gamir@post,joberant@cs}.tau.ac.il
* Authors equally contributed to this work.

Abstract
Training semantic parsers from weak supervision (denotations) rather than strong supervision (programs) complicates training in two ways. First, a large search space of potential programs needs to be explored at training time to find a correct program. Second, spurious programs that accidentally lead to a correct denotation add noise to training. In this work we propose that in closed worlds with clear semantic types, one can substantially alleviate these problems by utilizing an abstract representation, where tokens in both the language utterance and program are lifted to an abstract form. We show that these abstractions can be defined with a handful of lexical rules and that they result in sharing between different examples that alleviates the difficulties in training. To test our approach, we develop the first semantic parser for CNLVR, a challenging visual reasoning dataset, where the search space is large and overcoming spuriousness is critical, because denotations are either TRUE or FALSE, and thus random programs are likely to lead to a correct denotation. Our method substantially improves performance, and reaches 82.5% accuracy, a 14.7% absolute accuracy improvement compared to the best reported accuracy so far.

[Figure 1: Overview of our visual reasoning setup for the CNLVR dataset. Given an image rendered from a KB k and an utterance x, our goal is to parse x to a program z that results in the correct denotation y. Our training data includes (x, k, y) triplets. Example: x = "There is a small yellow item not touching any wall", y = True, z = Exist(Filter(ALL ITEMS, λx.And(And(IsYellow(x), IsSmall(x)), Not(IsTouchingWall(x, Side.Any))))).]

1 Introduction
The goal of semantic parsing is to map language utterances to executable programs. Early work on statistical learning of semantic parsers utilized supervised learning, where training examples included pairs of language utterances and programs (Zelle and Mooney, 1996; Kate et al., 2005; Zettlemoyer and Collins, 2005, 2007). However, collecting such training examples at scale has quickly turned out to be difficult, because expert annotators who are familiar with formal languages are required. This has led to a body of work on weakly-supervised semantic parsing (Clarke et al., 2010; Liang et al., 2011; Krishnamurthy and Mitchell, 2012; Kwiatkowski et al., 2013; Berant et al., 2013; Cai and Yates, 2013; Artzi and Zettlemoyer, 2013). In this setup, training examples correspond to utterance-denotation pairs, where a denotation is the result of executing a program against the environment (see Fig. 1). Naturally, collecting denotations is much easier, because it can be performed by non-experts.

Training semantic parsers from denotations rather than programs complicates training in two ways: (a) Search: The algorithm must learn to search through the huge space of programs at training time, in order to find the correct program.
This is a difficult search problem due to the combinatorial nature of the search space. (b) Spurious1810 ness: Incorrect programs can lead to correct denotations, and thus the learner can go astray based on these programs. Of the two mentioned problems, spuriousness has attracted relatively less attention (Pasupat and Liang, 2016; Guu et al., 2017). Recently, the Cornell Natural Language for Visual Reasoning corpus (CNLVR) was released (Suhr et al., 2017), and has presented an opportunity to better investigate the problem of spuriousness. In this task, an image with boxes that contains objects of various shapes, colors and sizes is shown. Each image is paired with a complex natural language statement, and the goal is to determine whether the statement is true or false (Fig. 1). The task comes in two flavors, where in one the input is the image (pixels), and in the other it is the knowledge-base (KB) from which the image was synthesized. Given the KB, it is easy to view CNLVR as a semantic parsing problem: our goal is to translate language utterances into programs that will be executed against the KB to determine their correctness (Johnson et al., 2017b; Hu et al., 2017). Because there are only two return values, it is easy to generate programs that execute to the right denotation, and thus spuriousness is a major problem compared to previous datasets. In this paper, we present the first semantic parser for CNLVR. Semantic parsing can be coarsely divided into a lexical task (i.e., mapping words and phrases to program constants), and a structural task (i.e., mapping language composition to program composition operators). Our core insight is that in closed worlds with clear semantic types, like spatial and visual reasoning, we can manually construct a small lexicon that clusters language tokens and program constants, and create a partially abstract representation for utterances and programs (Table 1) in which the lexical problem is substantially reduced. This scenario is ubiquitous in many semantic parsing applications such as calendar, restaurant reservation systems, housing applications, etc: the formal language has a compact semantic schema and a well-defined typing system, and there are canonical ways to express many program constants. We show that with abstract representations we can share information across examples and better tackle the search and spuriousness challenges. By pulling together different examples that share the same abstract representation, we can identify programs that obtain high reward across multiple examples, thus reducing the problem of spuriousness. This can also be done at search time, by augmenting the search state with partial programs that have been shown to be useful in earlier iterations. Moreover, we can annotate a small number of abstract utterance-program pairs, and automatically generate training examples, that will be used to warm-start our model to an initialization point in which search is able to find correct programs. We develop a formal language for visual reasoning, inspired by Johnson et al. (2017b), and train a semantic parser over that language from weak supervision, showing that abstract examples substantially improve parser accuracy. Our parser obtains an accuracy of 82.5%, a 14.7% absolute accuracy improvement compared to stateof-the-art. All our code is publicly available at https://github.com/udiNaveh/ nlvr_tau_nlp_final_proj. 
2 Setup
Problem Statement — Given a training set of N examples {(x_i, k_i, y_i)}_{i=1}^N, where x_i is an utterance, k_i is a KB describing objects in an image and y_i ∈ {TRUE, FALSE} denotes whether the utterance is true or false in the KB, our goal is to learn a semantic parser that maps a new utterance x to a program z such that when z is executed against the corresponding KB k, it yields the correct denotation y (see Fig. 1).

Programming language — The original KBs in CNLVR describe an image as a set of objects, where each object has a color, shape, size and location in absolute coordinates. We define a programming language over the KB that is more amenable to spatial reasoning, inspired by work on the CLEVR dataset (Johnson et al., 2017b). This programming language provides access to functions that allow us to check the size, shape, and color of an object, to check whether it is touching a wall, to obtain sets of items that are above and below a certain set of items, etc.¹ More formally, a program is a sequence of tokens describing a possibly recursive sequence of function applications in prefix notation. Each token is either a function with fixed arity (all functions have either one or two arguments), a constant, a variable, or a λ term used to define Boolean functions. Functions, constants and variables have one of the following atomic types: Int, Bool, Item, Size, Shape, Color, Side (sides of a box in the image); or a composite type Set(?) or Func(?,?). Valid programs have a return type Bool. Tables 1 and 2 provide examples of utterances and their correct programs. The supplementary material provides a full description of all program tokens, their arguments and return types. Unlike CLEVR, CNLVR requires substantial set-theoretic reasoning (utterances refer to various aspects of sets of items in one of the three boxes in the image), which required extending the language described by Johnson et al. (2017b) to include set operators and lambda abstraction. We manually sampled 100 training examples from the training data and estimate that roughly 95% of the utterances in the training data can be expressed with this programming language.

¹ We leave the problem of learning the programming language functions from the original KB for future work.

Table 1: An example of an utterance-program pair (x, z) and its abstract counterpart (x̄, z̄).
  x:  "There are exactly 3 yellow squares touching the wall."
  z:  Equal(3, Count(Filter(ALL ITEMS, λx. And(And(IsYellow(x), IsSquare(x)), IsTouchingWall(x)))))
  x̄:  "There are C-QuantMod C-Num C-Color C-Shape touching the wall."
  z̄:  C-QuantMod(C-Num, Count(Filter(ALL ITEMS, λx. And(And(IsC-Color(x), IsC-Shape(x)), IsTouchingWall(x)))))

Table 2: Examples of utterance-program pairs. Commas and parentheses are provided for readability only.
  x:  "There is a small yellow item not touching any wall."
  z:  Exist(Filter(ALL ITEMS, λx.And(And(IsYellow(x), IsSmall(x)), Not(IsTouchingWall(x, Side.Any)))))
  x:  "One tower has a yellow base."
  z:  GreaterEqual(1, Count(Filter(ALL ITEMS, λx.And(IsYellow(x), IsBottom(x)))))
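To illustrate how such programs execute against a KB, the following toy Python sketch implements a handful of the functions named above over a simplified item representation. The KB fields and the IsTouchingWall signature are simplified assumptions for illustration; the full language contains many more functions.

```python
# Toy KB: each item is a dict; the real CNLVR KBs also store box membership,
# coordinates, etc. Function names follow the paper; implementations are ours.
KB = [
    {"color": "Yellow", "size": "Small", "shape": "circle", "touching_wall": False},
    {"color": "Black",  "size": "Big",   "shape": "square", "touching_wall": True},
]
ALL_ITEMS = KB

def Filter(items, pred):   return [x for x in items if pred(x)]
def Count(items):          return len(items)
def Exist(items):          return len(items) > 0
def And(a, b):             return a and b
def Not(a):                return not a
def IsYellow(x):           return x["color"] == "Yellow"
def IsSmall(x):            return x["size"] == "Small"
def IsTouchingWall(x):     return x["touching_wall"]   # Side argument omitted

# "There is a small yellow item not touching any wall"
z = Exist(Filter(ALL_ITEMS,
                 lambda x: And(And(IsYellow(x), IsSmall(x)),
                               Not(IsTouchingWall(x)))))
print(z)  # True for this toy KB
```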
3 Model
We base our model on the semantic parser of Guu et al. (2017). In their work, they used an encoder-decoder architecture (Sutskever et al., 2014) to define a distribution p_θ(z | x). The utterance x is encoded using a bi-directional LSTM (Hochreiter and Schmidhuber, 1997) that creates a contextualized representation h_i for every utterance token x_i, and the decoder is a feed-forward network combined with an attention mechanism over the encoder outputs (Bahdanau et al., 2015). The feed-forward decoder takes as input the last K tokens that were decoded.

More formally, the probability of a program is the product of the probabilities of its tokens given the history, p_θ(z | x) = \prod_t p_θ(z_t | x, z_{1:t−1}), and the probability of a decoded token is computed as follows. First, a Bi-LSTM encoder converts the input sequence of utterance embeddings into a sequence of forward and backward states h^{F,B}_1, ..., h^{F,B}_{|x|}. The utterance representation is x̂ = [h^F_{|x|}; h^B_1]. Then decoding produces the program token by token:
q_t = ReLU(W_q [x̂; v̂; z_{t−K−1:t−1}]),
α_{t,i} ∝ \exp(q_t^\top W_α h_i),    c_t = \sum_i α_{t,i} h_i,
p_θ(z_t | x, z_{1:t−1}) ∝ \exp(φ_{z_t}^\top W_s [q_t; c_t]),
where φ_z is an embedding for program token z, v̂ is a bag-of-words vector for the tokens in x, z_{i:j} = (z_i, ..., z_j) is a history vector of size K, the matrices W_q, W_α, W_s are learned parameters (along with the LSTM parameters and embedding matrices), and ';' denotes concatenation.

Search: Searching through the large space of programs is a fundamental challenge in semantic parsing. To combat this challenge we apply several techniques. First, we use beam search at decoding time and when training from weak supervision (see Sec. 4), similar to prior work (Liang et al., 2017; Guu et al., 2017). At each decoding step we maintain a beam B of program prefixes of length n, expand them exhaustively to programs of length n+1 and keep the top-|B| program prefixes with highest model probability. Second, we utilize the semantic typing system to only construct programs that are syntactically valid, and substantially prune the program search space (similar to type constraints in Krishnamurthy et al. (2017); Xiao et al. (2016); Liang et al. (2017)). We maintain a stack that keeps track of the expected semantic type at each decoding step. The stack is initialized with the type Bool. Then, at each decoding step, only tokens that return the semantic type at the top of the stack are allowed; the stack is popped, and if the decoded token is a function, the semantic types of its arguments are pushed onto the stack. This dramatically reduces the search space and guarantees that only syntactically valid programs will be produced. Fig. 2 illustrates the state of the stack when decoding a program for an input utterance.

[Figure 2: An example of the state of the type stack s while decoding a program z for the utterance x = "One tower has a yellow base.", with z = EqualInt(1, Count(Filter(ALL ITEMS, λx. And(IsYellow(x), IsBottom(x))))).]

Given the constraints on valid programs, our model p′_θ(z | x) is defined as
\prod_t \frac{p_θ(z_t | x, z_{1:t−1}) \cdot \mathbb{1}(z_t | z_{1:t−1})}{\sum_{z′} p_θ(z′ | x, z_{1:t−1}) \cdot \mathbb{1}(z′ | z_{1:t−1})},
where \mathbb{1}(z_t | z_{1:t−1}) indicates whether a certain program token is valid given the program prefix.
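A minimal sketch of the type-stack constraint described above is given below; the token set and type signatures are illustrative simplifications of the full CNLVR language (e.g., the λ term and its variable are collapsed into single tokens).

```python
# Illustrative type signatures: token -> (return_type, [argument_types]).
SIGNATURES = {
    "Exist":      ("Bool", ["Set"]),
    "Count":      ("Int",  ["Set"]),
    "EqualInt":   ("Bool", ["Int", "Int"]),
    "Filter":     ("Set",  ["Set", "BoolFunc"]),
    "ALL_ITEMS":  ("Set",  []),
    "1":          ("Int",  []),
    "lambda_x":   ("BoolFunc", ["Bool"]),
    "IsYellow_x": ("Bool", []),
}

def valid_tokens(stack):
    """Only tokens whose return type matches the top of the expected-type stack."""
    expected = stack[-1]
    return [tok for tok, (ret, _) in SIGNATURES.items() if ret == expected]

def push_token(stack, token):
    """Pop the satisfied type; push argument types so the first argument is next."""
    _, args = SIGNATURES[token]
    return stack[:-1] + list(reversed(args))

# Decoding a simplified version of the Figure 2 program in prefix order:
stack = ["Bool"]
for tok in ["EqualInt", "1", "Count", "Filter", "ALL_ITEMS", "lambda_x", "IsYellow_x"]:
    assert tok in valid_tokens(stack)
    stack = push_token(stack, tok)
print(stack)  # [] -> a complete, well-typed program
```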
Discriminative re-ranking: The above model is a locally-normalized model that provides a distribution for every decoded token, and thus might suffer from the label bias problem (Andor et al., 2016; Lafferty et al., 2001). Thus, we add a globally-normalized re-ranker p_ψ(z | x) that scores all |B| programs in the final beam produced by p′_θ(z | x). Our globally-normalized model is
p^g_ψ(z | x) ∝ \exp(s_ψ(x, z)),
and is normalized over all programs in the beam. The scoring function s_ψ(x, z) is a neural network with identical architecture to the locally-normalized model, except that (a) it feeds the decoder with the candidate program z and does not generate it, and (b) the last hidden state is fed into a feed-forward network whose output is s_ψ(x, z). Our final ranking score is p′_θ(z | x) · p^g_ψ(z | x).

4 Training
We now describe our basic method for training from weak supervision, which we extend upon in Sec. 5 using abstract examples. To use weak supervision, we treat the program z as a latent variable that is approximately marginalized. To describe the objective, define R(z, k, y) ∈ {0, 1} to be one if executing program z on KB k results in denotation y, and zero otherwise. The objective is then to maximize p(y | x), given by
\sum_{z∈Z} p′_θ(z | x) p(y | z, k) = \sum_{z∈Z} p′_θ(z | x) R(z, k, y) ≈ \sum_{z∈B} p′_θ(z | x) R(z, k, y),
where Z is the space of all programs and B ⊂ Z are the programs found by beam search.

In most semantic parsers there will be relatively few z that generate the correct denotation y. However, in CNLVR, y is binary, and so spuriousness is a central problem. To alleviate it, we utilize a property of CNLVR: the same utterance appears 4 times with 4 different images.² If a program is spurious it is likely that it will yield the wrong denotation in one of those 4 images. Thus, we can re-define each training example to be (x, {(k_j, y_j)}_{j=1}^4), where each utterance x is paired with 4 different KBs and the denotations of the utterance with respect to these KBs. Then, we maximize p({y_j}_{j=1}^4 | x) by maximizing the objective above, except that R(z, {k_j, y_j}_{j=1}^4) = 1 iff the denotation of z is correct for all four KBs. This dramatically reduces the problem of spuriousness, as the chance of randomly obtaining a correct denotation goes down from 1/2 to 1/16. This is reminiscent of Pasupat and Liang (2016), where random permutations of Wikipedia tables were shown to crowdsourcing workers to eliminate spurious programs. We train the discriminative ranker analogously, by maximizing the probability of programs with correct denotation, \sum_{z∈B} p^g_ψ(z | x) R(z, k, y).

This basic training method fails for CNLVR (see Sec. 6), due to the difficulties of search and spuriousness. Thus, we turn to learning from abstract examples, which substantially reduce these problems.

² We used the KBs in CNLVR, for which there are 4 KBs per utterance. When working over pixels there are 24 images per utterance, as 6 images were generated from each KB.

5 Learning from Abstract Examples
The main premise of this work is that in closed, well-typed domains such as visual reasoning, the main challenge is handling language compositionality, since questions may have a complex and nested structure. Conversely, the problem of mapping lexical items to functions and constants in the programming language can be substantially alleviated by taking advantage of the compact KB schema and typing system, and utilizing a small lexicon that maps prevalent lexical items into typed program constants. Thus, if we abstract away from the actual utterance into a partially abstract representation, we can combat the search and spuriousness challenges, as we can generalize better across examples in small datasets.

Consider the utterances:
1. "There are exactly 3 yellow squares touching the wall."
2. "There are at least 2 blue circles touching the wall."
While the surface forms of these utterances are different, at an abstract level they are similar and it would be useful to leverage this similarity. We therefore define an abstract representation for utterances and logical forms that is suitable for spatial reasoning. We define seven abstract clusters (see Table 3) that correspond to the main semantic types in our domain. Then, we associate each cluster with a small lexicon that contains language-program token pairs associated with this cluster. These mappings represent the canonical ways in which program constants are expressed in natural language. Table 3 shows the seven clusters we use, with an example of an utterance-program token pair from each cluster, and the number of mappings in each cluster. In total, 25 mappings are used to define abstract representations.

Table 3: Example mappings from utterance tokens to program tokens for the seven clusters used in the abstract representation. The rightmost column counts the number of mappings in each cluster, resulting in a total of 25 mappings.
  Utterance  | Program   | Cluster    | #
  "yellow"   | IsYellow  | C-Color    | 3
  "big"      | IsBig     | C-Size     | 3
  "square"   | IsSquare  | C-Shape    | 4
  "3"        | 3         | C-Num      | 2
  "exactly"  | EqualInt  | C-QuantMod | 5
  "top"      | Side.Top  | C-Location | 2
  "above"    | GetAbove  | C-SpaceRel | 6
             |           | Total:     | 25

As we show next, abstract examples can be used to improve the process of training semantic parsers. Specifically, in Sections 5.1–5.3 we use abstract examples in several ways, from generating new training data to improving search accuracy. The combined effect of these approaches is quite dramatic, as our evaluation demonstrates.

5.1 High Coverage via Abstract Examples
We begin by demonstrating that abstraction leads to rather effective coverage of the types of questions asked in a dataset, namely, that many questions in the data correspond to a small set of abstract examples. We created abstract representations for all 3,163 utterances in the training examples by mapping utterance tokens to their cluster label, and then counted how many distinct abstract utterances exist. We found that as few as 200 abstract utterances cover roughly half of the training examples in the original training set.

The above suggests that knowing how to answer a small set of abstract questions may already yield a reasonable baseline. To test this baseline, we constructed a "rule-based" parser as follows. We manually annotated 106 abstract utterances with their corresponding abstract program (including the alignment between abstract tokens in the utterance and the program). For example, Table 1 shows the abstract utterance and program for the utterance "There are exactly 3 yellow squares touching the wall". Note that the utterance "There are at least 2 blue circles touching the wall" will be mapped to the same abstract utterance and program.

Given this set of manual annotations, our rule-based semantic parser operates as follows: Given an utterance x, create its abstract representation x̄. If it exactly matches one of the manually annotated utterances, map it to its corresponding abstract program z̄. Replace the abstract program tokens with real program tokens based on the alignment with the utterance tokens, and obtain a final program z. If x̄ does not match, return TRUE, the majority label. The rule-based parser will fail for examples not covered by the manual annotation. However, it already provides a reasonable baseline (see Table 4). As shown next, manual annotations can also be used for generating new training data.
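The rule-based parser of Section 5.1 can be sketched in a few lines of Python. The lexicon entries, the single annotated abstract pair, and the slightly simplified abstract-program notation below are illustrative stand-ins for the paper's 25 mappings and 106 annotations, and for simplicity each cluster is assumed to occur at most once per utterance.

```python
LEXICON = {
    "exactly": ("C-QuantMod", "EqualInt"), "3": ("C-Num", "3"), "2": ("C-Num", "2"),
    "yellow": ("C-Color", "IsYellow"),     "blue": ("C-Color", "IsBlue"),
    "squares": ("C-Shape", "IsSquare"),    "circles": ("C-Shape", "IsCircle"),
}

ABSTRACT_ANNOTATIONS = {
    "there are C-QuantMod C-Num C-Color C-Shape touching the wall .":
        "C-QuantMod(C-Num, Count(Filter(ALL_ITEMS, "
        "lambda x. And(And(C-Color(x), C-Shape(x)), IsTouchingWall(x)))))",
}

def abstract(tokens):
    """Map utterance tokens to cluster labels; remember the alignment."""
    clusters, alignment = [], {}
    for tok in tokens:
        if tok in LEXICON:
            cluster, program_tok = LEXICON[tok]
            clusters.append(cluster)
            alignment[cluster] = program_tok
        else:
            clusters.append(tok)
    return " ".join(clusters), alignment

def rule_based_parse(utterance):
    abstract_utt, alignment = abstract(utterance.lower().split())
    abstract_prog = ABSTRACT_ANNOTATIONS.get(abstract_utt)
    if abstract_prog is None:
        return "TRUE"                                  # majority-label fallback
    for cluster, program_tok in alignment.items():     # de-abstract via alignment
        abstract_prog = abstract_prog.replace(cluster, program_tok)
    return abstract_prog

print(rule_based_parse("There are exactly 3 yellow squares touching the wall ."))
```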
5.2 Data Augmentation
While the rule-based semantic parser has high precision and gauges the amount of structural variance in the data, it cannot generalize beyond observed examples. However, we can automatically generate non-abstract utterance-program pairs from the manually annotated abstract pairs and train a semantic parser with strong supervision that can potentially generalize better. E.g., consider the utterance "There are exactly 3 yellow squares touching the wall", whose abstract representation is given in Table 1. It is clear that we can use this abstract pair to generate a program for a new utterance "There are exactly 3 blue squares touching the wall". This program will be identical to the program of the first utterance, with IsBlue replacing IsYellow.

More generally, we can sample any abstract example and instantiate the abstract clusters that appear in it by sampling pairs of utterance-program tokens for each abstract cluster. Formally, this is equivalent to a synchronous context-free grammar (Chiang, 2005) that has a rule for generating each manually-annotated abstract utterance-program pair, and rules for synchronously generating utterance and program tokens from the seven clusters. We generated 6,158 (x, z) examples using this method and trained a standard sequence-to-sequence parser by maximizing log p′_θ(z | x) in the model above. Although these are generated from a small set of 106 abstract utterances, they can be used to learn a model with higher coverage and accuracy compared to the rule-based parser, as our evaluation demonstrates.³

The resulting parser can be used as a standalone semantic parser. However, it can also be used as an initialization point for the weakly-supervised semantic parser. As we observe in Sec. 6, this results in further improvement in accuracy.

³ Training a parser directly over the 106 abstract examples results in poor performance due to the small number of examples.

Algorithm 1: Decoding with an Abstract Cache
 1: procedure DECODE(x, y, C, D)
 2:   // C is a map where the key is an abstract utterance and the value is a pair (Z, R̂) of a list of abstract programs Z and their average rewards R̂. D is an integer.
 3:   x̄ ← abstract utterance of x
 4:   A ← D programs in C[x̄] with top reward values
 5:   B_1 ← compute beam of programs of length 1
 6:   for t = 2 ... T do                  // decode with cache
 7:     B_t ← construct beam from B_{t−1}
 8:     A_t ← truncate(A, t)
 9:     B_t.add(de-abstract(A_t))
10:   for z ∈ B_T do                      // update cache
11:     update rewards in C[x̄] using (z̄, R(z, y))
12:   return B_T ∪ de-abstract(A)

5.3 Caching Abstract Examples
We now describe a caching mechanism that uses abstract examples to combat search and spuriousness when training from weak supervision. As shown in Sec. 5.1, many utterances are identical at the abstract level. Thus, a natural idea is to keep track at training time of abstract utterance-program pairs that resulted in a correct denotation, and use this information to direct the search procedure. Concretely, we construct a cache C that maps abstract utterances to all abstract programs that were decoded by the model, and tracks the average reward obtained for those programs. For every utterance x, after obtaining the final beam of programs, we add to the cache all abstract utterance-program pairs (x̄, z̄), and update their average reward (Alg. 1, line 10). To construct an abstract example (x̄, z̄) from an utterance-program pair (x, z) in the beam, we perform the following procedure.
First, we create x̄ by replacing utterance tokens with their cluster label, as in the rule-based semantic parser. Then, we go over every program token in z, and replace it with an abstract cluster if the utterance contains a token that is mapped to this program token according to the mappings from Table 3. This also provides an alignment from abstract program tokens to abstract utterance tokens that is necessary when utilizing the cache.

We propose two variants for taking advantage of the cache C. Both are shown in Algorithm 1.
1. Full program retrieval (Alg. 1, line 12): Given utterance x, construct an abstract utterance x̄, retrieve the top D abstract programs A from the cache, compute the de-abstracted programs Z using alignments from program tokens to utterance tokens, and add the D programs to the final beam.
2. Program prefix retrieval (Alg. 1, line 9): Here, we additionally add prefixes of abstract programs to the beam, to further guide the search process. At each step t, let B_t be the beam of decoded programs at step t. For every abstract program z̄ ∈ A we add the de-abstracted prefix z_{1:t} to B_t and expand B_{t+1} accordingly. This allows the parser to potentially construct new programs that are not in the cache already. This approach combats both spuriousness and the search challenge, because we add promising program prefixes to the beam that might have fallen off of it earlier.

Fig. 3 visualizes the caching mechanism. A high-level overview of our entire approach for utilizing abstract examples at training time, for both data augmentation and model training, is given in Fig. 4.

[Figure 3: A visualization of the caching mechanism. At each decoding step, prefixes of high-reward abstract programs are added to the beam from the cache.]
[Figure 4: An overview of our approach for utilizing abstract examples for data augmentation and model training.]

6 Experimental Evaluation
Model and Training Parameters — The Bi-LSTM state dimension is 30. The decoder has one hidden layer of dimension 50, which takes the last 4 decoded tokens as input, as well as the encoder states. Token embeddings are of dimension 12, the beam size is 40 and D = 10 programs are used in Algorithm 1. Word embeddings are initialized from CBOW (Mikolov et al., 2013) trained on the training data, and are then optimized end-to-end. In the weakly-supervised parser we encourage exploration with meritocratic gradient updates with β = 0.5 (Guu et al., 2017), and we warm-start its parameters with the supervised parser, as mentioned above. For optimization, Adam is used (Kingma and Ba, 2014), with a learning rate of 0.001 and a mini-batch size of 8.

Pre-processing — Because the number of utterances is relatively small for training a neural model, we take the following steps to reduce sparsity. We lowercase all utterance tokens and use their lemmatized form. We also use spelling correction to replace words that contain typos. After pre-processing we replace every word that occurs less than 5 times with an UNK symbol.

Evaluation — We evaluate on the public development and test sets of CNLVR as well as on the hidden test set. The standard evaluation metric is accuracy, i.e., how many examples are correctly classified. In addition, we report consistency, which is the proportion of utterances for which the decoded program has the correct denotation for all 4 images/KBs. It captures whether a model consistently produces a correct answer.
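As a small illustration of the two metrics just defined, the following sketch (our own, assuming predictions are grouped by utterance with four KB-level records each) computes accuracy over examples and consistency over utterances.

```python
from collections import defaultdict

def accuracy_and_consistency(records):
    """records: list of (utterance_id, predicted_denotation, gold_denotation),
    with 4 records per utterance (one per KB/image)."""
    correct = 0
    per_utt = defaultdict(list)
    for utt_id, pred, gold in records:
        ok = (pred == gold)
        correct += ok
        per_utt[utt_id].append(ok)
    accuracy = correct / len(records)                    # per example
    consistency = sum(all(v) for v in per_utt.values()) / len(per_utt)  # per utterance
    return accuracy, consistency

# toy example: one utterance right on all 4 KBs, another right on only 3 of 4
records = [("u1", True, True)] * 4 + [("u2", False, False)] * 3 + [("u2", True, False)]
print(accuracy_and_consistency(records))  # (0.875, 0.5)
```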
Baselines — We compare our models to the MAJORITY baseline that picks the majority class (TRUE in our case). We also compare to the state-of-the-art model reported by Suhr et al. (2017) when taking the KB as input, which is a maximum entropy classifier (MAXENT). For our models, we evaluate the following variants of our approach:
• RULE: The rule-based parser from Sec. 5.1.
• SUP.: The supervised semantic parser trained on augmented data as in Sec. 5.2 (5,598 examples for training and 560 for validation).
• WEAKSUP.: Our full weakly-supervised semantic parser that uses abstract examples.
• +DISC: We add a discriminative re-ranker (Sec. 3) for both SUP. and WEAKSUP.

Table 4: Results on the development, public test (Test-P) and hidden test (Test-H) sets. For each model, we report both accuracy and consistency.
  Model      | Dev. Acc. | Dev. Con. | Test-P Acc. | Test-P Con. | Test-H Acc. | Test-H Con.
  MAJORITY   | 55.3      |           | 56.2        |             | 55.4        |
  MAXENT     | 68.0      |           | 67.7        |             | 67.8        |
  RULE       | 66.0      | 29.2      | 66.3        | 32.7        |             |
  SUP.       | 67.7      | 36.7      | 66.9        | 38.3        |             |
  SUP.+DISC  | 77.7      | 52.4      | 76.6        | 51.8        |             |
  WEAKSUP.   | 84.3      | 66.3      | 81.7        | 60.1        |             |
  W.+DISC    | 85.7      | 67.4      | 84.0        | 65.0        | 82.5        | 63.9

Main results — Table 4 describes our main results. Our weakly-supervised semantic parser with re-ranking (W.+DISC) obtains 84.0 accuracy and 65.0 consistency on the public test set, and 82.5 accuracy and 63.9 consistency on the hidden one, improving accuracy by 14.7 points compared to state-of-the-art. The accuracy of the rule-based parser (RULE) is less than 2 points below MAXENT, showing that a semantic parsing approach is very suitable for this task. The supervised parser obtains better performance (especially in consistency), and with re-ranking reaches 76.6 accuracy, showing that generalizing from generated examples is better than memorizing manually-defined patterns. Our weakly-supervised parser significantly improves over SUP., reaching an accuracy of 81.7 before re-ranking, and 84.0 after re-ranking (on the public test set). Consistency results show an even crisper trend of improvement across the models.
Lastly, we use a beam cache without line 9 in Alg. 1 (−EVERYSTEPBEAMCACHE). This already results in good performance, substantially higher than SUP. but is still 3.4 points worse than our best performing model on the development set. Orthogonally, to analyze the importance of tying the reward of all four examples that share an utterance, we trained a model without this tying, where the reward is 1 iff the denotation is correct (ONEEXAMPLEREWARD). We find that spuriousness becomes a major issue and weaklysupervised learning fails. Error Analysis We sampled 50 consistent and 50 inconsistent programs from the development set to analyze the weaknesses of our model. By and large, errors correspond to utterances that are more complex syntactically and semantically. In about half of the errors an object was described by two or more modifying clauses: “there is a box with a yellow circle and three blue items”; or nesting occurred: “one of the gray boxes has exactly 1817 three objects one of which is a circle”. In these cases the model either ignored one of the conditions, resulting in a program equivalent to “there is a box with three blue items” for the first case, or applied composition operators wrongly, outputting an equivalent to “one of the gray boxes has exactly three circles” for the second case. However, in some cases the parser succeeds on such examples and we found that 12% of the sampled utterances that were parsed correctly had a similar complex structure. Other, less frequent reasons for failure were problems with cardinality interpretation, i.e. ,“there are 2” parsed as “exactly 2” instead of “at least 2”; applying conditions to items rather than sets, e.g., “there are 2 boxes with a triangle closely touching a corner” parsed as “there are 2 triangles closely touching a corner”; and utterances with questionable phrasing, e.g., “there is a tower that has three the same blocks color”. Other insights are that the algorithm tended to give higher probability to the top ranked program when it is correct (average probability 0.18), compared to cases when it is incorrect (average probability 0.08), indicating that probabilities are correlated with confidence. In addition, sentence length is not predictive for whether the model will succeed: average sentence length of an utterance is 10.9 when the model is correct, and 11.1 when it errs. We also note that the model was successful with sentences that deal with spatial relations, but struggled with sentences that refer to the size of shapes. This is due to the data distribution, which includes many examples of the former case and fewer examples of the latter. 7 Related Work Training semantic parsers from denotations has been one of the most popular training schemes for scaling semantic parsers since the beginning of the decade. Early work focused on traditional log-linear models (Clarke et al., 2010; Liang et al., 2011; Kwiatkowski et al., 2013), but recently denotations have been used to train neural semantic parsers (Liang et al., 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017; Cheng et al., 2017). Visual reasoning has attracted considerable attention, with datasets such as VQA (Antol et al., 2015) and CLEVR (Johnson et al., 2017a). The advantage of CNLVR is that language utterances are both natural and compositional. Treating visual reasoning as an end-to-end semantic parsing problem has been previously done on CLEVR (Hu et al., 2017; Johnson et al., 2017b). 
Our method for generating training data resembles data re-combination ideas in Jia and Liang (2016), where examples are generated automatically by replacing entities with their categories. While spuriousness is central to semantic parsing when denotations are not very informative, there has been relatively little work on explicitly tackling it. Pasupat and Liang (2015) used manual rules to prune unlikely programs on the WIKITABLEQUESTIONS dataset, and then later utilized crowdsourcing (Pasupat and Liang, 2016) to eliminate spurious programs. Guu et al. (2017) proposed RANDOMER, a method for increasing exploration and handling spuriousness by adding randomness to beam search and a proposing a “meritocratic” weighting scheme for gradients. In our work we found that random exploration during beam search did not improve results while meritocratic updates slightly improved performance. 8 Discussion In this work we presented the first semantic parser for the CNLVR dataset, taking structured representations as input. Our main insight is that in closed, well-typed domains we can generate abstract examples that can help combat the difficulties of training a parser from delayed supervision. First, we use abstract examples to semiautomatically generate utterance-program pairs that help warm-start our parameters, thereby reducing the difficult search challenge of finding correct programs with random parameters. Second, we focus on an abstract representation of examples, which allows us to tackle spuriousness and alleviate search, by sharing information about promising programs between different examples. Our approach dramatically improves performance on CNLVR, establishing a new state-of-the-art. In this paper, we used a manually-built highprecision lexicon to construct abstract examples. This is suitable for well-typed domains, which are ubiquitous in the virtual assistant use case. In future work we plan to extend this work and automatically learn such a lexicon. This can reduce manual effort and scale to larger domains where there is substantial variability on the language side. 1818 References D. Andor, C. Alberti, D. Weiss, A. Severyn, A. Presta, K. Ganchev, S. Petrov, and M. Collins. 2016. Globally normalized transition-based neural networks. arXiv preprint arXiv:1603.06042 . S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. 2015. Vqa: Visual question answering. In International Conference on Computer Vision (ICCV). pages 2425–2433. Y. Artzi and L. Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics (TACL) 1:49–62. D. Bahdanau, K. Cho, and Y. Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR). J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP). Q. Cai and A. Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Association for Computational Linguistics (ACL). J. Cheng, S. Reddy, V. Saraswat, and M. Lapata. 2017. Learning structured natural language representations for semantic parsing. In Association for Computational Linguistics (ACL). D. Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Association for Computational Linguistics (ACL). pages 263–270. J. Clarke, D. 
Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world’s response. In Computational Natural Language Learning (CoNLL). pages 18–27. K. Guu, P. Pasupat, E. Z. Liu, and P. Liang. 2017. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. In Association for Computational Linguistics (ACL). S. Hochreiter and J. Schmidhuber. 1997. Long shortterm memory. Neural Computation 9(8):1735– 1780. R. Hu, J. Andreas, M. Rohrbach, T. Darrell, and K. Saenko. 2017. Learning to reason: End-toend module networks for visual question answering. In International Conference on Computer Vision (ICCV). R. Jia and P. Liang. 2016. Data recombination for neural semantic parsing. In Association for Computational Linguistics (ACL). J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zitnick, and R. Girshick. 2017a. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Computer Vision and Pattern Recognition (CVPR). J. Johnson, B. Hariharan, L. van der Maaten, J. Hoffman, L. Fei-Fei, C. L. Zitnick, and R. Girshick. 2017b. Inferring and executing programs for visual reasoning. In International Conference on Computer Vision (ICCV). R. J. Kate, Y. W. Wong, and R. J. Mooney. 2005. Learning to transform natural to formal languages. In Association for the Advancement of Artificial Intelligence (AAAI). pages 1062–1068. D. Kingma and J. Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . J. Krishnamurthy, P. Dasigi, and M. Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Empirical Methods in Natural Language Processing (EMNLP). J. Krishnamurthy and T. Mitchell. 2012. Weakly supervised training of semantic parsers. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL). pages 754–765. T. Kwiatkowski, E. Choi, Y. Artzi, and L. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Empirical Methods in Natural Language Processing (EMNLP). J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling data. In International Conference on Machine Learning (ICML). pages 282–289. C. Liang, J. Berant, Q. Le, K. D. Forbus, and N. Lao. 2017. Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. In Association for Computational Linguistics (ACL). P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL). pages 590–599. T. Mikolov, K. Chen, G. Corrado, and Jeffrey. 2013. Efficient estimation of word representations in vector space. arXiv . P. Pasupat and P. Liang. 2015. Compositional semantic parsing on semi-structured tables. In Association for Computational Linguistics (ACL). P. Pasupat and P. Liang. 2016. Inferring logical forms from denotations. In Association for Computational Linguistics (ACL). 1819 M. Rabinovich, M. Stern, and D. Klein. 2017. Abstract syntax networks for code generation and semantic parsing. In Association for Computational Linguistics (ACL). A. Suhr, M. Lewis, J. Yeh, and Y. Artzi. 2017. A corpus of natural language for visual reasoning. In Association for Computational Linguistics (ACL). I. Sutskever, O. Vinyals, and Q. V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS). pages 3104–3112. C. 
Xiao, M. Dymetman, and C. Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Association for Computational Linguistics (ACL). M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence (AAAI). pages 1050–1055. L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI). pages 658– 666. L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL). pages 678–687.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1820–1830 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1820 Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback Carolin Lawrence Computational Linguistics Heidelberg University 69120 Heidelberg, Germany [email protected] Stefan Riezler Computational Linguistics & IWR Heidelberg University 69120 Heidelberg, Germany [email protected] Abstract Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historic system is logged and used to improve a target system. We show how to apply this learning framework to neural semantic parsing. From a machine learning perspective, the key challenge lies in a proper reweighting of the estimator so as to avoid known degeneracies in counterfactual learning, while still being applicable to stochastic gradient optimization. To conduct experiments with human users, we devise an easy-to-use interface to collect human feedback on semantic parses. Our work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data. 1 Introduction In semantic parsing, natural language utterances are mapped to machine readable parses which are complex and often tailored specifically to the underlying task. The cost and difficulty of manually preparing large amounts of such parses thus is a bottleneck for supervised learning in semantic parsing. Recent work (Liang et al. (2017); Mou et al. (2017); Peng et al. (2017); inter alia) has applied reinforcement learning to address the annotation bottleneck as follows: Given a question, the existence of a corresponding gold answer is assumed. A semantic parser produces multiple parses per question and corresponding answers are obtained. These answers are then compared against the gold answer and a positive reward is recorded if there is an overlap. The parser is then guided towards correct parses using the REINFORCE algorithm (Williams, 1992) which scales the gradient for the various parses by their obtained reward (see the left half of Figure 1). However, learning from question-answer pairs is only efficient if gold answers are cheap to obtain. For complex open-domain question-answering tasks, correct answers are not unique factoids, but openended lists, counts in large ranges, or fuzzily defined objects. For example, geographical queries against databases such as OpenStreetMap (OSM) can involve fuzzy operators such as “near” or “in walking distance” and thus need to allow for fuzziness in the answers as well. A possible solution lies in machine learning from even weaker supervision signals in form of human bandit feedback1 where the semantic parsing system suggests exactly one parse for which feedback is collected from a human user. In this setup neither gold parse nor gold answer are known and feedback is obtained for only one system output per question. The goal of our paper is to exploit this scenario of learning from human bandit feedback to train semantic parsers. This learning scenario perfectly fits commercial setups such as virtual personal assistants that embed a semantic parser. Commercial systems can easily log large amounts of interaction data between users and system. Once sufficient data has been collected, the log can then be used to improve the parser. 
This leads to a counterfactual learning scenario (Bottou et al., 2013) where we have to solve the counterfactual problem of how to improve a target system from logged feedback that was given to the outputs of a different historic system (see the right half of Figure 1).

Figure 1: Left: Online reinforcement learning setup for semantic parsing where both questions and gold answers are available. The parser attempts to find correct machine readable parses (MRPs) by producing multiple parses, obtaining corresponding answers, and comparing them against the gold answer. Right: In our setup, a question does not have an associated gold answer. The parser outputs a single MRP and the corresponding answer is shown to a user who provides some feedback. Such triplets are collected in a log which can be used for offline training of a semantic parser. This scenario is called counterfactual since the feedback was logged for outputs from a system different from the target system to be optimized.

1 The term "bandit feedback" is inspired by the scenario of maximizing the reward for a sequence of pulls of arms of "one-armed bandit" slot machines.

In order to achieve our goal of counterfactual learning of semantic parsers from human bandit feedback, the following contributions are required: First, we need to construct an easy-to-use user interface that allows us to collect feedback based on the parse rather than the answer. To this aim, we automatically convert the parse to a set of statements that can be judged as correct or incorrect by a human. This approach allows us to assign rewards at the token level, which in turn enables us to perform blame assignment in bandit learning and to learn from partially correct queries where tokens are reinforced individually. We show that users can provide such feedback for one question-parse pair in 16.4 seconds on average. This exemplifies that our approach is more efficient and cheaper than recruiting experts to annotate parses or asking workers to compile large answer sets. Next, we demonstrate experimentally that counterfactual learning can be applied to neural sequence-to-sequence learning for semantic parsing. A baseline neural semantic parser is trained in fully supervised fashion; bandit feedback from human users is collected in a log and subsequently used to improve the parser. The resulting parser significantly outperforms the baseline model as well as a simple bandit-to-supervised approach (B2S) where the subset of completely correct parses is treated as a supervised dataset. Finally, we repeat our experiments on a larger but simulated log to show that our gains can scale: the baseline system is improved by 7.45% in answer F1 score without ever seeing a gold standard parse. Lastly, from a machine learning perspective, we have to solve problems of degenerate behavior in counterfactual learning by lifting the multiplicative control variate technique (Swaminathan and Joachims, 2015b; Lawrence et al., 2017b,a) to stochastic learning for neural models. This is done by reweighting target model probabilities over the logged data under a one-step-late model that decouples the normalization from gradient estimation and is thus applicable in stochastic (minibatch) gradient optimization.
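As a concrete illustration of the token-level blame assignment described above, the following hypothetical sketch (not the authors' implementation) assumes that each statement shown to the user carries the indices of the query tokens it was generated from, so a Yes/No judgement can be propagated back to exactly those tokens; letting tokens not covered by any statement default to a reward of 1 is an additional assumption.

```python
def token_rewards(query_tokens, judged_statements):
    """judged_statements: (token_indices, judged_correct) pairs from the feedback form.
    Returns a 0/1 reward per query token; uncovered tokens default to 1 (assumption)."""
    rewards = [1] * len(query_tokens)
    for token_indices, judged_correct in judged_statements:
        for i in token_indices:
            rewards[i] = 1 if judged_correct else 0
    return rewards

# toy usage: the user rejected the statement built from tokens 2-3
tokens = ["query", "area", "keyval", "Paris", "qtype", "count"]
print(token_rewards(tokens, [([2, 3], False), ([0, 4, 5], True)]))
# -> [1, 1, 0, 0, 1, 1]
```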
2 Related Work Semantic parsers have been successfully trained using neural sequence-to-sequence models with a cross-entropy objective and question-parse pairs (Jia and Liang, 2016; Dong and Lapata, 2016)) or question-answer pairs (Neelakantan et al., 2017). Improving semantic parsers using weak feedback has previously been studied (Goldwasser and Roth (2013); Artzi and Zettlemoyer (2013); inter alia). More recently, several works have applied policy gradient techniques such as REINFORCE (Williams, 1992) to train neural semantic parsers (Liang et al. (2017); Mou et al. (2017); Peng et al. (2017); inter alia). However, they assume the existence of the true target answers that can be used to obtain a reward for any number of output queries suggested by the system. It thus differs from a bandit setup where we assume that a reward is available for only one structure. Our work most closely resembles the work of 1822 Iyer et al. (2017) who do make the assumption of only being able to judge one output. They improve their parser using simulated and real user feedback. Parses with negative feedback are given to experts to obtain the correct parse. Corrected queries and queries with positive feedback are added to the training corpus and learning continues with a cross-entropy objective. We show that this bandit-to-supervision approach can be outperformed by offline bandit learning from partially correct queries. Yih et al. (2016) proposed a user interface for the Freebase database that enables a fast and easy creation of parses. However, in their setup the worker still requires expert knowledge about the Freebase database, whereas in our approach feedback can be collected freely and from any user interacting with the system. From a machine learning perspective, related work can be found in the areas of counterfactual bandit learning (Dudik et al., 2011; Swaminathan and Joachims, 2015a), or equivalently, off-policy reinforcement learning (Precup et al., 2000; Jiang and Li, 2016). Our contribution is to modify the self-normalizing estimator (Kong, 1992; Precup et al., 2000; Swaminathan and Joachims, 2015b; Joachims et al., 2018) to be applicable to neural networks. Our work is similar to the counterfactual learning setup for machine translation introduced by Lawrence et al. (2017b). Following their insight, we also assume the logs were created deterministically, i.e. the logging policy always outputs the most likely sequence. Their framework was applied to statistical machine translation using linear models. We show how to generalize their framework to neural models and how to apply it to the task of neural semantic parsing in the OSM domain. 3 Neural Semantic Parsing Our semantic parsing model is a state-of-theart sequence-to-sequence neural network using an encoder-decoder setup (Cho et al., 2014; Sutskever et al., 2014) together with an attention mechanism (Bahdanau et al., 2015). We use the settings of Sennrich et al. (2017), where an input sequence x = x1, x2, . . . x|x| (a natural language question) is encoded by a Recurrent Neural Network (RNN), each input token has an associated hidden vector hi = [−→h i; ←−h i] where the former is created by a forward pass over the input, and the latter by a backward pass. −→h i is obtained by recursively computing f(xi, −→h i−1) where f is a Gated Recurrent Unit (GRU) (Chung et al., 2014), and ←−h i is computed analogously. The input sequence is reduced to a single vector c = g({h1, . . . , h|x|}) which serves as the initialization of the decoder RNN. 
g calculates the average over all vectors $h_i$. At each time step $t$ the decoder state is set by $s_t = q(s_{t-1}, y_{t-1}, c_t)$. $q$ is a conditional GRU with an attention mechanism and $c_t$ is the context vector computed by the attention mechanism. Given an output vocabulary $V_y$ and the decoder state $s_t = \{s_1, \ldots, s_{|V_y|}\}$, a softmax output layer defines a probability distribution over $V_y$ and the probability for a token $y_j$ is:

$$\pi_w(y_j = t_o \mid y_{<j}, x) = \frac{\exp(s_{t_o})}{\sum_{v=1}^{|V_y|} \exp(s_{t_v})}. \quad (1)$$

The model $\pi_w$ can be seen as a parameterized policy over an action space defined by the target language vocabulary. The probability for a full output sequence $y = y_1, y_2, \ldots, y_{|y|}$ is defined by

$$\pi_w(y \mid x) = \prod_{j=1}^{|y|} \pi_w(y_j \mid y_{<j}, x). \quad (2)$$

In our case, output sequences are linearized machine readable parses, called queries in the following. Given supervised data $D_{\text{sup}} = \{(x_t, \bar{y}_t)\}_{t=1}^{n}$ of question-query pairs, where $\bar{y}_t$ is the true target query for $x_t$, the neural network can be trained using SGD and a cross-entropy (CE) objective:

$$L_{\text{CE}} = -\frac{1}{n} \sum_{t=1}^{n} \sum_{j=1}^{|\bar{y}|} \log \pi_w(\bar{y}_j \mid \bar{y}_{<j}, x). \quad (3)$$

4 Counterfactual Learning from Deterministic Bandit Logs

Counterfactual Learning Objectives. We assume a policy $\pi_w$ that, given an input $x \in X$, defines a conditional probability distribution over possible outputs $y \in Y(x)$. Furthermore, we assume that the policy is parameterized by $w$ and its gradient can be derived. In this work, $\pi_w$ is defined by the sequence-to-sequence model described in Section 3. We also assume that the model decomposes over individual output tokens, i.e. that the model produces the output token by token.

Table 1: Gradients of counterfactual objectives.

$$\nabla_w \hat{R}_{\text{DPM}} = \frac{1}{n} \sum_{t=1}^{n} \delta_t\, \pi_w(y_t|x_t)\, \nabla_w \log \pi_w(y_t|x_t).$$

$$\nabla_w \hat{R}_{\text{DPM+R}} = \frac{1}{n} \sum_{t=1}^{n} \Big[ \delta_t\, \bar{\pi}_w(y_t|x_t) \Big( \nabla_w \log \pi_w(y_t|x_t) - \frac{1}{n} \sum_{u=1}^{n} \bar{\pi}_w(y_u|x_u)\, \nabla_w \log \pi_w(y_u|x_u) \Big) \Big].$$

$$\nabla_w \hat{R}_{\text{DPM+OSL}} = \frac{1}{m} \sum_{t=1}^{m} \delta_t\, \bar{\pi}_{w,w'}(y_t|x_t)\, \nabla_w \log \pi_w(y_t|x_t).$$

$$\nabla_w \hat{R}_{\text{DPM+T}} = \frac{1}{n} \sum_{t=1}^{n} \Big( \prod_{j=1}^{|y|} \delta_j\, \pi_w(y_j|x_t) \Big) \sum_{j=1}^{|y|} \nabla_w \log \pi_w(y_j|x_t).$$

$$\nabla_w \hat{R}_{\text{DPM+T+OSL}} = \frac{1}{m} \sum_{t=1}^{m} \Big( \prod_{j=1}^{|y|} \delta_j\, \bar{\pi}_{w,w'}(y_t|x_t) \Big) \sum_{j=1}^{|y|} \nabla_w \log \pi_w(y_j|x_t).$$

The counterfactual learning problem can be described as follows: We are given a data log of triples $D_{\text{log}} = \{(x_t, y_t, \delta_t)\}_{t=1}^{n}$ where outputs $y_t$ for inputs $x_t$ were generated by a logging system under policy $\pi_0$, and loss values $\delta_t \in [-1, 0]$2 were observed for the generated data points. Our goal is to optimize the expected reward (in our case: minimize the expected risk) for a target policy $\pi_w$ given the data log $D_{\text{log}}$. In case of deterministic logging, outputs are logged with propensity $\pi_0(y_t|x_t) = 1$, $t = 1, \ldots, n$. This results in a deterministic propensity matching (DPM) objective (Lawrence et al., 2017b), without the possibility to correct the sampling bias of the logging policy by inverse propensity scoring (Rosenbaum and Rubin, 1983):

$$\hat{R}_{\text{DPM}}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \delta_t\, \pi_w(y_t|x_t). \quad (4)$$

This objective can show degenerate behavior in that it overfits to the choices of the logging policy (Swaminathan and Joachims, 2015b; Lawrence et al., 2017a). This degenerate behavior can be avoided by reweighting using a multiplicative control variate (Kong, 1992; Precup et al., 2000; Jiang and Li, 2016; Thomas and Brunskill, 2016). The new objective is called the reweighted deterministic propensity matching (DPM+R) objective in Lawrence et al. (2017b):

$$\hat{R}_{\text{DPM+R}}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \delta_t\, \bar{\pi}_w(y_t|x_t) = \frac{\frac{1}{n} \sum_{t=1}^{n} \delta_t\, \pi_w(y_t|x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_w(y_t|x_t)}. \quad (5)$$

Algorithms for optimizing the discussed objectives can be derived as gradient descent algorithms where gradients using the score function gradient estimator (Fu, 2006) are shown in Table 1.

2 We use the terms loss and (negative) rewards interchangeably, depending on context.
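To make the estimators concrete, the following minimal sketch (not the authors' code) computes the DPM and DPM+R objectives of Equations (4) and (5) from per-sequence log-probabilities; since the gradient of pi_w equals pi_w times the gradient of log pi_w, automatic differentiation of these scalars reproduces the corresponding score-function gradients of Table 1.

```python
import torch

def dpm_objectives(seq_log_probs, deltas):
    """seq_log_probs: differentiable log pi_w(y_t | x_t) for the n logged outputs.
    deltas: logged losses in [-1, 0], treated as constants."""
    probs = seq_log_probs.exp()
    r_dpm = (deltas * probs).mean()                   # Eq. (4)
    r_dpm_r = (deltas * probs).sum() / probs.sum()    # Eq. (5), self-normalized
    return r_dpm, r_dpm_r

# toy usage with three logged sequences
log_p = torch.tensor([-2.0, -1.5, -3.0], requires_grad=True)
deltas = torch.tensor([-1.0, 0.0, -0.5])
r_dpm, _ = dpm_objectives(log_p, deltas)
r_dpm.backward()   # score-function gradient of the DPM row in Table 1
```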
Reweighting in Stochastic Learning. As shown in Swaminathan and Joachims (2015b) and Lawrence et al. (2017a), reweighting over the entire data log $D_{\text{log}}$ is crucial since it prevents high-loss outputs in the log from taking away probability mass from low-loss outputs. This multiplicative control variate has the additional effect of reducing the variance of the estimator, at the cost of introducing a bias of order $O(\frac{1}{n})$ that decreases as $n$ increases (Kong, 1992). The desirable properties of this control variate cannot be realized in a stochastic (minibatch) learning setup, since minibatch sizes large enough to retain the desirable reweighting properties are infeasible for large neural networks. We offer a simple solution to this problem that nonetheless retains all desired properties of the reweighting. The idea is inspired by one-step-late algorithms that have been introduced for EM algorithms (Green, 1990). In the EM case, dependencies in objectives are decoupled by evaluating certain terms under parameter settings from previous iterations (thus: one-step-late) in order to achieve closed-form solutions. In our case, we decouple the reweighting from the parameterization of the objective by evaluating the reweighting under parameters $w'$ from some previous iteration. This allows us to perform gradient descent updates and reweighting asynchronously. Updates are performed using minibatches; however, reweighting is based on the entire log, allowing us to retain the desirable properties of the control variate. The new objective, called the one-step-late reweighted DPM objective (DPM+OSL), optimizes $\pi_{w,w'}$ with respect to $w$ for a minibatch of size $m$, with reweighting over the entire log of size $n$ under parameters $w'$:

$$\hat{R}_{\text{DPM+OSL}}(\pi_w) = \frac{1}{m} \sum_{t=1}^{m} \delta_t\, \bar{\pi}_{w,w'}(y_t|x_t) = \frac{\frac{1}{m} \sum_{t=1}^{m} \delta_t\, \pi_w(y_t|x_t)}{\frac{1}{n} \sum_{t=1}^{n} \pi_{w'}(y_t|x_t)}. \quad (6)$$

If the renormalization is updated periodically, e.g. after every validation step, renormalizations under $w$ or $w'$ are not much different and will not hamper convergence. Despite losing the formal justification from the perspective of control variates, we found empirically that the OSL update schedule for reweighting is sufficient and does not deteriorate performance. The gradient for learning with OSL updates is given in Table 1.

Token-Level Rewards. For our application of counterfactual learning to human bandit feedback, we found another deviation from standard counterfactual learning to be helpful: for humans, it is hard to assign a graded reward to a query at the sequence level because either the query is correct or it is not. In particular, with a sequence-level reward of 0 for incorrect queries, we do not know which part of the query is wrong and which parts might be correct. Assigning rewards at the token level will ease the feedback task and allow the semantic parser to learn from partially correct queries. Thus, assuming the underlying policy can decompose over tokens, a token-level (DPM+T) reward objective can be defined:

$$\hat{R}_{\text{DPM+T}}(\pi_w) = \frac{1}{n} \sum_{t=1}^{n} \Big( \prod_{j=1}^{|y|} \delta_j\, \pi_w(y_j|x_t) \Big). \quad (7)$$

Analogously, we can define an objective that combines the token-level rewards and the minibatched reweighting (DPM+T+OSL):

$$\hat{R}_{\text{DPM+T+OSL}}(\pi_w) = \frac{\frac{1}{m} \sum_{t=1}^{m} \Big( \prod_{j=1}^{|y|} \delta_j\, \pi_w(y_j|x_t) \Big)}{\frac{1}{n} \sum_{t=1}^{n} \pi_{w'}(y_t|x_t)}. \quad (8)$$

Gradients for the DPM+T and DPM+T+OSL objectives are given in Table 1.

5 Semantic Parsing in the OpenStreetMap Domain

OpenStreetMap (OSM) is a geographical database in which volunteers annotate points of interest in the world.
A point of interest consists of one or more associated GPS points. Further relevant information may be added at the discretion of the volunteer in the form of tags. Each tag consists of a key and an associated value, for example “tourism : hotel”. The NLMAPS corpus was introduced by Haas and Riezler (2016) as a basis to create a natural language interface to the OSM database. It pairs English questions with machine readable parses, i.e. queries that can be executed against OSM. Human Feedback Collection. The task of creating a natural language interface for OSM demonstrates typical difficulties that make it expensive to collect supervised data. The machine readable language of the queries is based on the OVERPASS query language which was specifically designed for the OSM database. It is thus not easily possible to find experts that could provide correct queries. It is equally difficult to ask workers at crowdsourcing platforms for the correct answer. For many questions, the answer set is too large to expect a worker to count or list them all in a reasonable amount of time and without errors. For example, for the question “How many hotels are there in Paris?” there are 951 hotels annotated in the OSM database. Instead we propose to automatically transform the query into a block of statements that can easily be judged as correct or incorrect by a human. The question and the created block of statements are embedded in a user interface with a form that can be filled out by users. Each statement is accompanied by a set of radio buttons where a user can select either “Yes” or “No”. For a screenshot of the interface and an example see Figure 2. In total there are 8 different types of statements. The presence of certain tokens in a query trigger different statement types. For example, the token “area” triggers the statement type “Town”. The statement is then populated with the corresponding information from the query. In the case of “area”, the following OSM value is used, e.g. “Paris”. With this, the meaning of every query can be captured by a set of human-understandable statements. For a full overview of all statement types and their triggers see section B of the supplementary material. OSM tags and keys are generally understandable. For example, the correct OSM tag for “hotels” is “tourism : hotel” and when searching for 1825 Figure 2: The user interface for collecting feedback from humans with an example question and a correctly filled out form. websites, the correct question type key would be “website”. Nevertheless, for each OSM tag or key, we automatically search for the corresponding Wikipedia page on the OpenStreetMap Wiki3 and extract the description for this tag or key. The description is made available to the user in form of a tool-tip that appears when hovering over the tag or key with the mouse. If a user is unsure if a OSM tag or key is correct, they can read this description to help in their decision making. Once the form is submitted, a script maps each statement back to the corresponding tokens in the original query. These tokens then receive negative or positive feedback based on the feedback the user provided for that statement. Corpus Extension. 
Similar to the extension of the NLMAPS corpus by Lawrence and Riezler (2016) who include shortened questions which are more typically used by humans in search tasks, we present an automatic extension that allows a larger coverage of common OSM tags.4 The basis for the extension is a hand-written, online freely available list5 that links natural language expressions such as "cash machine" to appropriate OSM tags, in this case "amenity : atm". Using the list, we generate for each unique expression-tag pair a set of question-query pairs. These latter pairs contain several placeholders which will be filled automatically in a second step. To fill the area placeholder $LOC, we sample from a list of 30 cities from France, Germany and the UK. $POI is the placeholder for a point of interest. We sample it from the list of objects which are located in the prior sampled city and which have a name key. The corresponding value belonging to the name key will be used to fill this spot. The placeholder $QTYPE is filled by uniformly sampling from the four primary question types available in the NLMAPS query language. On the natural language side they correspond to "How many", "Where", "Is there" and $KEY. $KEY is a further parameter belonging to the primary question operator FINDKEY. It can be filled by any OSM key, such as name, website or height. To ensure that there will be an answer for the generated query, we first ran a query with the current tag ("amenity : atm") to find all objects fulfilling this requirement in the area of the already sampled city. From the list of returned objects and the keys that appear in association with them, we uniformly sampled a key. For $DIST we chose between the pre-defined options for walking distance and within city distance. The expressions map to corresponding values which define the size of a radius in which objects of interest (with tag "amenity : atm") will be located. If the walking distance was selected, we added "in walking distance" to the question. Otherwise no extra text was added to the question, assuming the within city distance to be the default. This sampling process was repeated twice.

3 https://wiki.openstreetmap.org/
4 The extended dataset, called NLMAPS V2, will be released upon acceptance of the paper.
5 http://wiki.openstreetmap.org/wiki/Nominatim/Special_Phrases/EN

Table 2 presents the corpus statistics, comparing NLMAPS to our extension.

Table 2: Corpus statistics of the question-answering corpora NLMAPS and our extension NLMAPS V2 which additionally contains the search engine style queries (Lawrence and Riezler, 2016) and the automatic extensions of the most common OSM tags.
                         NLMAPS    NLMAPS V2
# question-query pairs   2,380     28,609
tokens                   25,906    202,088
types                    1,002     8,710
avg. sent. length        10.88     7.06
distinct tags            477       6,582

The automatic extension, obviating the need for expensive manual work, allows a vast increase of question-query pairs by an order of magnitude. Consequently the number of tokens and types increase in a similar vein. However, the average sentence length drops. This comes as no surprise due to the nature of the rather simple hand-written list which contains never more than one tag for an element, resulting in simpler question structures. However, the main idea of utilizing this list is to extend the coverage to previously unknown OSM tags. With 6,582 distinct tags compared to the previous 477, this was clearly successful.
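As a concrete illustration of the placeholder scheme just described, the following is a minimal, hypothetical sketch of the instantiation step; the city list, the sampler stubs and any value names are stand-ins, not the actual NLMAPS V2 generation code.

```python
import random

CITIES = ["Paris", "Heidelberg", "Edinburgh"]   # stand-in for the 30-city list

def sample_loc():
    city = random.choice(CITIES)
    return city, city                            # same filler on both sides

def instantiate(question_tmpl, query_tmpl, samplers):
    """Both templates contain the placeholders described above ($LOC, $POI, $QTYPE,
    $KEY, $DIST); each sampler returns a (natural-language, query-language) filler
    pair, e.g. ("in walking distance", "WALKING_DIST") for $DIST (value name assumed)."""
    question, query = question_tmpl, query_tmpl
    for placeholder, sampler in samplers.items():
        nl_value, mrl_value = sampler()
        question = question.replace(placeholder, nl_value)
        query = query.replace(placeholder, mrl_value)
    return question, query

# usage: samplers = {"$LOC": sample_loc, "$DIST": sample_dist, ...}
```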
Together with the still complex sentences from the original corpus, a semantic parser is now able to learn both complex questions and a large variety of tags. An experiment that empirically validates the usefulness of the automatically created data can be found in the supplementary material, section A. 6 Experiments General Settings. In our experiments we use the sequence-to-sequence neural network package NEMATUS (Sennrich et al., 2017). Following the method used by Haas and Riezler (2016), we split the queries into individual tokens by taking a pre-order traversal of the original tree-like structure. For example, “query(west(area(keyval(’name’,’Paris’)), nwr(keyval(’railway’,’station’))),qtype(count))” becomes “query@2 west@2 area@1 keyval@2 name@0 Paris@s nwr@1 keyval@2 railway@0 station@s qtype@1 count@0”. The SGD optimizer used is ADADELTA (Zeiler, 2012). The model employs 1,024 hidden units and word embeddings of size 1,000. The maximum sentence length is 200 and gradients are clipped if they exceed a value of 1.0. The stopping point is determined by validation on the development set and selecting the point at which the highest evaluation score is obtained. F1 validation is run after every 100 updates, and each update is made on the basis of a minibatch of size 80. The evaluation of all models is based on the answers obtained by executing the most likely query obtained after a beam search with a beam of size 12. We report the F1 score which is the harmonic mean of precision and recall. Recall is defined as the percentage of fully correct answers divided by the set size. Precision is the percentage of correct answers out of the set of answers with non-empty strings. Statistical significance between models is measured using an approximate randomization test (Noreen, 1989). Baseline Parser & Log Creation. Our experiment design assumes a baseline neural semantic parser that is trained in fully supervised fashion, and is to be improved by bandit feedback obtained for system outputs from the baseline system for given questions. For this purpose, we select 2,000 question-query pairs randomly from the full extended NLMAPS V2 corpus. We will call this dataset Dsup. Using this dataset, a baseline semantic parser is trained in supervised fashion under a cross-entropy objective. It obtains an F1 score of 57.45% and serves as the logging policy π0. Furthermore we randomly split off 1,843 and 2,000 pairs for a development and test set, respectively. This leaves a set of 22,765 question-query pairs. The questions can be used as input and bandit feedback can be collected for the most likely output of the semantic parser. We refer to this dataset as Dlog. To collect human feedback, we take the first 1,000 questions from Dlog and use π0 to parse these questions to obtain one output query for each. 5 question-query pairs are discarded because the suggested query is invalid. For the remaining question-query pairs, the queries are each transformed into a block of human-understandable statements and embedded into the user interface described in Section 5. We recruited 9 users to provide feedback for these question-query pairs. The resulting log is referred to as Dhuman. Every question-query pair is purposely evaluated only once to mimic a realistic real-world scenario where user logs are collected as users use the system. In this scenario, it is also not possible to explicitly obtain several evaluations for the same question-query pair. 
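To make the pre-order linearization from the General Settings above concrete, here is a minimal sketch (not the NEMATUS preprocessing code) that reproduces the example; it assumes the query is already available as a nested (operator, children) structure and that leaf tokens carry their @0/@s tags directly.

```python
def linearize(node):
    """Pre-order traversal: each operator is emitted with its arity; leaves are
    assumed to be strings that already carry their tag (e.g. "name@0", "Paris@s")."""
    if isinstance(node, str):
        return [node]
    name, args = node
    return [f"{name}@{len(args)}"] + [tok for a in args for tok in linearize(a)]

tree = ("query", [
    ("west", [
        ("area", [("keyval", ["name@0", "Paris@s"])]),
        ("nwr",  [("keyval", ["railway@0", "station@s"])]),
    ]),
    ("qtype", [("count", [])]),
])
print(" ".join(linearize(tree)))
# query@2 west@2 area@1 keyval@2 name@0 Paris@s nwr@1 keyval@2 railway@0 station@s qtype@1 count@0
```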
Some examples of the received feedback can be found in the supplementary material, section C. To verify that the feedback collection is efficient, we measured the time each user took from loading a form to submitting it. To provide feedback for one question-query pair, users took 16.4 seconds on average with a standard deviation of 33.2 seconds. The vast majority (728 instances) are completed in less than 10 seconds.

Learning from Human Bandit Feedback. An analysis of Dhuman shows that for 531 queries all corresponding statements were marked as correct. We consider a simple baseline that treats completely correct logged data as a supervised data set with which training continues using the cross-entropy objective. We call this baseline bandit-to-supervised conversion (B2S). Furthermore, we present experimental results using the log Dhuman for stochastic (minibatch) gradient descent optimization of the counterfactual objectives introduced in equations 4, 6, 7 and 8. For the token-level feedback, we map the evaluated statements back to the corresponding tokens in the original query and assign these tokens a feedback of 0 if the corresponding statement was marked as wrong and 1 otherwise. In the case of sequence-level feedback, the query receives a feedback of 1 if all statements are marked correct, 0 otherwise. For the OSL objectives, a separate experiment (see below) showed that updating the reweighting constant after every validation step promises the best trade-off between performance and speed. Results, averaged over 3 runs, are reported in Table 3.

Table 3: Human Feedback: Answer F1 scores on the test set for the various setups, averaged over 3 runs. Statistical significance of system differences at p < 0.05 are indicated by experiment number in superscript.
    Model            F1            ∆F1
1   baseline         57.45
2   B2S              57.79±0.18    +0.34
3   DPM 1            58.04±0.04    +0.59
4   DPM+OSL          58.01±0.23    +0.56
5   DPM+T 1          58.11±0.24    +0.66
6   DPM+T+OSL 1,2    58.44±0.09    +0.99

The B2S model can slightly improve upon the baseline but not significantly. DPM improves further, significantly beating the baseline. Using the multiplicative control variate modified for SGD by OSL updates does not seem to help in this setup. By moving to token-level rewards, it is possible to learn from partially correct queries. These partially correct queries provide valuable information that is not present in the subset of correct answers employed by the previous models. Optimizing DPM+T leads to a slight improvement and combined with the multiplicative control variate, DPM+T+OSL yields an improvement of about 1.0 in F1 score upon the baseline. It beats both the baseline and the B2S model by a significant margin.

Learning from Large-Scale Simulated Feedback. We want to investigate whether the results scale if a larger log is used. Thus, we use π0 to parse all 22,765 questions from Dlog and obtain for each an output query. For sequence level rewards, we assign feedback of 1 for a query if it is identical to the true target query, 0 otherwise. We also simulate token-level rewards by iterating over the indices of the output and assigning a feedback of 1 if the same token appears at the current index for the true target query, 0 otherwise.
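The following minimal sketch illustrates this simulation; how positions beyond the end of the gold query are treated is not specified above and is an assumption here (such positions simply receive a reward of 0).

```python
def simulated_rewards(predicted_tokens, gold_tokens):
    """Sequence- and token-level feedback as described above: a token reward is 1
    iff the predicted token equals the gold token at the same index, and the
    sequence reward is 1 iff the whole query matches the gold query."""
    token_r = [1 if i < len(gold_tokens) and tok == gold_tokens[i] else 0
               for i, tok in enumerate(predicted_tokens)]
    seq_r = 1 if predicted_tokens == gold_tokens else 0
    return seq_r, token_r
```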
An analysis of Dlog shows that 46.27% of the queries have a sequence level reward of 1 and are thus completely correct. This subset is used to train a bandit-to-supervised (B2S) model using the cross-entropy objective. Experimental results for the various optimization setups, averaged over 3 runs, are reported in Table 4. We see that the B2S model outperforms the baseline model by a large margin, yielding an increase in F1 score by 6.24 points. Optimizing the DPM objective also yields a significant increase over the baseline, but its performance falls short of the stronger B2S baseline. Optimizing the DPM+OSL objective leads to a substantial improvement in F1 score over optimizing DPM but still falls slightly short of the strong B2S baseline. Token-level rewards are again crucial to beat the B2S baseline significantly. DPM+T is already able to significantly outperform B2S in this setup and DPM+T+OSL can improve upon this further.

Table 4: Simulated Feedback: Answer F1 scores on the test set for the various setups, averaged over 3 runs. Statistical significance of system differences at p < 0.05 are indicated by experiment number in superscript.
    Model              F1            ∆F1
1   baseline           57.45
2   B2S 1,3            63.22±0.27    +5.77
3   DPM 1              61.80±0.16    +4.35
4   DPM+OSL 1,3        62.91±0.05    +5.46
5   DPM+T 1,2,3,4      63.85±0.2     +6.40
6   DPM+T+OSL 1,2,3,4  64.41±0.05    +6.96

Analysis. Comparing the baseline and DPM+T+OSL, we manually examined all queries in the test set where DPM+T+OSL obtained the correct answer and the baseline system did not (see Table 5). The analysis showed that the vast majority of previously wrong queries were fixed by correcting an OSM tag in the query. For example, for the question "closest Florist from Manchester in walking distance" the baseline system chose the tag "landuse : retail" in the query, whereas DPM+T+OSL learnt that the correct tag is "shop : florist". In some cases, the question type had to be corrected, e.g. the baseline's suggested query returned the location of a point of interest but DPM+T+OSL correctly returns the phone number. Finally, in a few cases DPM+T+OSL corrected the structure for a query, e.g. by searching for a point of interest in the east of an area rather than the south.

Table 5: Analysis of which type of errors DPM+T+OSL corrected on the test set compared to the baseline system for both human and simulated feedback experiments.
Error Type       Human   Simulated
OSM Tag          90%     86.75%
Question Type    6%      8.43%
Structure        4%      4.82%

OSL Update Variation. Using the DPM+T+OSL objective and the simulated feedback setup, we vary the frequency of updating the reweighting constant. Results are reported in Table 6. Calculating the constant only once at the beginning leads to a near identical result in F1 score as not using OSL. The more frequent update strategies, once or four times per epoch, are more effective. Both strategies reduce variance further and lead to higher F1 scores. Updating four times per epoch compared to once per epoch leads to a nominally higher performance in F1. It has the additional benefit that the re-calculation is done at the same time as the validation, leading to no additional slowdown, as executing the queries for the development set against the database takes longer than the re-calculation of the constant. Updating after every minibatch is infeasible as it slows down training too much. Compared to the previous setup, iterating over one epoch takes approximately an additional 5.5 hours.
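For concreteness, a minimal sketch of the refresh-at-validation schedule compared above, assuming per-sequence probabilities under the current (one-step-late) parameters can be computed by a helper seq_prob (a hypothetical function, not part of NEMATUS); the minibatch objective corresponds to Eq. (8).

```python
import torch

@torch.no_grad()
def refresh_osl_normalizer(seq_prob, full_log):
    """(1/n) * sum_t pi_{w'}(y_t | x_t) over the entire log, recomputed only at
    validation time and then treated as a constant during minibatch updates."""
    return torch.stack([seq_prob(x, y) for x, y, _ in full_log]).mean()

def dpm_t_osl_minibatch(token_probs, token_deltas, osl_normalizer):
    """token_probs / token_deltas: per-token probability tensors and 0/1 feedback
    tensors for one minibatch of logged queries."""
    per_seq = torch.stack([(d * p).prod() for p, d in zip(token_probs, token_deltas)])
    return per_seq.mean() / osl_normalizer          # Eq. (8)
```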
OSL Update F1 ∆F1 1 no OSL (DPM+T) 63.85±0.2 2 once 63.82±0.1 -0.03 3 every epoch 64.26±0.04 +0.41 4 every validation / 64.41±0.05 +0.56 4x per epoch 5 every minibatch N/A N/A Table 6: Simulated Feedback: Answer F1 scores on the test set for DPM+T and DPM+T+OSL with varying OSL update strategies, averaged over 3 runs. Updating after every minibatch is infeasible as it significantly slows down learning. Statistical significance of system differences at p < 0.05 occur for experiment 4 over experiment 2. 7 Conclusion We introduced a scenario for improving a neural semantic parser from logged bandit feedback. This scenario is important to avoid complex and costly data annotation for supervise learning, and it is realistic in commercial applications where weak feedback can be collected easily in large amounts from users. We presented robust counterfactual learning objectives that allow to perform stochastic gradient optimization which is crucial in working with neural networks. Furthermore, we showed that it is essential to obtain reward signals at the token-level in order to learn from partially correct queries. We presented experimental results using feedback collected from humans and a larger scale setup with simulated feedback. In both cases we show that a strong baseline using a bandit-to-supervised conversion can be significantly outperformed by a combination of a onestep-late reweighting and token-level rewards. Finally, our approach to collecting feedback can also be transferred to other domains. For example, (Yih et al., 2016) designed a user interface to help Freebase experts to efficiently create queries. This interface could be reversed: given a question and a query produced by a parser, the interface is filled out automatically and the user has to verify if the information fits. Acknowledgments The research reported in this paper was supported in part by DFG grant RI-2221/4-1. 1829 References Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1(1). Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR), San Diego, CA. L´eon Bottou, Jonas Peters, Joaquin Qui nonero Candela, Denis X. Charles, D. Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, and Ed Snelson. 2013. Counterfactual reasoning and learning systems: The example of computational advertising. Journal of Machine Learning Research, 14. Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. ArXiv e-prints, 1412.3555. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin, Germany. Miroslav Dudik, John Langford, and Lihong Li. 2011. Doubly robust policy evaluation and learning. In Proceedings of the 28th International Conference on Machine Learning (ICML), New York, NY. Michael C. Fu. 2006. 
Gradient estimation. Handbook in Operations Research and Management Science, 13. Dan Goldwasser and Dan Roth. 2013. Learning from natural instructions. Machine Learning, 94(2). Peter J. Green. 1990. On the use of the EM algorithm for penalized likelihood estimation. Journal of the Royal Statistical Society B, 52(3). Carolin Haas and Stefan Riezler. 2016. A corpus and semantic parser for multilingual natural language querying of openstreetmap. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), San Diego, California. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), Vancouver, Canada. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin, Germany. Nan Jiang and Lihong Li. 2016. Doubly robust offpolicy value evaluation for reinforcement learning. In Proceedings of The 33rd International Conference on Machine Learning (ICML), New York, New York, USA. Thorsten Joachims, Adith Swaminathan, and Maarten de Rijke. 2018. Deep learning with logged bandit feedback. In International Conference on Learning Representations (ICLR). Augustine Kong. 1992. A note on importance sampling using standardized weights. Technical Report 348, Department of Statistics, University of Chicago, Illinois. Carolin Lawrence, Pratik Gajane, and Stefan Riezler. 2017a. Counterfactual learning for machine translation: Degeneracies and solutions. In Proceedings of the NIPS WhatIF Workshop, Long Beach, CA. Carolin Lawrence and Stefan Riezler. 2016. Nlmaps: A natural language interface to query openstreetmap. In Proceedings of the 26th International Conference on Computational Linguistics: System Demonstrations (COLING), Osaka, Japan. Carolin Lawrence, Artem Sokolov, and Stefan Riezler. 2017b. Counterfactual learning from bandit feedback under deterministic logging : A case study in statistical machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), Copenhagen, Denmark. Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. 2017. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), Vancouver, Canada. Lili Mou, Zhengdong Lu, Hang Li, and Zhi Jin. 2017. Coupling distributed and symbolic execution for natural language queries. In Proceedings of the 34th International Conference on Machine Learning (ICML), Sydney, Australia. Arvind Neelakantan, Quoc V. Le, Mart´ın Abadi, Andrew McCallum, and Dario Amodei. 2017. Learning a natural language interface with neural programmer. In International Conference on Learning Representations (ICLR), Toulon, France. Eric W. Noreen. 1989. Computer Intensive Methods for Testing Hypotheses: An Introduction. Wiley, New York. 1830 Haoruo Peng, Ming-Wei Chang, and Wen-tau Yih. 2017. Maximum margin reward networks for learning from explicit and implicit supervision. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), Copenhagen, Denmark. Doina Precup, Richard S. Sutton, and Satinder P. Singh. 2000. 
Eligibility traces for off-policy policy evaluation. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML), San Francisco, CA, USA. Paul R. Rosenbaum and Donald B. Rubin. 1983. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1). Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel L¨aubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. 2017. Nematus: a toolkit for neural machine translation. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Valencia, Spain. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS), Montreal, Canada. Adith Swaminathan and Thorsten Joachims. 2015a. Batch learning from logged bandit feedback through counterfactual risk minimization. Journal of Machine Learning Research, 16. Adith Swaminathan and Thorsten Joachims. 2015b. The self-normalized estimator for counterfactual learning. In Advances in Neural Information Processing Systems (NIPS), Montreal, Canada. Philip Thomas and Emma Brunskill. 2016. Dataefficient off-policy policy evaluation for reinforcement learning. In Proceedings of the 33nd International Conference on Machine Learning (ICML), New York, NY. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, 20. Wen-tau Yih, Matthew Richardson, Chris Meek, MingWei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. ArXiv e-prints, 1212.5701.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 174–184 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 174 Obtaining Reliable Human Ratings of Valence, Arousal, and Dominance for 20,000 English Words Saif M. Mohammad National Research Council Canada [email protected] Abstract Words play a central role in language and thought. Factor analysis studies have shown that the primary dimensions of meaning are valence, arousal, and dominance (VAD). We present the NRC VAD Lexicon, which has human ratings of valence, arousal, and dominance for more than 20,000 English words. We use Best–Worst Scaling to obtain fine-grained scores and address issues of annotation consistency that plague traditional rating scale methods of annotation. We show that the ratings obtained are vastly more reliable than those in existing lexicons. We also show that there exist statistically significant differences in the shared understanding of valence, arousal, and dominance across demographic variables such as age, gender, and personality. 1 Introduction Words are the smallest meaningful utterances in language. They play a central role in our understanding and descriptions of the world around us. Some believe that the structure of a language even affects how we think (principle of linguistic relativity aka the SapirWhorf hypothesis). Several influential factor analysis studies have shown that the three most important, largely independent, dimensions of word meaning are valence (positiveness–negativeness/pleasure– displeasure), arousal (active–passive), and dominance (dominant–submissive) (Osgood et al., 1957; Russell, 1980, 2003).1 Thus, when comparing the meanings of two words, we can compare their degrees of valence, arousal, or domi1We will refer to the three dimensions individually as V, A, and D, and together as VAD. nance. For example, the word banquet indicates more positiveness than the word funeral; nervous indicates more arousal than lazy; and fight indicates more dominance than delicate. Access to these degrees of valence, arousal, and dominance of words is beneficial for a number of applications, including those in natural language processing (e.g., automatic sentiment and emotion analysis of text), in cognitive science (e.g., for understanding how humans represent and use language), in psychology (e.g., for understanding how people view the world around them), in social sciences (e.g., for understanding relationships between people), and even in evolutionary linguistics (e.g., for understanding how language and behaviour inter-relate to give us an advantage). Existing VAD lexicons (Bradley and Lang, 1999; Warriner et al., 2013) were created using rating scales and thus suffer from limitations associated with the method (Presser and Schuman, 1996; Baumgartner and Steenkamp, 2001). These include: inconsistencies in annotations by different annotators, inconsistencies in annotations by the same annotator, scale region bias (annotators often have a bias towards a portion of the scale), and problems associated with a fixed granularity. In this paper, we describe how we obtained human ratings of valence, arousal, and dominance for more than 20,000 commonly used English words by crowdsourcing. Notably, we use a comparative annotation technique called Best-Worst Scaling (BWS) that addresses the limitations of traditional rating scales (Louviere, 1991; Cohen, 2003; Louviere et al., 2015). 
The scores are finegrained real-valued numbers in the interval from 0 (lowest V, A, or D) to 1 (highest V, A, or D). We will refer to this new lexicon as the NRC Valence, Arousal, and Dominance (VAD) Lexicon.2 2NRC refers to National Research Council Canada. 175 Correlations (r) between repeated annotations, through metrics such as split-half reliability (SHR), are a common way to evaluate the reliabilities of ordinal and rank annotations. We show that our annotations have SHR scores of r = 0.95 for valence, r = 0.90 for arousal, and r = 0.91 for dominance. These scores are well above the SHR scores obtained by Warriner et al. (2013), and indicate high reliability. Respondents who provided valence, arousal, and dominance annotations, were given the option of additionally filling out a brief demographic questionnaire to provide details of their age, gender, and personality traits. This demographic information along with the VAD annotations allows us to determine whether attributes such as age, gender, and personality impact our understanding of the valence, arousal, and dominance of words. We show that even though overall the annotations are consistent (as seen from the high SHR scores), people aged over 35 are significantly more consistent in their annotations than people aged 35 or less. We show for the first time that men have a significantly higher shared understanding of dominance and valence of words, whereas women have a higher shared understanding of the degree of arousal of words. We find that some personality traits significantly impact a person’s annotations of one or more of valence, arousal, and dominance. We hope that these and other findings described in the paper foster further research into how we use language, how we represent concepts in our minds, and how certain aspects of the world are more important to certain demographic groups leading to higher degrees of shared representations of those concepts within those groups. All of the annotation tasks described in this paper were approved by our institution’s review board, which examined the methods to ensure that they were ethical. Special attention was paid to obtaining informed consent and protecting participant anonymity. The NRC VAD Lexicon is made freely available for research and non-commercial use through our project webpage.3 2 Related Work Primary Dimensions of Meaning: Osgood et al. (1957) asked human participants to rate words along dimensions of opposites such as heavy– light, good–bad, strong–weak, etc. Factor analysis 3http://saifmohammad.com/WebPages/nrc-vad.html of these judgments revealed that the three most prominent dimensions of meaning are evaluation (good–bad), potency (strong–weak), and activity (active–passive). Russell (1980, 2003) showed through similar analyses of emotion words that the three primary independent dimensions of emotions are valence or pleasure (positiveness– negativeness/pleasure–displeasure), arousal (active–passive), and dominance (dominant– submissive). He argues that individual emotions such as joy, anger, and fear are points in a three-dimensional space of valence, arousal, and dominance. It is worth noting that even though the names given by Osgood et al. (1957) and Russell (1980) are different, they describe similar dimensions (Bakker et al., 2014). Existing Affect Lexicons: Bradley and Lang (1999) asked annotators to rate valence, arousal, and dominance—for more than 1,000 words—on a 9-point rating scale. 
The ratings from multiple annotators were averaged to obtain a score between 1 (lowest V, A, or D) and 9 (highest V, A, or D). Their lexicon, called the Affective Norms of English Words (ANEW), has since been widely used across many different fields of study. More than a decade later, Warriner et al. (2013) created a similar lexicon for more than 13,000 words, using a similar annotation method. There exist a small number of VAD lexicons in non-English languages as well, such as the ones created by Moors et al. (2013) for Dutch, by Võ et al. (2009) for German, and by Redondo et al. (2007) for Spanish. The NRC VAD Lexicon is the largest manually created VAD lexicon (in any language), and the only one that was created via comparative annotations (instead of rating scales).
Best–Worst Scaling: Best–Worst Scaling (BWS) was developed by Louviere (1991), building on work in the 1960s in mathematical psychology and psychophysics. Annotators are given n items (an n-tuple, where n > 1 and commonly n = 4). They are asked which item is the best (highest in terms of the property of interest) and which is the worst (lowest in terms of the property of interest). (At its limit, when n = 2, BWS becomes a paired comparison (Thurstone, 1927; David, 1963), but then a much larger set of tuples needs to be annotated—closer to N².) When working on 4-tuples, best–worst annotations are particularly efficient because each best and worst annotation will reveal the order of five of the six item pairs (e.g., for a 4-tuple with items A, B, C, and D, if A is the best and D is the worst, then A > B, A > C, A > D, B > D, and C > D). Real-valued scores of association between the items and the property of interest can be determined using simple arithmetic on the number of times an item was chosen best and the number of times it was chosen worst (as described in Section 3) (Orme, 2009; Flynn and Marley, 2014). It has been empirically shown that three annotations each for 2N 4-tuples are sufficient for obtaining reliable scores (where N is the number of items) (Louviere, 1991; Kiritchenko and Mohammad, 2016). Kiritchenko and Mohammad (2017) showed through empirical experiments that BWS produces more reliable and more discriminating scores than those obtained using rating scales. (See Kiritchenko and Mohammad (2016, 2017) for further details on BWS.) Within the NLP community, BWS has been used for creating datasets for relational similarity (Jurgens et al., 2012), word-sense disambiguation (Jurgens, 2013), word–sentiment intensity (Kiritchenko and Mohammad, 2016), word–emotion intensity (Mohammad, 2018), and tweet–emotion intensity (Mohammad and Bravo-Marquez, 2017; Mohammad et al., 2018; Mohammad and Kiritchenko, 2018).
Automatically Creating Affect Lexicons: There is growing work on automatically determining word–sentiment and word–emotion associations (Yang et al., 2007; Mohammad and Kiritchenko, 2015; Yu et al., 2015; Staiano and Guerini, 2014). The VAD Lexicon can be used to evaluate how accurately the automatic methods capture valence, arousal, and dominance.

3 Obtaining Human Ratings of Valence, Arousal, and Dominance
We now describe how we selected the terms to be annotated and how we crowdsourced the annotation of the terms using best–worst scaling.

3.1 Term Selection
We chose to annotate commonly used English terms. We especially wanted to include terms that denote or connote emotions.
We also include terms common in tweets (tweets include non-standard language such as emoticons, emojis, creatively spelled words (happee), hashtags (#takingastand, #lonely), and conjoined words (loveumom)). Specifically, we include terms from the following sources:
• All terms in the NRC Emotion Lexicon (Mohammad and Turney, 2013). It has about 14,000 words with labels indicating whether they are associated with any of the eight basic emotions: anger, anticipation, disgust, fear, joy, sadness, surprise, and trust (Plutchik, 1980).
• All 4,206 terms in the positive and negative lists of the General Inquirer (Stone et al., 1966).
• All 1,061 terms listed in ANEW (Bradley and Lang, 1999).
• All 13,915 terms listed in the Warriner et al. (2013) lexicon.
• 520 words from the Roget's Thesaurus categories corresponding to the eight basic Plutchik emotions (http://www.gutenberg.org/ebooks/10681).
• About 1,000 high-frequency content terms, including emoticons, from the Hashtag Emotion Corpus (HEC) (Mohammad, 2012). All tweets in the HEC include at least one of the eight basic emotion words as a hashtag word (#anger, #sadness, etc.).
The union of the above sets resulted in 20,007 terms that were then annotated for valence, arousal, and dominance.

3.2 Annotating VAD via Best–Worst Scaling
We describe below how we annotated words for valence. The same approach is followed for arousal and dominance. The annotators were presented with four words at a time (4-tuples) and asked to select the word with the highest valence and the word with the lowest valence. The questionnaire uses a set of paradigm words that signify the two ends of the valence dimension. The paradigm words were taken from past literature on VAD (Bradley and Lang, 1999; Osgood et al., 1957; Russell, 1980). The questions used for valence are shown below.
Q1. Which of the four words below is associated with the MOST happiness / pleasure / positiveness / satisfaction / contentedness / hopefulness OR LEAST unhappiness / annoyance / negativeness / dissatisfaction / melancholy / despair? (Four words listed as options.)
Q2. Which of the four words below is associated with the LEAST happiness / pleasure / positiveness / satisfaction / contentedness / hopefulness OR MOST unhappiness / annoyance / negativeness / dissatisfaction / melancholy / despair? (Four words listed as options.)

Dataset      #words   Location of   Annotation         #Items   #Annotators   MAI   #Q/Item   #Best–Worst
                      Annotators    Item                                                      Annotations
valence      20,007   worldwide     4-tuple of words   40,014   1,020         6     2         243,295
arousal      20,007   worldwide     4-tuple of words   40,014   1,081         6     2         258,620
dominance    20,007   worldwide     4-tuple of words   40,014   965           6     2         276,170
Total                                                                                         778,085
Table 1: A summary of the annotations for valence, arousal, and dominance. MAI = minimum number of annotations per item. Q = questions. A total of 778,085 pairs of best–worst responses were obtained.

Questions for arousal and dominance are similar. (The two ends of the arousal dimension were described with the words: arousal, activeness, stimulation, frenzy, jitteriness, alertness AND unarousal, passiveness, relaxation, calmness, sluggishness, dullness, sleepiness. The two ends of the dominance dimension were described with the words: dominant, in control of the situation, powerful, influential, important, autonomous AND submissive, controlled by outside factors, weak, influenced, cared-for, guided.) Detailed directions and example questions (with suitable responses) were provided in advance. 2 × N distinct 4-tuples were randomly generated in such a manner that each word is seen in eight different 4-tuples and no two 4-tuples have more than two items in common (where N is the number of words to be annotated). We used the script provided by Kiritchenko and Mohammad (2016) to generate the 4-tuples from the list of terms: http://saifmohammad.com/WebPages/BestWorst.html
Crowdsourcing: We set up three separate crowdsourcing tasks corresponding to valence, arousal, and dominance. The 4-tuples of words were uploaded for annotation on the crowdsourcing platform CrowdFlower (CrowdFlower later changed its name to Figure Eight: https://www.figure-eight.com). We obtained annotations from native speakers of English residing around the world.
Annotators were free to provide responses to as many 4-tuples as they wished. The annotation tasks were approved by our institution's review board.
About 2% of the data was annotated beforehand by the authors. These questions are referred to as gold questions. CrowdFlower interspersed the gold questions with the other questions. If a crowd worker answered a gold question incorrectly, they were immediately notified, the annotation was discarded, and an additional annotation was requested from a different annotator. If an annotator's accuracy on the gold questions fell below 80%, they were refused further annotation, and all of their annotations were discarded. This served as a mechanism to avoid malicious and random annotations. The gold questions also served as examples to guide the annotators.
In the task settings for CrowdFlower, we specified that we needed annotations from six people for each word. (Note that since each word occurs in eight different 4-tuples, it is involved in 8 × 6 = 48 best–worst judgments.) However, because of the way the gold questions work in CrowdFlower, they were annotated by more than six people. Both the minimum and the median number of annotations per item were six. See Table 1 for summary statistics on the annotations. (In a post-annotation survey, the respondents gave the task high scores for clarity of instruction (an average of 4.5 out of 5) and overall satisfaction (an average of 4.3 out of 5).)
Annotation Aggregation: The final VAD scores were calculated from the BWS responses using a simple counting procedure (Orme, 2009; Flynn and Marley, 2014): for each item, the score is the proportion of times the item was chosen as the best (highest V/A/D) minus the proportion of times the item was chosen as the worst (lowest V/A/D). The scores were linearly transformed to the interval from 0 (lowest V/A/D) to 1 (highest V/A/D). We refer to the list of words along with their scores for valence, arousal, and dominance as the NRC Valence, Arousal, and Dominance Lexicon, or the NRC VAD Lexicon for short. Table 2 shows entries from the lexicon with the highest and lowest scores for V, A, and D.

Dimension    Word         Score↑   Word        Score↓
valence      love         1.000    toxic       0.008
             happy        1.000    nightmare   0.005
             happily      1.000    shit        0.000
arousal      abduction    0.990    mellow      0.069
             exorcism     0.980    siesta      0.046
             homicide     0.973    napping     0.046
dominance    powerful     0.991    empty       0.081
             leadership   0.983    frail       0.069
             success      0.981    weak        0.045
Table 2: The terms with the highest (↑) and lowest (↓) valence (V), arousal (A), and dominance (D) scores in the VAD Lexicon.

Attribute     Value   %    Value   %
Gender        f       37   m       63
Age           ≤35     70   >35     30
Personality   Ag      69   Di      31
              Co      52   Ea      48
              Ex      52   In      48
              Ne      40   Se      60
              Op      50   Cl      50
Table 3: Summary of the demographic information provided by the annotators.
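To make the aggregation concrete, the following is a minimal Python sketch of the procedure described above: it generates 2×N random 4-tuples (each word appearing in eight tuples), takes a table of best–worst responses, and turns them into scores rescaled to the interval [0, 1]. It is an illustration of the method under simplifying assumptions, not the scripts used to build the lexicon; all names are our own, and the tuple generator omits the published script's constraint that no two tuples share more than two items.

```python
# Sketch of Best-Worst Scaling aggregation (illustration only).
# Responses are assumed to be (4-tuple, best_word, worst_word) triples.
import random
from collections import Counter

def make_tuples(words, tuples_per_word=8, tuple_size=4, seed=0):
    """Randomly group words into 4-tuples so that each word appears in
    tuples_per_word tuples (simplified: overlap constraints are not enforced)."""
    rng = random.Random(seed)
    pool = [w for w in words for _ in range(tuples_per_word)]
    rng.shuffle(pool)
    return [tuple(pool[i:i + tuple_size]) for i in range(0, len(pool), tuple_size)]

def bws_scores(responses):
    """Counting procedure: score(w) = %chosen-best - %chosen-worst, then a
    linear rescaling (here min-max, an assumption) to the interval [0, 1]."""
    best, worst, seen = Counter(), Counter(), Counter()
    for four_tuple, best_word, worst_word in responses:
        best[best_word] += 1
        worst[worst_word] += 1
        for w in four_tuple:
            seen[w] += 1                       # judgments the word took part in
    raw = {w: (best[w] - worst[w]) / seen[w] for w in seen}   # in [-1, 1]
    lo, hi = min(raw.values()), max(raw.values())
    if hi == lo:
        return {w: 0.5 for w in raw}
    return {w: (s - lo) / (hi - lo) for w, s in raw.items()}

if __name__ == "__main__":
    words = ["love", "banquet", "funeral", "toxic", "nervous", "lazy", "fight", "delicate"]
    tuples_ = make_tuples(words)
    # Fake responses for the demo: pretend the first item is best, the last is worst.
    responses = [(t, t[0], t[-1]) for t in tuples_]
    for w, s in sorted(bws_scores(responses).items(), key=lambda x: -x[1]):
        print(f"{w:10s} {s:.3f}")
```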
4 Demographic Survey
Respondents who annotated our VAD questionnaires were given a special code through which they could then optionally respond to a separate CrowdFlower survey asking for their demographic information: age, gender, the country they live in, and personality traits. For the latter, we asked how they viewed themselves across the big five personality traits (Barrick and Mount, 1991):
• Agreeableness (Ag) – Disagreeableness (Di): friendly and compassionate, or careful in whom to trust, argumentative
• Conscientiousness (Co) – Easygoing (Ea): efficient and organized (prefer planned and self-disciplined behaviour), or easy-going and carefree (prefer flexibility and spontaneity)
• Extrovert (Ex) – Introvert (In): outgoing, energetic, seek the company of others, or solitary, reserved, meeting many people causes anxiety
• Neurotic (Ne) – Secure (Se): sensitive and nervous (often feel anger, anxiety, depression, and vulnerability), or secure and confident (rarely feel anger, anxiety, depression, and vulnerability)
• Open to experiences (Op) – Closed to experiences (Cl): inventive and curious (seek out new experiences), or consistent and cautious (anxious about new experiences)
The questionnaire described the two sides of each dimension using only the texts after the colons above. (Note that how people view themselves may be different from what they truly are; the conclusions in this paper apply to groups that view themselves to be a certain personality type.) The questionnaire did not ask for identifying information such as name or date of birth. In total, 991 people (55% of the VAD annotators) chose to provide their demographic information. Table 3 shows the details.

5 Examining the NRC VAD Lexicon
5.1 A Comparison of the NRC VAD Lexicon and the Warriner et al. Lexicon Scores
We calculated the Pearson correlations r between the NRC VAD Lexicon scores and the Warriner et al. lexicon scores. Table 4 shows the results. (These numbers were calculated for the 13,915 common terms across the two lexicons.)

                  V       A       D
Ours–Warriner     0.814   0.615   0.326
Table 4: Pearson correlations between our V, A, and D scores and the Warriner scores.

Observe that the especially low correlations for dominance and arousal indicate that our lexicon has substantially different scores and rankings of terms along these dimensions. Even for valence, a correlation of 0.81 indicates a marked amount of difference in scores.

5.2 Independence of Dimensions
Russell (1980) found through his factor analysis work that valence, arousal, and dominance are nearly independent dimensions. However, Warriner et al. (2013) report that their scores for valence and dominance have a substantial correlation (r = 0.717). Given that the split-half reliability score for their dominance annotations is only 0.77, the high V–D correlation raises the question of whether annotators sufficiently understood the difference between dominance and valence.
Table 5 shows the correlations between various pair-wise combinations of valence, arousal, and dominance for both our lexicon and the Warriner lexicon.

Lexicon                   V–A      A–D      V–D
Ours                      -0.268   0.302    0.488
Ours (Warriner subset)    -0.287   0.322    0.463
Warriner                  -0.185   -0.180   0.717
Table 5: Pearson correlations between various pair-wise combinations of V, A, and D.

Observe that unlike the Warriner annotations, where V and D are highly correlated, our annotations show that V and D are only slightly correlated. The correlations for V–A and A–D are low in both our and the Warriner annotations, albeit slightly higher in magnitude in ours.
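Comparisons of this kind are straightforward to reproduce for any two lexicons. The sketch below (our own illustration; the file names and column layout are assumptions) computes the between-lexicon correlations of Table 4 and the between-dimension correlations of Table 5 with scipy.

```python
# Sketch: correlating two VAD lexicons, assumed to be tab-separated files with
# columns word, valence, arousal, dominance. Illustration only; adapt as needed.
import pandas as pd
from scipy.stats import pearsonr

def load_lexicon(path):
    return pd.read_csv(path, sep="\t", names=["word", "V", "A", "D"]).set_index("word")

ours = load_lexicon("nrc_vad.tsv")        # hypothetical file name
warriner = load_lexicon("warriner.tsv")   # hypothetical file name
common = ours.index.intersection(warriner.index)

# Between-lexicon correlations on the common terms (cf. Table 4).
for dim in ["V", "A", "D"]:
    r, _ = pearsonr(ours.loc[common, dim], warriner.loc[common, dim])
    print(f"Ours-Warriner {dim}: r = {r:.3f}")

# Pair-wise correlations between dimensions within one lexicon (cf. Table 5).
for a, b in [("V", "A"), ("A", "D"), ("V", "D")]:
    r, _ = pearsonr(ours[a], ours[b])
    print(f"{a}-{b}: r = {r:.3f}")
```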
5.3 Reliability of the Annotations
A useful measure of quality is reproducibility of the end result: repeated independent manual annotations from multiple respondents should result in similar scores. To assess this reproducibility, we calculate the average split-half reliability (SHR) over 100 trials. All annotations for an item (in our case, 4-tuples) are randomly split into two halves. Two sets of scores are produced independently from the two halves. Then the correlation between the two sets of scores is calculated. If the annotations are of good quality, then the correlation between the two halves will be high.
Table 6 shows the split-half reliabilities (SHR) for the valence, arousal, and dominance annotations. Row a shows the SHR on the full set of terms in the VAD Lexicon. Row b shows the SHR on just the Warriner subset of terms in the VAD Lexicon. Row c shows the SHR reported by Warriner et al. (2013) on their annotations.

Annotations                                      #Terms   #Annotations   V       A       D
a. Ours (on all terms)                           20,007   6 per tuple    0.950   0.899   0.902
b. Ours (on only those terms also in Warriner)   13,915   6 per tuple    0.952   0.905   0.906
c. Warriner et al. (2013)                        13,915   20 per term    0.914   0.689   0.770
Table 6: Split-half reliabilities (as measured by Pearson correlation) for valence, arousal, and dominance scores obtained from our annotations and the Warriner et al. annotations.

Observe that the SHR scores for our annotations are markedly higher than those reported by Warriner et al. (2013), especially for arousal and dominance. All differences in SHR scores between rows b and c are statistically significant.
Summary of Main Results: The low correlations between the scores in our lexicon and the Warriner lexicon (especially for D and A) show that the scores in the two lexicons are substantially different. The correlations across all pairs of dimensions in our lexicon are low (r < 0.5). SHR scores of 0.95 for valence, 0.90 for arousal, and 0.90 for dominance show for the first time that highly reliable fine-grained ratings can be obtained for valence, arousal, and dominance.

6 Shared Understanding of VAD Within and Across Demographic Groups
Human cognition and behaviour are shaped by evolutionary and socio-cultural factors. These factors are known to affect different groups of people differently (men vs. women, young vs. old, etc.). Thus it is not surprising that our understanding of the world may be slightly different depending on our demographic attributes.
Consider gender—a key demographic attribute.14 Men, women, and other genders are substantially more alike than they are different. However, they have encountered different socio-cultural influences for thousands of years. Often these disparities have been a means to exert unequal status and asymmetric power relations. Thus a crucial area in gender studies is to examine both the overt and subtle impacts of these socio-cultural influences, as well as ways to mitigate the inequity. Understanding how different genders perceive and use language is an important component of that research. Language use is also relevant to the understanding and treatment of neuropsychiatric disorders, such as sleep, mood, and anxiety disorders, which have been shown to occur more frequently in women than in men (Bao and Swaab, 2011; Lewinsohn et al., 1998; McLean et al., 2011; Johnson et al., 2006; Chmielewski et al., 1995).
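The split-half reliability procedure of Section 5.3 can be sketched in a few lines of Python. This is a simplified illustration rather than the analysis code behind Table 6: it splits at the level of individual responses rather than strictly per item, and reuses the best-minus-worst counting from the earlier sketch.

```python
# Sketch of average split-half reliability (SHR) over repeated random splits.
import random
from collections import Counter
from scipy.stats import pearsonr

def counting_scores(responses):
    """Best-minus-worst counting on (4-tuple, best, worst) responses."""
    best, worst, seen = Counter(), Counter(), Counter()
    for four_tuple, b, w in responses:
        best[b] += 1
        worst[w] += 1
        for item in four_tuple:
            seen[item] += 1
    return {item: (best[item] - worst[item]) / seen[item] for item in seen}

def split_half_reliability(responses, trials=100, seed=0):
    rng = random.Random(seed)
    correlations = []
    for _ in range(trials):
        shuffled = responses[:]
        rng.shuffle(shuffled)
        half1, half2 = shuffled[::2], shuffled[1::2]     # two random halves
        s1, s2 = counting_scores(half1), counting_scores(half2)
        common = sorted(set(s1) & set(s2))               # words scored in both halves
        r, _ = pearsonr([s1[w] for w in common], [s2[w] for w in common])
        correlations.append(r)
    return sum(correlations) / len(correlations)
```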
In addition to the VAD Lexicon (created by aggregating human judgments), we also make available the demographic information of the annotators. This demographic information along with the individual judgments on the best–worst tuples forms a significant resource in the study of how demographic attributes are correlated with our understanding of language. The data can be used to shed light on research questions such as: ‘are there significant differences in the shared understanding of word meanings in men and women?’, ‘how is the social construct of gender reflected in language, especially in socio-political interactions?’, ‘does age impact our view of the valence, arousal, and dominance of concepts?’, ‘do people that view themselves as conscientious have slightly different judgments of valence, arousal, and dominance, than people who view themselves as easy going?’, and so on. 14Note that the term sex refers to a biological attribute pertaining to the anatomy of one’s reproductive system and sex chromosomes, whereas gender refers to a psycho-sociocultural construct based on a person’s sex or a person’s self identification of levels of masculinity and femininity. One may identify their gender as female, male, agender, trans, queer, etc. 180 V A D f–f pairs 56.55 44.15 42.55 m–m pairs 56.88 43.80 43.55 f–m pairs 56.41 43.65 43.03 Table 7: Gender: Average agreement % on best– worst responses. V A D f–f pairs vs. m–m pairs y y y f–f pairs vs. f–m pairs y y m–m pairs vs. f–m pairs y y Table 8: Gender: Significance of difference in average agreement scores (p = 0.05). ‘y’ = yes significant. ‘-’ = not significant. 6.1 Experiments We now describe experiments we conducted to determine whether demographic attributes impact how we judge words for valence, arousal, and dominance. For each demographic attribute, we partitioned the annotators into two groups: male (m) and female (f), ages 18 to 35 (≤35) and ages over 35 (>35), and so on.15 For each of the five personality traits, annotators are partitioned into the two groups shown in the bullet list of Section 4. We then calculated the extent to which people within the same group agreed with each other, and the extent to which people across groups agreed with each other on the VAD annotations (as described in the paragraph below). We also determined if the differences in agreement were statistically significant. For each dimension (V, A, and D), we first collected only those 4-tuples where at least two female and at least two male responses were available. We will refer to this set as the base set. For each of the base set 4-tuples, we calculated three agreement percentages: 1. the percentage of all female–female best–worst responses where the two agreed with each other, 2. the percentage of all male–male responses where the two agreed with each other, and 3. the percentage of all female–male responses where the two agreed with each other. We then calculated the averages of the agreement percentages across all the 4-tuples in the base set. We conducted similar experiments for age groups and personality traits. 15For age, we chose 35 to create the two groups because several psychology and medical studies report changes in health and well-being at this age. Nonetheless, other partitions of age are also worth exploring. V A D ≤35–≤35 pairs 56.10 43.84 43.81 >35–>35 pairs 57.56 44.10 42.49 ≤35–>35 pairs 56.40 43.58 43.07 Table 9: Age: Average agreement % on best– worst responses. V A D ≤35–≤35 pairs vs. >35–>35 pairs y y y ≤35–≤35 pairs vs. 
≤35–>35 pairs y y y >35–>35 pairs vs. ≤35–>35 pairs y y y Table 10: Age: Significance of difference in average agreement scores (p = 0.05). 6.2 Results Table 7 shows the results for gender. Note that the average agreement numbers are not expected to be high because often a 4-tuple may include two words that are close to each other in terms of the property of interest (V/A/D).16 However, the relative values of the agreement percentages indicate the relative levels of agreements within groups and across groups. Table 7 numbers indicate that women have a higher shared understanding of the degree of arousal of words (higher f–f average agreement scores on A), whereas men have a higher shared understanding of dominance and valence of words (higher m–m average agreement scores on V and D). The table also shows the cross-group (f–m) average agreements are the lowest for valence and arousal, but higher than f–f pairs for dominance. (Each of these agreements was determined from 1 to 1.5 million judgment pairs.) Table 8 shows which of the Table 7 average agreements are statistically significantly different (shown with a ‘y’). Significance values were calculated using the chi-square test for independence and significance level of 0.05. Observe that all score differences are statistically significant except for between f–f and f–m scores for V and m– m and f–m scores for A. Tables 9 through 12 are similar to Tables 7 and 8, but for age groups and personality traits. Tables 9 and 10 show that respondents over the age of 35 obtain significantly higher agreements with each other on valence and arousal and lower agreements on dominance, than respondents aged 35 and under (with each other). Tables 11 and 12 show that 16Such disagreements are useful as they cause the two words to obtain scores close to each other. 181 V A D Agreeable (Ag) – Disagreeable (Di) # pairs 1.0M 1.8M 1.7M Ag–Ag pairs 56.54 43.89 42.39 Di–Di pairs 55.76 43.63 43.61 Ag–Di pairs 56.28 43.57 43.01 Conscientious (Co) – Easygoing (Ea) # pairs 0.9M 1.9M 1.5M Co–Co pairs 56.34 44.60 44.38 Ea–Ea pairs 56.39 43.15 41.36 Co–Ea pairs 56.39 43.77 42.52 Extrovert (Ex) – Introvert (In) # pairs 0.9M 2.0M 1.6M Ex–Ex pairs 58.00 44.16 43.43 In–In pairs 56.49 43.78 42.16 Ex–In pairs 57.00 43.85 42.89 Neurotic (Ne) – Secure (Se) # pairs 1.0M 1.8M 1.5M Ne–Ne pairs 56.33 43.78 41.98 Se–Se pairs 57.97 43.90 43.65 Ne–Se pairs 56.93 43.97 42.93 Open (Op) – Closed (Cl) # pairs 0.8M 1.8M 1.3M Op–Op pairs 57.65 44.19 43.51 Cl–Cl pairs 56.39 43.52 43.23 Op–Cl pairs 56.90 44.03 43.36 Table 11: Personality Trait: Average agreement % on best–worst responses. some personality traits significantly impact a person’s annotations of one or more of V, A, and D. Notably, those who view themselves as conscientious have a particularly higher shared understanding of the dominance of words, as compared to those who view themselves as easy going. They also have higher in-group agreement for arousal, than those who view themselves as easy going, but the difference for valence is not statistically significant. Also notable, is that those who view themselves as extroverts have a particularly higher shared understanding of the valence, arousal, and dominance of words, as compared to those who view themselves as introverts. Finally, as a sanity check, we divided respondents into those whose CrowdFlower worker ids are odd and those whose worker ids are even. 
We then determined average agreements for even–even, odd-odd, and even–odd groups just as we did for the demographic variables. We found that, as expected, there were no significant differences in average agreements. Summary of Main Results: We showed that several demographic attributes such as age, gender, and personality traits impact how we judge words for valence, arousal, and dominance. Further, V A D Agreeable (Ag) – Disagreeable (Di) Ag–Ag vs. Di–Di y y y Ag–Ag vs. Ag–Di y y y Di–Di vs. Ag–Di y y Conscientious (Co) – Easygoing (Ea) Co–Co vs. Ea–Ea y y Co–Co vs. Co–Ea y y Ea–Ea vs. Co–Ea y y Extrovert (Ex) – Introvert (In) Ex–Ex vs. In–In y y y Ex–Ex vs. Ex–In y y In–In vs. Ex–In y y y Neurotic (Ne) – Secure (Se) Ne–Ne vs. Se–Se y y Ne–Ne vs. Ne–Se y y Se–Se vs. Ne–Se y y Open (Op) – Closed (Cl) Op–Op vs. Cl–Cl y y y Op–Op vs. Op–Cl y Cl–Cl vs. Op–Cl y y Table 12: Personality Trait: Significance of difference in average agreement scores (p = 0.05). people that share certain demographic attributes show a higher shared understanding of the relative rankings of words by (one or more of) V, A, or D than others. However, this raises new questions: why do certain demographic attributes impact our judgments of V, A, and D? Are there evolutionary forces that caused some groups such as women to develop a higher shared understanding or the arousal, whereas different evolutionary forces caused some groups, such as men, to have a higher shared understanding of dominance? We hope that the data collected as part of this project will spur further inquiry into these and other questions. 7 Applications and Future Work The large number of entries in the VAD Lexicon and the high reliability of the scores make it useful for a number of research projects and applications. We list a few below: • To provide features for sentiment or emotion detection systems. They can also be used to obtain sentiment-aware word embeddings and sentiment-aware sentence representations. • To study the interplay between the basic emotion model and the VAD model of affect. The VAD lexicon can be used along with lists of words associated with emotions such as joy, sadness, fear, etc. to study the correlation of V, A, and D, with those emotions. 182 • To study the role emotion words play in high emotion intensity sentences or tweets. The Tweet Emotion Intensity Dataset has emotion intensity and valence scores for whole tweets (Mohammad and Bravo-Marquez, 2017). We will use the VAD lexicon to determine the extent to which high intensity and high valence tweets consist of high V, A, and D words, and to identify sentences that express high emotional intensity without using high V, A, and D words. • To identify syllables that tend to occur in words with high VAD scores, which in turn can be used to generate names for literary characters and commercial products that have the desired affectual response. • To identify high V, A, and D words in books and literature. To facilitate research in digital humanities. To facilitate work on literary analysis. • As a source of gold (reference) scores, the entries in the VAD lexicon can be used in the evaluation of automatic methods of determining V, A, and D. • To analyze V, A, ad D annotations for different groups of words, such as: hashtag words and emojis common in tweets, emotion denotating words, emotion associated words, neutral terms, words belonging to particular parts of speech such as nouns, verbs, and adjectives, etc. 
• To analyze interactions between demographic groups and specific groups of words, for example, whether younger annotators have a higher shared understanding of tweet terms, whether a certain gender is associated with a higher shared understanding of adjectives, etc. • To analyze the shared understanding of V, A, and D within and across geographic and language groups. We are interested in creating VAD lexicons for other languages. We can then explore characteristics of valence, arousal, and dominance that are common across cultures. We can also test whether some of the conclusions reached in this work apply only to English, or more broadly to multiple languages. • The dataset is of use to psychologists and evolutionary linguists interested in determining how evolution shaped our representation of the world around us, and why certain personality traits are associated with higher or lower shared understanding of V, A, and D. 8 Conclusions We obtained reliable human ratings of valence, arousal, and dominance for more than 20,000 English words. (It has about 40% more words than the largest existing manually created VAD lexicon). We used best–worst scaling to obtain finegrained scores (and word rankings) and addressed issues of annotation consistency that plague traditional rating scale methods of annotation. We showed that the lexicon has split-half reliability scores of 0.95 for valence, 0.90 for arousal, and 0.90 for dominance. These scores are markedly higher than that of existing lexicons. We analyzed demographic information to show that even though the annotations overall lead to consistent scores in repeated annotations, there exist statistically significant differences in agreements across demographic groups such as males and females, those above the age of 35 and those that are 35 or under, and across personality dimensions (extroverts and introverts, neurotic and secure, etc.). These results show that certain demographic attributes impact how we view the world around us in terms of the relative valence, arousal, and dominance of the concepts in it. The NRC Valence, Arousal, and Dominance Lexicon is made available.17 It can be used in combination with other manually created affect lexicons such as the NRC Word–Emotion Association Lexicon (Mohammad and Turney, 2013)18 and the NRC Affect Intensity Lexicon (Mohammad, 2018).19 Acknowledgments Many thanks to Svetlana Kiritchenko, Michael Wojatzki, Norm Vinson, and Tara Small for helpful discussions. 17The NRC Valence, Arousal, and Dominance Lexicon provides human ratings of valence, arousal, and dominance for more than 20,000 English words: http://saifmohammad.com/WebPages/nrc-vad.html 18The NRC Emotion Lexicon includes about 14,000 words annotated to indicate whether they are associated with any of the eight basic emotions (anger, anticipation, disgust, fear, joy, sadness, surprise, and trust): http://saifmohammad.com/WebPages/NRC-EmotionLexicon.htm 19The NRC Affect Intensity Lexicon provides realvalued affect intensity scores for four basic emotions (anger, fear, sadness, joy): http://saifmohammad.com/WebPages/AffectIntensity.htm 183 References Iris Bakker, Theo van der Voordt, Peter Vink, and Jan de Boon. 2014. Pleasure, arousal, dominance: Mehrabian and russell revisited. Current Psychology, 33(3):405–421. Ai-Min Bao and Dick F Swaab. 2011. Sexual differentiation of the human brain: relation to gender identity, sexual orientation and neuropsychiatric disorders. Frontiers in neuroendocrinology, 32(2):214– 226. 
Murray R Barrick and Michael K Mount. 1991. The big five personality dimensions and job performance: a meta-analysis. Personnel psychology, 44(1):1–26. Hans Baumgartner and Jan-Benedict E.M. Steenkamp. 2001. Response styles in marketing research: A cross-national investigation. Journal of Marketing Research, 38(2):143–156. Margaret M Bradley and Peter J Lang. 1999. Affective norms for English words (ANEW): Instruction manual and affective ratings. Technical report, The Center for Research in Psychophysiology, University of Florida. Phillip M Chmielewski, Leyan OL Fernandes, Cindy M Yee, and Gregory A Miller. 1995. Ethnicity and gender in scales of psychosis proneness and mood disorders. Journal of Abnormal Psychology, 104(3):464. Steven H. Cohen. 2003. Maximum difference scaling: Improved measures of importance and preference for segmentation. Sawtooth Software, Inc. Herbert Aron David. 1963. The method of paired comparisons. Hafner Publishing Company, New York. T. N. Flynn and A. A. J. Marley. 2014. Best-worst scaling: theory and methods. In Stephane Hess and Andrew Daly, editors, Handbook of Choice Modelling, pages 178–201. Edward Elgar Publishing. Eric O Johnson, Thomas Roth, Lonni Schultz, and Naomi Breslau. 2006. Epidemiology of dsm-iv insomnia in adolescence: lifetime prevalence, chronicity, and an emergent gender difference. Pediatrics, 117(2):e247–e256. David Jurgens. 2013. Embracing ambiguity: A comparison of annotation methodologies for crowdsourcing word sense labels. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics, Atlanta, GA, USA. David Jurgens, Saif M. Mohammad, Peter Turney, and Keith Holyoak. 2012. Semeval-2012 task 2: Measuring degrees of relational similarity. In Proceedings of the 6th International Workshop on Semantic Evaluation, pages 356–364, Montr´eal, Canada. Svetlana Kiritchenko and Saif M. Mohammad. 2016. Capturing reliable fine-grained sentiment associations by crowdsourcing and best–worst scaling. In Proceedings of The 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), San Diego, California. Svetlana Kiritchenko and Saif M. Mohammad. 2017. Best-worst scaling more reliable than rating scales: A case study on sentiment intensity annotation. In Proceedings of The Annual Meeting of the Association for Computational Linguistics (ACL), Vancouver, Canada. Peer M Lewinsohn, Ian H Gotlib, Mark Lewinsohn, John R Seeley, and Nicholas B Allen. 1998. Gender differences in anxiety disorders and anxiety symptoms in adolescents. Journal of abnormal psychology, 107(1):109. Jordan J. Louviere. 1991. Best-worst scaling: A model for the largest difference judgments. Working Paper. Jordan J. Louviere, Terry N. Flynn, and A. A. J. Marley. 2015. Best-Worst Scaling: Theory, Methods and Applications. Cambridge University Press. Carmen P McLean, Anu Asnaani, Brett T Litz, and Stefan G Hofmann. 2011. Gender differences in anxiety disorders: prevalence, course of illness, comorbidity and burden of illness. Journal of psychiatric research, 45(8):1027–1035. Saif Mohammad. 2012. #Emotional Tweets. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM), pages 246– 255, Montr´eal, Canada. Saif M. Mohammad. 2018. Word affect intensities. In Proceedings of the 11th Edition of the Language Resources and Evaluation Conference (LREC-2018), Miyazaki, Japan. Saif M. Mohammad and Felipe Bravo-Marquez. 2017. 
WASSA-2017 shared task on emotion intensity. In Proceedings of the Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA), Copenhagen, Denmark. Saif M. Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proceedings of International Workshop on Semantic Evaluation (SemEval-2018), New Orleans, LA, USA. Saif M. Mohammad and Svetlana Kiritchenko. 2015. Using hashtags to capture fine emotion categories from tweets. Computational Intelligence, 31(2):301–326. Saif M. Mohammad and Svetlana Kiritchenko. 2018. Understanding emotions: A dataset of tweets to study interactions between affect categories. In Proceedings of the 11th Edition of the Language Resources and Evaluation Conference (LREC-2018), Miyazaki, Japan. 184 Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a word–emotion association lexicon. Computational Intelligence, 29(3):436–465. Agnes Moors, Jan De Houwer, Dirk Hermans, Sabine Wanmaker, Kevin Van Schie, Anne-Laura Van Harmelen, Maarten De Schryver, Jeffrey De Winne, and Marc Brysbaert. 2013. Norms of valence, arousal, dominance, and age of acquisition for 4,300 dutch words. Behavior research methods, 45(1):169–177. Bryan Orme. 2009. Maxdiff analysis: Simple counting, individual-level logit, and HB. Sawtooth Software, Inc. C.E. Osgood, Suci G., and P. Tannenbaum. 1957. The measurement of meaning. University of Illinois Press. Robert Plutchik. 1980. A general psychoevolutionary theory of emotion. Emotion: Theory, research, and experience, 1(3):3–33. Stanley Presser and Howard Schuman. 1996. Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording, and Context. SAGE Publications, Inc. Jaime Redondo, Isabel Fraga, Isabel Padr´on, and Montserrat Comesa˜na. 2007. The spanish adaptation of anew (affective norms for english words). Behavior research methods, 39(3):600–605. James A Russell. 1980. A circumplex model of affect. Journal of personality and social psychology, 39(6):1161. James A Russell. 2003. Core affect and the psychological construction of emotion. Psychological review, 110(1):145. Jacopo Staiano and Marco Guerini. 2014. Depechemood: a lexicon for emotion analysis from crowd-annotated news. arXiv preprint arXiv:1405.1605. Philip Stone, Dexter C. Dunphy, Marshall S. Smith, Daniel M. Ogilvie, and associates. 1966. The General Inquirer: A Computer Approach to Content Analysis. The MIT Press. Louis L. Thurstone. 1927. A law of comparative judgment. Psychological review, 34(4):273. Melissa LH V˜o, Markus Conrad, Lars Kuchinke, Karolina Urton, Markus J Hofmann, and Arthur M Jacobs. 2009. The berlin affective word list reloaded (bawl-r). Behavior research methods, 41(2):534– 538. Amy Beth Warriner, Victor Kuperman, and Marc Brysbaert. 2013. Norms of valence, arousal, and dominance for 13,915 English lemmas. Behavior Research Methods, 45(4):1191–1207. Changhua Yang, Kevin Hsin-Yih Lin, and Hsin-Hsi Chen. 2007. Building emotion lexicon from weblog corpora. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 133–136. Liang-Chih Yu, Jin Wang, K Robert Lai, and Xue-jie Zhang. 2015. Predicting valence-arousal ratings of words using a weighted graph method. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 788–793.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1831–1841 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1831 AMR Dependency Parsing with a Typed Semantic Algebra Jonas Groschwitz ∗† Matthias Lindemann ∗ Meaghan Fowlie ∗ Mark Johnson † Alexander Koller ∗ ∗Saarland University, Saarbr¨ucken, Germany † Macquarie University, Sydney, Australia {jonasg|mlinde|mfowlie|koller}@coli.uni-saarland.de [email protected] Abstract We present a semantic parser for Abstract Meaning Representations which learns to parse strings into tree representations of the compositional structure of an AMR graph. This allows us to use standard neural techniques for supertagging and dependency tree parsing, constrained by a linguistically principled type system. We present two approximative decoding algorithms, which achieve state-of-the-art accuracy and outperform strong baselines. 1 Introduction Over the past few years, Abstract Meaning Representations (AMRs, Banarescu et al. (2013)) have become a popular target representation for semantic parsing. AMRs are graphs which describe the predicate-argument structure of a sentence. Because they are graphs and not trees, they can capture reentrant semantic relations, such as those induced by control verbs and coordination. However, it is technically much more challenging to parse a string into a graph than into a tree. For instance, grammar-based approaches (Peng et al., 2015; Artzi et al., 2015) require the induction of a grammar from the training corpus, which is hard because graphs can be decomposed into smaller pieces in far more ways than trees. Neural sequence-to-sequence models, which do very well on string-to-tree parsing (Vinyals et al., 2014), can be applied to AMRs but face the challenge that graphs cannot easily be represented as sequences (van Noord and Bos, 2017a,b). In this paper, we tackle this challenge by making the compositional structure of the AMR explicit. As in our previous work, Groschwitz et al. (2017), we view an AMR as consisting of atomic graphs representing the meanings of the individual words, which were combined compositionally using linguistically motivated operations for combining a head with its arguments and modifiers. We represent this structure as terms over the AM algebra as defined in Groschwitz et al. (2017). This previous work had no parser; here we show that the terms of the AM algebra can be viewed as dependency trees over the string, and we train a dependency parser to map strings into such trees, which we then evaluate into AMRs in a postprocessing step. The dependency parser relies on type information, which encodes the semantic valencies of the atomic graphs, to guide its decisions. More specifically, we combine a neural supertagger for identifying the elementary graphs for the individual words with a neural dependency model along the lines of Kiperwasser and Goldberg (2016) for identifying the operations of the algebra. One key challenge is that the resulting term of the AM algebra must be semantically well-typed. This makes the decoding problem NP-complete. We present two approximation algorithms: one which takes the unlabeled dependency tree as given, and one which assumes that all dependencies are projective. We evaluate on two data sets, achieving state-of-the-art results on one and near state-of-theart results on the other (Smatch f-scores of 71.0 and 70.2 respectively). 
Our approach clearly outperforms strong but non-compositional baselines. Plan of the paper. After reviewing related work in Section 2, we explain the AM algebra in Section 3 and extend it to a dependency view in Section 4. We explain model training in Section 5 and decoding in Section 6. Section 7 evaluates a number of variants of our system. 2 Related Work Recently, AMR parsing has generated considerable research activity, due to the availability of large1832 scale annotated data (Banarescu et al., 2013) and two successful SemEval Challenges (May, 2016; May and Priyadarshi, 2017). Methods from dependency parsing have been shown to be very successful for AMR parsing. For instance, the JAMR parser (Flanigan et al., 2014, 2016) distinguishes concept identification (assigning graph fragments to words) from relation identification (adding graph edges which connect these fragments), and solves the former with a supertagging-style method and the latter with a graph-based dependency parser. Foland and Martin (2017) use a variant of this method based on an intricate neural model, yielding state-of-the-art results. We go beyond these approaches by explicitly modeling the compositional structure of the AMR, which allows the dependency parser to combine AMRs for the words using a small set of universal operations, guided by the types of these AMRs. Other recent methods directly implement a dependency parser for AMRs, e.g. the transitionbased model of Damonte et al. (2017), or postprocess the output of a dependency parser by adding missing edges (Du et al., 2014; Wang et al., 2015). In contrast to these, our model makes no strong assumptions on the dependency parsing algorithm that is used; here we choose that of Kiperwasser and Goldberg (2016). The commitment of our parser to derive AMRs compositionally mirrors that of grammar-based AMR parsers (Artzi et al., 2015; Peng et al., 2015). In particular, there are parallels between the types we use in the AM algebra and CCG categories (see Section 3 for details). As a neural system, our parser struggles less with coverage issues than these, and avoids the complex grammar induction process these models require. More generally, our use of semantic types to restrict our parser is reminiscent of Kwiatkowski et al. (2010), Krishnamurthy et al. (2017) and Zhang et al. (2017), and the idea of deriving semantic representations from dependency trees is also present in Reddy et al. (2017). 3 The AM algebra A core idea of this paper is to parse a string into a graph by instead parsing a string into a dependencystyle tree representation of the graph’s compositional structure, represented as terms of the ApplyModify (AM) algebra (Groschwitz et al., 2017). The values of the AM algebra are annotated swant s o[s] ARG0 ARG1 write person ARG0 sleep s ARG0 m sound manner Figure 1: Elementary as-graphs Gwant, Gwriter, Gsleep, and Gsound for the words “want”, “writer”, “sleep”, and “soundly” respectively. graphs, or as-graphs: directed graphs with node and edge labels in which certain nodes have been designated as sources (Courcelle and Engelfriet, 2012) and annotated with type information. Some examples of as-graphs are shown in Fig. 1. Each as-graph has exactly one root, indicated by the bold outline. The sources are indicated by red labels; for instance, Gwant has an S-source and an O-source. The annotations, written in square brackets behind the red source names, will be explained below. We use these sources to mark open argument slots; for example, Gsleep in Fig. 
1 represents an intransitive verb, missing its subject, which will be added at the S-source. The AM algebra can combine as-graphs with each other using two linguistically motivated operations: apply and modify. Apply (APP) adds an argument to a predicate. For example, we can add a subject – the graph Gwriter in Fig. 1 – to the graph GVP in Fig. 2d using APPS, yielding the complete AMR in Fig. 2b. Linguistically, this is like filling the subject (S) slot of the predicate wants to sleep soundly with the argument the writer. In general, for a source a, APPa(GP , GA), combines the asgraph GP representing a predicate, or head, with the as-graph GA, which represents an argument. It does this by plugging the root node of GA into the a-source u of GP – that is, the node u of GP marked with source a. The root of the resulting as-graph G is the root of GP , and we remove the a marking on u, since that slot is now filled. The modify operation (MOD) adds a modifier to a graph. For example, we can combine two elementary graphs from Fig. 1 with MODm (Gsleep, Gsound), yielding the graph in Fig. 2c. The Msource of the modifier Gsoundly attaches to the root of Gsleep. The root of the result is the same as the root of Gsleep in the same sense that a verb phrase with an adverb modifier is still a verb phrase. In general, MODa(GH, GM), combines a head GH with a modifier GM. It plugs the root of GH into the a-source u of GM. Although this may add incoming edges to the root of GH, that node is still 1833 the root of the resulting graph G. We remove the a marking from GM. In both APP and MOD, if there is any other source b which is present in both graphs, the nodes marked with b are unified with each other. For example, when Gwant is O-applied to t1 in Fig. 2d, the S-sources of the graphs for “want” and “sleep soundly” are unified into a single node, creating a reentrancy. This falls out of the definition of merge for s-graphs which formally underlies both operations (see (Courcelle and Engelfriet, 2012)). Finally, the AM algebra uses types to restrict its operations. Here we define the type of an as-graph as the set of its sources with their annotations1; thus for example, in Fig. 1, the graph for “writer” has the empty type [ ], Gsleep has type [S], and Gwant has type [S, O[S]]. Each source in an as-graph specifies with its annotation the type of the as-graph which is plugged into it via APP. In other words, for a source a, we may only a-apply GP with GA if the annotation of the a-source in GP matches the type of GA. For example, the O-source of Gwants (Fig. 1) requires that we plug in an as-graph of type [S]; observe that this means that the reentrancy in Fig. 2b is lexically specified by the control verb “want”. All other source nodes in Fig. 1 have no annotation, indicating a type requirement of [ ]. Linguistically, modification is optional; we therefore want the modified graph to be derivationally just like the unmodified graph, in that exactly the same operations can apply to it. In a typed algebra, this means MOD should not change the type of the head. MODa therefore requires that the modifier GM have no sources not already present in the head GH, except a, which will be deleted anyway. As in any algebra, we can build terms from constants (denoting elementary as-graphs) by recursively combining them with the operations of the AM algebra. By evaluating the operations bottomup, we obtain an as-graph as the value of such a term; see Fig. 2 for an example. 
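To make the type discipline concrete, here is a small Python sketch of how APP and MOD act on types (our own illustration based on the definitions above, not the authors' implementation; the operations on the graphs themselves are omitted). A type is modelled as a mapping from open source names to the types requested by their annotations.

```python
# Minimal sketch of APP/MOD on AM types (illustration only; graphs omitted).
# A type maps each open source to the type its annotation requests, e.g.
#   G_want  : {"s": {}, "o": {"s": {}}}   i.e. [S, O[S]]
#   G_sleep : {"s": {}}                   i.e. [S]
#   G_sound : {"m": {}}                   i.e. [M]
#   G_writer: {}                          i.e. [ ]

def app(source, head_type, arg_type):
    """APP_source: fill the head's `source` slot with an argument whose type
    must match the slot's annotation; the filled slot leaves the result type."""
    if source not in head_type or head_type[source] != arg_type:
        return None                       # type mismatch: operation undefined
    return {s: a for s, a in head_type.items() if s != source}

def mod(source, head_type, modifier_type):
    """MOD_source: attach a modifier at its `source` slot; the modifier may not
    introduce sources the head lacks, and the head's type is unchanged."""
    if source not in modifier_type:
        return None
    leftover = {s: a for s, a in modifier_type.items() if s != source}
    if any(s not in head_type for s in leftover):
        return None
    return dict(head_type)

# Recomputing the term of Fig. 2 on types:
g_want, g_sleep, g_sound, g_writer = {"s": {}, "o": {"s": {}}}, {"s": {}}, {"m": {}}, {}
t1 = mod("m", g_sleep, g_sound)           # MOD_m(G_sleep, G_sound)  -> [S]
t2 = app("o", g_want, t1)                 # APP_o(G_want, t1)        -> [S]
amr = app("s", t2, g_writer)              # APP_s(t2, G_writer)      -> [ ]
print(t1, t2, amr)                        # {'s': {}} {'s': {}} {}
```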
However, as discussed above, an operation in the term may be undefined due to a type mismatch. We call an AMterm well-typed if all its operations are defined. Every well-typed AM-term evaluates to an as-graph. Since the applicability of an AM operation depends only on the types, we also write τ = f(τ1, τ2) if as-graphs of type τ1 and τ2 can be combined with the operation f and the result has type τ. 1See (Groschwitz et al., 2017) for a more formally complete definition. Relationship to CCG. There is close relationship between the types of the AM algebra and the categories of CCG. A type [S, O] specifies that the as-graph needs to be applied to two arguments to be semantically complete, similar a CCG category such as S\NP/NP, where a string needs to be applied to two NP arguments to be syntactically complete. However, AM types govern the combination of graphs, while CCG categories control the combination of strings. This relieves AM types of the need to talk about word order; there are no “forward” or “backward” slashes in AM types, and a smaller set of operations. Also, the AM algebra spells out raising and control phenomena more explicitly in the types. 4 Indexed AM terms In this paper, we connect AM terms to the input string w for which we want to produce a graph. We do this in an indexed AM term, exemplified in Fig. 3a. We assume that every elementary as-graph G at a leaf represents the meaning of an individual word token wi in w, and write G[i] to annotate the leaf G with the index i of this token. This induces a connection between the nodes of the AMR and the tokens of the string, in that the label of each node was contributed by the elementary as-graph of exactly one token. We define the head index of a subtree t to be the index of the token which contributed the root of the as-graph to which t evaluates. For a leaf with annotation i, the head index is i; for an APP or MOD node, the head index is the head index of the left child, i.e. of the head argument. We annotate each APP and MOD operation with the head index of the left and right subtree. 4.1 AM dependency trees We can represent indexed AM terms more compactly as AM dependency trees, as shown in Fig. 3b. The nodes of such a dependency tree are the tokens of w. We draw an edge with label f from i to k if there is a node with label f[i, k] in the indexed AM term. For example, the tree in 3b has an edge labeled MODm from 5 (Gsleep) to 6 (Gsoundly) because there is a node in the term in 3a labeled MODm[5, 6]. The same AM dependency tree may represent multiple indexed AM terms, because the order of apply and modify operations is not specified in the dependency tree. However, it can be shown that all well-typed AM terms that map to 1834 APPs Gwant APPo MODm Gsleep Gsoundly Gwriter want ARG0 ARG1 person write ARG0 sleep ARG0 sound manner sleep s ARG0 sound manner (a) (b) (c) (d) want ARG0 ARG1 sleep ARG0 sound manner s Figure 2: (a) An AM-term with its value (b), along with the values for its subexpressions (c) t1 = MODm(Gsleep, Gsound) and (d) t2 = APPo(Gwant, t1). APPs[3,2] Gwant[3] APPo[3,5] MODm[5,6] Gsleep[5] Gsoundly[6] Gwriter[2] (a) 2: Gwriter 6: Gsoundly 4: ⊥ 5: Gsleep APPs APPo IGNORE MODm (b) 1: ⊥ IGNORE 3: Gwant Figure 3: (a) An indexed AM term and (b) an AM dependency tree, linking the term in Fig. 2;a to the sentence “The writer wants to sleep soundly”. the same AM dependency tree evaluate to the same as-graph. We define a well-typed AM dependency tree as one that represents a well-typed AM term. 
Because not all words in the sentence contribute to the AMR, we include a mechanism for ignoring words in the input. As a special case, we allow the constant ⊥, which represents a dummy as-graph (of type ⊥) which we use as the semantic value of words without a semantic value in the AMR. We furthermore allow the edge label IGNORE in an AM dependency tree, where IGNORE(τ1, τ2) = τ1 if τ2 = ⊥and is undefined otherwise; in particular, an AM dependency tree with IGNORE edges is only well-typed if all IGNORE edges point into ⊥nodes. We keep all other operations f(τ1, τ2) as is, i.e. they are undefined if either τ1 or τ2 is ⊥, and never yield ⊥as a result. When reconstructing an AM term from the AM dependency tree, we skip IGNORE edges, such that the subtree below them will not contribute to the overall AMR. 4.2 Converting AMRs to AM terms In order to train a model that parses sentences into AM dependency trees, we need to convert an AMR corpus – in which sentences are annotated with AMRs – into a treebank of AM dependency trees. We do this in three steps: first, we break each AMR up into elementary graphs and identify their roots; second, we assign sources and annotations to make elementary as-graphs out of them; and third, combine them into indexed AM terms. For the first step, an aligner uses hand-written heuristics to identify the string token to which each node in the AMR corresponds (see Section C in the Supplementary Materials for details). We proceed in a similar fashion as the JAMR aligner (Flanigan et al., 2014), i.e. by starting from high-confidence token-node pairs and then extending them until the whole AMR is covered. Unlike the JAMR aligner, our heuristics ensure that exactly one node in each elementary graph is marked as the root, i.e. as the node where other graphs can attach their edges through APP and MOD. When an edge connects nodes of two different elementary graphs, we use the “blob decomposition” algorithm of Groschwitz et al. (2017) to decide to which elementary graph it belongs. For the example AMR in Fig. 2b, we would obtain the graphs in Fig. 1 (without source annotations). Note that ARG edges belong with the nodes at which they start, whereas the “manner” edge in Gsoundly goes with its target. In the second step we assign source names and annotations to the unlabeled nodes of each elementary graph. Note that the annotations are crucial to our system’s ability to generate graphs with reentrancies. We mostly follow the algorithm of Groschwitz et al. (2017), which determines necessary annotations based on the structure of the given graph. The algorithm chooses each source name depending on the incoming edge label. For instance, the two leaves of Gwant can have the source labels S and O because they have incoming edges labeled ARG0 and ARG1. However, the Groschwitz algorithm is not deterministic: It allows object promotion (the sources for an ARG3 edge may be O3, O2, or O), unaccusative subjects (promoting the minimal object to S if the elementary graph contains an ARGi-edge (i > 0) but no ARG0-edge (Perlmutter, 1978)), and passive alternation (swapping O and S). To make our as-graphs more consistent, we prefer constants that promote objects as far as possible, use unaccusative subjects, and no passive alternation, but still allow constants that do not satisfy these conditions if necessary. This increased our Smatch score significantly. 
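As a rough illustration of the preferences just described, the toy sketch below maps the ARGx edge labels of one elementary graph to source names, preferring maximal object promotion and unaccusative subjects. It is our own simplification for exposition, not the Groschwitz et al. (2017) algorithm, and it ignores passive alternation and non-ARG edges.

```python
# Toy sketch: assign source names to one elementary graph's argument slots,
# preferring object promotion and unaccusative subjects (illustration only).
def assign_sources(arg_edges):
    """arg_edges: incoming ARG labels of the unlabeled nodes, e.g. ["ARG0", "ARG1"]."""
    indices = sorted(int(e[3:]) for e in arg_edges)        # e.g. [0, 1] or [1, 3]
    sources = {}
    objects = [i for i in indices if i > 0]
    if 0 in indices:
        sources["ARG0"] = "s"
    elif objects:                                          # unaccusative: promote the
        sources[f"ARG{objects.pop(0)}"] = "s"              # minimal object to S
    for rank, i in enumerate(objects):                     # promote objects: O, O2, ...
        sources[f"ARG{i}"] = "o" if rank == 0 else f"o{rank + 1}"
    return sources

print(assign_sources(["ARG0", "ARG1"]))         # {'ARG0': 's', 'ARG1': 'o'}
print(assign_sources(["ARG1", "ARG3"]))         # {'ARG1': 's', 'ARG3': 'o'}
print(assign_sources(["ARG0", "ARG2", "ARG3"])) # {'ARG0': 's', 'ARG2': 'o', 'ARG3': 'o2'}
```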
Finally, we choose an arbitrary AM dependency tree that combines the chosen elementary as-graphs into the annotated AMR; in practice, the differences between the trees seem to be negligible. (Indeed, we conjecture that for a fixed set of constants and a fixed AMR, there is only one dependency tree.)

5 Training
We can now model the AMR parsing task as the problem of computing the best well-typed AM dependency tree t for a given sentence w. Because t is well-typed, it can be decoded into an (indexed) AM term and thence evaluated to an as-graph.
We describe t in terms of the elementary as-graphs G[i] it uses for each token i and of its edges f[i, k]. We assume a node-factored, edge-factored model for the score ω(t) of t:

    ω(t) = Σ_{1 ≤ i ≤ n} ω(G[i]) + Σ_{f[i,k] ∈ E} ω(f[i, k]),   (1)

where the edge weight further decomposes into the sum ω(f[i, k]) = ω(i → k) + ω(f | i → k) of a score ω(i → k) for the presence of an edge from i to k and a score ω(f | i → k) for this edge having label f. Our aim is to compute the well-typed t with the highest score.
We present three models for ω: one for the graph scores and two for the edge scores. All of these are based on a two-layer bidirectional LSTM, which reads inputs x = (x1, . . . , xn) token by token, concatenating the hidden states of the forward and the backward LSTMs in each layer. On the second layer, we thus obtain vector representations vi = BiLSTM(x, i) for the individual input tokens (see Fig. 4). Our models differ in the inputs x and the way they predict scores from the vi.
[Figure 4: Architecture of the neural taggers.]

5.1 Supertagging for elementary as-graphs
We construe the prediction of the as-graphs G[i] for each input position i as a supertagging task (Lewis et al., 2016). The supertagger reads inputs xi = (wi, pi, ci), where wi is the word token, pi its POS tag, and ci a character-based LSTM encoding of wi. We use pretrained GloVe embeddings (Pennington et al., 2014) concatenated with learned embeddings for wi, and learned embeddings for pi. To predict the score for each elementary as-graph out of a set of K options, we add a K-dimensional output layer as follows:

    ω(G[i]) = log softmax(W · vi + b)

and train the neural network using a cross-entropy loss function. This maximizes the likelihood of the elementary as-graphs in the training data.

5.2 Kiperwasser & Goldberg edge model
Predicting the edge scores amounts to a dependency parsing problem. We chose the dependency parser of Kiperwasser and Goldberg (2016), henceforth K&G, to learn them, because of its accuracy and its fit with our overall architecture. The K&G parser scores the potential edge from i to k and its label from the concatenation of vi and vk:

    MLP_θ(v) = W2 · tanh(W1 · v + b1) + b2
    ω(i → k) = MLP_E(vi ∘ vk)
    ω(f | i → k) = MLP_LBL(vi ∘ vk)

We use inputs xi = (wi, pi, τi), including the type τi of the supertag G[i] at position i, using trained embeddings for all three. At evaluation time, we use the best-scoring supertag according to the model of Section 5.1. At training time, we sample from q, where q(τi) = (1 − δ) + δ · p(τi | pi, pi−1) and q(τ) = δ · p(τ | pi, pi−1) for any τ ≠ τi, with δ a hyperparameter controlling the bias towards the aligned supertag. We train the model using K&G's original DyNet implementation.
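The edge scoring above boils down to a bidirectional LSTM encoder followed by small MLPs over concatenated token vectors. The following PyTorch sketch is our own minimal re-implementation of those equations with assumed dimensions; it is not the authors' DyNet or PyTorch code, and it takes already-embedded tokens as input.

```python
# Minimal PyTorch sketch of the factored edge scorer:
#   omega(i -> k)     = MLP_E([v_i ; v_k])
#   omega(f | i -> k) = MLP_LBL([v_i ; v_k])
import torch
import torch.nn as nn

class EdgeScorer(nn.Module):
    def __init__(self, emb_dim=128, hidden=256, mlp_hidden=256, num_labels=10):
        super().__init__()
        # Two-layer BiLSTM encoder over the embedded tokens.
        self.encoder = nn.LSTM(emb_dim, hidden, num_layers=2,
                               bidirectional=True, batch_first=True)
        def mlp(out_dim):
            # MLP(v) = W2 * tanh(W1 * v + b1) + b2, applied to [v_i ; v_k]
            return nn.Sequential(nn.Linear(4 * hidden, mlp_hidden), nn.Tanh(),
                                 nn.Linear(mlp_hidden, out_dim))
        self.edge_mlp = mlp(1)             # omega(i -> k): one score per pair
        self.label_mlp = mlp(num_labels)   # omega(f | i -> k): one score per label f

    def forward(self, embedded):           # embedded: (batch, n, emb_dim)
        v, _ = self.encoder(embedded)      # v: (batch, n, 2*hidden)
        n = v.size(1)
        heads = v.unsqueeze(2).expand(-1, n, n, -1)         # v_i at position (i, k)
        deps = v.unsqueeze(1).expand(-1, n, n, -1)          # v_k at position (i, k)
        pairs = torch.cat([heads, deps], dim=-1)            # (batch, n, n, 4*hidden)
        edge_scores = self.edge_mlp(pairs).squeeze(-1)      # (batch, n, n)
        label_scores = self.label_mlp(pairs)                # (batch, n, n, num_labels)
        return edge_scores, label_scores

# Example: score all potential edges for a batch of 2 sentences of length 6.
scorer = EdgeScorer()
edge_scores, label_scores = scorer(torch.randn(2, 6, 128))
print(edge_scores.shape, label_scores.shape)   # (2, 6, 6) and (2, 6, 6, 10)
```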
The K&G algorithm uses a hinge loss function, which maximizes the score difference between the gold dependency tree and the best predicted dependency tree, and therefore requires parsing each training instance in each iteration. Because the AM dependency trees are highly non-projective, we replaced the projective parser used in the off-the-shelf implementation with the Chu-Liu-Edmonds algorithm implemented in the TurboParser (Martins et al., 2010), improving the LAS on the development set by 30 points.

5.3 Local edge model

We also trained a local edge score model, which uses a cross-entropy rather than a hinge loss and therefore avoids the repeated parsing at training time. Instead, we follow the intuition that every node in a dependency tree has at most one incoming edge, and train the model to score the correct incoming edge as high as possible. This model takes inputs xi = (wi, pi). We define the edge and edge label scores as in Section 5.2, with tanh replaced by ReLU. We further add a learned parameter v⊥ for the "LSTM embedding" of a nonexistent node, obtaining scores ω(⊥ → k) for k having no incoming edge.

To train ω(i → k), we collect all scores for edges ending at the same node k into a vector ω(• → k). We then minimize the cross-entropy loss for the gold edge into k under softmax(ω(• → k)), maximizing the likelihood of the gold edges. To train the labels ω(f | i → k), we simply minimize the cross-entropy loss of the actual edge labels f of the edges which are present in the gold AM dependency trees. The PyTorch code for this and the supertagger is available at bitbucket.org/tclup/amr-dependency.

6 Decoding

Given learned estimates for the graph and edge scores, we now tackle the challenge of computing the best well-typed dependency tree t for the input string w, under the score model (equation (1)). The requirement that t must be well-typed is crucial to ensure that it can be evaluated to an AMR graph, but, as we show in the Supplementary Materials (Section A), it makes the decoding problem NP-complete. Thus, an exact algorithm is not practical. In this section, we develop two different approximation algorithms for AM dependency parsing: one which assumes the (unlabeled) dependency tree structure as known, and one which assumes that the AM dependency tree is projective.

6.1 Projective decoder

The projective decoder assumes that the AM dependency tree is projective, i.e. has no crossing dependency edges. Because of this assumption, it can recursively combine adjacent substrings using dynamic programming. The algorithm is shown in Fig. 5 as a parsing schema (Shieber et al., 1995), which derives items of the form ([i, k], r, τ) with scores s. An item represents a well-typed derivation of the substring from i to k with head index r, which evaluates to an as-graph of type τ.

Figure 5: Rules (Init, Skip-R, Skip-L, Arc-R[f], Arc-L[f]) for the projective decoder.

The parsing schema consists of three types of rules. First, the Init rule generates an item for each graph fragment G[i] that the supertagger predicted for the token wi, along with the score and type of that graph fragment.
Second, given items for adjacent substrings [i, j] and [j, k], the Arc rules apply an operation f to combine the indexed AM terms for the two substrings, with Arc-R making the left-hand substring the head and the right-hand substring the argument or modifier, and Arc-L the other way around. We ensure that the result is well-typed by requiring that the types can be combined with f. Finally, the Skip rules allow us to extend a substring such that it covers tokens which do not correspond to a graph fragment (i.e., their AM term is ⊥), introducing IGNORE edges. After all possible items have been derived, we extract the best well-typed tree from the item of the form ([1, n], r, τ) with the highest score, where τ = [ ].

Because we keep track of the head indices, the projective decoder is a bilexical parsing algorithm, and shares a parsing complexity of O(n^5) with other bilexical algorithms such as the Collins parser. It could be improved to a complexity of O(n^4) using the algorithm of Eisner and Satta (1999).

6.2 Fixed-tree decoder

The fixed-tree decoder computes the best unlabeled dependency tree tr for w, using the edge scores ω(i → k), and then computes the best AM dependency tree for w whose unlabeled version is tr. The Chu-Liu-Edmonds algorithm produces a forest of dependency trees, which we want to combine into tr. We choose the tree whose root r has the highest score for being the root of the AM dependency tree and make the roots of all others children of r. At this point, the shape of tr is fixed.

Figure 6: Rules (Init and Edge[f]) for the fixed-tree decoder.

We choose supertags for the nodes and edge labels for the edges by traversing tr bottom-up, computing types for the subtrees as we go along. Formally, we apply the parsing schema in Fig. 6. It uses items of the form (i, C, τ) : s, where 1 ≤ i ≤ n is a node of tr, C is the set of children of i for which we have already chosen edge labels, and τ is a type. We write Ch(i) for the set of children of i in tr. The Init rule generates an item for each graph that the supertagger can assign to each token i in w, ensuring that every token is also assigned ⊥ as a possible supertag. The Edge rule labels an edge from a parent node i in tr to one of its children k, whose children already have edge labels. As above, this rule ensures that a well-typed AM dependency tree is generated by locally checking the types. In particular, if all types τ2 that can be derived for k are incompatible with τ1, we fall back to an item for k with τ2 = ⊥ (which always exists), along with an IGNORE edge from i to k. The complexity of this algorithm is O(n · 2^d · d), where d is the maximal arity of the nodes in tr.
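As a rough illustration of the bottom-up traversal behind the Edge rule, here is a schematic sketch of the fixed-tree decoder. The functions combine and score and the supertag lists are assumed interfaces, not the actual implementation, and the sketch only tracks the best score per type, omitting the back-pointers needed to read off the chosen supertags and edge labels.

```python
def fixed_tree_decode(children, supertag_options, operations, combine, score, root=0):
    """Schematic fixed-tree decoder: the unlabeled tree is fixed, and supertags
    and edge labels are chosen bottom-up, keeping for each node the best score
    per resulting type.

    children[i]         -- children of node i in the fixed tree t_r
    supertag_options[i] -- list of (type, score) pairs from the supertagger,
                           assumed to include the bottom type for IGNORE fallback
    operations          -- candidate edge operations f (apply, modify, IGNORE)
    combine(t1, f, t2)  -- resulting type of f(t1, t2), or None if undefined
    score(f, i, k)      -- edge score omega(f[i, k])
    """
    def best_items(i):
        items = {t: s for (t, s) in supertag_options[i]}   # Init rule
        for k in children[i]:
            child_items = best_items(k)
            new_items = {}
            for t1, s1 in items.items():                   # Edge rule
                for t2, s2 in child_items.items():
                    for f in operations:
                        t = combine(t1, f, t2)
                        if t is None:
                            continue
                        cand = s1 + s2 + score(f, i, k)
                        if cand > new_items.get(t, float("-inf")):
                            new_items[t] = cand
            items = new_items
        return items

    return best_items(root)  # type -> best derivation score at the root
```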
7 Evaluation

We evaluate our models on the LDC2015E86 and LDC2017T10 datasets (the latter is available at https://catalog.ldc.upenn.edu/LDC2017T10 and identical to LDC2016E25; henceforth "2015" and "2017"). Technical details and hyperparameters of our implementation can be found in Sections B to D of the Supplementary Materials.

7.1 Training data

The original LDC datasets pair strings with AMRs. We convert each AMR in the training and development set into an AM dependency tree, using the procedure of Section 4.2. About 10% of the training instances cannot be split into elementary as-graphs by our aligner; we removed these from the training data. Of the remaining AM dependency trees, 37% are non-projective.

Furthermore, the AM algebra is designed to handle short-range reentrancies, modeling grammatical phenomena such as control and coordination, as in the derivation in Fig. 2. It cannot easily handle the long-range reentrancies in AMRs which are caused by coreference, a non-compositional phenomenon (as Damonte et al. (2017) comment, "A valid criticism of AMR is that these two reentrancies are of a completely different type, and should not be collapsed together"). We remove such reentrancies from our training data (about 60% of the roughly 20,000 reentrant edges). Despite this, our model performs well on reentrant edges (see Table 2).

7.2 Pre- and postprocessing

We use simple pre- and postprocessing steps to handle rare words and some AMR-specific patterns. In AMRs, named entities follow the pattern shown in Fig. 7. Here the named entity is of type "person", has a name edge to a "name" node whose children spell out the tokens of "Agatha Christie", and a link to a wiki entry. Before training, we replace each "name" node, its children, and the corresponding span in the sentence with a special NAME token, and we completely remove wiki edges. In this example, this leaves us with only a "person" and a NAME node. Further, we replace numbers and some date patterns with NUMBER and DATE tokens. On the training data this is straightforward, since names and dates are explicitly annotated in the AMR. At evaluation time, we detect dates and numbers with regular expressions, and names with Stanford CoreNLP (Manning et al., 2014). We also use Stanford CoreNLP for our POS tags.

Each elementary as-graph generated by the procedure of Section 4.2 has a unique node whose label corresponds most closely to the aligned word (e.g. the "want" node in Gwant and the "write" node in Gwriter). We replace these node labels with LEX in preprocessing, reducing the number of different elementary as-graphs from 28730 to 2370. We factor the supertagger model of Section 5.1 such that the unlexicalized version of G[i] and the label for LEX are predicted separately. At evaluation time, we re-lexicalize all LEX nodes in the predicted AMR. For words that were frequent in the training data (at least 10 times), we take the supertagger's prediction for the label. For rarer words, we use simple heuristics, explained in the Supplementary Materials (Section D). For names, we look up the name nodes, with their children and wiki entries, observed for the name string in the training data; for unseen names we use the literal tokens as the name and no wiki entry. Similarly, we collect the type for each encountered name (e.g. "person" for "Agatha Christie"), and correct it in the output if the tagger made a different prediction. We recover dates and numbers straightforwardly.

7.3 Supertagger accuracy

All of our models rely on the supertagger to predict elementary as-graphs; they differ only in the edge scores. We evaluated the accuracy of the supertagger on the converted development set (in which each token has a supertag) of the 2015 dataset, and achieved an accuracy of 73%. The correct supertag is within the supertagger's 4 best predictions for 90% of the tokens, and within the 10 best for 95%. Interestingly, supertags that introduce grammatical reentrancies are predicted quite reliably, although they are relatively rare in the training data.
The elementary as-graph for subject control verbs (see Gwant in Fig. 1) accounts for only 0.8% of supertags in the training data, yet 58% of its occurrences in the development data are predicted correctly (84% in 4-best). The supertag for VP coordination (with type [OP1[S], OP2[S]]) makes up 0.4% of the training data, but 74% of its occurrences are recognized correctly (92% in 4-best). Thus the prediction of informative types for individual words is feasible.

7.4 Comparison to Baselines

Type-unaware fixed-tree baseline. The fixed-tree decoder is built to ensure well-typedness of the predicted AM dependency trees. To investigate to what extent this is required, we consider a baseline which just adds the individually highest-scoring supertags and edge labels to the unlabeled dependency tree tu, ignoring types. This leads to AM dependency trees which are not well-typed for 75% of the sentences (we fall back to the largest well-typed subtree in these cases). Thus, an off-the-shelf dependency parser can reliably predict the tree structure of the AM dependency tree, but correct supertag and edge label assignment requires a decoder which takes the types into account.

JAMR-style baseline. Our elementary as-graphs differ from the elementary graphs used in JAMR-style algorithms in that they contain explicit source nodes, which restrict the way in which they can be combined with other as-graphs. We investigate the impact of this choice by implementing a strong JAMR-style baseline. We adapt the AMR-to-dependency conversion of Section 4.2 by removing all unlabeled nodes with source names from the elementary graphs. For instance, the graph Gwant in Fig. 1 now only consists of a single "want" node. We then aim to directly predict AMR edges between these graphs, using a variant of the local edge scoring model of Section 5.3 which learns scores for each edge in isolation. (The assumption of the original local model, that each node has only one incoming edge, does not apply here.) When parsing a string, we choose the highest-scoring supertag for each word; there are only 628 different supertags in this setting, and 1-best supertagging accuracy is high at 88%. We then follow the JAMR parsing algorithm by predicting all edges whose score is over a threshold (we found -0.02 to be optimal) and then adding edges until the graph is connected. Because we do not predict which node is the root of the AMR, we evaluated this model as if it always predicted the root correctly, overestimating its score slightly.

7.5 Results

Table 1 shows the Smatch scores (Cai and Knight, 2013) of our models, compared to a selection of previously published results. Our results are averages over 4 runs with 95% confidence intervals (JAMR-style baselines are single runs).

Table 1: 2015 & 2017 test set Smatch scores
Model                                    2015        2017
Ours
  local edge + projective decoder        70.2±0.3    71.0±0.5
  local edge + fixed-tree decoder        69.4±0.6    70.2±0.5
  K&G edge + projective decoder          68.6±0.7    69.4±0.4
  K&G edge + fixed-tree decoder          69.6±0.4    69.9±0.2
Baselines
  fixed-tree (type-unaware)              26.0±0.6    27.9±0.6
  JAMR-style                             66.1        66.2
Previous work
  CAMR (Wang et al., 2015)               66.5        –
  JAMR (Flanigan et al., 2016)           67          –
  Damonte et al. (2017)                  64          –
  van Noord and Bos (2017b)              68.5        71.0
  Foland and Martin (2017)               70.7        –
  Buys and Blunsom (2017)                –           61.9

On the 2015 dataset, our best models (local + projective, K&G + fixed-tree) outperform all previous work, with the exception of the Foland and Martin (2017) model; on the 2017 set we match state-of-the-art results (though note that van Noord and Bos (2017b) use 100k additional sentences of silver data).
The fixed-tree decoder seems to work well with either edge model, but performance of the projective decoder drops with the K&G edge scores. It may be that, while the hinge loss used in the K&G edge scoring model is useful for finding the correct unlabeled dependency tree in the fixed-tree decoder, scores for bad edges (which are never used when computing the hinge loss) are not trained accurately. Thus such edges may be erroneously used by the projective decoder.

As expected, the type-unaware baseline has low recall, due to its inability to produce well-typed trees. The fact that our models outperform the JAMR-style baseline so clearly is an indication that they indeed gain some of their accuracy from the type information in the elementary as-graphs, confirming our hypothesis that an explicit model of the compositional structure of the AMR can help the parser learn an accurate model.

Table 2 analyzes the performance of our two best systems (PD = projective, FTD = fixed-tree) in more detail, using the categories of Damonte et al. (2017), and compares them to Wang's, Flanigan's, and Damonte's AMR parsers on the 2015 set, and to van Noord and Bos (2017b) on the 2017 set. (Foland and Martin (2017) did not publish such results.)

Table 2: Details for the LDC2015E86 and LDC2017T10 test sets
                     2015                          2017
Metric        W'15  F'16  D'17  PD  FTD      vN'17  PD  FTD
Smatch          67    67    64  70   70         71  71   70
Unlabeled       69    69    69  73   73         74  74   74
No WSD          64    68    65  71   70         72  72   70
Named Ent.      75    79    83  79   78         79  78   77
Wikification     0    75    64  71   72         65  71   71
Negations       18    45    48  52   52         62  57   55
Concepts        80    83    83  83   84         82  84   84
Reentrancies    41    42    41  46   44         52  49   46
SRL             60    60    56  63   61         66  64   62

Figure 7: A named entity (the AMR subgraph for "Agatha Christie", with "person", "name", and wiki nodes).

The good scores we achieve on reentrancy identification, despite removing a large number of reentrant edges from the training data, indicate that our elementary as-graphs successfully encode phenomena such as control and coordination.

The projective decoder is given the 4 best supertags for each token, and the fixed-tree decoder the 6 best. We trained the supertagging and edge scoring models of Section 5 separately; joint training did not help. Not sampling the supertag types τi during training of the K&G model, removing them from the input, and removing the character-based LSTM encodings ci from the input of the supertagger all reduced our models' accuracy.

7.6 Differences between the parsers

Although the Smatch scores for our two best models are close, they sometimes struggle with different sentences. The fixed-tree parser is at the mercy of the fixed tree; the projective parser cannot produce non-projective AM dependency trees. It is remarkable that the projective parser does so well, given the prevalence of non-projective trees in the training data. Looking at its analyses, we find that it frequently manages to find a projective tree which yields an (almost) correct AMR, by choosing supertags with unusual types, and by using modify rather than apply (or vice versa).

8 Conclusion

We presented an AMR parser which applies methods from supertagging and dependency parsing to map a string into a well-typed AM term, which it then evaluates into an AMR. The AM term represents the compositional semantic structure of the AMR explicitly, allowing us to use standard tree-based parsing techniques. The projective parser currently computes the complete parse chart. In future work, we will speed it up through the use of pruning techniques.
We will also look into more principled methods for splitting the AMRs into elementary as-graphs to replace our hand-crafted heuristics. In particular, advanced methods for alignments, as in Lyu and Titov (2018), seem promising. Overcoming the need for heuristics also seems to be a crucial ingredient for applying our method to other semantic representations. Acknowledgements We would like to thank the anonymous reviewers for their comments. We thank Stefan Gr¨unewald for his contribution to our PyTorch implementation, and want to acknowledge the inspiration obtained from Nguyen et al. (2017). We also extend our thanks to the organizers and participants of the Oslo CAS Meaning Construction workshop on Universal Dependencies. This work was supported by the DFG grant KO 2916/2-1 and a Macquarie University Research Excellence Scholarship for Jonas Groschwitz. 1840 References Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG Semantic Parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for Sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse. Jan Buys and Phil Blunsom. 2017. Oxford at SemEval2017 task 9: Neural AMR parsing with pointeraugmented attention. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). pages 914–919. Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Bruno Courcelle and Joost Engelfriet. 2012. Graph Structure and Monadic Second-Order Logic, a Language Theoretic Approach. Cambridge University Press. Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for abstract meaning representation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics. Yantao Du, Fan Zhang, Weiwei Sun, and Xiaojun Wan. 2014. Peking: Profiling syntactic tree parsing techniques for semantic graph parsing. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). Jason Eisner and Giorgio Satta. 1999. Efficient parsing for bilexical context-free grammars and head automaton grammars. In Proceedings of the 37th ACL. Jeffrey Flanigan, Chris Dyer, Noah A Smith, and Jaime Carbonell. 2016. CMU at SemEval-2016 task 8: Graph-based AMR parsing with infinite ramp loss. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). William Foland and James H. Martin. 2017. Abstract Meaning Representation Parsing using LSTM Recurrent Neural Networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Jonas Groschwitz, Meaghan Fowlie, Mark Johnson, and Alexander Koller. 2017. A constrained graph algebra for semantic parsing with amrs. 
In Proceedings of the 12th International Conference on Computational Semantics (IWCS). Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations. Transactions of the Association for Computational Linguistics 4:313–327. Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 1516–1526. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higherorder unification. In Proceedings of the 2010 conference on empirical methods in natural language processing. Association for Computational Linguistics, pages 1223–1233. Mike Lewis, Kenton Lee, and Luke Zettlemoyer. 2016. LSTM CCG Parsing. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Chunchuan Lyu and Ivan Titov. 2018. Amr parsing as graph prediction with latent alignment. In Proceedings of the 56th Annual Conference of the Association for Computational Linguistics (ACL). Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Andr´e F. T. Martins, Noah A. Smith, Eric P. Xing, Pedro M. Q. Aguiar, and M´ario A. T. Figueiredo. 2010. Turbo parsers: Dependency parsing by approximate variational inference. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Jonathan May. 2016. Semeval-2016 task 8: Meaning representation parsing. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). Association for Computational Linguistics. Jonathan May and Jay Priyadarshi. 2017. Semeval2017 task 9: Abstract meaning representation parsing and generation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). Association for Computational Linguistics. 1841 Dat Quoc Nguyen, Mark Dras, and Mark Johnson. 2017. A novel neural network model for joint POS tagging and graph-based dependency parsing. arXiv preprint arXiv:1705.05952 . Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A synchronous hyperedge replacement grammar based approach for amr parsing. In Proceedings of the 19th Conference on Computational Language Learning. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). David M Perlmutter. 1978. Impersonal passives and the unaccusative hypothesis. In annual meeting of the Berkeley Linguistics Society. volume 4, pages 157–190. Siva Reddy, Oscar T¨ackstr¨om, Slav Petrov, Mark Steedman, and Mirella Lapata. 2017. Universal semantic parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 89–101. http://aclweb.org/anthology/D17-1009. Stuart Shieber, Yves Schabes, and Fernando Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming 24(1–2):3– 36. Rik van Noord and Johan Bos. 2017a. 
Dealing with co-reference in neural semantic parsing. In Proceedings of the 2nd Workshop on Semantic Deep Learning (SemDeep-2). Rik van Noord and Johan Bos. 2017b. Neural semantic parsing by character-based translation: Experiments with abstract meaning representations. Computational Linguistics in the Netherlands Journal . Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. 2014. Grammar as a foreign language. CoRR abs/1412.7449. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015. A Transition-based Algorithm for AMR Parsing. In Proceedings of NAACL-HLT. Yuchen Zhang, Panupong Pasupat, and Percy Liang. 2017. Macro grammars and holistic triggering for efficient semantic parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1214–1223. http://aclweb.org/anthology/D17-1125.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1842–1852 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1842 Sequence-to-sequence Models for Cache Transition Systems Xiaochang Peng1, Linfeng Song1, Daniel Gildea1, Giorgio Satta2 1University of Rochester 2University of Padua {xpeng,lsong10,gildea}@cs.rochester.edu, [email protected] Abstract In this paper, we present a sequenceto-sequence based approach for mapping natural language sentences to AMR semantic graphs. We transform the sequence to graph mapping problem to a word sequence to transition action sequence problem using a special transition system called a cache transition system. To address the sparsity issue of neural AMR parsing, we feed feature embeddings from the transition state to provide relevant local information for each decoder state. We present a monotonic hard attention model for the transition framework to handle the strictly left-to-right alignment between each transition state and the current buffer input focus. We evaluate our neural transition model on the AMR parsing task, and our parser outperforms other sequence-to-sequence approaches and achieves competitive results in comparison with the best-performing models.1 1 Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013) is a semantic formalism where the meaning of a sentence is encoded as a rooted, directed graph. Figure 1 shows an example of an AMR in which the nodes represent the AMR concepts and the edges represent the relations between the concepts. AMR has been used in various applications such as text summarization (Liu et al., 2015), sentence compression (Takase et al., 2016), and event extraction (Huang et al., 2016). 1The implementation of our parser is available at https://github.com/xiaochang13/CacheTransition-Seq2seq want-01 person go-01 ARG0 ARG0 ARG1 name “John” name op1 Figure 1: An example of AMR graph representing the meaning of: “John wants to go” The task of AMR graph parsing is to map natural language strings to AMR semantic graphs. Different parsers have been developed to tackle this problem (Flanigan et al., 2014; Wang et al., 2015b,a; Peng et al., 2015; Artzi et al., 2015; Pust et al., 2015; van Noord and Bos, 2017). On the other hand, due to the limited amount of labeled data and the large output vocabulary, the sequence-to-sequence model has not been very successful on AMR parsing. Peng et al. (2017) propose a linearization approach that encodes labeled graphs as sequences. To address the data sparsity issue, low-frequency entities and tokens are mapped to special categories to reduce the vocabulary size for the neural models. Konstas et al. (2017) use self-training on a huge amount of unlabeled text to lower the out-of-vocabulary rate. However, the final performance still falls behind the best-performing models. The best performing AMR parsers model graph structures directly. One approach to modeling graph structures is to use a transition system to build graphs step by step, as shown by the system 1843 of Wang and Xue (2017), which is currently the top performing system. This raises the question of whether the advantages of neural and transitionbased system can be combined, as for example with the syntactic parser of Dyer et al. (2015), who use stack LSTMs to capture action history information in the transition state of the transition system. 
Ballesteros and Al-Onaizan (2017) apply stack-LSTM to transition-based AMR parsing and achieve competitive results, which shows that local transition state information is important for predicting transition actions. Instead of linearizing the target AMR graph to a sequence structure, Buys and Blunsom (2017) propose a sequence-to-action-sequence approach where the reference AMR graph is replaced with an action derivation sequence by running a deterministic oracle algorithm on the training sentence, AMR graph pairs. They use a separate alignment probability to explicitly model the hard alignment from graph nodes to sentence tokens in the buffer. Gildea et al. (2018) propose a special transition framework called a cache transition system to generate the set of semantic graphs. They adapt the stack-based parsing system by adding a working set, which they refer to as a cache, to the traditional stack and buffer. Peng et al. (2018) apply the cache transition system to AMR parsing and design refined action phases, each modeled with a separate feedforward neural network, to deal with some practical implementation issues. In this paper, we propose a sequence-to-actionsequence approach for AMR parsing with cache transition systems. We want to take advantage of the sequence-to-sequence model to encode wholesentence context information and the history action sequence, while using the transition system to constrain the possible output. The transition system can also provide better local context information than the linearized graph representation, which is important for neural AMR parsing given the limited amount of data. More specifically, we use bi-LSTM to encode two levels of input information for AMR parsing: word level and concept level, each refined with more general category information such as lemmatization, POS tags, and concept categories. We also want to make better use of the complex transition system to address the data sparsity issue for neural AMR parsing. We extend the hard attention model of Aharoni and Goldberg (2017), which deals with the nearly-monotonic alignment in the morphological inflection task, to the more general scenario of transition systems where the input buffer is processed from left-to-right. When we process the buffer in this ordered manner, the sequence of target transition actions are also strictly aligned left-to-right according to the input order. On the decoder side, we augment the prediction of output action with embedding features from the current transition state. Our experiments show that encoding information from the transition state significantly improves sequenceto-sequence models for AMR parsing. 2 Cache Transition Parser We adopt the transition system of Gildea et al. (2018), which has been shown to have good coverage of the graphs found in AMR. A cache transition parser consists of a stack, a cache, and an input buffer. The stack is a sequence σ of (integer, concept) pairs, as explained below, with the topmost element always at the rightmost position. The buffer is a sequence of ordered concepts β containing a suffix of the input concept sequence, with the first element to be read as a newly introduced concept/vertex of the graph. (We use the terms concept and vertex interchangeably in this paper.) Finally, the cache is a sequence of concepts η = [v1, . . . , vm]. The element at the leftmost position is called the first element of the cache, and the element at the rightmost position is called the last element. 
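As a minimal sketch of the data structures just described (all names below are ours, not the paper's implementation), the parser state can be pictured as follows; the initialization with "$" placeholders corresponds to the initial configuration given below.

```python
class CacheTransitionState:
    """Sketch of a cache transition parser state: stack, cache of size m, buffer."""

    def __init__(self, concepts, m):
        self.m = m
        self.stack = []               # sequence sigma of (cache index, concept) pairs
        self.cache = ["$"] * m        # sequence eta; leftmost is the first element
        self.buffer = list(concepts)  # sequence beta, consumed left to right
        self.edges = set()            # the partial graph G_p built so far

    def is_final(self):
        return not self.stack and not self.buffer and self.cache == ["$"] * self.m
```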
Operationally, the functioning of the parser can be described in terms of configurations and transitions. A configuration of our parser has the form: C = (σ, η, β, Gp) where σ, η and β are as described above, and Gp is the partial graph that has been built so far. The initial configuration of the parser is ([], [$, . . . , $], [c1, . . . , cn], ∅), meaning that the stack and the partial graph are initially empty, and the cache is filled with m occurrences of the special symbol $. The buffer is initialized with all the graph vertices constrained by the order of the input sentence. The final configuration is ([], [$, . . . , $], [], G), where the stack and the cache are as in the initial configuration and the buffer is empty. The constructed graph is the target AMR graph. 1844 stack cache buffer edges actions taken [] [$, $, $] [Per, want-01, go-01] ∅ — [1, $] [$, $, Per] [want-01, go-01] ∅ Shift; PushIndex(1) [1, $] [$, $, Per] [want-01, go-01] ∅ Arc(1, -, NULL); Arc(2, -, NULL) [1, $, 1, $] [$, Per, want-01] [go-01] ∅ Shift; PushIndex(1) [1, $, 1, $] [$, Per, want-01] [go-01] E1 Arc(1, -, NULL); Arc(2, L, ARG0) [1, $, 1, $, 1, $] [Per, want-01, go-01] [] E1 Shift; PushIndex(1) [1, $, 1, $, 1, $] [Per, want-01, go-01] [] E2 Arc(1, L, ARG0); Arc(2, R, ARG1) [1, $, 1, $] [$, Per, want-01 ] [] E2 Pop [1, $] [$, $, Per] [] E2 Pop [] [$, $, $] [] E2 Pop Figure 2: Example run of the cache transition system constructing the graph for the sentence “John wants to go” with cache size of 3. The left four columns show the parser configurations after taking the actions shown in the last column. E1 = {(Per, want-01, L-ARG0)}, E2 = {(Per, want-01, L-ARG0), (Per, go-01, L-ARG0), (want-01, go-01, R-ARG1)}. In the first step, which is called concept identification, we map the input sentence w1:n′ = w1, . . . , wn′ to a sequence of concepts c1:n = c1, . . . , cn. We decouple the problem of concept identification from the transition system and initialize the buffer with a recognized concept sequence from another classifier, which we will introduce later. As the sequence-to-sequence model uses all possible output actions as the target vocabulary, this can significantly reduce the target vocabulary size. The transitions of the parser are specified as follows. 1. Pop pops a pair (i, v) from the stack, where the integer i records the position in the cache that it originally came from. We place concept v in position i in the cache, shifting the remainder of the cache one position to the right, and discarding the last element in the cache. 2. Shift signals that we will start processing the next input concept, which will become a new vertex in the output graph. 3. PushIndex(i) shifts the next input concept out of the buffer and moves it into the last position of the cache. We also take out the concept vi appearing at position i in the cache and push it onto the stack σ, along with the integer i recording its original position in the cache.2 2Our transition design is different from Peng et al. (2018) in two ways: the PushIndex phase is initiated before making all the arc decisions; the newly introduced concept is placed at the last cache position instead of the leftmost buffer position, which essentially increases the cache size by 1. 4. Arc(i, d, l) builds an arc with direction d and label l between the rightmost concept and the i-th concept in the cache. The label l is NULL if no arc is made and we use the action NOARC in this case. Otherwise we decompose the arc decision into two actions ARC and d-l. 
We consider all arc decisions between the rightmost cache concept and each of the other concepts in the cache. We can consider this phase as first making a binary decision whether there is an arc, and then predicting the label in case there is one, between each concept pair. Given the sentence “John wants to go” and the recognized concept sequence “Per want-01 go-01” (person name category Per for “John”), our cache transition parser can construct the AMR graph shown in Figure 1 using the run shown in Figure 2 with cache size of 3. 2.1 Oracle Extraction Algorithm We use the following oracle algorithm (Nivre, 2008) to derive the sequence of actions that leads to the gold AMR graph for a cache transition parser with cache size m. The correctness of the oracle is shown by Gildea et al. (2018). Let EG be the set of edges of the gold graph G. We maintain the set of vertices that is not yet shifted into the cache as S, which is initialized with all vertices in G. The vertices are ordered according to their aligned position in the word sequence and the unaligned vertices are listed according to their order in the depth-first traversal of the graph. The oracle algorithm can look into 1845 Figure 3: Sequence-to-sequence model with soft attention, encoding a word sequence and concept sequence separately by two BiLSTM encoders. EG to decide which transition to take next, or else to decide that it should fail. This decision is based on the mutually exclusive rules listed below. 1. ShiftOrPop phase: the oracle chooses transition Pop, in case there is no edge (vm, v) in EG such that vertex v is in S, or chooses transition Shift and proceeds to the next phase. 2. PushIndex phase: in this phase, the oracle first chooses a position i (as explained below) in the cache to place the candidate concept and removes the vertex at this position and places its index, vertex pair onto the stack. The oracle chooses transition PushIndex(i) and proceeds to the next phase. 3. ArcBinary, ArcLabel phases: between the rightmost cache concept and each concept in the cache, we make a binary decision about whether there is an arc between them. If there is an arc, the oracle chooses its direction and label. After arc decisions to m−1 cache concepts are made, we jump to the next step. 4. If the stack and buffer are both empty, and the cache is in the initial state, the oracle finishes with success, otherwise we proceed to the first step. We use the equation below to choose the cache concept to take out in the step PushIndex(i). For j ∈[|β|], we write βj to denote the j-th vertex in β. We choose a vertex vi∗in η such that: i∗= argmax i∈[m] min {j | (vi, βj) ∈EG} Figure 4: Sequence-to-sequence model with monotonic hard attention. Different colors show the changes of hard attention focus. In words, vi∗is the concept from the cache whose closest neighbor in the buffer β is furthest forward in β. We move out of the cache vertex vi∗and push it onto the stack, for later processing. For each training example (x1:n, g), the transition system generates the output AMR graph g from the input sequence x1:n through an oracle sequence a1:q ∈Σ∗ a, where Σa is the union of all possible actions. We model the probability of the output with the action sequence: P(a1:q|x1:n) = q t=1 P(at|a1, . . . , at−1, x1:n; θ) which we estimate using a sequence-to-sequence model, as we will describe in the next section. 
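The choice of i* can be computed directly from the gold edges; below is a small sketch under the simplifying assumption that the gold graph is available as an unlabeled set of concept pairs (the paper's E_G also carries edge labels).

```python
def oracle_push_index(cache, buffer, gold_edges):
    """Pick the cache position to vacate for PushIndex(i), following
    i* = argmax_i min{ j : (v_i, beta_j) in E_G }.

    cache      -- current cache contents [v_1, ..., v_m]
    buffer     -- remaining buffer concepts [beta_1, beta_2, ...]
    gold_edges -- set of (u, v) concept pairs from the gold graph (assumed
                  unlabeled; checked in both directions)
    Returns a 1-based cache index i*.
    """
    def closest_buffer_neighbor(v):
        for j, b in enumerate(buffer, start=1):
            if (v, b) in gold_edges or (b, v) in gold_edges:
                return j
        return float("inf")   # no future neighbor: safest vertex to move out

    distances = [closest_buffer_neighbor(v) for v in cache]
    return max(range(len(cache)), key=lambda i: distances[i]) + 1
```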
3 Soft vs Hard Attention for Sequence-to-action-sequence Shown in Figure 3, our sequence-to-sequence model takes a word sequence w1:n′ and its mapped concept sequence c1:n as the input, and the action sequence a1:q as the output. It uses two BiLSTM encoders, each encoding an input sequence. As the two encoders have the same structure, we only introduce the encoder for the word sequence in detail below. 3.1 BiLSTM Encoder Given an input word sequence w1:n′, we use a bidirectional LSTM to encode it. At each step j, the current hidden states ←−h w j and −→h w j are generated from the previous hidden states ←−h w j+1 and −→h w j−1, 1846 and the representation vector xj of the current input word wj: ←−h w j = LSTM(←−h w j+1, xj) −→h w j = LSTM(−→h w j−1, xj) The representation vector xj is the concatenation of the embeddings of its word, lemma, and POS tag, respectively. Then the hidden states of both directions are concatenated as the final hidden state for word wj: hw j = [←−h w j ; −→h w j ] Similarly, for the concept sequence, the final hidden state for concept cj is: hc j = [←−h c j; −→h c j] 3.2 LSTM Decoder with Soft Attention We use an attention-based LSTM decoder (Bahdanau et al., 2014) with two attention memories Hw and Hc, where Hw is the concatenation of the state vectors of all input words, and Hc for input concepts correspondingly: Hw = [hw 1 ; hw 2 ; . . . ; hw n′] (1) Hc = [hc 1; hc 2; . . . ; hc n] (2) The decoder yields an action sequence a1, a2, . . . , aq as the output by calculating a sequence of hidden states s1, s2 . . . , sq recurrently. While generating the t-th output action, the decoder considers three factors: (1) the previous hidden state of the LSTM model st−1; (2) the embedding of the previous generated action et−1; and (3) the previous context vectors for words µw t−1 and concepts µc t−1, which are calculated using Hw and Hc, respectively. When t = 1, we initialize µ0 as a zero vector, and set e0 to the embedding of the start token “⟨s⟩”. The hidden state s0 is initialized as: s0 = Wd[←−h w 1 ; −→h w n ; ←−h c 1; −→h c n] + bd, where Wd and bd are model parameters. For each time-step t, the decoder feeds the concatenation of the embedding of previous action et−1 and the previous context vectors for words µw t−1 and concepts µc t−1 into the LSTM model to update its hidden state. st = LSTM(st−1, [et−1; µw t−1; µc t−1]) (3) Then the attention probabilities for the word sequence and the concept sequence are calculated similarly. Take the word sequence as an example, αw t,i on hw i ∈Hw for time-step t is calculated as: ϵt,i = vT c tanh(Whhw i + Wsst + bc) αw t,i = exp(ϵt,i) PN j=1 exp(ϵt,j) Wh, Ws, vc and bc are model parameters. The new context vector µw t = Pn i=1 αw t,ihw i . The calculation of µc t follows the same procedure, but with a different set of model parameters. The output probability distribution over all actions at the current state is calculated by: PΣa = softmax(Va[st; µw t ; µc t] + ba), (4) where Va and ba are learnable parameters, and the number of rows in Va represents the number of all actions. The symbol Σa is the set of all actions. 3.3 Monotonic Hard Attention for Transition Systems When we process each buffer input, the next few transition actions are closely related to this input position. The buffer maintains the order information of the input sequence and is processed strictly left-to-right, which essentially encodes a monotone alignment between the transition action sequence and the input sequence. 
As we have generated a concept sequence from the input word sequence, we maintain two hard attention pointers, lw and lc, to model monotonic attention to word and concept sequences respectively. The update to the decoder state now relies on a single position of each input sequence in contrast to Equation 3: st = LSTM(st−1, [et−1; hw lw; hc lc]) (5) Control Mechanism. Both pointers are initialized as 0 and advanced to the next position deterministically. We move the concept attention focus lc to the next position after arc decisions to all the other m −1 cache concepts are made. We move the word attention focus lw to its aligned position in case the new concept is aligned, otherwise we don’t move the word focus. As shown in Figure 4, after we have made arc decisions from concept want-01 to the other cache concepts, we move the concept focus to the next concept go-01. As this concept is aligned, we move the word focus to its aligned position go in the word sequence and skip the unaligned word to. 1847 3.4 Transition State Features for Decoder Another difference of our model with Buys and Blunsom (2017) is that we extract features from the current transition state configuration Ct: ef(Ct) = [ef1(Ct); ef2(Ct); · · · ; efl(Ct)] where l is the number of features extracted from Ct and efk(Ct) (k = 1, . . . , l) represents the embedding for the k-th feature, which is learned during training. These feature embeddings are concatenated as ef(Ct), and fed as additional input to the decoder. For the soft attention decoder: st = LSTM(st−1, [et−1; µw t−1; µc t−1; ef(Ct)]) and for the hard attention decoder: st = LSTM(st−1, [et−1; hw lw; hc lc; ef(Ct)]) We use the following features in our experiments: 1. Phase type: indicator features showing which phase the next transition is. 2. ShiftOrPop features: token features3 for the rightmost cache concept and the leftmost buffer concept. Number of dependencies to words on the right, and the top three dependency labels for them. 3. ArcBinary or ArcLabel features: token features for the rightmost concept and the current cache concept it makes arc decisions to. Word, concept and dependency distance between the two concepts. The labels for the two most recent outgoing arcs for these two concepts and their first incoming arc and the number of incoming arcs. Dependency label between the two positions if there is a dependency arc between them. 4. PushIndex features: token features for the leftmost buffer concept and all the concepts in the cache. The phase type features are deterministic from the last action output. For example, if the last action output is Shift, the current phase type would be PushIndex. We only extract corresponding features for this phase and fill all the other feature types with -NULL- as placeholders. The features for other phases are similar. 3Concept, concept category at the specified position in concept sequence. And the word, lemma, POS tag at the aligned input position. 4 AMR Parsing 4.1 Training and Decoding We train our models using the cross-entropy loss, over each oracle action sequence a∗ 1, . . . , a∗ q: L = − q X t=1 log P(a∗ t |a∗ 1, . . . , a∗ t−1, X; θ), (6) where X represents the input word and concept sequences, and θ is the model parameters. Adam (Kingma and Ba, 2014) with a learning rate of 0.001 is used as the optimizer, and the model that yields the best performance on the dev set is selected to evaluate on the test set. Dropout with rate 0.3 is used during training. Beam search with a beam size of 10 is used for decoding. 
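As a rough, hypothetical PyTorch-style sketch of the training objective just described (cross-entropy over the oracle action sequence, optimized with Adam), one teacher-forced training step could look like the following; the `model` interface is our own stand-in, not the authors' code.

```python
import torch
import torch.nn.functional as F

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # model: assumed seq2seq parser

def train_step(word_seq, concept_seq, oracle_actions):
    """One training step: summed negative log-likelihood of the oracle actions (Eq. 6)."""
    model.train()
    optimizer.zero_grad()
    # assumed to return per-step log-probabilities over the action vocabulary,
    # computed with teacher forcing on the gold action history
    log_probs = model(word_seq, concept_seq, oracle_actions)   # shape (q, |actions|)
    targets = torch.tensor(oracle_actions)                     # gold action ids
    loss = F.nll_loss(log_probs, targets, reduction="sum")
    loss.backward()
    optimizer.step()
    return loss.item()
```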
Both training and decoding use a Tesla K20X GPU. Hidden state sizes for both encoder and decoder are set to 100. The word embeddings are initialized from Glove pretrained word embeddings (Pennington et al., 2014) on Common Crawl, and are not updated during training. The embeddings for POS tags and features are randomly initialized, with the sizes of 20 and 50, respectively. 4.2 Preprocessing and Postprocessing As the AMR data is very sparse, we collapse some subgraphs or spans into categories based on the alignment. We define some special categories such as named entities (NE), dates (DATE), single rooted subgraphs involving multiple concepts (MULT)4, numbers (NUMBER) and phrases (PHRASE). The phrases are extracted based on the multiple-to-one alignment in the training data. One example phrase is more than which aligns to a single concept more-than. We first collapse spans and subgraphs into these categories based on the alignment from the JAMR aligner (Flanigan et al., 2014), which greedily aligns a span of words to AMR subgraphs using a set of heuristics. This categorization procedure enables the parser to capture mappings from continuous spans on the sentence side to connected subgraphs on the AMR side. We use the semi-Markov model from Flanigan et al. (2016) as the concept identifier, which jointly segments the sentence into a sequence of spans and maps each span to a subgraph. During decoding, our output has categories, and we need to map 4For example, verbalization of “teacher” as “(person :ARG0-of teach-01)”, or “minister” as “(person :ARG0-of (have-org-role-91 :ARG2 minister))”. 1848 ShiftOrPop PushIndex ArcBinary ArcLabel Peng et al. (2018) 0.87 0.87 0.83 0.81 Soft+feats 0.93 0.84 0.91 0.75 Hard+feats 0.94 0.85 0.93 0.77 Table 1: Performance breakdown of each transition phase. each category to the corresponding AMR concept or subgraph. We save a table Q which shows the original subgraph each category is collapsed from, and map each category to its original subgraph representation. We also use heuristic rules to generate the target-side AMR subgraph representation for NE, DATE, and NUMBER based on the source side tokens. 5 Experiments We evaluate our system on the released dataset (LDC2015E86) for SemEval 2016 task 8 on meaning representation parsing (May, 2016). The dataset contains 16,833 training, 1,368 development, and 1,371 test sentences which mainly cover domains like newswire, discussion forum, etc. All parsing results are measured by Smatch (version 2.0.2) (Cai and Knight, 2013). 5.1 Experiment Settings We categorize the training data using the automatic alignment and dump a template for date entities and frequent phrases from the multiple to one alignment. We also generate an alignment table from tokens or phrases to their candidate targetside subgraphs. For the dev and test data, we first extract the named entities using the Illinois Named Entity Tagger (Ratinov and Roth, 2009) and extract date entities by matching spans with the date template. We further categorize the dataset with the categories we have defined. After categorization, we use Stanford CoreNLP (Manning et al., 2014) to get the POS tags and dependencies of the categorized dataset. We run the oracle algorithm separately for training and dev data (with alignment) to get the statistics of individual phases. We use a cache size of 5 in our experiments. 5.2 Results Individual Phase Accuracy We first evaluate the prediction accuracy of individual phases on the dev oracle data assuming gold prediction history. 
The four transition phases ShiftOrPop, PushIndex, ArcBinary, and ArcLabel account for 25%, 12.5%, 50.1%, and 12.4% of the total transition actions respectively. Table 1 shows the phase-wise accuracy of our sequence-to-sequence model. Peng et al. (2018) use a separate feedforward network to predict each phase independently. We use the same alignment from the SemEval dataset as in Peng et al. (2018) to avoid differences resulting from the aligner. Soft+feats shows the performance of our sequence-to-sequence model with soft attention and transition state features, while Hard+feats is using hard attention. We can see that the hard attention model outperforms the soft attention model in all phases, which shows that the single-pointer attention finds more relevant information than the soft attention on the relatively small dataset. The sequence-to-sequence models perform better than the feedforward model of Peng et al. (2018) on ShiftOrPop and ArcBinary, which shows that the whole-sentence context information is important for the prediction of these two phases. On the other hand, the sequence-tosequence models perform worse than the feedforward models on PushIndex and ArcLabel. One possible reason is that the model tries to optimize the overall accuracy, while these two phases account for fewer than 25% of the total transition actions and might be less attended to during the update. Impact of Different Components Table 2 shows the impact of different components for the sequence-to-sequence model. We can see that the transition state features play a very important role for predicting the correct transition action. This is because different transition phases have very different prediction behaviors and need different types of local information for the prediction. Relying on the sequence-to-sequence model alone does not perform well in disambiguating these choices, while the transition state can enforce direct constraints. We can also see that while the hard attention only attends to one position of the input, it performs slightly better than the soft attention model, while the time complexity is lower. Impact of Different Cache Sizes The cache size of the transition system can be optimized as a trade-off between coverage of AMR graphs and the prediction accuracy. While larger cache size increases the coverage of AMR graphs, it complicates the prediction procedure with more cache decisions to make. From Table 3 we can see that 1849 System P R F Soft 0.55 0.51 0.53 Soft+feats 0.69 0.63 0.66 Hard+feats 0.70 0.64 0.67 Table 2: Impact of various components for the sequence-to-sequence model (dev). Cache Size P R F 4 0.69 0.63 0.66 5 0.70 0.64 0.67 6 0.69 0.64 0.66 Table 3: Impact of cache size for the sequenceto-sequence model, hard attention (dev). the hard attention model performs best with cache size 5. The soft attention model also achieves best performance with the same cache size. Comparison with other Parsers Table 4 shows the comparison with other AMR parsers. The first three systems are some competitive neural models. We can see that our parser significantly outperforms the sequence-to-action-sequence model of Buys and Blunsom (2017). Konstas et al. (2017) use a linearization approach that linearizes the AMR graph to a sequence structure and use selftraining on 20M unlabeled Gigaword sentences. Our model achieves better results without using additional unlabeled data, which shows that relevant information from the transition system is very useful for the prediction. 
Our model also outperforms the stack-LSTM model by Ballesteros and Al-Onaizan (2017), while their model is evaluated on the previous release of LDC2014T12. System P R F Buys and Blunsom (2017) – – 0.60 Konstas et al. (2017) 0.60 0.65 0.62 Ballesteros and Al-Onaizan (2017)* – – 0.64 Damonte et al. (2017) – – 0.64 Peng et al. (2018) 0.69 0.59 0.64 Wang et al. (2015b) 0.64 0.62 0.63 Wang et al. (2015a) 0.70 0.63 0.66 Flanigan et al. (2016) 0.70 0.65 0.67 Wang and Xue (2017) 0.72 0.65 0.68 Ours soft attention 0.68 0.63 0.65 Ours hard attention 0.69 0.64 0.66 Table 4: Comparison to other AMR parsers. *Model has been trained on the previous release of the corpus (LDC2014T12). System P R F Peng et al. (2018) 0.44 0.28 0.34 Damonte et al. (2017) – – 0.41 JAMR 0.47 0.38 0.42 Ours 0.58 0.34 0.43 Table 5: Reentrancy statistics. We also show the performance of some of the best-performing models. While our hard attention achieves slightly lower performance in comparison with Wang et al. (2015a) and Wang and Xue (2017), it is worth noting that their approaches of using WordNet, semantic role labels and word cluster features are complimentary to ours. The alignment from the aligner and the concept identification identifier also play an important role for improving the performance. Wang and Xue (2017) propose to improve AMR parsing by improving the alignment and concept identification, which can also be combined with our system to improve the performance of a sequence-to-sequence model. Dealing with Reentrancy Reentrancy is an important characteristic of AMR, and we evaluate the Smatch score only on the reentrant edges following Damonte et al. (2017). From Table 5 we can see that our hard attention model significantly outperforms the feedforward model of Peng et al. (2018) in predicting reentrancies. This is because predicting reentrancy is directly related to the ArcBinary phase of the cache transition system since it decides to make multiple arc decisions to the same vertex, and we can see from Table 1 that the hard attention model has significantly better prediction accuracy in this phase. We also compare the reentrancy results of our transition system with two other systems, Damonte et al. (2017) and JAMR, where these statistics are available. From Table 5, we can see that our cache transition system slightly outperforms these two systems in predicting reentrancies. Figure 5 shows a reentrancy example where JAMR and the feedforward network of Peng et al. (2018) do not predict well, while our system predicts the correct output. JAMR fails to predict the reentrancy arc from desire-01 to i, and connects the wrong arc from “live-01” to “-” instead of from “desire-01”. The feedforward model of Peng et al. (2018) fails to predict the two arcs from desire-01 1850 i desire-01 live-01 any city ARG0 ARG0 polarity ARG1 location $ $ i desire-01 polarity ARG0 $ i desire-01 live-01 ARG1 ARG0 Our hard attention output: Sentence: I have no desire to live in any city . Cache arc decisions creating the reentrancy (cache size of 5): JAMR output: Peng et al. (2018) output: mod i desire-01 live-01 any city polarity ARG1 location mod i desire-01 live-01 any city ARG0 polarity ARG1 location mod Figure 5: An example showing how our system predicts the correct reentrancy. and live-01 to i. This error is because their feedforward ArcBinary classifier does not model longterm dependency and usually prefers making arcs between words that are close and not if they are distant. 
Our classifier, which encodes both word and concept sequence information, can accurately predict the reentrancy through the two arc decisions shown in Figure 5. When desire-01 and live01 are shifted into the cache respectively, the transition system makes a left-going arc from each of them to the same concept i, thus creating the reentrancy as desired. 6 Conclusion In this paper, we have presented a sequence-toaction-sequence approach for cache transition systems and applied it to AMR parsing. To address the data sparsity issue for neural AMR parsing, we show that the transition state features are very helpful in constraining the possible output and improving the performance of sequence-to-sequence models. We also show that the monotonic hard attention model can be generalized to the transitionbased framework and outperforms the soft attention model when limited data is available. While we are focused on AMR parsing in this paper, in future work our cache transition system and the presented sequence-to-sequence models can be potentially applied to other semantic graph parsing tasks (Oepen et al., 2015; Du et al., 2015; Zhang et al., 2016; Cao et al., 2017). Acknowledgments We gratefully acknowledge the assistance of Hao Zhang from Google, New York for the monotonic hard attention idea and the helpful comments and suggestions. References Roee Aharoni and Yoav Goldberg. 2017. Morphological inflection generation with hard monotonic attention. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2004–2015, Vancouver, Canada. Association for Computational Linguistics. Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1699–1710, Lisbon, Portugal. Association for Computational Linguistics. 1851 Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Miguel Ballesteros and Yaser Al-Onaizan. 2017. AMR parsing using stack-LSTMs. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1269–1275. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Jan Buys and Phil Blunsom. 2017. Robust incremental neural semantic graph parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1215–1226. Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL-13), pages 748–752. Junjie Cao, Sheng Huang, Weiwei Sun, and Xiaojun Wan. 2017. Parsing to 1-endpoint-crossing, pagenumber-2 graphs. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2110–2120. Marco Damonte, Shay B Cohen, and Giorgio Satta. 2017. An incremental parser for abstract meaning representation. 
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 536–546. Yantao Du, Fan Zhang, Xun Zhang, Weiwei Sun, and Xiaojun Wan. 2015. Peking: Building semantic dependency graphs with a hybrid parser. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 927–931. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 334–343. Jeffrey Flanigan, Chris Dyer, Noah A Smith, and Jaime Carbonell. 2016. CMU at SemEval-2016 task 8: Graph-based AMR parsing with infinite ramp loss. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1202–1206. Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL-14), pages 1426–1436, Baltimore, Maryland. Daniel Gildea, Giorgio Satta, and Xiaochang Peng. 2018. Cache transition systems for graph parsing. Computational Linguistics, 44(1):85–118. Lifu Huang, Taylor Cassidy, Xiaocheng Feng, Heng Ji, Clare R Voss, Jiawei Han, and Avirup Sil. 2016. Liberal event extraction and event schema induction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 258–268. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: Sequence-to-sequence models for parsing and generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 146–157, Vancouver, Canada. Association for Computational Linguistics. Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A Smith. 2015. Toward abstractive summarization using semantic representations. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1077–1086. Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In ACL (System Demonstrations), pages 55–60. Jonathan May. 2016. SemEval-2016 task 8: Meaning representation parsing. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1063–1073, San Diego, California. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513–553. Rik van Noord and Johan Bos. 2017. Neural semantic parsing by character-based translation: Experiments with abstract meaning representations. arXiv preprint arXiv:1705.09980. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkova, Dan Flickinger, Jan Hajic, and Zdenka Uresova. 2015. Semeval 2015 1852 task 18: Broad-coverage semantic dependency parsing. 
In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 915–926, Denver, Colorado. Xiaochang Peng, Daniel Gildea, and Giorgio Satta. 2018. AMR parsing with cache transition systems. In Proceedings of the National Conference on Artificial Intelligence (AAAI-18). Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A synchronous hyperedge replacement grammar based approach for AMR parsing. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning (CoNLL-15), pages 32– 41, Beijing, China. Xiaochang Peng, Chuan Wang, Daniel Gildea, and Nianwen Xue. 2017. Addressing the data sparsity issue in neural AMR parsing. In Proceedings of the European Chapter of the ACL (EACL-17). Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu, and Jonathan May. 2015. Parsing English into abstract meaning representation using syntaxbased machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1143–1154, Lisbon, Portugal. Association for Computational Linguistics. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 147–155, Boulder, Colorado. Association for Computational Linguistics. Sho Takase, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. 2016. Neural headline generation on abstract meaning representation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1054–1059. Chuan Wang and Nianwen Xue. 2017. Getting the most out of AMR parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1257–1268. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015a. Boosting transition-based AMR parsing with refined actions and auxiliary analyzers. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL-15), pages 857–862, Beijing, China. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015b. A transition-based algorithm for AMR parsing. In Proceedings of the 2015 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-15), pages 366–375, Denver, Colorado. Xun Zhang, Yantao Du, Weiwei Sun, and Xiaojun Wan. 2016. Transition-based parsing for deep dependency structures. Computational Linguistics, 42(3):353–389.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1853–1862 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1853 Batch IS NOT Heavy: Learning Word Representations From All Samples ∗Xin Xin1, ∗Fajie Yuan1, Xiangnan He2, Joemon M.Jose1 1School of Computing Science, University of Glasgow, UK 2School of Computing, National University of Singapore {x.xin.1,f.yuan.1}@research.gla.ac.uk, [email protected] Abstract Stochastic Gradient Descent (SGD) with negative sampling is the most prevalent approach to learn word representations. However, it is known that sampling methods are biased especially when the sampling distribution deviates from the true data distribution. Besides, SGD suffers from dramatic fluctuation due to the onesample learning scheme. In this work, we propose AllVec that uses batch gradient learning to generate word representations from all training samples. Remarkably, the time complexity of AllVec remains at the same level as SGD, being determined by the number of positive samples rather than all samples. We evaluate AllVec on several benchmark tasks. Experiments show that AllVec outperforms samplingbased SGD methods with comparable efficiency, especially for small training corpora. 1 Introduction Representing words using dense and real-valued vectors, aka word embeddings, has become the cornerstone for many natural language processing (NLP) tasks, such as document classification (Sebastiani, 2002), parsing (Huang et al., 2012), discourse relation recognition (Lei et al., 2017) and named entity recognition (Turian et al., 2010). Word embeddings can be learned by optimizing that words occurring in similar contexts have similar embeddings, i.e. the well-known distributional hypothesis (Harris, 1954). A representative method is skip-gram (SG) (Mikolov et al., 2013a,b), which realizes the hypothesis using a ∗The first two authors contributed equally to this paper and share the first-authorship. (a) (b) Figure 1: Impact of different settings of negative sampling on skip-gram for the word analogy task on Text8. Clearly, the accuracy depends largely on (a) the sampling size of negative words, and (b) the sampling distribution (β = 0 means the uniform distribution and β = 1 means the word frequency distribution). shallow neural network model. The other family of methods is count-based, such as GloVe (Pennington et al., 2014) and LexVec (Salle et al., 2016a,b), which exploit low-rank models such as matrix factorization (MF) to learn embeddings by reconstructing the word co-occurrence statistics. By far, most state-of-the-art embedding methods rely on SGD and negative sampling for optimization. However, the performance of SGD is highly sensitive to the sampling distribution and the number of negative samples (Chen et al., 2018; Yuan et al., 2016), as shown in Figure 1. Essentially, sampling is biased, making it difficult to converge to the same loss with all examples, regardless of how many update steps have been taken. Moreover, SGD exhibits dramatic fluctuation and suffers from overshooting on local minimums (Ruder, 2016). These drawbacks of SGD can be attributed to its one-sample learning scheme, which updates parameters based on one training sample in each step. To address the above-mentioned limitations of SGD, a natural solution is to perform exact (full) batch learning. 
In contrast to SGD, batch learning does not involve any sampling procedure and computes the gradient over all training samples. As such, it can easily converge to a better optimum in a more stable way. Nevertheless, a well-known 1854 difficulty in applying full batch learning lies in the expensive computational cost for large-scale data. Taking the word embedding learning as an example, if the vocabulary size is |V |, then evaluating the loss function and computing the full gradient takes O(|V |2k) time, where k is the embedding size. This high complexity is unaffordable in practice, since |V |2 can easily reach billion level or even higher. In this paper, we introduce AllVec, an exact and efficient word embedding method based on full batch learning. To address the efficiency challenge in learning from all training samples, we devise a regression-based loss function for word embedding, which allows fast optimization with memorization strategies. Specifically, the acceleration is achieved by reformulating the expensive loss over all negative samples using a partition and a decouple operation. By decoupling and caching the bottleneck terms, we succeed to use all samples for each parameter update in a manageable time complexity which is mainly determined by the positive samples. The main contributions of this work are summarized as follows: • We present a fine-grained weighted least square loss for learning word embeddings. Unlike GloVe, it explicitly accounts for all negative samples and reweights them with a frequency-aware strategy. • We propose an efficient and exact optimization algorithm based on full batch gradient optimization. It has a comparable time complexity with SGD, but being more effective and stable due to the consideration of all samples in each parameter update. • We perform extensive experiments on several benchmark datasets and tasks to demonstrate the effectiveness, efficiency, and convergence property of our AllVec method. 2 Related Work 2.1 Skip-gram with Negative Sampling Mikolov et al. (2013a,b) proposed the skip-gram model to learn word embeddings. SG formulates the problem as a predictive task, aiming at predicting the proper context c for a target word w within a local window. To speed up the training process, it applies the negative sampling (Mikolov et al., 2013b) to approximate the full softmax. That is, each positive (w, c) pair is trained with n randomly sampled negative pairs (w, wi). The sampled loss function of SG is defined as LSG wc =log σ(Uw ˜U T c )+ n X i=1 Ewi∼Pn(w) log σ(−Uw ˜U T wi) where Uw and ˜Uc denote the k-dimensional embedding vectors for word w and context c. Pn(w) is the distribution from which negative context wi is sampled. Plenty of research has been done based on SG, such as the use of prior knowledge from another source (Kumar and Araki, 2016; Liu et al., 2015a; Bollegala et al., 2016), incorporating word type information (Cao and Lu, 2017; Niu et al., 2017), character level n-gram models (Bojanowski et al., 2016; Joulin et al., 2016) and jointly learning with topic models like LDA (Shi et al., 2017; Liu et al., 2015b). 2.2 Importance of the Sampling Distribution Mikolov et al. (2013b) showed that the unigram distribution raised to the 3/4th power as Pn(w) significantly outperformed both the unigram and the uniform distribution. This suggests that the sampling distribution (of negative words) has a great impact on the embedding quality. Furthermore, Chen et al. (2018) and Guo et al. 
(2018) recently found that replacing the original sampler with adaptive samplers could result in better performance. The adaptive samplers are used to find more informative negative examples during the training process. Compared with the original word-frequency based sampler, adaptive samplers adapt to both the target word and the current state of the model. They also showed that the finegrained samplers not only speeded up the convergence but also significantly improved the embedding quality. Similar observations were also found in other fields like collaborative filtering (Yuan et al., 2016). While being effective, it is proven that negative sampling is a biased approximation and does not converges to the same loss as the full softmax — regardless of how many update steps have been taken (Bengio and Sen´ecal, 2008; Blanc and Rendle, 2017). 2.3 Count-based Embedding Methods Another line of research is the count-based embedding, such as GloVe (Pennington et al., 2014). GloVe performs a biased MF on the word-context co-occurrence statistics, which is a common ap1855 proach in the field of collaborative filtering (Koren, 2008). However, GloVe only formulates the loss on positive entries of the co-occurrence matrix, meaning that negative signals about wordcontext co-occurrence are discarded. A remedy solution is LexVec (Salle et al., 2016a,b) which integrates negative sampling into MF. Some other methods (Li et al., 2015; Stratos et al., 2015; Ailem et al., 2017) also use MF to approximate the word-context co-occurrence statistics. Although predictive models and count-based models seem different at first glance, Levy and Goldberg (2014) proved that SG with negative sampling is implicitly factorizing a shifted pointwise mutual information (PMI) matrix, which means that the two families of embedding models resemble each other to a certain degree. Our proposed method departs from all above methods by using the full batch gradient optimizer to learn from all (positive and negative) samples. We propose a fast learning algorithm to show that such batch learning is not “heavy” even with tens of billions of training examples. 3 AllVec Loss In this work, we adopt the regression loss that is commonly used in count-based models (Pennington et al., 2014; Stratos et al., 2015; Ailem et al., 2017) to perform matrix factorization on word cooccurrence statistics. As highlighted, to retain the modeling fidelity, AllVec eschews using any sampling but optimizes the loss on all positive and negative word-context pairs. Given a word w and a symmetric window of win contexts, the set of positive contexts can be obtained by sliding through the corpus. Let c denote a specific context, Mwc be the number of cooccurred (w, c) pairs in the corpus within the window. Mwc =0 means that the pair (w, c) has never been observed, i.e. the negative signal. rwc is the association coefficient between w and c, which is calculated from Mwc. Specifically, we use r+ wc to denote the ground truth value for positive (w, c) pairs and a constant value r−(e.g., 0 or -1) for negative ones since there is no interaction between w and c in negative pairs. Finally, with all positive and negative pairs considered, a regular loss function can be given as Eq.(1), where V is the vocabulary and S is the set of positive pairs. α+ wc and α− wc represent the weight for positive and negative (w, c) pairs, respectively. 
L = X (w,c)∈S α+ wc(r+ wc −Uw ˜U T c )2 | {z } LP + X (w,c)∈(V×V )\S α− wc(r−−Uw ˜U T c )2 | {z } LN (1) When it comes to r+ wc, there are several choices. For example, GloVe applies the log of Mwc with bias terms for w and c. However, research from Levy and Goldberg (2014) showed that the SG model with negative sampling implicitly factorizes a shifted PMI matrix. The PMI value for a (w, c) pair can be defined as PMIwc = log P(w, c) P(w)P(c) = logMwcM∗∗ Mw∗M∗c (2) where ‘*’ denotes the summation of all corresponding indexes (e.g., Mw∗= P c∈V Mwc). Inspired by this connection, we set r+ wc as the positive point-wise mutual information (PPMI) which has been commonly used in the NLP literature (Stratos et al., 2015; Levy and Goldberg, 2014). Sepcifically, PPMI is the positive version of PMI by setting the negative values to zero. Finally, r+ wc is defined as r+ wc = PPMIwc = max(PMIwc, 0) (3) 3.1 Weighting Strategies Regarding α+ wc, we follow the design in GloVe, where it is defined as α+ wc = ( (Mwc/xmax)ρ Mwc < xmax 1 Mwc ≥xmax (4) As for the weight for negative instances α− wc, considering that there is no interaction between w and negative c, we set α− wc as α− c (or α− w), which means that the weight is determined by the word itself rather than the word-context interaction. Note that either α− wc = α− c or α− wc = α− w does not influence the complexity of AllVec learning algorithm described in the next section. The design of α− c is inspired by the frequency-based oversampling scheme in skip-gram and missing data reweighting in recommendation (He et al., 2016). The intuition is that a word with high frequency is more likely to be a true negative context word if there is no observed word-context interactions. Hence, to effectively differentiate the positive and negative examples, we assign a higher weight for the negative examples that have a higher word fre1856 quency, and a smaller weight for infrequent words. Formally, α− wc is defined as α− wc = α− c = α0 Mδ ∗c P c∈V Mδ∗c (5) where α0 can be seen as a global weight to control the overall importance of negative samples. α0 = 0 means that no negative information is utilized in the training. The exponent δ is used for smoothing the weights. Specially, δ = 0 means a uniform weight for all negative examples and δ = 1 means that no smoothing is applied. 4 Fast Batch Gradient Optimization Once specifying the loss function, the main challenge is how to perform an efficient optimization for Eq.(1). In the following, we develop a fast batch gradient optimization algorithm that is based on a partition reformulation for the loss and a decouple operation for the inner product. 4.1 Loss Partition As can be seen, the major computational cost in Eq.(1) lies in the term LN, because the size of (V ×V ) \ S is very huge, which typically contains over billions of negative examples. To this end, we show our first key design that separates the loss of negative samples into the difference between the loss on all samples and that on positive samples1. The loss partition serves as the prerequisite for the efficient computation of full batch gradients. LN= X w∈V X c∈V α− c (r−−Uw ˜U T c )2− X (w,c)∈S α− c (r−−Uw ˜U T c )2 (6) By replacing LN in Eq.(1) with Eq.(6), we can obtain a new loss function with a more clear structure. We further simplify the loss function by merging the terms on positive examples. 
Finally, we achieve a reformulated loss L = X w∈V X c∈V α− c (r−−Uw ˜U T c ) 2 | {z } LA + X (w,c)∈S (α+ wc −α− c )(∆−Uw ˜U T c ) 2 | {z } LP ′ +C (7) where ∆= (α+ wcr+ wc −α− c r−)/(α+ wc −α− c ). It can be seen that the new loss function consists of two components: the loss LA on the whole V × V training examples and LP ′ on positive examples. The major computation now lies in LA which has 1The idea here is similar to that used in (He et al., 2016; Li et al., 2016) for a different problem. a time complexity of O(k|V |2). In the following, we show how to reduce the huge volume of computation by a simple mathematical decouple. 4.2 Decouple To clearly show the decouple operation, we rewrite LA as eLA by omitting the constant term α− c (r−)2. Note that uwd and ˜ucd denote the d-th element in Uw and ˜Uc, respectively. eLA = X w∈V X c∈V α− c k X d=0 uwd˜ucd k X d′=0 uwd′ ˜ucd′ −2r−X w∈V X c∈V α− c k X d=0 uwd˜ucd (8) Now we show our second key design that is based on a decouple manipulation for the inner product operation. Interestingly, we observe that the summation operator and elements in Uw and ˜Uc can be rearranged by the commutative property (Dai et al., 2007), as shown below. eLA = k X d=0 k X d′=0 X w∈V uwduwd′ X c∈V α− c ˜ucd˜ucd′ −2r− k X d=0 X w∈V uwd X c∈V α− c ˜ucd (9) An important feature in Eq.(9) is that the original inner product terms are disappeared, while in the new equation P c∈V α− c ˜ucd˜ucd′ and P c∈V α− c ˜ucd are “constant” values relative to uwduwd′ and uwd respectively. This means that they can be pre-calculated before training in each iteration. Specifically, we define pw dd′, pc dd′, qw d and qc d as the pre-calculated terms pw dd′ = X w∈V uwduwd′ qw d = X w∈V uwd pc dd′ = X c∈V α− c ˜ucd˜ucd′ qc d = X c∈V α− c ˜ucd (10) Then the computation of ˜LA can be simplified to Pk d=0 Pk d′=0 pw dd′pc dd′ −2r−qw d qc d. It can be seen that the time complexity to compute all pw dd′ is O(|V |k2), and similarly, O(|V |k2) for pc dd′ and O(|V |k) for qw d and qc d. With all terms pre-calculated before each iteration, the time complexity of computing ˜LA is just O(k2). As a result, the total time complexity of computing LA is decreased to O(2|V |k2 +2|V |k+k2) ≈O(2|V |k2), which is much smaller than the original O(k|V |2). Moreover, it’s worth noting that our efficient computation for ˜LA is strictly equal to its original value, which means AllVec does not introduce any approximation in evaluating the loss function. Finally, we can derive the batch gradients for 1857 uwd and ˜ucd as ∂L ∂uwd = k X d′=0 uwd′pc dd′ − X c∈I+ w Λ · ˜ucd −r−qc d ∂L ∂˜ucd = k X d′=0 ˜ucd′pw dd′α− c − X w∈I+ c Λ · uwd −r−α− c qw d (11) where I+ w denotes the set of positive contexts for w, I+ c denotes the set of positive words for c and Λ = (α+ wc−α− c )(∆−Uw ˜U T c ). Algorithm 1 shows the training procedure of AllVec. 
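Before the pseudocode, a quick numerical check of the decoupling derived above may be useful: the sketch below evaluates the loss term over all pairs both naively, in O(|V|^2 k), and through the cached terms of Eq. (10), in O(|V| k^2), and confirms the two agree. The toy dimensions, the random initialization, and r^- = -1 are assumptions made only for this check.

```python
import numpy as np

rng = np.random.default_rng(0)
V, k, r_neg = 50, 8, -1.0
U = rng.normal(scale=0.1, size=(V, k))        # word embeddings u_w
U_tilde = rng.normal(scale=0.1, size=(V, k))  # context embeddings u~_c
alpha = rng.uniform(0.0, 1.0, size=V)         # negative weights alpha^-_c

# Naive evaluation of the Eq. (8) loss: all |V| x |V| inner products, O(|V|^2 k).
dots = U @ U_tilde.T
naive = float((alpha * dots ** 2).sum() - 2.0 * r_neg * (alpha * dots).sum())

# Decoupled evaluation (Eqs. 9-10): O(|V| k^2) caching, O(k^2) combination.
p_w = U.T @ U                                  # p^w_{dd'} = sum_w u_{wd} u_{wd'}
p_c = U_tilde.T @ (alpha[:, None] * U_tilde)   # p^c_{dd'} = sum_c alpha_c u~_{cd} u~_{cd'}
q_w = U.sum(axis=0)                            # q^w_d   = sum_w u_{wd}
q_c = (alpha[:, None] * U_tilde).sum(axis=0)   # q^c_d   = sum_c alpha_c u~_{cd}
cached = float((p_w * p_c).sum() - 2.0 * r_neg * (q_w @ q_c))

assert np.isclose(naive, cached)
print(f"naive  : {naive:.6f}")
print(f"cached : {cached:.6f}")
```

Algorithm 1, listed next, wraps this cached computation, together with the corresponding batch gradients of Eq. (11), inside the full training loop.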
Algorithm 1 AllVec learning Input: corpus Γ, win, α0, δ, iter, learning rate η Output: embedding matrices U and ˜U 1: Build vocabulary V from Γ 2: Obtain all positive (w, c) and Mwc from Γ 3: Compute all r+ wc, α+ wc and α− c 4: Initialize U and ˜U 5: for i = 1, ..., iter do 6: for d ∈{0, .., k} do 7: Compute and store qc d ▷O(|V |k) 8: for d′ ∈{0, .., k} do 9: Compute and store pc dd′ ▷O(|V |k2) 10: end for 11: end for 12: for w ∈V do 13: Compute Λ ▷O(|S|k) 14: for d ∈{0, .., k} do 15: Update uwd ▷O(|S|k + |V |k2) 16: end for 17: end for 18: Repeat 6-17 for ˜ucd ▷O(2|S|k+2|V |k2) 19: end for 4.3 Time Complexity Analysis In the following, we show that AllVec can achieve the same time complexity with negative sampling based SGD methods. Given the sample size n, the total time complexity for SG is O((n + 1)|S|k), where n + 1 denotes n negative samples and 1 positive example. Regarding the complexity of AllVec, we can see that the overall complexity of Algorithm 1 is O(4|S|k + 4|V |k2). For the ease of discussion, we denote c as the average number of positive contexts for a word in the training corpus, i.e. |S| = c|V | (c ≥1000 in most cases). We then obtain the ratio 4|S|k + 4|V |k2 (n + 1)|S|k = 4 n + 1(1 + k c ) (12) where k is typically set from 100 to 300 (Mikolov et al., 2013a; Pennington et al., 2014), resulting in k ≤c. Hence, we can give the lower and upper bound for the ratio: 4 n+1 < 4|S|k+4|V |k2 (n+1)|S|k = 4 n+1(1+ k c )≤ 8 n+1 (13) The above analysis suggests that the complexity of AllVec is same as that of SGD with negative sample size between 3 and 7. In fact, considering that c is much larger than k in most datasets, the major cost of AllVec comes from the part 4|S|k (see Section 5.4 for details), which is linear with respect to the number of positive samples. 5 Experiments We conduct experiments on three popular evaluation tasks, namely word analogy (Mikolov et al., 2013a), word similarity (Faruqui and Dyer, 2014) and QVEC (Tsvetkov et al., 2015). Word analogy task. The task aims to answer questions like, “a is to b as c is to ?”. We adopt the Google testbed2 which contains 19, 544 such questions in two categories: semantic and syntactic. The semantic questions are usually analogies about people or locations, like “king is to man as queen is to ?”, while the syntactic questions focus on forms or tenses, e.g., “swimming is to swim as running to ?”. Word similarity tasks. We perform evaluation on six datasets, including MEN (Bruni et al., 2012), MC (Miller and Charles, 1991), RW (Luong et al., 2013), RG (Rubenstein and Goodenough, 1965), WS-353 Similarity (WSim) and Relatedness (WRel) (Finkelstein et al., 2001). We compute the spearman rank correlation between the similarity scores calculated based on the trained embeddings and human labeled scores. QVEC. QVEC is an intrinsic evaluation metric of word embeddings based on the alignment to features extracted from manually crafted lexical resources. QVEC has shown strong correlation with the performance of embeddings in several semantic tasks (Tsvetkov et al., 2015). We compare AllVec with the following word embedding methods. • SG: This is the original skip-gram model with SGD and negative sampling (Mikolov et al., 2013a,b). • SGA: This is the skip-gram model with an adaptive sampler (Chen et al., 2018). 2https://code.google.com/archive/p/word2vec/ 1858 Table 1: Corpora statistics. 
Corpus Tokens Vocab Size Text8 17M 71K 100M NewsIR 78M 83K 500M Wiki-sub 0.8B 190K 4.0G Wiki-all 2.3B 200K 13.4G • GloVe: This method applies biased MF on the positive samples of word co-occurrence matrix (Pennington et al., 2014). • LexVec: This method applies MF on the PPMI matrix. The optimization is done with negative sampling and mini-batch gradient descent (Salle et al., 2016b). For all baselines, we use the original implementation released by the authors. 5.1 Datasets and Experimental Setup We evaluate the performance of AllVec on four real-world corpora, namely Text83, NewsIR4, Wiki-sub and Wiki-all. Wiki-sub is a subset of 2017 Wikipedia dump5. All corpora have been pre-processed by a standard pipeline (i.e. removing non-textual elements, lowercasing and tokenization). Table 1 summarizes the statistics of these corpora. To obtain Mwc for positive (w, c) pairs, we follow GloVe where word pairs that are x words apart contribute 1/x to Mwc. The window size is set as win = 8. Regarding α+ wc, we set xmax = 100 and ρ = 0.75. For a fair comparison, the embedding size k is set as 200 for all models and corpora. AllVec can be easily trained by AdaGrad (Zeiler, 2012) like GloVe or Newton-like (Bayer et al., 2017; Bradley et al., 2011) second order methods. For models based on negative sampling (i.e. SG, SGA and LexVec), the sample size is set as n = 25 for Text8, n = 10 for NewsIR and n = 5 for Wiki-sub and Wiki-all. The setting is also suggested by Mikolov et al. (2013b). Other detailed hyper-parameters are reported in Table 2. 5.2 Accuracy Comparison We present results on the word analogy task in Table 2. As shown, AllVec achieves the highest total accuracy (Tot.) in all corpora, particu3http://mattmahoney.net/dc/text8.zip 4http://research.signalmedia.co/newsir16/signaldataset.html 5https://dumps.wikimedia.org/enwiki/ larly in smaller corpora (Text8 and NewsIR). The reason is that in smaller corpora the number of positive (w, c) pairs is very limited, thus making use of negative examples will bring more benefits. Similar reason also explains the poor accuracy of GloVe in Text8, because GloVe does not consider negative samples. Even in the very large corpus (Wiki-all), ignoring negative samples still results in sub-optimal performance. Our results also show that SGA achieves better performance than SG, which demonstrates the importance of a good sampling strategy. However, regardless what sampler (except the full softmax sampling) is utilized and how many updates are taken, sampling is still a biased approach. AllVec achieves the best performance because it is trained on the whole batch data for each parameter update rather than a fraction of sampled data. Another interesting observation is AllVec performs better in semantic tasks in general. The reason is that our model utilizes global co-occurrence statistics, which capture more semantic signals than syntactic signals. While both AllVec and GloVe use global contexts, AllVec performs much better than GloVe in syntactic tasks. We argue that the main reason is because AllVec can distill useful signals from negative examples, while GloVe simply ignores all negative information. By contrast, local-window based methods, such as SG and SGA, are more effective to capture local sentence features, resulting in good performance on syntactic analogies. However, Rekabsaz et al. (2017) argues that these local-window based methods may suffer from the topic shifting issue. Table 3 and Table 4 provide results in the word similarity and QVEC tasks. 
We can see that AllVec achieves the best performance in most tasks, which admits the advantage of batch learning with all samples. Interestingly, although GloVe performs well in semantic analogy tasks, it shows extremely worse results in word similarity and QVEC. The reason shall be the same as that it performs poorly in syntactic tasks. 5.3 Impact of α− c In this subsection, we investigate the impact of the proposed weighting scheme for negative (context) words. We show the performance change of word analogy tasks on NewsIR in Figure 2 by tuning α0 and δ. Results in other corpora show similar trends thus are omitted due to space limitation. 1859 Table 2: Results (“Tot.” denotes total accuracy) on the word analogy task. Corpus Text8 NewsIR para. Sem. Syn. Tot. para. Sem. Syn. Tot. SG 1e-4 8 25 47.51 32.26 38.60 1e-5 10 10 70.81 47.48 58.10 SGA 6e-3 48.10 33.78 39.74 6e-3 71.74 48.71 59.20 GloVe 10 15 1 45.11 26.89 34.47 50 8 1 78.79 41.58 58.52 LexVec 1e-4 25 51.87 31.78 40.14 1e-5 10 76.11 39.09 55.95 AllVec 350 0.75 56.66 32.42 42.50 100 0.8 78.47 48.33 61.57 Wiki-sub Wiki-all SG 1e-5 10 5 72.05 55.88 63.24 1e-5 10 5 73.91 61.91 67.37 SGA 6e-3 73.93 56.10 63.81 6e-3 75.11 61.94 67.92 GloVe 100 8 1 77.22 53.16 64.13 100 8 1 77.38 58.94 67.33 LexVec 1e-5 5 75.95 52.78 63.33 1e-5 5 76.31 56.83 65.48 AllVec 100 0.75 76.66 54.72 64.75 50 0.75 77.64 60.96 68.52 The parameter columns (para.) for each model are given from left to right as follows. SG: subsampling of frequent words, window size and the number of negative samples; SGA: λ (Chen et al., 2018) that controls the distribution of the rank, the other parameters are the same with SG; GloVe: xmax, window size and symmetric window; LexVec: subsampling of frequent words and the number of negative samples; AllVec: the negative weight α0 and δ. Boldface denotes the highest total accuracy. Figure 2(a) shows the impact of the overall weight α0 by setting δ as 0.75 (inspired by the setting of skip-gram). Clearly, we observe that all results (including semantic, syntactic and total accuracy) have been greatly improved when α0 increases from 0 to a larger value. As mentioned before, α0 = 0 means that no negative information is considered. This observation verifies that negative samples are very important for learning good embeddings. It also helps to explain why GloVe performs poorly on syntactic tasks. In addition, we find that in all corpora the optimal results are usually obtained when α0 falls in the range of 50 to 400. For example, in the NewIR corpus as shown, AllVec achieves the best performance when α0 = 100. Figure 2(b) shows the impact of δ with α0 = 100. As mentioned before, δ = 0 denotes a uniform value for all negative words and δ = 1 denotes that no smoothing is applied to word frequency. We can see that the total accuracy is only around 55% when δ = 0. By increasing its value, the performance is gradually improved, achieving the highest score when δ is around 0.8. Further increase of δ will degrade the total accuracy. This analysis demonstrates the effectiveness of the proposed negative weighting scheme. 5.4 Convergence Rate and Runtime Figure 3(a) compares the convergence between AllVec and GloVe on NewsIR. Clearly, AllVec ex(a) (b) Figure 2: Effect of α0 and δ on NewsIR. (a) (b) Figure 3: Convergence and runtime. hibits a more stable convergence due to its full batch learning. In contrast, GloVe has a more dramatic fluctuation because of the one-sample learning scheme. 
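The contrast between the two update schemes is easy to reproduce on a toy problem. The sketch below fits a small least-squares model with full-batch gradient descent and with one-sample SGD; it is only an illustration of the general behavior behind Figure 3(a), not a reproduction of the AllVec or GloVe experiments, and the data, dimensions, and learning rates are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def loss(w):
    return float(((X @ w - y) ** 2).mean())

w_batch = np.zeros(d)   # full-batch: every update uses all examples
w_sgd = np.zeros(d)     # one-sample SGD: every update uses a single example
eta_batch, eta_sgd = 0.1, 0.02

for step in range(1, 201):
    grad = 2.0 * X.T @ (X @ w_batch - y) / n
    w_batch -= eta_batch * grad

    i = rng.integers(n)
    grad_i = 2.0 * X[i] * (X[i] @ w_sgd - y[i])
    w_sgd -= eta_sgd * grad_i

    if step % 40 == 0:
        print(f"step {step:3d}  batch loss {loss(w_batch):.4f}  sgd loss {loss(w_sgd):.4f}")
```

The batch iterate decreases the loss smoothly, whereas the one-sample iterate typically keeps fluctuating near its noise floor, which is the qualitative behavior visible in Figure 3(a).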
Figure 3(b) shows the relationship between the embedding size k and runtime on NewsIR. Although the analysis in Section 4.3 demonstrates that the time complexity of AllVec is O(4|S|k + 4|V |k2), the actual runtime shows a near linear relationship with k. This is because 4|V |k2/4|S|k = k/c, where c generally ranges from 1000 ∼6000 and k is set from 200 to 300 in practice. The above ratio explains the fact that 4|S|k dominates the complexity, which is linear 1860 Table 3: Results on the word similarity task. Corpus Text8 NewsIR MEN MC RW RG WSim WRel MEN MC RW RG WSim WRel SG .6868 .6776 .3336 .6904 .7082 .6539 .7293 .7328 .3705 .7184 .7176 .6147 SGA .6885 .6667 .3399 .7035 .7291 .6708 .7409 .7513 .3797 .7508 .7442 .6398 GloVe .4999 .3349 .2614 .3367 .5168 .5115 .5839 .5637 .2487 .6284 .6029 .5329 LexVec .6660 .6267 .2935 .6076 .7005 .6862 .7301 .8403 .3614 .8341 .7404 .6545 AllVec .6966 .6975 .3424 .6588 .7484 .7002 .7407 .7642 .4610 .7753 .7453 .6322 Wiki-sub Wiki-all SG .7532 .7943 .4250 .7555 .7627 .6563 .7564 .8083 .4311 .7678 .7662 .6485 SGA .7465 .7983 .4296 .7623 .7715 .6560 .7577 .7940 .4379 .7683 .7110 .6488 GloVe .6898 .6963 .3184 .7041 .6669 .5629 .7370 .7767 .3197 .7499 .7359 .6336 LexVec .7318 .7591 .4225 .7628 .7292 .6219 .7256 .8219 .4383 .7797 .7548 .6091 AllVec .7155 .8305 .4667 .7945 .7675 .6459 .7396 .7840 .4966 .7800 .7492 .6518 Table 4: Results on QVEC. Qvec Text8 NewsIR Wiki-sub Wiki-all SG .3999 .4182 .4280 .4306 SGA .4062 .4159 .4419 .4464 GloVe .3662 .3948 .4174 .4206 LexVec .4211 .4172 .4332 .4396 AllVec .4211 .4319 .4351 .4489 with k and |S|. We also compare the overall runtime of AllVec and SG on NewsIR and show the results in Table 5. As can be seen, the runtime of AllVec falls in the range of SG-3 and SG-7 in a single iteration, which confirms the theoretical analysis in Section 4.3. In contrast with SG, AllVec needs more iterations to converge. The reason is that each parameter in SG is updated many times during each iteration, although only one training example is used in each update. Despite this, the total run time of AllVec is still in a feasible range. Assuming the convergence is measured by the number of parameter updates, our AllVec yields a much faster convergence rate than the one-sample SG method. In practice, the runtime of our model in each iteration can be further reduced by increasing the number of parallel workers. Although baseline methods like SG and GloVe can also be parallelized, the stochastic gradient steps in these methods unnecessarily influence each other as there is no exact way to separate these updates for different workers. In other words, the parallelization of SGD is not well suited to a large number of workTable 5: Comparison of runtime. Model SI Iter Tot. SG-3 259s 15 65m SG-7 521s 15 131m SG-10 715s 15 179m AllVec 388s 50 322m SG-n represents n negative samples for skip-gram, SI represents the runtime for a single iteration, and Tot. denotes the total runtime. All models are of embedding size 200 and trained with 8 threads. ers. In contrast, the parameter updates in AllVec are completely independent of each other, therefore AllVec does not have the update collision issue. This means we can achieve the embarrassing parallelization by simply separating the updates by words; that is, letting different workers update the model parameters for disjoint sets of words. As such, AllVec can provide a near linear scaling without any approximation since there is no potential conflicts between updates. 
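A minimal sketch of why separating the updates by words is collision-free: each row's update reads only that row plus terms cached once per iteration, so updating disjoint vocabulary shards (one per worker) gives exactly the same result as a single sequential pass. The row_gradient below is a stand-in with the same data-access pattern as Eq. (11), not the exact AllVec gradient, and the loop over shards stands in for what would run on separate workers.

```python
import numpy as np

rng = np.random.default_rng(0)
V, k, eta = 40, 6, 0.1
U = rng.normal(scale=0.1, size=(V, k))
P_c = rng.normal(size=(k, k))
P_c = P_c @ P_c.T                 # symmetric, like the cached k x k term of Eq. (10)
q_c = rng.normal(size=k)          # stands in for the cached q^c vector

def row_gradient(u_row):
    # Placeholder for the per-word gradient: it reads only this word's row
    # plus quantities that were cached once, before the iteration started.
    return u_row @ P_c - q_c

# Sequential reference: update every row in one pass.
U_ref = U - eta * (U @ P_c - q_c)

# "Parallel" version: split the vocabulary into disjoint shards; each worker
# would own one shard and update only its own rows, so no collision is possible.
U_par = U.copy()
for shard in np.array_split(np.arange(V), 4):   # in practice, one worker per shard
    for w in shard:
        U_par[w] -= eta * row_gradient(U_par[w])

assert np.allclose(U_ref, U_par)
print("disjoint per-word updates match the single-pass update")
```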
6 Conclusion In this paper, we presented AllVec, an efficient batch learning based word embedding model that is capable to leverage all positive and negative training examples without any sampling and approximation. In contrast with models based on SGD and negative sampling, AllVec shows more stable convergence and better embedding quality by the all-sample optimization. Besides, both theoretical analysis and experiments demonstrate that AllVec achieves the same time complexity with the classic SGD models. In future, we will extend 1861 our proposed all-sample learning scheme to deep learning methods, which are more expressive than the shallow embedding model. Moreover, we will integrate prior knowledge, such as the words that are synonyms and antonyms, into the word embedding process. Lastly, we are interested in exploring the recent adversarial learning techniques to enhance the robustness of word embeddings. Acknowledgements. This research is supported by the National Research Foundation, Prime Minister’s Office, Singapore under its IRC@SG Funding Initiative. Joemon M.Jose and Xiangnan He are corresponding authors. References Melissa Ailem, Aghiles Salah, and Mohamed Nadif. 2017. Non-negative matrix factorization meets word embedding. In SIGIR, pages 1081–1084. Immanuel Bayer, Xiangnan He, Bhargav Kanagal, and Steffen Rendle. 2017. A generic coordinate descent framework for learning from implicit feedback. In WWW, pages 1341–1350. Yoshua Bengio and Jean-S´ebastien Sen´ecal. 2008. Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Transactions on Neural Networks, pages 713–722. Guy Blanc and Steffen Rendle. 2017. Adaptive sampled softmax with kernel based sampling. arXiv preprint arXiv:1712.00527. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606. Danushka Bollegala, Mohammed Alsuhaibani, Takanori Maehara, and Ken-ichi Kawarabayashi. 2016. Joint word representation learning using a corpus and a semantic lexicon. In AAAI, pages 2690–2696. Joseph K Bradley, Aapo Kyrola, Danny Bickson, and Carlos Guestrin. 2011. Parallel coordinate descent for l1-regularized loss minimization. arXiv preprint arXiv:1105.5379. Elia Bruni, Gemma Boleda, Marco Baroni, and NamKhanh Tran. 2012. Distributional semantics in technicolor. In ACL, volume 1, pages 136–145. Shaosheng Cao and Wei Lu. 2017. Improving word embeddings with convolutional feature learning and subword information. In AAAI, pages 3144–3151. Long Chen, Fajie Yuan, Joemon M Jose, and Weinan Zhang. 2018. Improving negative sampling for word representation using self-embedded features. In WSDM, pages 99–107. Wenyuan Dai, Gui-Rong Xue, Qiang Yang, and Yong Yu. 2007. Co-clustering based classification for outof-domain documents. In SIGKDD, pages 210–219. Manaal Faruqui and Chris Dyer. 2014. Community evaluation and exchange of word vectors at wordvectors. org. In ACL, pages 19–24. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In WWW, pages 406–414. Guibing Guo, SC Ouyang, and Fajie Yuan. 2018. Approximating word ranking and negative sampling for word embedding. In IJCAI. Zellig S Harris. 1954. Distributional structure. Word, 10(2-3):146–162. Xiangnan He, Hanwang Zhang, Min-Yen Kan, and TatSeng Chua. 2016. Fast matrix factorization for online recommendation with implicit feedback. 
In SIGIR, pages 549–558. Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In ACL, pages 873–882. Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Herv´e J´egou, and Tomas Mikolov. 2016. Fasttext. zip: Compressing text classification models. arXiv preprint arXiv:1612.03651. Yehuda Koren. 2008. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In SIGKDD, pages 426–434. Abhishek Kumar and Jun Araki. 2016. Incorporating relational knowledge into word representations using subspace regularization. In ACL, volume 2, pages 506–511. Wenqiang Lei, Xuancong Wang, Meichun Liu, Ilija Ilievski, Xiangnan He, and Min-Yen Kan. 2017. Swim: A simple word interaction model for implicit discourse relation recognition. In IJCAI, pages 4026–4032. Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In NIPS, pages 2177–2185. Huayu Li, Richang Hong, Defu Lian, Zhiang Wu, Meng Wang, and Yong Ge. 2016. A relaxed ranking-based factor model for recommender system from implicit feedback. In IJCAI, pages 1683– 1689. Yitan Li, Linli Xu, Fei Tian, Liang Jiang, Xiaowei Zhong, and Enhong Chen. 2015. Word embedding revisited: A new representation learning and explicit matrix factorization perspective. In IJCAI, pages 3650–3656. 1862 Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015a. Learning semantic word embeddings based on ordinal knowledge constraints. In ACL, volume 1, pages 1501–1511. Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2015b. Topical word embeddings. In AAAI, pages 2418–2424. Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL, pages 104–113. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119. George A Miller and Walter G Charles. 1991. Contextual correlates of semantic similarity. Language and cognitive processes, 6(1):1–28. Yilin Niu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Improved word representation learning with sememes. In ACL, volume 1, pages 2049– 2058. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Navid Rekabsaz, Mihai Lupu, Allan Hanbury, and Hamed Zamani. 2017. Word embedding causes topic shifting; exploit global context! In SIGIR, pages 1105–1108. Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633. Sebastian Ruder. 2016. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747. Alexandre Salle, Marco Idiart, and Aline Villavicencio. 2016a. Enhancing the lexvec distributed word representation model using positional contexts and external memory. arXiv preprint arXiv:1606.01283. Alexandre Salle, Marco Idiart, and Aline Villavicencio. 2016b. Matrix factorization using window sampling and negative sampling for improved word representations. arXiv preprint arXiv:1606.00819. Fabrizio Sebastiani. 2002. Machine learning in automated text categorization. ACM computing surveys (CSUR), 34(1):1–47. 
Bei Shi, Wai Lam, Shoaib Jameel, Steven Schockaert, and Kwun Ping Lai. 2017. Jointly learning word embeddings and latent topics. In SIGIR, pages 375– 384. Karl Stratos, Michael Collins, and Daniel Hsu. 2015. Model-based word embeddings from decompositions of count matrices. In ACL, volume 1, pages 1282–1291. Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment. In EMNLP, pages 2049–2054. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In ACL, pages 384– 394. Fajie Yuan, Guibing Guo, Joemon M Jose, Long Chen, Haitao Yu, and Weinan Zhang. 2016. Lambdafm: learning optimal ranking with factorization machines using lambda surrogates. In CIKM, pages 227–236. ACM. Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1863–1873 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1863 Backpropagating through Structured Argmax using a SPIGOT Hao Peng♦ Sam Thomson♣ Noah A. Smith♦ ♦Paul G. Allen School of Computer Science & Engineering, University of Washington ♣School of Computer Science, Carnegie Mellon University {hapeng,nasmith}@cs.washington.edu, [email protected] Abstract We introduce the structured projection of intermediate gradients optimization technique (SPIGOT), a new method for backpropagating through neural networks that include hard-decision structured predictions (e.g., parsing) in intermediate layers. SPIGOT requires no marginal inference, unlike structured attention networks (Kim et al., 2017) and some reinforcement learning-inspired solutions (Yogatama et al., 2017). Like socalled straight-through estimators (Hinton, 2012), SPIGOT defines gradient-like quantities associated with intermediate nondifferentiable operations, allowing backpropagation before and after them; SPIGOT’s proxy aims to ensure that, after a parameter update, the intermediate structure will remain well-formed. We experiment on two structured NLP pipelines: syntactic-then-semantic dependency parsing, and semantic parsing followed by sentiment classification. We show that training with SPIGOT leads to a larger improvement on the downstream task than a modularly-trained pipeline, the straight-through estimator, and structured attention, reaching a new state of the art on semantic dependency parsing. 1 Introduction Learning methods for natural language processing are increasingly dominated by end-to-end differentiable functions that can be trained using gradient-based optimization. Yet traditional NLP often assumed modular stages of processing that formed a pipeline; e.g., text was tokenized, then tagged with parts of speech, then parsed into a phrase-structure or dependency tree, then semantically analyzed. Pipelines, which make “hard” (i.e., discrete) decisions at each stage, appear to be incompatible with neural learning, leading many researchers to abandon earlier-stage processing. Inspired by findings that continue to see benefit from various kinds of linguistic or domain-specific preprocessing (He et al., 2017; Oepen et al., 2017; Ji and Smith, 2017), we argue that pipelines can be treated as layers in neural architectures for NLP tasks. Several solutions are readily available: • Reinforcement learning (most notably the REINFORCE algorithm; Williams, 1992), and structured attention (SA; Kim et al., 2017). These methods replace argmax with a sampling or marginalization operation. We note two potential downsides of these approaches: (i) not all argmax-able operations have corresponding sampling or marginalization methods that are efficient, and (ii) inspection of intermediate outputs, which could benefit error analysis and system improvement, is more straightforward for hard decisions than for posteriors. • The straight-through estimator (STE; Hinton, 2012) treats discrete decisions as if they were differentiable and simply passes through gradients. While fast and surprisingly effective, it ignores constraints on the argmax problem, such as the requirement that every word has exactly one syntactic parent. We will find, experimentally, that the quality of intermediate representations degrades substantially under STE. 
This paper introduces a new method, the structured projection of intermediate gradients optimization technique (SPIGOT; §2), which defines a proxy for the gradient of a loss function with respect to the input to argmax. Unlike STE’s gradient proxy, SPIGOT aims to respect the constraints 1864 in the argmax problem. SPIGOT can be applied with any intermediate layer that is expressible as a constrained maximization problem, and whose feasible set can be projected onto. We show empirically that SPIGOT works even when the maximization and the projection are done approximately. We offer two concrete architectures that employ structured argmax as an intermediate layer: semantic parsing with syntactic parsing in the middle, and sentiment analysis with semantic parsing in the middle (§3). These architectures are trained using a joint objective, with one part using data for the intermediate task, and the other using data for the end task. The datasets are not assumed to overlap at all, but the parameters for the intermediate task are affected by both parts of the training data. Our experiments (§4) show that our architecture improves over a state-of-the-art semantic dependency parser, and that SPIGOT offers stronger performance than a pipeline, SA, and STE. On sentiment classification, we show that semantic parsing offers improvement over a BiLSTM, more so with SPIGOT than with alternatives. Our analysis considers how the behavior of the intermediate parser is affected by the end task (§5). Our code is open-source and available at https:// github.com/Noahs-ARK/SPIGOT. 2 Method Our aim is to allow a (structured) argmax layer in a neural network to be treated almost like any other differentiable function. This would allow us to place, for example, a syntactic parser in the middle of a neural network, so that the forward calculation simply calls the parser and passes the parse tree to the next layer, which might derive syntactic features for the next stage of processing. The challenge is in the backward computation, which is key to learning with standard gradientbased methods. When its output is discrete as we assume here, argmax is a piecewise constant function. At every point, its gradient is either zero or undefined. So instead of using the true gradient, we will introduce a proxy for the gradient of the loss function with respect to the inputs to argmax, allowing backpropagation to proceed through the argmax layer. Our proxy is designed as an improvement to earlier methods (discussed below) that completely ignore constraints on the argmax operation. It accomplishes this through a projection of the gradients. We first lay out notation, and then briefly review max-decoding and its relaxation (§2.1). We define SPIGOT in §2.2, and show how to use it to backpropagate through NLP pipelines in §2.3. Notation. Our discussion centers around two tasks: a structured intermediate task followed by an end task, where the latter considers the outputs of the former (e.g., syntactic-then-semantic parsing). Inputs are denoted as x, and end task outputs as y. We use z to denote intermediate structures derived from x. We will often refer to the intermediate task as “decoding”, in the structured prediction sense. It seeks an output ˆz = argmaxz∈Z S from the feasible set Z, maximizing a (learned, parameterized) scoring function S for the structured intermediate task. L denotes the loss of the end task, which may or may not also involve structured predictions. 
We use ∆k−1 = {p ∈Rk | 1⊤p = 1, p ≥0} to denote the (k −1)-dimensional simplex. We denote the domain of binary variables as B = {0, 1}, and the unit interval as U = [0, 1]. By projection of a vector v onto a set A, we mean the closest point in A to v, measured by Euclidean distance: projA(v) = argminv′∈A ∥v′ −v∥2. 2.1 Relaxed Decoding Decoding problems are typically decomposed into a collection of “parts”, such as arcs in a dependency tree or graph. In such a setup, each element of z, zi, corresponds to one possible part, and zi takes a boolean value to indicate whether the part is included in the output structure. The scoring function S is assumed to decompose into a vector s(x) of part-local, input-specific scores: ˆz = argmax z∈Z S(x, z) = argmax z∈Z z⊤s(x) (1) In the following, we drop s’s dependence on x for clarity. In many NLP problems, the output space Z can be specified by linear constraints (Roth and Yih, 2004): A z ψ  ≤b, (2) where ψ are auxiliary variables (also scoped by argmax), together with integer constraints (typically, each zi ∈B). 1865 ˆz ˜z ˆz −⌘rˆzL −rsL ˆz ˆz −⌘rˆzL −rsL Figure 1: The original feasible set Z (red vertices), is relaxed into a convex polytope P (the area encompassed by blue edges). Left: making a gradient update to ˆz makes it step outside the polytope, and it is projected back to P, resulting in the projected point ˜z. ∇sL is then along the edge. Right: updating ˆz keeps it within P, and thus ∇sL = η∇ˆzL. The problem in Equation 1 can be NP-complete in general, so the {0, 1} constraints are often relaxed to [0, 1] to make decoding tractable (Martins et al., 2009). Then the discrete combinatorial problem over Z is transformed into the optimization of a linear objective over a convex polytope P={p ∈Rd|Ap≤b}, which is solvable in polynomial time (Bertsimas and Tsitsiklis, 1997). This is not necessary in some cases, where the argmax can be solved exactly with dynamic programming. 2.2 From STE to SPIGOT We now view structured argmax as an activation function that takes a vector of input-specific partscores s and outputs a solution ˆz. For backpropagation, to calculate gradients for parameters of s, the chain rule defines: ∇sL = J ∇ˆzL, (3) where the Jacobian matrix J = ∂ˆz ∂s contains the derivative of each element of ˆz with respect to each element of s. Unfortunately, argmax is a piecewise constant function, so its Jacobian is either zero (almost everywhere) or undefined (in the case of ties). One solution, taken in structured attention, is to replace the argmax with marginal inference and a softmax function, so that ˆz encodes probability distributions over parts (Kim et al., 2017; Liu and Lapata, 2018). As discussed in §1, there are two reasons to avoid this modification. Softmax can only be used when marginal inference is feasible, by sum-product algorithms for example (Eisner, 2016; Friesen and Domingos, 2016); in general marginal inference can be #P-complete. Further, a soft intermediate layer will be less amenable to inspection by anyone wishing to understand and improve the model. In another line of work, argmax is augmented with a strongly-convex penalty on the solutions (Martins and Astudillo, 2016; Amos and Kolter, 2017; Niculae and Blondel, 2017; Niculae et al., 2018; Mensch and Blondel, 2018). However, their approaches require solving a relaxation even when exact decoding is tractable. Also, the penalty will bias the solutions found by the decoder, which may be an undesirable conflation of computational and modeling concerns. 
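To make the difficulty with Equation 3 concrete, the short probe below estimates one column of the Jacobian by finite differences, for a hard argmax and for the softmax relaxation that structured attention builds on. The unstructured three-way toy decision is an assumption made for illustration; the structured case behaves the same way, piece by piece.

```python
import numpy as np

def argmax_onehot(s):
    # The hard decision: a one-hot indicator of the best-scoring choice.
    z = np.zeros_like(s)
    z[np.argmax(s)] = 1.0
    return z

def softmax(s):
    # The smooth relaxation (here unstructured, unlike marginal inference).
    e = np.exp(s - s.max())
    return e / e.sum()

s = np.array([1.0, 2.0, 0.5])
eps = 1e-4
bump = np.array([eps, 0.0, 0.0])
for name, f in [("argmax", argmax_onehot), ("softmax", softmax)]:
    # Finite-difference estimate of d f(s) / d s[0].
    col = (f(s + bump) - f(s - bump)) / (2 * eps)
    print(name, "d/ds[0] ~", np.round(col, 4))
```

The argmax column is exactly zero, which is why the relaxations above replace the hard decision with something smooth, and why the straight-through and SPIGOT proxies described next replace the Jacobian itself.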
A simpler solution is the STE method (Hinton, 2012), which replaces the Jacobian matrix in Equation 3 by the identity matrix. This method has been demonstrated to work well when used to “backpropagate” through hard threshold functions (Bengio et al., 2013; Friesen and Domingos, 2018) and categorical random variables (Jang et al., 2016; Choi et al., 2017). Consider for a moment what we would do if ˆz were a vector of parameters, rather than intermediate predictions. In this case, we are seeking points in Z that minimize L; denote that set of minimizers by Z∗. Given ∇ˆzL and step size η, we would update ˆz to be ˆz −η∇ˆzL. This update, however, might not return a value in the feasible set Z, or even (if we are using a linear relaxation) the relaxed set P. SPIGOT therefore introduces a projection step that aims to keep the “updated” ˆz in the feasible set. Of course, we do not directly update ˆz; we continue backpropagation through s and onward to the parameters. But the projection step nonetheless alters the parameter updates in the way that our proxy for “∇sL” is defined. The procedure is defined as follows: ˆp = ˆz −η∇ˆzL, (4a) ˜z = projP(ˆp), (4b) ∇sL ≜ˆz −˜z. (4c) First, the method makes an “update” to ˆz as if it contained parameters (Equation 4a), letting ˆp denote the new value. Next, ˆp is projected back onto the (relaxed) feasible set (Equation 4b), yielding a feasible new value ˜z. Finally, the gradients with respect to s are computed by Equation 4c. Due to the convexity of P, the projected point ˜z will always be unique, and is guaranteed to be no farther than ˆp from any point in Z∗(Luenberger and Ye, 2015).1 Compared to STE, SPIGOT in1Note that this property follows from P’s convexity, and we do not assume the convexity of L. 1866 volves a projection and limits ∇sL to a smaller space to satisfy constraints. See Figure 1 for an illustration. When efficient exact solutions (such as dynamic programming) are available, they can be used. Yet, we note that SPIGOT does not assume the argmax operation is solved exactly. 2.3 Backpropagation through Pipelines Using SPIGOT, we now devise an algorithm to “backpropagate” through NLP pipelines. In these pipelines, an intermediate task’s output is fed into an end task for use as features. The parameters of the complete model are divided into two parts: denote the parameters of the intermediate task model by φ (used to calculate s), and those in the end task model as θ.2 As introduced earlier, the end-task loss function to be minimized is L, which depends on both φ and θ. Algorithm 1 describes the forward and backward computations. It takes an end task training pair ⟨x, y⟩, along with the intermediate task’s feasible set Z, which is determined by x. It first runs the intermediate model and decodes to get intermediate structure ˆz, just as in a standard pipeline. Then forward propagation is continued into the end-task model to compute loss L, using ˆz to define input features. Backpropagation in the endtask model computes ∇θL and ∇ˆzL, and ∇sL is then constructed using Equations 4. Backpropagation then continues into the intermediate model, computing ∇φL. Due to its flexibility, SPIGOT is applicable to many training scenarios. When there is no ⟨x, z⟩ training data for the intermediate task, SPIGOT can be used to induce latent structures for the end-task (Yogatama et al., 2017; Kim et al., 2017; Choi et al., 2017, inter alia). 
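Returning to the backward rule in Equations 4a–4c, in code it amounts to only a few lines. The sketch below is a hedged illustration: the function names and the `project_onto_polytope` callable are assumptions, and the projection itself is task-specific (e.g., the simplex projection above, or the approximate projections of §3). The joint-training setting is taken up next.

```python
import numpy as np

def spigot_grad_wrt_scores(z_hat, grad_z, eta, project_onto_polytope):
    """Proxy gradient of the end-task loss w.r.t. the part scores s (Eq. 4a-4c).

    z_hat                 -- argmax solution from the forward pass (a 0/1 vector)
    grad_z                -- dL/dz_hat, from backprop through the end-task model
    eta                   -- step size for the pseudo-update of z_hat
    project_onto_polytope -- callable computing proj_P(.) for the relaxed set P
    """
    p_hat = z_hat - eta * grad_z               # Eq. 4a: treat z_hat like parameters
    z_tilde = project_onto_polytope(p_hat)     # Eq. 4b: project back onto P
    return z_hat - z_tilde                     # Eq. 4c: proxy for dL/ds

def ste_grad_wrt_scores(z_hat, grad_z):
    """Straight-through estimator: identity Jacobian, constraints ignored."""
    return grad_z
```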
When intermediate-task training data is available, one can use SPIGOT to adopt joint learning by minimizing an interpolation of L (on end-task data ⟨x, y⟩) and an intermediate-task loss function eL (on intermediate task data ⟨x, z⟩). This is the setting in our experiments; note that we do not assume any overlap in the training examples for the two tasks. 3 Solving the Projections In this section we discuss how to compute approximate projections for the two intermediate tasks 2Nothing prohibits tying across pre-argmax parameters and post-argmax parameters; this separation is notationally convenient but not at all necessary. Algorithm 1 Forward and backward computation with SPIGOT. 1: procedure SPIGOT(x, y, Z) 2: Construct A, b such that Z = {p ∈Zd | Ap ≤b} 3: P ←{p ∈Rd | Ap ≤b} ▷Relaxation 4: Forwardprop and compute sφ(x) 5: ˆz ←argmaxz∈Z z⊤sφ(x) ▷Intermediate decoding 6: Forwardprop and compute L given x, y, and ˆz 7: Backprop and compute ∇θL and ∇ˆzL 8: ˜z ←projP(ˆz −η∇ˆzL) ▷Projection 9: ∇sL ←ˆz −˜z 10: Backprop and compute ∇φL 11: end procedure considered in this work, arc-factored unlabeled dependency parsing and first-order semantic dependency parsing. In early experiments we observe that for both tasks, projecting with respect to all constraints of their original formulations using a generic quadratic program solver was prohibitively slow. Therefore, we construct relaxed polytopes by considering only a subset of the constraints.3 The projection then decomposes into a series of singly constrained quadratic programs (QP), each of which can be efficiently solved in linear time. The two approximate projections discussed here are used in backpropagation only. In the forward pass, we solve the decoding problem using the models’ original decoding algorithms. Arc-factored unlabeled dependency parsing. For unlabeled dependency trees, we impose [0, 1] constraints and single-headedness constraints.4 Formally, given a length-n input sentence, excluding self-loops, an arc-factored parser considers d = n(n −1) candidate arcs. Let i→j denote an arc from the ith token to the jth, and σ(i→j) denote its index. We construct the relaxed feasible set by: PDEP =   p ∈Ud X i̸=j pσ(i→j) = 1, ∀j   , (5) i.e., we consider each token j individually, and force single-headedness by constraining the number of arcs incoming to j to sum to 1. Algorithm 2 summarizes the procedure to project onto PDEP. 3A parallel work introduces an active-set algorithm to solve the same class of quadratic programs (Niculae et al., 2018). It might be an efficient approach to solve the projections in Equation 4b, which we leave to future work. 4 It requires O(n2) auxiliary variables and O(n3) additional constraints to ensure well-formed tree structures (Martins et al., 2013). 1867 Line 3 forms a singly constrained QP, and can be solved in O(n) time (Brucker, 1984). Algorithm 2 Projection onto the relaxed polytope PDEP for dependency tree structures. Let bold σ(·→j) denote the index set of arcs incoming to j. For a vector v, we use vσ(·→j) to denote vector [vk]k∈σ(·→j). 1: procedure DEPPROJ(ˆp) 2: for j = 1, 2, . . . , n do 3: ˜zσ(·→j) ←proj∆n−2 ˆpσ(·→j)  4: end for 5: return ˜z 6: end procedure First-order semantic dependency parsing. Semantic dependency parsing uses labeled bilexical dependencies to represent sentence-level semantics (Oepen et al., 2014, 2015, 2016). Each dependency is represented by a labeled directed arc from a head token to a modifier token, where the arc label encodes broadly applicable semantic relations. 
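Before turning to the semantic case in detail, Algorithm 2 can be made concrete as follows. The sketch stores arcs as an n × n matrix rather than the flat indexing σ(i→j) used above, and it reuses the illustrative `project_onto_simplex` from §2 in place of the O(n) solver of Brucker (1984); both choices are assumptions for exposition.

```python
import numpy as np

def dep_proj(p_hat):
    """Approximate projection onto P_DEP (Algorithm 2).

    p_hat is an (n x n) matrix of relaxed arc values, with p_hat[i, j]
    standing for the arc i -> j; the diagonal (self-loops) is ignored,
    as in Equation 5.  Each column j is projected onto the simplex,
    enforcing the [0, 1] bounds and single-headedness sum_i p[i, j] = 1.
    """
    n = p_hat.shape[0]
    z_tilde = np.zeros_like(p_hat)
    for j in range(n):
        incoming = [i for i in range(n) if i != j]          # candidate heads of token j
        # project_onto_simplex is the sketch from Section 2
        z_tilde[incoming, j] = project_onto_simplex(p_hat[incoming, j])
    return z_tilde
```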
Figure 2 diagrams a semantic graph from the DELPH-IN MRS-derived dependencies (DM), together with a syntactic tree. We use a state-of-the-art semantic dependency parser (Peng et al., 2017) that considers three types of parts: heads, unlabeled arcs, and labeled arcs. Let σ(i ℓ→j) denote the index of the arc from i to j with semantic role ℓ. In addition to [0, 1] constraints, we constrain that the predictions for labeled arcs sum to the prediction of their associated unlabeled arc: PSDP ( p ∈Ud X ℓ pσ(i ℓ→j) = pσ(i→j), ∀i ̸= j ) . (6) This ensures that exactly one label is predicted if and only if its arc is present. The projection onto PSDP can be solved similarly to Algorithm 2. We drop the determinism constraint imposed by Peng et al. (2017) in the backward computation. 4 Experiments We empirically evaluate our method with two sets of experiments: using syntactic tree structures in semantic dependency parsing, and using semantic dependency graphs in sentiment classification. 4.1 Syntactic-then-Semantic Parsing In this experiment we consider an intermediate syntactic parsing task, followed by seman… became dismayed at poss arg1 arg2 ’s G-2 connections arrested traffickers to drug arg2 compound root arg2 arg1 arg2 Figure 2: A development instance annotated with both gold DM semantic dependency graph (red arcs on the top), and gold syntactic dependency tree (blue arcs at the bottom). A pretrained syntactic parser predicts the same tree as the gold; the semantic parser backpropagates into the intermediate syntactic parser, and changes the dashed blue arcs into dashed red arcs (§5). tic dependency parsing as the end task. We first briefly review the neural network architectures for the two models (§4.1.1), and then introduce the datasets (§4.1.2) and baselines (§4.1.3). 4.1.1 Architectures Syntactic dependency parser. For intermediate syntactic dependencies, we use the unlabeled arc-factored parser of Kiperwasser and Goldberg (2016). It uses bidirectional LSTMs (BiLSTM) to encode the input, followed by a multilayerperceptron (MLP) to score each potential dependency. One notable modification is that we replace their use of Chu-Liu/Edmonds’ algorithm (Chu and Liu, 1965; Edmonds, 1967) with the Eisner algorithm (Eisner, 1996, 2000), since our dataset is in English and mostly projective. Semantic dependency parser. We use the basic model of Peng et al. (2017) (denoted as NEURBOPARSER) as the end model. It is a first-order parser, and uses local factors for heads, unlabeled arcs, and labeled arcs. NEURBOPARSER does not use syntax. It first encodes an input sentence with a two-layer BiLSTM, and then computes part scores with two-layer tanh-MLPs. Inference is conducted with AD3 (Martins et al., 2015). To add syntactic features to NEURBOPARSER, we concatenate a token’s contextualized representation to that of its syntactic head, predicted by the intermediate parser. Formally, given length-n input sentence, we first run a BiLSTM. We use the concatenation of the two hidden representations hj = [−→ h j; ←− h j] at each position j as the contextualized token representations. We then concatenate 1868 hj with the representation of its head hHEAD(j) by ehj = [hj; hHEAD(j)] =  hj; X i̸=j ˆzσ(i→j) hi  , (7) where ˆz ∈Bn(n−1) is a binary encoding of the tree structure predicted by by the intermediate parser. We then use ehj anywhere hj would have been used in NEURBOPARSER. In backpropagation, we compute ∇ˆzL with an automatic differentiation toolkit (DyNet; Neubig et al., 2017). 
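A minimal sketch of the feature concatenation in Equation 7 follows, assuming the predicted tree is given as an n × n binary arc matrix (an illustrative encoding; the paper indexes arcs as a flat vector via σ(i→j), and the function name is ours).

```python
import numpy as np

def head_augmented_features(H, z_arcs):
    """Eq. 7: concatenate each token's BiLSTM state with its predicted head's state.

    H      -- (n x d) matrix of contextualized token representations h_j
    z_arcs -- (n x n) binary matrix, z_arcs[i, j] = 1 iff arc i -> j is predicted
              (at most one head per token, zero diagonal)
    Returns an (n x 2d) matrix whose j-th row is [h_j ; h_HEAD(j)].
    """
    head_reps = z_arcs.T @ H    # row j is sum_i z[i, j] * h_i, i.e. token j's head
    return np.concatenate([H, head_reps], axis=1)
```

In training, ẑ enters the end model only through this matrix product, so the ∇ẑL required by SPIGOT is obtained by ordinary backpropagation through it.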
We note that this approach can be generalized to convolutional neural networks over graphs (Mou et al., 2015; Duvenaud et al., 2015; Kipf and Welling, 2017, inter alia), recurrent neural networks along paths (Xu et al., 2015; Roth and Lapata, 2016, inter alia) or dependency trees (Tai et al., 2015). We choose to use concatenations to control the model’s complexity, and thus to better understand which parts of the model work. We refer the readers to Kiperwasser and Goldberg (2016) and Peng et al. (2017) for further details of the parsing models. Training procedure. Following previous work, we minimize structured hinge loss (Tsochantaridis et al., 2004) for both models. We jointly train both models from scratch, by randomly sampling an instance from the union of their training data at each step. In order to isolate the effect of backpropagation, we do not share any parameters between the two models.5 Implementation details are summarized in the supplementary materials. 4.1.2 Datasets • For semantic dependencies, we use the English dataset from SemEval 2015 Task 18 (Oepen et al., 2015). Among the three formalisms provided by the shared task, we consider DELPH-IN MRS-derived dependencies (DM) and Prague Semantic Dependencies (PSD).6 It includes §00–19 of the WSJ corpus as training data, §20 and §21 for development and in-domain test data, resulting in a 33,961/1,692/1,410 train/dev./test split, and 5 Parameter sharing has proved successful in many related tasks (Collobert and Weston, 2008; Søgaard and Goldberg, 2016; Ammar et al., 2016; Swayamdipta et al., 2016, 2017, inter alia), and could be easily combined with our approach. 6We drop the third (PAS) because its structure is highly predictable from parts-of-speech, making it less interesting. DM PSD Model UF LF UF LF NEURBOPARSER – 89.4 – 77.6 FREDA3 – 90.4 – 78.5 PIPELINE 91.8 90.8 88.4 78.1 SA 91.6 90.6 87.9 78.1 STE 92.0 91.1 88.9 78.9 SPIGOT 92.4 91.6 88.6 78.9 (a) F1 on in-domain test set. DM PSD Model UF LF UF LF NEURBOPARSER – 84.5 – 75.3 FREDA3 – 85.3 – 76.4 PIPELINE 87.4 85.8 85.5 75.6 SA 87.3 85.6 84.9 75.9 STE 87.7 86.4 85.8 76.6 SPIGOT 87.9 86.7 85.5 77.1 (b) F1 on out-of-domain test set. Table 1: Semantic dependency parsing performance in both unlabeled (UF) and labeled (LF) F1 scores. Bold font indicates the best performance. Peng et al. (2017) does not report UF. 1,849 out-of-domain test instances from the Brown corpus.7 • For syntactic dependencies, we use the Stanford Dependency (de Marneffe and Manning, 2008) conversion of the the Penn Treebank WSJ portion (Marcus et al., 1993). To avoid data leak, we depart from standard split and use §20 and §21 as development and test data, and the remaining sections as training data. The number of training/dev./test instances is 40,265/2,012/1,671. 4.1.3 Baselines We compare to the following baselines: • A pipelined system (PIPELINE). The pretrained parser achieves 92.9 test unlabeled attachment score (UAS).8 7The organizers remove, e.g., instances with cyclic graphs, and thus only a subset of the WSJ corpus is included. See Oepen et al. (2015) for details. 8 Note that this number is not comparable to the parsing literature due to the different split. As a sanity check, we found in preliminary experiments that the same parser archi1869 • Structured attention networks (SA; Kim et al., 2017). We use the inside-outside algorithm (Baker, 1979) to populate z with arcs’ marginal probabilities, use log-loss as the objective in training the intermediate parser. 
• The straight-through estimator (STE; Hinton, 2012), introduced in §2.2. 4.1.4 Empirical Results Table 1 compares the semantic dependency parsing performance of SPIGOT to all five baselines. FREDA3 (Peng et al., 2017) is a state-of-the-art variant of NEURBOPARSER that is trained using multitask learning to jointly predict three different semantic dependency graph formalisms. Like the basic NEURBOPARSER model that we build from, FREDA3 does not use any syntax. Strong DM performance is achieved in a more recent work by using joint learning and an ensemble (Peng et al., 2018), which is beyond fair comparisons to the models discussed here. We found that using syntactic information improves semantic parsing performance: using pipelined syntactic head features brings 0.5– 1.4% absolute labeled F1 improvement to NEURBOPARSER. Such improvements are smaller compared to previous works, where dependency path and syntactic relation features are included (Almeida and Martins, 2015; Ribeyre et al., 2015; Zhang et al., 2016), indicating the potential to get better performance by using more syntactic information, which we leave to future work. Both STE and SPIGOT use hard syntactic features. By allowing backpropation into the intermediate syntactic parser, they both consistently outperform PIPELINE. On the other hand, when marginal syntactic tree structures are used, SA outperforms PIPELINE only on the out-of-domain PSD test set, and improvements under other cases are not observed. Compared to STE, SPIGOT outperforms STE on DM by more than 0.3% absolute labeled F1, both in-domain and out-of-domain. For PSD, SPIGOT achieves similar performance to STE on in-domain test set, but has a 0.5% absolute labeled F1 improvement on out-of-domain data, where syntactic parsing is less accurate. tecture achieves 93.5 UAS when trained and evaluated with the standard split, close to the results reported by Kiperwasser and Goldberg (2016). 4.2 Semantic Dependencies for Sentiment Classification Our second experiment uses semantic dependency graphs to improve sentiment classification performance. We are not aware of any efficient algorithm that solves marginal inference for semantic dependency graphs under determinism constraints, so we do not include a comparison to SA. 4.2.1 Architectures Here we use NEURBOPARSER as the intermediate model, as described in §4.1.1, but with no syntactic enhancements. Sentiment classifier. We first introduce a baseline that does not use any structural information. It learns a one-layer BiLSTM to encode the input sentence, and then feeds the sum of all hidden states into a two-layer ReLU-MLP. To use semantic dependency features, we concatenate a word’s BiLSTM-encoded representation to the averaged representation of its heads, together with the corresponding semantic roles, similarly to that in Equation 7.9 Then the concatenation is fed into an affine transformation followed by a ReLU activation. The rest of the model is kept the same as the BiLSTM baseline. Training procedure. We use structured hinge loss to train the semantic dependency parser, and log-loss for the sentiment classifier. Due to the discrepancy in the training data size of the two tasks (33K vs. 7K), we pre-train a semantic dependency parser, and then adopt joint training together with the classifier. In the joint training stage, we randomly sample 20% of the semantic dependency training instances each epoch. Implementations are detailed in the supplementary materials. 
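As a concrete, partly assumed rendering of the feature construction in §4.2.1: each token's representation is concatenated with the average of its semantic heads' representations and of the corresponding role-label embeddings. The arc-triple format, the role-embedding matrix, and the exact way roles are folded in are illustrative assumptions, since the text does not pin them down.

```python
import numpy as np

def semantic_head_features(H, arcs, role_embed):
    """Concatenate token states with averaged head states and role embeddings.

    H          -- (n x d) BiLSTM token representations
    arcs       -- list of (head, modifier, label_id) triples from the predicted
                  semantic dependency graph (a token may have several heads)
    role_embed -- (num_labels x r) embedding matrix for semantic role labels
    Returns an (n x (2d + r)) feature matrix.
    """
    n, d = H.shape
    r = role_embed.shape[1]
    head_sum = np.zeros((n, d))
    role_sum = np.zeros((n, r))
    counts = np.zeros(n)
    for head, mod, label in arcs:
        head_sum[mod] += H[head]
        role_sum[mod] += role_embed[label]
        counts[mod] += 1
    denom = np.maximum(counts, 1.0)[:, None]          # avoid dividing by zero
    return np.concatenate([H, head_sum / denom, role_sum / denom], axis=1)
```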
4.2.2 Datasets For semantic dependencies, we use the DM dataset introduced in §4.1.2. We consider a binary classification task using the Stanford Sentiment Treebank (Socher et al., 2013). It consists of roughly 10K movie review sentences from Rotten Tomatoes. The full dataset includes a rating on a scale from 1 to 5 for each constituent (including the full sentences), resulting in more than 200K instances. Following previous work (Iyyer et al., 2015), we only use full-sentence 9In a well-formed semantic dependency graph, a token may have multiple heads. Therefore we use average instead of the sum in Equation 7. 1870 Model Accuracy (%) BILSTM 84.8 PIPELINE 85.7 STE 85.4 SPIGOT 86.3 Table 2: Test accuracy of sentiment classification on Stanford Sentiment Treebank. Bold font indicates the best performance. instances, with neutral instances excluded (3s) and the remaining four rating levels converted to binary “positive” or “negative” labels. This results in a 6,920/872/1,821 train/dev./test split. 4.2.3 Empirical Results Table 2 compares our SPIGOT method to three baselines. Pipelined semantic dependency predictions brings 0.9% absolute improvement in classification accuracy, and SPIGOT outperforms all baselines. In this task STE achieves slightly worse performance than a fixed pre-trained PIPELINE. 5 Analysis We examine here how the intermediate model is affected by the end-task training signal. Is the endtask signal able to “overrule” intermediate predictions? We use the syntactic-then-semantic parsing model (§4.1) as a case study. Table 3 compares a pipelined system to one jointly trained using SPIGOT. We consider the development set instances where both syntactic and semantic annotations are available, and partition them based on whether the two systems’ syntactic predictions agree (SAME), or not (DIFF). The second group includes sentences with much lower syntactic parsing accuracy (91.3 vs. 97.4 UAS), and SPIGOT further reduces this to 89.6. Even though these changes hurt syntactic parsing accuracy, they lead to a 1.1% absolute gain in labeled F1 for semantic parsing. Furthermore, SPIGOT has an overall less detrimental effect on the intermediate parser than STE: using SPIGOT, intermediate dev. parsing UAS drops to 92.5 from the 92.9 pipelined performance, while STE reduces it to 91.8. We then take a detailed look and categorize the changes in intermediate trees by their correlations with the semantic graphs. Specifically, when a modifier m’s head is changed from h to h′ in the Split # Sent. Model UAS DM SAME 1011 PIPELINE 97.4 94.0 SPIGOT 97.4 94.3 DIFF 681 PIPELINE 91.3 88.1 SPIGOT 89.6 89.2 Table 3: Syntactic parsing performance (in unlabeled attachment score, UAS) and DM semantic parsing performance (in labeled F1) on different groups of the development data. Both systems predict the same syntactic parses for instances from SAME, and they disagree on instances from DIFF (§5). tree, we consider three cases: (a) h′ is a head of m in the semantic graph; (b) h′ is a modifier of m in the semantic graph; (c) h is the modifier of m in the semantic graph. The first two reflect modifications to the syntactic parse that rearrange semantically linked words to be neighbors. Under (c), the semantic parser removes a syntactic dependency that reverses the direction of a semantic dependency. These cases account for 17.6%, 10.9%, and 12.8%, respectively (41.2% combined) of the total changes. 
Making these changes, of course, is complicated, since they often require other modifications to maintain well-formedness of the tree. Figure 2 gives an example. 6 Related Work Joint learning in NLP pipelines. To avoid cascading errors, much effort has been devoted to joint decoding in NLP pipelines (Habash and Rambow, 2005; Cohen and Smith, 2007; Goldberg and Tsarfaty, 2008; Lewis et al., 2015; Zhang et al., 2015, inter alia). However, joint inference can sometimes be prohibitively expensive. Recent advances in representation learning facilitate exploration in the joint learning of multiple tasks by sharing parameters (Collobert and Weston, 2008; Blitzer et al., 2006; Finkel and Manning, 2010; Zhang and Weiss, 2016; Hashimoto et al., 2017, inter alia). Differentiable optimization. Gould et al. (2016) review the generic approaches to differentiation in bi-level optimization (Bard, 2010; Kunisch and Pock, 2013). Amos and Kolter (2017) extend their efforts to a class of subdifferentiable quadratic programs. However, they both require that the intermediate objective has an invertible Hessian, limiting their application 1871 in NLP. In another line of work, the steps of a gradient-based optimization procedure are unrolled into a single computation graph (Stoyanov et al., 2011; Domke, 2012; Goodfellow et al., 2013; Brakel et al., 2013). This comes at a high computational cost due to the second-order derivative computation during backpropagation. Moreover, constrained optimization problems (like many NLP problems) often require projection steps within the procedure, which can be difficult to differentiate through (Belanger and McCallum, 2016; Belanger et al., 2017). 7 Conclusion We presented SPIGOT, a novel approach to backpropagating through neural network architectures that include discrete structured decisions in intermediate layers. SPIGOT devises a proxy for the gradients with respect to argmax’s inputs, employing a projection that aims to respect the constraints in the intermediate task. We empirically evaluate our method with two architectures: a semantic parser with an intermediate syntactic parser, and a sentiment classifier with an intermediate semantic parser. Experiments show that SPIGOT achieves stronger performance than baselines under both settings, and outperforms stateof-the-art systems on semantic dependency parsing. Our implementation is available at https: //github.com/Noahs-ARK/SPIGOT. Acknowledgments We thank the ARK, Julian Michael, Minjoon Seo, Eunsol Choi, and Maxwell Forbes for their helpful comments on an earlier version of this work, and the anonymous reviewers for their valuable feedback. This work was supported in part by NSF grant IIS-1562364. References Mariana S. C. Almeida and Andr´e F. T. Martins. 2015. Lisbon: Evaluating TurboSemanticParser on multiple languages and out-of-domain data. In Proc. of SemEval. Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2016. Many languages, one parser. TACL 4:431–444. Brandon Amos and J. Zico Kolter. 2017. OptNet: Differentiable optimization as a layer in neural networks. In Proc. of ICML. J. K. Baker. 1979. Trainable grammars for speech recognition. In Speech Communication Papers for the 97th Meeting of the Acoustical Society of America. Jonathan F. Bard. 2010. Practical Bilevel Optimization: Algorithms and Applications. Springer. David Belanger and Andrew McCallum. 2016. Structured prediction energy networks. In Proc. of ICML. David Belanger, Bishan Yang, and Andrew McCallum. 2017. 
End-to-end learning for structured prediction energy networks. In Proc. of ICML. Yoshua Bengio, Nicholas Lonard, and Aaron Courville. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv:1308.3432. Dimitris Bertsimas and John Tsitsiklis. 1997. Introduction to Linear Optimization. Athena Scientific. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proc. of EMNLP. Phil´emon Brakel, Dirk Stroobandt, and Benjamin Schrauwen. 2013. Training energy-based models for time-series imputation. Journal of Machine Learning Research 14:2771–2797. Peter Brucker. 1984. An O(n) algorithm for quadratic knapsack problems. Operations Research Letters 3(3):163 – 166. Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2017. Unsupervised learning of task-specific tree structures with tree-LSTMs. arXiv:1707.02786. Yoeng-Jin Chu and Tseng-Hong Liu. 1965. On the shortest arborescence of a directed graph. Science Sinica 14:1396–1400. Shay B. Cohen and Noah A. Smith. 2007. Joint morphological and syntactic disambiguation. In Proc. of EMNLP. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proc. of ICML. Marie-Catherine de Marneffe and Christopher D. Manning. 2008. Stanford typed dependencies manual. Technical report, Stanford University. Justin Domke. 2012. Generic methods for optimization-based modeling. In Proc. of AISTATS. David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P. Adams. 2015. Convolutional networks on graphs for learning molecular fingerprints. In Proc. of NIPS. 1872 Jack Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards 71B:233–240. Jason Eisner. 2000. Bilexical grammars and their cubic-time parsing algorithms. In Advances in Probabilistic and Other Parsing Technologies, Springer Netherlands, pages 29–61. Jason Eisner. 2016. Inside-outside and forwardbackward algorithms are just backprop. In Proceedings of the EMNLP Workshop on Structured Prediction for NLP. Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proc. of COLING. Jenny Rose Finkel and Christopher D. Manning. 2010. Hierarchical joint learning: Improving joint parsing and named entity recognition with non-jointly labeled data. In Proc. of ACL. Abram L. Friesen and Pedro M. Domingos. 2016. The sum-product theorem: A foundation for learning tractable models. In Proc. of ICML. Abram L. Friesen and Pedro M. Domingos. 2018. Deep learning as a mixed convex-combinatorial optimization problem. In Proc. of ICLR. Yoav Goldberg and Reut Tsarfaty. 2008. A single generative model for joint morphological segmentation and syntactic parsing. In Proc. of ACL. Ian Goodfellow, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. 2013. Multi-prediction deep Boltzmann machines. In Proc. of NIPS. Stephen Gould, Basura Fernando, Anoop Cherian, Peter Anderson, Rodrigo Santa Cruz, and Edison Guo. 2016. On differentiating parameterized argmin and argmax problems with application to bi-level optimization. arXiv:1607.05447. Nizar Habash and Owen Rambow. 2005. Arabic tokenization, part-of-speech tagging and morphological disambiguation in one fell swoop. In Proc. ACL. Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. 
A joint many-task model: Growing a neural network for multiple NLP tasks. In Proc. of EMNLP. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and whats next. In Proc. of ACL. Geoffrey Hinton. 2012. Neural networks for machine learning. Coursera video lectures. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proc. of ACL. Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with Gumbel-Softmax. arXiv:1611.01144. Yangfeng Ji and Noah A. Smith. 2017. Neural discourse structure for text categorization. In Proc. of ACL. Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. 2017. Structured attention networks. In Proc. of ICLR. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. TACL 4:313– 327. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In Proc. of ICLR. Karl Kunisch and Thomas Pock. 2013. A bilevel optimization approach for parameter learning in variational models. SIAM Journal on Imaging Sciences 6(2):938–983. Mike Lewis, Luheng He, and Luke Zettlemoyer. 2015. Joint A* CCG parsing and semantic role labelling. In Proc. of EMNLP. Yang Liu and Mirella Lapata. 2018. Learning structured text representations. TACL 6:63–75. David G. Luenberger and Yinyu Ye. 2015. Linear and Nonlinear Programming. Springer. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics 19(2):313–330. Andre Martins and Ramon Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In Proc. of ICML. Andr´e F. T. Martins, Miguel B. Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order non-projective turbo parsers. In Proc. of ACL. Andr´e F. T. Martins, M´ario A. T. Figueiredo, Pedro M. Q. Aguiar, Noah A. Smith, and Eric P. Xing. 2015. AD3: Alternating directions dual decomposition for map inference in graphical models. Journal of Machine Learning Research 16:495–545. Andr´e F. T. Martins, Noah A. Smith, and Eric P. Xing. 2009. Polyhedral outer approximations with application to natural language parsing. In Proc. of ICML. Arthur Mensch and Mathieu Blondel. 2018. Differentiable dynamic programming for structured prediction and attention. arXiv:1802.03676 . Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2015. Discriminative neural sentence modeling by tree-based convolution. In Proc. of EMNLP. 1873 Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. DyNet: The dynamic neural network toolkit. arXiv:1701.03980. Vlad Niculae and Mathieu Blondel. 2017. A regularized framework for sparse and structured neural attention. In Proc. of NIPS. Vlad Niculae, Andr F. T. Martins, Mathieu Blondel, and Claire Cardie. 2018. SparseMAP: Differentiable sparse structured inference. arXiv:1802.04223. 
Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkova, Dan Flickinger, Jan Hajic, and Zdenka Uresova. 2015. SemEval 2015 task 18: Broad-coverage semantic dependency parsing. In Proc. of SemEval. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov´a, Dan Flickinger, Jan Hajiˇc, Angelina Ivanova, and Zdeˇnka Ureˇsov´a. 2016. Towards comparability of linguistic graph banks for semantic parsing. In Proc. of LREC. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Dan Flickinger, Jan Hajic, Angelina Ivanova, and Yi Zhang. 2014. SemEval 2014 task 8: Broad-coverage semantic dependency parsing. In Proc. of SemEval. Stephan Oepen, Lilja vrelid, Jari Bjrne, Richard Johansson, Emanuele Lapponi, Filip Ginter, and Erik Velldal. 2017. The 2017 shared task on extrinsic parser evaluation. towards a reusable community infrastructure. In Proc. of the 2017 Shared Task on Extrinsic Parser Evaluation. Hao Peng, Sam Thomson, and Noah A. Smith. 2017. Deep multitask learning for semantic dependency parsing. In Proc. of ACL. Hao Peng, Sam Thomson, Swabha Swayamdipta, and Noah A. Smith. 2018. Learning joint semantic parsers from disjoint data. In Proc. of NAACL. Corentin Ribeyre, ´Eric Villemonte De La Clergerie, and Djam´e Seddah. 2015. Because syntax does matter: Improving predicate-argument structures parsing using syntactic features. In Proc. of NAACL. Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proc. of NAACL. Michael Roth and Mirella Lapata. 2016. Neural semantic role labeling with dependency path embeddings. In Proc. of ACL. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. of EMNLP. Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proc. of ACL. Veselin Stoyanov, Alexander Ropson, and Jason Eisner. 2011. Empirical risk minimization of graphical model parameters given approximate inference, decoding, and model structure. In Proc. of AISTATS. Swabha Swayamdipta, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2016. Greedy, joint syntacticsemantic parsing with stack LSTMs. In Proc. of CoNLL. Swabha Swayamdipta, Sam Thomson, Chris Dyer, and Noah A. Smith. 2017. Frame-semantic parsing with softmax-margin segmental RNNs and a syntactic scaffold. arXiv:1706.09528. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proc. of ACL. Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. 2004. Support vector machine learning for interdependent and structured output spaces. In Proc. of ICML. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning 8(3-4):229–256. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In Proc. of EMNLP. Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. 2017. Learning to compose words into sentences with reinforcement learning. In Proc. of ICLR. Xun Zhang, Yantao Du, Weiwei Sun, and Xiaojun Wan. 2016. Transition-based parsing for deep dependency structures. Computational Linguistics 42(3):353–389. 
Yuan Zhang, Chengtao Li, Regina Barzilay, and Kareem Darwish. 2015. Randomized greedy inference for joint segmentation, POS tagging and dependency parsing. In Proc. of NAACL. Yuan Zhang and David Weiss. 2016. Stack-propagation: Improved representation learning for syntax. In Proc. of ACL.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1874–1883 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1874 Learning How to Actively Learn: A Deep Imitation Learning Approach Ming Liu Wray Buntine Faculty of Information Technology, Monash University {ming.m.liu, wray.buntine, gholamreza.haffari} @ monash.edu Gholamreza Haffari Abstract Heuristic-based active learning (AL) methods are limited when the data distribution of the underlying learning problems vary. We introduce a method that learns an AL policy using imitation learning (IL). Our IL-based approach makes use of an efficient and effective algorithmic expert, which provides the policy learner with good actions in the encountered AL situations. The AL strategy is then learned with a feedforward network, mapping situations to most informative query datapoints. We evaluate our method on two different tasks: text classification and named entity recognition. Experimental results show that our IL-based AL strategy is more effective than strong previous methods using heuristics and reinforcement learning. 1 Introduction For many real-world NLP tasks, labeled data is rare while unlabelled data is abundant. Active learning (AL) seeks to learn an accurate model with minimum amount of annotation cost. It is inspired by the observation that a model can get better performance if it is allowed to choose the data points on which it is trained. For example, the learner can identify the areas of the space where it does not have enough knowledge, and query those data points which bridge its knowledge gap. Traditionally, AL is performed using engineered heuristics in order to estimate the usefulness of unlabeled data points as queries to an annotator. Recent work (Fang et al., 2017; Bachman et al., 2017; Woodward and Finn, 2017) have focused on learning the AL querying strategy, as engineered heuristics are not flexible to exploit characteristics inherent to a given problem. The basic idea is to cast AL as a decision process, where the most informative unlabeled data point needs to be selected based on the history of previous queries. However, previous works train for the AL policy by a reinforcement learning (RL) formulation, where the rewards are provided at the end of sequences of queries. This makes learning the AL policy difficult, as the policy learner needs to deal with the credit assignment problem. Intuitively, the learner needs to observe many pairs of query sequences and the resulting end-rewards to be able to associate single queries with their utility scores. In this work, we formulate learning AL strategies as an imitation learning problem. In particular, we consider the popular pool-based AL scenario, where an AL agent is presented with a pool of unlabelled data. Inspired by the Dataset Aggregation (DAGGER) algorithm (Ross et al., 2011), we develop an effective AL policy learning method by designing an efficient and effective algorithmic expert, which provides the AL agent with good decisions in the encountered states. We then use a deep feedforward network to learn the AL policy to associate states to actions. Unlike the RL approach, our method can get observations and actions directly from the expert’s trajectory. Therefore, our trained policy can make better rankings of unlabelled datapoints in the pool, leading to more effective AL strategies. We evaluate our method on text classification and named entity recognition. 
The results show our method performs better than strong AL methods using heuristics and reinforcement learning, in that it boosts the performance of the underlying model with fewer labelling queries. An open source implementation of our model is available at: https://github.com/Grayming/ ALIL. 1875 2 Pool-based AL as a Decision Process We consider the popular pool-based AL setting where we are given a small set of initial labeled data and a large pool of unlabelled data, and a budget for getting the annotation of some unlabelled data by querying an oracle, e.g. a human annotator. The goal is to intelligently pick those unlabelled data for which if the annotations were available, the performance of the underlying re-trained model would be improved the most. The main challenge in AL is how to identify and select the most beneficial unlabelled data points. Various heuristics have been proposed to guide the unlabelled data selection (Settles, 2010). However, there is no one AL heuristic which performs best for all problems. The goal of this paper is to provide an approach to learn an AL strategy which is best suited for the problem at hand, instead of resorting to ad-hoc heuristics. The AL strategy can be learned by attempting to actively learn on tasks sampled from a distribution over the tasks (Bachman et al., 2017). The idea is to simulate the AL scenario on instances of the problem created using available labeled data, where the label of some part of the data is kept hidden. This allows to have an automatic oracle to reveal the labels of the queried data, resulting in an efficient way to quickly evaluate a hypothesised AL strategy. Once the AL strategy is learned on simulations, it is then applied to real AL scenarios. The more related are the tasks in the real scenario to those used to train the AL strategy, the more effective the AL strategy would be. We are interested to train a model mφφφ which maps an input xxx ∈X to its label yyy ∈Yxxx, where Yxxx is the set of labels for the input xxx and φφφ is the parameter vector of the underling model. For example, in the named entity recognition (NER) task, the input is a sentence and the output is its label sequence, e.g. in the IBO format. Let D = {(xxx,yyy)} be a support set of labeled data, which is randomly partitioned into labeled Dlab, unlabelled Dunl, and evaluation Devl datasets. Repeated random partitioning creates multiple instances of the AL problem. At each time step t of an AL problem, the algorithm interacts with the oracle and queries the label of a datapoint xxxt ∈Dunl t . As the result of this action, the followings happen: • The automatic oracle reveals the label yyyt; • The labeled and unlabelled datasets are updated to include and exclude the recently queried data point, respectively; • The underlying model is re-trained based on the enlarged labeled data to update φφφ; and • The AL algorithm receives a reward −loss(mφφφ, Devl), which is the negative loss of the current trained model on the evaluation set, defined as loss(mφφφ, Devl) := X (xxx,yyy)∈Devl loss(mφφφ(xxx),yyy) where loss(yyy′,yyy) is the loss incurred due to predicting yyy′ instead of the ground truth yyy. More formally, a pool-based AL problem is a Markov decision process (MDP), denoted by (S, A, Pr(ssst+1|ssst, at), R) where S is the state space, A is the set of actions, Pr(ssst+1|ssst, at) is the transition function, and R is the reward function. 
The state ssst ∈S at time t consists of the labeled Dlab t and unlabelled Dunl t datasets paired with the parameters of the currently trained model φt. An action at ∈A corresponds to the selection of a query datapoint, and the reward function R(ssst, at,ssst+1) := −loss(mφφφt, Devl). We aim to find the optimal AL policy prescribing which datapoint needs to be queried in a given state to get the most benefit. The optimal policy is found by maximising the following objective over the parameterised policies: E(Dlab,Dunl,Devl)∼D " Eπθθθ h B X t=1 R(ssst, at,ssst+1) i# (1) where πθθθ is the policy network parameterised by θθθ, D is a distribution over possible AL problem instances, and B is the maximum number of queries made in an AL run, a.k.a. an episode. Following (Bachman et al., 2017), we maximise the sum of the rewards after each time step to encourage the anytime behaviour, i.e. the model should perform well after each label query. 3 Deep Imitation Learning to Train the AL Policy The question remains as how can we train the policy network to maximise the training objective in eqn 1. Typical learning approaches resort to deep reinforcement learning (RL) and provide training signal at the end of each episode to learn the optimal policy (Fang et al., 2017; Bachman 1876 et al., 2017) e.g., using policy gradient methods. These approaches, however, need a large number of training episodes to learn a reasonable policy as they need to deal with the credit assignment problem, i.e. discovery of the utility of individual actions in the sequence based on the achieved reward at the end of the episode. This exacerbates the difficulty of finding a good AL policy. We formulate learning for the AL policy as an imitation learning problem. At each state, we provide the AL agent with a correct action which is computed by an algorithmic expert. The AL agent uses the sequence of states observed in an episode paired with the expert’s sequence of actions to update its policy. This directly addresses the credit assignment problem, and reduces the complexity of the problem compared to the RL approaches. In what follows, we describe the ingredients of our deep imitation learning (IL) approach, which is summarised in Algorithm 1. Algorithmic Expert. At a given AL state ssst, our algorithmic expert computes an action by evaluating the current pool of unlabeled data. More concretely, for each xxx′ ∈Dpool rnd and its correct label yyy′, the underlying model mφφφt is re-trained to get mxxx′ φφφt, where Dpool rnd ⊂Dunl t is a small subset of the current large pool of unlabeled data. The expert action is then computed as: arg min xxx′∈Dpool rnd loss(mxxx′ φφφt(xxx), Devl). (2) In other words, our algorithmic expert tries a subset of actions to roll-out one step from the current state, in order to efficiently compute a reasonable action. Searching for the optimal action would be O(|Dunl|B), which is computationally challenging due to (i) the large action set, and (ii) the exponential dependence on the length of the roll out. We will see in the experiments that our method efficiently learns effective AL policies. Policy Network. Our policy network is a feedforward network with two fully-connected hidden layers. It receives the current AL state, and provides a preference score for a given unlabeled data point, allowing to select the most beneficial one corresponding to the highest score. 
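Before the policy network's inputs are detailed below, here is a minimal sketch of the algorithmic expert's one-step roll-out (Equation 2). The candidates' labels come from the automatic oracle available during simulated AL, and the model interface (`clone`, `train`, `loss`) is an assumption made for illustration, not the authors' API.

```python
def expert_action(model, D_lab, D_pool_rnd, D_evl):
    """One-step roll-out expert (Eq. 2): pick the candidate whose addition,
    after re-training, gives the lowest loss on the evaluation set.

    model      -- current underlying model, exposing clone(), train(data), loss(data)
    D_lab      -- list of (x, y) pairs labelled so far
    D_pool_rnd -- small random subset of the unlabelled pool, with oracle labels
    D_evl      -- held-out evaluation set used to score each candidate
    """
    best_x, best_loss = None, float("inf")
    for x, y in D_pool_rnd:
        candidate_model = model.clone()                 # roll out one step
        candidate_model.train(D_lab + [(x, y)])         # re-train with x added
        loss = candidate_model.loss(D_evl)
        if loss < best_loss:
            best_x, best_loss = x, loss
    return best_x
```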
The input to our policy network consists of three parts: (i) a fixed dimensional representation of the content and the predicted label of the unlabeled data point under consideration, (ii) a fixed-dimensional representation of the content and the labels of the labeled dataset, and (iii) a fixed-dimensional representation of the content of the unlabeled dataset. score Dpool rand Dlab xxx yyy Figure 1: The policy network and its inputs. Imitation Learning Algorithm. A typical approach to imitation learning (IL) is to train the policy network so that it mimics the expert’s behaviour given training data of the encountered states (input) and actions (output) performed by the expert. The policy network’s prediction affects future inputs during the execution of the policy. This violates the crucial independent and identically distributed (iid) assumption, inherent to most statistical supervised learning approaches for learning a mapping from states to actions. We make use of Dataset Aggregation (DAGGER) (Ross et al., 2011), an iterative algorithm for IL which addresses the non-iid nature of the encountered states during the AL process (see Algorithm 1). In round τ of DAGGER, the learned policy network ˆπτ is applied to the AL problem to collect a sequence of states which are paired with the expert actions. The collected pair of states and actions are aggregated to the dataset of such pairs M, collected from the previous iterations of the algorithm. The policy network is then re-trained on the aggregated set, resulting in ˆπτ+1 for the next iteration of the algorithm. The intuition is to build up the set of states that the algorithm is likely to encounter during its execution, in order to increase the generalization of the policy network. To better leverage the training signal from the algorithmic expert, we allow the algorithm to collect state-action pairs according to a modified policy which is a mixture of ˆπτ and the expert policy ˜π∗ τ, i.e. πτ = βτ ˜π∗+ (1 −βτ)ˆπτ where βτ ∈[0, 1] is a mixing coefficient. This amounts to tossing a coin with parameter βτ in 1877 each iteration of the algorithm to decide one of these two policies for data collection. Re-training the Policy Network. To train our policy network, we turn the preference scores to probabilities, and optimise the parameters such that the probability of the action prescribed by the expert is maximized. More specifically, let M := {(sssi,aaai)}I i=1 be the collected states paired with their expert’s prescribed actions. Let Dpool i be the set of unlabelled datapoints in the pool within the state, and aaai denote the datapoint selected by the expert in the set. Our training objective is PI i=1 log Pr(aaai|Dpool i ) where Pr(aaai|Dpool i ) := exp ˆπ(aaai;sssi) P xxx∈Dpool i exp ˆπ(xxx;sssi). The above can be interpreted as the probability of aaai being the best action among all possible actions in the state. Following (Mnih et al., 2015), we randomly sample multiple1 mini-batches from the replay memory M, in addition to the current round’s stat-action pair, in order to retrain the policy network. For each mini-batch, we make one SGD step to update the policy, where the gradients of the network parameters are calculated using the backpropagation algorithm. Transferring the Policy. We now apply the policy learned on the source task to AL in the target task. We expect the learned policy to be effective for target tasks which are related to the source task in terms of the data distribution and characteristics. 
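Before moving to the transfer setting, the re-training objective above can be sketched as follows: the policy's preference scores over the candidate pool are turned into a softmax distribution, and the negative log-probability of the expert's choice is minimized. The sketch is NumPy-only and illustrates the loss and the DAGGER-style coin toss, not the full SGD step, which in the paper is handled by an automatic-differentiation toolkit; both function names are ours.

```python
import numpy as np

def policy_retraining_loss(scores, expert_index):
    """Negative log-likelihood of the expert's action under the policy.

    scores       -- 1-D array of preference scores, one per candidate in D_pool_i
    expert_index -- index of the datapoint the expert selected in that pool
    """
    shifted = scores - scores.max()                 # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[expert_index]

def mixture_choice(beta, rng=np.random):
    """Coin toss with parameter beta: True -> follow the expert, False -> the policy."""
    return rng.random() < beta
```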
Algorithm 2 illustrates the policy transfer. The pool-based AL scenario in Algorithm 2 is cold-start; however, extending to incorporate initially available labeled data is straightforward. 4 Experiments We conduct experiments on text classification and named entity recognition (NER). The AL scenarios include cross-domain sentiment classification, cross-lingual authorship profiling, and crosslingual named entity recognition (NER), whereby an AL policy trained on a source domain/language is transferred to the target domain/language. We compare our proposed AL method using imitation learning (ALIL) with the followings: • Random sampling: The query datapoint is chosen randomly. 1In our experiments, we use 10 mini-bathes, each of which of size 100. Algorithm 1 Learn active learning policy via imitation learning Input: large labeled data D, max episodes T, budget B, sample size K, the coin parameter β Output: The learned policy 1: M ←∅ ▷the aggregated dataset 2: initialise ˆπ1 with a random policy 3: for τ=1, . . . , T do 4: Dlab, Dunl, Devl ←dataPartition(D) 5: φφφ1 ←trainModel(Dlab) 6: c ←coinToss(β) 7: for t ∈1, . . . , B do 8: Dpool rnd ←sampleUniform(Dunl, K) 9: ssst ←(Dlab, Dpool rnd ,φφφt) 10: aaat ←arg minxxx′∈Dpool rnd loss(mxxx′ φφφt, Devl) 11: if c is head then ▷the expert 12: xxxt ←aaat 13: else ▷the policy 14: xxxt ←arg maxxxx′∈Dpool rnd ˆπτ(xxx′;ssst) 15: end if 16: Dlab ←Dlab + {(xxxt,yyyt)} 17: Dunl ←Dunl −{xxxt} 18: M ←M + {(ssst,aaat)} 19: φφφt+1 ←retrainModel(φφφt, Dlab) 20: end for 21: ˆπτ+1 ←retrainPolicy(ˆπτ, M) 22: end for 23: return ˆπT +1 Algorithm 2 Active learning by policy transfer Input: unlabeled pool Dunl, budget B, policy ˆπ Output: labeled dataset and trained model 1: Dlab ←∅ 2: initialise φφφ randomly 3: for t ∈1, . . . , B do 4: ssst ←(Dlab, Dunl,φφφ) 5: xxxt ←arg maxxxx′∈Dunl ˆπ(xxx′;ssst) 6: yyyt ←askAnnotation(xxxt) 7: Dlab ←Dlab + {(xxxt,yyyt)} 8: Dunl ←Dunl −{xxxt} 9: φ ←retrainModel(φφφ, Dlab) 10: end for 11: return Dlab and φφφ • Diversity sampling: The query datapoint is arg minxxx P xxx′∈Dlab Jaccard(xxx,xxx′), where the Jaccard coefficient between the unigram features of the two given texts is used as the similarity measure. • Uncertainty-based sampling: For text classification, we use the datapoint with the highest predictive entropy, arg maxxxx −P y p(y|xxx, Dlab) log p(y|xxx, Dlab) where p(yyy|xxx, Dlab) comes from the underlying model. We further use a state-of-the-art extension of this method, called uncertainty with rationals (Sharma et al., 2015), which not only considers uncertainty but also looks whether 1878 doc. (src/tgt) src tgt number avg. len. (tokens) elec. music dev. 27k/1k 35/20 book movie 24k/2k 140/150 en sp 3.6k/4.2k 1.15k/1.35k en pt 3.6k/1.2k 1.15k/1.03k Table 1: The data sets used in sentiment classification (top part) and gender profiling (bottom part). the unlabelled document contains sentiment words or phrases that were returned as rationales for any of the existing labeled documents. For NER, we use the Total Token Entropy (TTE) as the uncertainty sampling method, arg maxxxx −P|xxx| i=1 P yi p(yi|xxx, Dlab) log p(yi|xxx, Dlab) which has been shown to be the best heuristic for this task among 17 different heuristics (Settles and Craven, 2008). • PAL: A reinforcement learning based approach (Fang et al., 2017), which makes use a deep Q-network to make the selection decision for stream-based active learning. 4.1 Text Classification Datasets and Setup. 
The first task is sentiment classification, in which product reviews express either positive or negative sentiment. The data comes from the Amazon product reviews (McAuley and Yang, 2016); see Table 1 for data statistics. The second task is Authorship Profiling, in which we aim to predict the gender of the text author. The data comes from the gender profiling task in PAN 2017 (Rangel et al., 2017), which consists of a large Twitter corpus in multiple languages: English (en), Spanish (es) and Portuguese (pt). For each language, all tweets collected from a user constitute one document; Table 1 shows data statistics. The multilingual embeddings for this task come from off-the-shelf CCA-trained embeddings (Ammar et al., 2016) for twelve languages, including English, Spanish and Portuguese. We fix these word embeddings during training of both the policy and the underlying classification model. For training, 10% of the source data is used as the evaluation set for computing the best action in imitation learning. We run T = 100 episodes with the budget B = 100 documents in each episode, set the sample size K = 5, and fix the mixing coefficient βτ = 0.5. For testing, we take 90% of the target data as the unlabeled pool, and the remaining 10% as the test set. We show the test accuracy w.r.t. the number of labelled documents selected in the AL process. As the underlying model mφφφ, we use a fast and efficient text classifier based on convolutional neural networks. More specifically, we apply 50 convolutional filters with ReLU activation on the embedding of all words in a document xxx, where the width of the filters is 3. The filter outputs are averaged to produce a 50-dimensional document representation hhh(xxx), which is then fed into a softmax to predict the class. Representing state-action. The input to the policy network, i.e. the feature vector representing a state-action pair, includes: the candidate document represented by the convolutional net hhh(xxx), the distribution over the document’s class labels mφφφ(xxx), the sum of all document vector representations in the labeled set P xxx′∈Dlab hhh(xxx′), the sum of all document vectors in the random pool of unlabelled data P xxx′∈Dpool rnd hhh(xxx′), and the empirical distribution of class labels in the labeled dataset. Results. Fig 2 shows the results on product sentiment prediction and authorship profiling, in cross-domain and cross-lingual AL scenarios2. Our ALIL method consistently outperforms both heuristic-based and RL-based (PAL) (Fang et al., 2017) approaches across all tasks. ALIL tends to convergence faster than other methods, which indicates its policy can quickly select the most informative datapoints. Interestingly, the uncertainty and diversity sampling heuristics perform worse than random sampling on sentiment classification. We speculate this may be due to these two heuristics not being able to capture the polarity information during the data selection process. PAL performs on-par with uncertainty with rationals on musical device, both of which outperform the traditional diversity and uncertainty sampling heuristics. Interestingly, PAL is outperformed by random sampling on movie reviews, and by the traditional uncertainty sampling heuristic on authorship profiling tasks. We attribute this to ineffectiveness of the RL-based approach for learning a reasonable AL query strategy. We further investigate combining the transfer of the policy network with the transfer of the underlying classifier. 
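For reference, the convolutional document encoder h(x) described in the setup above can be sketched as follows (NumPy only, random parameters, an arbitrary 40-dimensional embedding size; the softmax classifier on top of h(x) is omitted, and the function name is ours). The combined-transfer experiment is described next.

```python
import numpy as np

def cnn_document_encoder(E, W, b):
    """Average-pooled convolutional encoder h(x) for a document.

    E -- (len x emb_dim) word-embedding matrix of the document
    W -- (num_filters x 3 * emb_dim) filter weights, filter width 3
    b -- (num_filters,) filter biases
    Returns a num_filters-dimensional document representation.
    """
    length, emb_dim = E.shape
    windows = [E[i:i + 3].reshape(-1) for i in range(length - 2)]  # width-3 windows
    conv = np.maximum(np.stack(windows) @ W.T + b, 0.0)            # ReLU activations
    return conv.mean(axis=0)                                       # average pooling

# Example with random parameters: 50 filters over 40-dimensional embeddings.
rng = np.random.default_rng(0)
E = rng.normal(size=(12, 40))           # a 12-token document
W = rng.normal(size=(50, 3 * 40))
b = np.zeros(50)
h = cnn_document_encoder(E, W, b)       # 50-dimensional document vector
```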
That is, we first train a classi2Uncertainty with rationale cannot be done for authorship profiling as the rationales come from a sentiment dictionary. 1879 Figure 2: The performance of different active learning methods for cross domain sentiment classification (left two plots) and cross lingual authorship profiling (right two plots). fier on all of the annotated data from the source domain/language. Then, this classifier is ported to the target domain/language; for cross-language transfer, we make use of multilingual word embeddings. We start the AL process starting from the transferred classifier, referred to as the warmstart AL. We compare the performance of the directly transferred classifier with those obtained after the AL process in the warm-start and cold-start scenarios. The results are shown in Table 2. We have run the cold-start and warm-start AL for 25 times, and reported the average accuracy in Table 2. As seen from the results, both the cold and warm start AL settings outperform the direct transfer significantly, and the warm start consistently gets higher accuracy than the cold start. The difference between the results are statistically significant, with a p-value of .001, according to McNemar test3 (Dietterich, 1998). musical movie es pt direct transfer 0.715 0.640 0.675 0.740 cold-start AL 0.800 0.760 0.728 0.773 warm-start AL 0.825 0.765 0.730 0.780 Table 2: Classifiers performance under three different transfer settings. 4.2 Named Entity Recognition Data and setup We use NER corpora from the CONLL2002/2003 shared tasks, which include annotated text in English (en), German (de), Spanish (es), and Dutch (nl). The original annotation is based on IOB1, which we convert to the IO 3As the contingency table needed for the McNemar test, we have used the average counts across the 25 runs. labelling scheme. Following Fang et al. (2017), we consider two experimental conditions: (i) the bilingual scenario where English is the source (used for policy training) and other languages are the target, and (ii) the multilingual scenario where one of the languages (except English) is the target and the remaining ones are the source used in joint training of the AL policy. The underlying model mφφφ is a conditional random field (CRF) treating NER as a sequence labelling task. The prediction is made using the Viterbi algorithm. In the existing corpus partitions from CoNLL, each language has three subsets: train, testa and testb. During policy training with the source language(s), we combine these three subsets, shuffle, and re-split them into simulated training, unlabelled pool, and evaluation sets in every episode. We run N = 100 episodes with the budget B = 200, and set the sample size k = 5. When we transfer the policy to the target language, we do one episode and select B datapoints from train (treated as the pool of unlabeled data) and report F1 scores on testa. Representing state-action. 
Representing state-action. The input to the policy network includes: the representation of the candidate sentence as the sum of its word embeddings, h(x); the representation of the labelling marginals via a label-level convolutional network, cnn_lab(E_{m_φ(y|x)}[y]) (Fang et al., 2017); the representation of sentences in the labeled data, Σ_{(x′,y′)∈D_lab} h(x′); the representation of sentences in the random pool of unlabelled data, Σ_{x′∈D_rnd^pool} h(x′); the representation of the ground-truth labels in the labeled data, Σ_{(x′,y′)∈D_lab} cnn_lab(y′), using their empirical distributions; and the confidence of the sequential prediction, (max_y m_φ(y|x))^{1/|x|}, where |x| denotes the length of the sentence x. For the word embeddings, we use off-the-shelf CCA-trained multilingual embeddings (Ammar et al., 2016) with 40 dimensions; we fix these during policy training.

Figure 3: The performance of active learning methods in the bilingual and multilingual settings for three target languages: German (de), Spanish (es) and Dutch (nl).

Results. Fig. 3 shows the results for the three target languages. In addition to the strong heuristic-based methods, we compare our imitation learning approach (ALIL) with the reinforcement learning approach (PAL) (Fang et al., 2017) in both the bilingual (bi) and multilingual (mul) transfer settings. Across all three languages, ALIL.bi and ALIL.mul outperform the heuristic methods, including uncertainty sampling based on TTE. This is expected, as uncertainty sampling relies heavily on a high-quality underlying model, and diversity sampling ignores the labelling information. In the bilingual case, ALIL.bi outperforms PAL.bi on Spanish (es) and Dutch (nl), and performs similarly on German (de). In the multilingual case, ALIL.mul achieves the best performance on Spanish, and performs competitively with PAL.mul on German and Dutch.

4.3 Analysis
Insight on the selected data. We compare the data selected by ALIL to that selected by the other methods. This confirms that ALIL learns policies which are suitable for the problem at hand, without resorting to fixed, engineered heuristics. For this analysis, we report the mean reciprocal rank (MRR) of the data points selected by the ALIL policy under rankings of the unlabelled pool generated by uncertainty and diversity sampling. Furthermore, we measure the fraction of times the decisions made by the ALIL policy agree with those that would have been made by the heuristic methods, measured as accuracy (acc).

Table 3: The first four rows show the MRR and accuracy of instances returned by ALIL under the rankings of uncertainty and diversity sampling; the last row gives the average accuracy of instances under PAL.
               movie sentiment   gender pt   NER es
  acc  Unc.    0.06              0.58        0.51
  MRR  Unc.    0.083             0.674       0.551
  acc  Div.    0.05              0.52        0.45
  MRR  Div.    0.057             0.593       0.530
  acc  PAL     0.15              0.56        0.52

Table 3 reports these measures. For sentiment classification, where uncertainty and diversity sampling perform badly, ALIL disagrees with them substantially on the selected data points. For gender classification on Portuguese and NER on Spanish, by contrast, ALIL shows much more agreement with these heuristics. Lastly, we compare the queries chosen by ALIL to those chosen by PAL, to investigate the extent of agreement between the two methods. This is simply measured by the fraction of identical query data points among the total number of queries (i.e. accuracy); both measures are sketched below.
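The two diagnostics above are straightforward to compute; a minimal sketch is shown below. The function names and the use of item ids are illustrative assumptions, not the authors' code.

```python
def mean_reciprocal_rank(selected, heuristic_ranking):
    """MRR of the policy-selected points under a heuristic's ranking of the pool.

    selected: iterable of item ids chosen by the ALIL policy
    heuristic_ranking: list of item ids, best-first, as ranked by the heuristic
    """
    rank_of = {item: r + 1 for r, item in enumerate(heuristic_ranking)}
    ranks = [rank_of[item] for item in selected if item in rank_of]
    return sum(1.0 / r for r in ranks) / max(len(ranks), 1)

def agreement_accuracy(policy_choices, heuristic_choices):
    """Fraction of query steps on which two methods picked the same data point."""
    agree = sum(p == h for p, h in zip(policy_choices, heuristic_choices))
    return agree / max(len(policy_choices), 1)
```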
Since PAL is stream-based and sensitive to the order in which it receives the data points, we report the average accuracy taken over multiple runs with random input streams. The resulting accuracy numbers are reported in Table 3. As seen, ALIL has higher overlap with PAL than with the heuristic-based methods in terms of the selected queries.

Figure 4: The learning curves of agents with different K on Spanish (es) NER.

Sensitivity to K. As seen in Algorithm 1, we resort to an approximate algorithmic expert, which selects the best action in a random subset of size K of the pool of unlabelled data, in order to make policy training efficient. Note that, in policy training, setting K to one or to the size of the unlabelled data pool corresponds to the stream-based and pool-based AL scenarios, respectively. By changing K to values between these two extremes, we can analyse the effect of the quality of the algorithmic expert on the trained policy; Figure 4 shows the results. A larger candidate set may yield a better learned policy, but this has to be traded off against the training time, which grows linearly with K. Interestingly, even small candidate sets lead to strong AL policies, as increasing K beyond 10 does not change the performance significantly.

Figure 5: The learning curves of agents with different schedules for β before the trajectory on Spanish (es) NER.

Dynamically changing β. In our algorithm, β plays an important role as it trades off exploration versus exploitation. In the above experiments, we fix it to 0.5; however, we can change its value throughout trajectory collection as a function of τ (see Algorithm 1). We investigate schedules which put more emphasis on exploration towards the beginning of data collection and on exploitation towards the end: (i) linear, β_τ = max(0.5, 1 − 0.01τ); (ii) exponential, β_τ = 0.9^τ; and (iii) inverse sigmoid, β_τ = 5 / (5 + exp(τ/5)), all as functions of the iteration τ. Fig. 5 compares these schedules. The learned policy performs competitively with either a fixed or an exponential schedule. We have also investigated tossing the coin at each step within the trajectory roll-out, but found that it is more effective to toss it once before the full trajectory roll-out (as currently done in Algorithm 1).

5 Related Work
Traditional active learning algorithms rely on various heuristics (Settles, 2010), such as uncertainty sampling (Settles and Craven, 2008; Houlsby et al., 2011), query-by-committee (Gilad-Bachrach et al., 2006), and diversity sampling (Brinker, 2003; Joshi et al., 2009; Yang et al., 2015). Beyond these, different heuristics can be combined, creating integrated strategies that consider several heuristics at the same time. Combined with transfer learning, pre-existing labeled data from related tasks can help improve the performance of an active learner (Xiao and Guo, 2013; Kale and Liu, 2013; Huang and Chen, 2016; Konyushkova et al., 2017). More recently, deep reinforcement learning has been used as a framework for learning active learning algorithms, where the active learning cycle is treated as a decision process. (Woodward and Finn, 2017) extended one-shot learning to active learning and combined reinforcement learning with a deep recurrent model to make labeling decisions. (Bachman et al., 2017) introduced a policy gradient based method which jointly learns the data representation, the selection heuristic, and the model prediction function.
(Fang et al., 2017) designed an active learning algorithm based on a deep Qnetwork, in which the action corresponds to binary annotation decisions applied to a stream of data. The learned policy can then be transferred between languages or domains. Imitation learning (IL) refers to an agent’s acquisition of skills or behaviours by observing an expert’s trajectory in a given task. It helps reduce sequential prediction tasks into supervised learning by employing a (near) optimal oracle at training time. Several IL algorithms has been proposed in sequential prediction tasks, including SEARA (Daumé et al., 2009), AggreVaTe (Ross and Bagnell, 2014), DaD (Venkatraman et al., 2015), LOLS(Chang et al., 2015), DeeplyAggreVaTe (Sun et al., 2017). Our work is closely related to Dagger (Ross et al., 2011), which can guarantee to find a good policy by addressing the dependency nature of encountered states in a trajectory. 6 Conclusion In this paper, we have proposed a new method for learning active learning algorithms using deep imitation learning. We formalize pool-based active 1882 learning as a Markov decision process, in which active learning corresponds to the selection decision of the most informative data points from the pool. Our efficient algorithmic expert provides state-action pairs from which effective active learning policies can be learned. We show that the algorithmic expert allows direct policy learning, while at the same time, the learned policies transfer successfully between domains and languages, demonstrating improvement over previous heuristic and reinforcement learning approaches. Acknowledgments We would like to thank the feedback from anonymous reviewers. G. H. is grateful to Trevor Cohn for interesting discussions. This work was supported by computational resources from the Multimodal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) at Monash University. References Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925. Philip Bachman, Alessandro Sordoni, and Adam Trischler. 2017. Learning algorithms for active learning. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 301–310, International Convention Centre, Sydney, Australia. PMLR. Klaus Brinker. 2003. Incorporating diversity in active learning with support vector machines. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 59–66. Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daume, and John Langford. 2015. Learning to search better than your teacher. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2058–2066. Ido Dagan and Sean P Engelson. 1995. Selective sampling in natural language learning. In Proceedings of IJCAI-95 Workshop on New Approaches to Learning for Natural Language Processing. Hal Daumé, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine learning, 75(3):297–325. Thomas G. Dietterich. 1998. Approximate statistical tests for comparing supervised classification learning algorithms. Neural Comput., 10(7):1895–1923. Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to active learn: A deep reinforcement learning approach. arXiv preprint arXiv:1708.02383. Ran Gilad-Bachrach, Amir Navot, and Naftali Tishby. 2006. Query by committee made real. 
In Advances in neural information processing systems, pages 443–450. Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. 2011. Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745. Sheng-Jun Huang and Songcan Chen. 2016. Transfer learning with active queries from source domain. In IJCAI, pages 1592–1598. Ajay J Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. 2009. Multi-class active learning for image classification. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 2372–2379. IEEE. David Kale and Yan Liu. 2013. Accelerating active learning with transfer learning. In Data Mining (ICDM), 2013 IEEE 13th International Conference on, pages 1085–1090. IEEE. Ksenia Konyushkova, Raphael Sznitman, and Pascal Fua. 2017. Learning active learning from data. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 4225–4235. Curran Associates, Inc. Julian McAuley and Alex Yang. 2016. Addressing complex and subjective product-related queries with customer reviews. In Proceedings of the 25th International Conference on World Wide Web, pages 625–635. International World Wide Web Conferences Steering Committee. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. 2015. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533. Kamal Nigam and Andrew McCallum. 1998. Poolbased active learning for text classification. In Conference on Automated Learning and Discovery (CONALD). Francisco Rangel, Paolo Rosso, Martin Potthast, and Benno Stein. 2017. Overview of the 5th author profiling task at PAN 2017: Gender and language variety identification in twitter. Working Notes Papers of the CLEF. Stephane Ross and J Andrew Bagnell. 2014. Reinforcement and imitation learning via interactive noregret learning. arXiv preprint arXiv:1406.5979. 1883 Stéphane Ross, Geoffrey J Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In International Conference on Artificial Intelligence and Statistics, pages 627–635. Burr Settles. 2010. Active learning literature survey. University of Wisconsin, Madison, 52(55-66):11. Burr Settles and Mark Craven. 2008. An analysis of active learning strategies for sequence labeling tasks. In Proceedings of the conference on empirical methods in natural language processing, pages 1070– 1079. Association for Computational Linguistics. Claude E Shannon. 1948. A note on the concept of entropy. Bell System Tech. J, 27:379–423. Manali Sharma, Di Zhuang, and Mustafa Bilgic. 2015. Active learning with rationales for text classification. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 441–451. Wen Sun, Arun Venkatraman, Geoffrey J Gordon, Byron Boots, and J Andrew Bagnell. 2017. Deeply aggrevated: Differentiable imitation learning for sequential prediction. arXiv preprint arXiv:1703.01030. Arun Venkatraman, Martial Hebert, and J Andrew Bagnell. 2015. Improving multi-step prediction of learned time series models. In AAAI, pages 3024– 3030. 
Mark Woodward and Chelsea Finn. 2017. Active one-shot learning. arXiv preprint arXiv:1702.06559. Min Xiao and Yuhong Guo. 2013. Online active learning for cost-sensitive domain adaptation. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 1–9. Yi Yang, Zhigang Ma, Feiping Nie, Xiaojun Chang, and Alexander G Hauptmann. 2015. Multi-class active learning by uncertainty sampling with diversity maximization. International Journal of Computer Vision, 113(2):113–127.
2018
174
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1884–1895 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1884 Training Classifiers with Natural Language Explanations Braden Hancock Computer Science Dept. Stanford University [email protected] Paroma Varma Electrical Engineering Dept. Stanford University [email protected] Stephanie Wang Computer Science Dept. Stanford University [email protected] Martin Bringmann OccamzRazor San Francisco, CA [email protected] Percy Liang Computer Science Dept. Stanford University [email protected] Christopher R´e Computer Science Dept. Stanford University [email protected] Abstract Training accurate classifiers requires many labels, but each label provides only limited information (one bit for binary classification). In this work, we propose BabbleLabble, a framework for training classifiers in which an annotator provides a natural language explanation for each labeling decision. A semantic parser converts these explanations into programmatic labeling functions that generate noisy labels for an arbitrary amount of unlabeled data, which is used to train a classifier. On three relation extraction tasks, we find that users are able to train classifiers with comparable F1 scores from 5–100 faster by providing explanations instead of just labels. Furthermore, given the inherent imperfection of labeling functions, we find that a simple rule-based semantic parser suffices. 1 Introduction The standard protocol for obtaining a labeled dataset is to have a human annotator view each example, assess its relevance, and provide a label (e.g., positive or negative for binary classification). However, this only provides one bit of information per example. This invites the question: how can we get more information per example, given that the annotator has already spent the effort reading and understanding an example? Previous works have relied on identifying relevant parts of the input such as labeling features (Druck et al., 2009; Raghavan et al., 2005; Liang et al., 2009), highlighting rationale phrases in Both cohorts showed signs of optic nerve toxicity due to ethambutol. Example Label Explanation Because the words “due to” occur between the chemical and the disease. Does this chemical cause this disease? Why do you think so? Labeling Function def lf(x): return (1 if “due to” in between(x.chemical, x.disease) else 0) Figure 1: In BabbleLabble, the user provides a natural language explanation for each labeling decision. These explanations are parsed into labeling functions that convert unlabeled data into a large labeled dataset for training a classifier. text (Zaidan and Eisner, 2008; Arora and Nyberg, 2009), or marking relevant regions in images (Ahn et al., 2006). But there are certain types of information which cannot be easily reduced to annotating a portion of the input, such as the absence of a certain word, or the presence of at least two words. In this work, we tap into the power of natural language and allow annotators to provide supervision to a classifier via natural language explanations. Specifically, we propose a framework in which annotators provide a natural language explanation for each label they assign to an example (see Figure 1). These explanations are parsed into logical forms representing labeling functions (LFs), functions that heuristically map examples to labels (Ratner et al., 2016). 
The labeling functions are 1885 Tom Bradyand his wife Gisele Bündchen were spotted in New York City on Monday amid rumors of Brady’s alleged role in Deflategate. True, because the words “his wife” are right before person 2. def LF_1a(x): return (1 if “his wife” in left(x.person2, dist==1) else 0) def LF_1b(x): return (1 if “his wife” in right(x.person2) else 0 Correct Semantic Filter (inconsistent) Unlabeled Examples + Explanations Label whether person 1 is married to person 2 Labeling Functions Filters Label Matrix None of us knows what happened at Kane‘s home Aug. 2, but it is telling that the NHL has not suspended Kane. False, because person 1 and person 2 in the sentence are identical. Dr. Michael Richards and real estate and insurance businessman Gary Kirke did not attend the event. False, because the last word of person 1 is different than the last word of person 2. x1 x2 x3 def LF_3a(x): return (-1 if x.person1.tokens[-1] != x.person2.tokens[-1] else 0) Correct Pragmatic Filter (duplicate of LF_3a) def LF_2b(x): return (-1 if x.person1 == x.person2) else 0) Correct def LF_3b(x): return (-1 if not ( x.person1.tokens[-1] == x.person2.tokens[-1]) else 0) def LF_2a(x): return (-1 if x.person1 in x.sentence and x.person2 in x.sentence else 0) Pragmatic Filter (always true) x1 x2 x3 LF1a LF2b LF3a 1 -1 -1 -1 ỹ x4 … LF4c … 1 1 … + 1 + Noisy Labels (x1,ỹ1) (x2,ỹ2) (x3,ỹ3) (x4,ỹ4) Classifier x ỹ Figure 2: Natural language explanations are parsed into candidate labeling functions (LFs). Many incorrect LFs are filtered out automatically by the filter bank. The remaining functions provide heuristic labels over the unlabeled dataset, which are aggregated into one noisy label per example, yielding a large, noisily-labeled training set for a classifier. then executed on many unlabeled examples, resulting in a large, weakly-supervised training set that is then used to train a classifier. Semantic parsing of natural language into logical forms is recognized as a challenging problem and has been studied extensively (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Liang et al., 2011; Liang, 2016). One of our major findings is that in our setting, even a simple rule-based semantic parser suffices for three reasons: First, we find that the majority of incorrect LFs can be automatically filtered out either semantically (e.g., is it consistent with the associated example?) or pragmatically (e.g., does it avoid assigning the same label to the entire training set?). Second, LFs near the gold LF in the space of logical forms are often just as accurate (and sometimes even more accurate). Third, techniques for combining weak supervision sources are built to tolerate some noise (Alfonseca et al., 2012; Takamatsu et al., 2012; Ratner et al., 2018). The significance of this is that we can deploy the same semantic parser across tasks without task-specific training. We show how we can tackle a real-world biomedical application with the same semantic parser used to extract instances of spouses. Our work is most similar to that of Srivastava et al. (2017), who also use natural language explanations to train a classifier, but with two important differences. First, they jointly train a task-specific semantic parser and classifier, whereas we use a simple rule-based parser. In Section 4, we find that in our weak supervision framework, the rule-based semantic parser and the perfect parser yield nearly identical downstream performance. 
Second, while they use the logical forms of explanations to produce features that are fed directly to a classifier, we use them as functions for labeling a much larger training set. In Section 4, we show that using functions yields a 9.5 F1 improvement (26% relative improvement) over features, and that the F1 score scales with the amount of available unlabeled data. We validate our approach on two existing datasets from the literature (extracting spouses from news articles and disease-causing chemicals from biomedical abstracts) and one real-world use case with our biomedical collaborators at OccamzRazor to extract protein-kinase interactions related to Parkinson’s disease from text. We find empirically that users are able to train classifiers with comparable F1 scores up to two orders of magnitude faster when they provide natural language explanations instead of individual labels. Our code and data can be found at https:// github.com/HazyResearch/babble. 2 The BabbleLabble Framework The BabbleLabble framework converts natural language explanations and unlabeled data into a noisily-labeled training set (see Figure 2). There are three key components: a semantic parser, a filter bank, and a label aggregator. The semantic 1886 <START> label false because X and Y are the same person <STOP> START LABEL FALSE BECAUSE ARG AND ARG IS EQUAL STOP BOOL ARGLIST ISEQUAL CONDITION LF Lexical Rules Unary Rules Compositional Rules →<START> →label →false START LABEL FALSE →FALSE →TRUE →INT BOOL BOOL NUM LF CONDITION ARGLIST →START LABEL BOOL BECAUSE CONDITION STOP →ARGLIST ISEQUAL →ARG AND ARG Ignored token Figure 3: Valid parses are found by iterating over increasingly large subspans of the input looking for matches among the right hand sides of the rules in the grammar. Rules are either lexical (converting tokens into symbols), unary (converting one symbol into another symbol), or compositional (combining many symbols into a single higher-order symbol). A rule may optionally ignore unrecognized tokens in a span (denoted here with a dashed line). parser converts natural language explanations into a set of logical forms representing labeling functions (LFs). The filter bank removes as many incorrect LFs as possible without requiring ground truth labels. The remaining LFs are applied to unlabeled examples to produce a matrix of labels. This label matrix is passed into the label aggregator, which combines these potentially conflicting and overlapping labels into one label for each example. The resulting labeled examples are then used to train an arbitrary discriminative model. 2.1 Explanations To create the input explanations, the user views a subset S of an unlabeled dataset D (where |S| ≪ |D|) and provides for each input xi ∈S a label yi and a natural language explanation ei, a sentence explaining why the example should receive that label. The explanation ei generally refers to specific aspects of the example (e.g., in Figure 2, the location of a specific string “his wife”). 2.2 Semantic Parser The semantic parser takes a natural language explanation ei and returns a set of LFs (logical forms or labeling functions) {f1, . . . , fk} of the form fi : X →{−1, 0, 1} in a binary classification setting, with 0 representing abstention. 
We emphasize that the goal of this semantic parser is not to generate the single correct parse, but rather to have coverage over many potentially useful LFs.1 1Indeed, we find empirically that an incorrect LF nearby the correct one in the space of logical forms actually has higher end-task accuracy 57% of the time (see Section 4.2). We choose a simple rule-based semantic parser that can be used without any training. Formally, the parser uses a set of rules of the form α →β, where α can be replaced by the token(s) in β (see Figure 3 for example rules). To identify candidate LFs, we recursively construct a set of valid parses for each span of the explanation, based on the substitutions defined by the grammar rules. At the end, the parser returns all valid parses (LFs in our case) corresponding to the entire explanation. We also allow an arbitrary number of tokens in a given span to be ignored when looking for a matching rule. This improves the ability of the parser to handle unexpected input, such as unknown words or typos, since the portions of the input that are parseable can still result in a valid parse. For example, in Figure 3, the word “person” is ignored. All predicates included in our grammar (summarized in Table 1) are provided to annotators, with minimal examples of each in use (Appendix A). Importantly, all rules are domain independent (e.g., all three relation extraction tasks that we tested used the same grammar), making the semantic parser easily transferrable to new domains. Additionally, while this paper focuses on the task of relation extraction, in principle the BabbleLabble framework can be applied to other tasks or settings by extending the grammar with the necessary primitives (e.g., adding primitives for rows and columns to enable explanations about the alignments of words in tables). To guide the construction of the grammar, we collected 500 explanations for the Spouse domain from workers 1887 Predicate Description bool, string, int, float, tuple, list, set Standard primitive data types and, or, not, any, all, none Standard logic operators =, ̸=, <, ≤, >, ≥ Standard comparison operators lower, upper, capital, all caps Return True for strings of the corresponding case starts with, ends with, substring Return True if the first string starts/ends with or contains the second person, location, date, number, organization Return True if a string has the corresponding NER tag alias A frequently used list of words may be predefined and referred to with an alias count, contains, intersection Operators for checking size, membership, or common elements of a list/set map, filter Apply a functional primitive to each member of list/set to transform or filter the elements word distance, character distance Return the distance between two strings by words or characters left, right, between, within Return as a string the text that is left/right/within some distance of a string or between two designated strings Table 1: Predicates in the grammar supported by BabbleLabble’s rule-based semantic parser. on Amazon Mechanical Turk and added support for the most commonly used predicates. These were added before the experiments described in Section 4. Altogether the grammar contains 200 rule templates. 2.3 Filter Bank The input to the filter bank is a set of candidate LFs produced by the semantic parser. The purpose of the filter bank is to discard as many incorrect LFs as possible without requiring additional labels. It consists of two classes of filters: semantic and pragmatic. 
Recall that each explanation ei is collected in the context of a specific labeled example (xi, yi). The semantic filter checks for LFs that are inconsistent with their corresponding example; formally, any LF f for which f(xi) ̸= yi is discarded. For example, in the first explanation in Figure 2, the word “right” can be interpreted as either “immediately” (as in “right before”) or simply “to the right.” The latter interpretation results in a function that is inconsistent with the associated example (since “his wife” is actually to the left of person 2), so it can be safely removed. The pragmatic filters removes LFs that are constant, redundant, or correlated. For example, in Figure 2, LF 2a is constant, as it labels every example positively (since all examples contain two people from the same sentence). LF 3b is redundant, since even though it has a different syntax tree from LF 3a, it labels the training set identically and therefore provides no new signal. Finally, out of all LFs from the same explanation that pass all the other filters, we keep only the most specific (lowest coverage) LF. This prevents multiple correlated LFs from a single example from dominating. As we show in Section 4, over three tasks, the filter bank removes 86% of incorrect parses, and the incorrect ones that remain have average endtask accuracy within 2.5% of the corresponding correct parses. 2.4 Label Aggregator The label aggregator combines multiple (potentially conflicting) suggested labels from the LFs and combines them into a single probabilistic label per example. Concretely, if m LFs pass the filter bank and are applied to n examples, the label aggregator implements a function f : {−1, 0, 1}m×n →[0, 1]n. A naive solution would be to use a simple majority vote, but this fails to account for the fact that LFs can vary widely in accuracy and coverage. Instead, we use data programming (Ratner et al., 2016), which models the relationship between the true labels and the output of the labeling functions as a factor graph. More specifically, given the true labels Y ∈{−1, 1}n (latent) and label matrix Λ ∈{−1, 0, 1}m×n (observed) where Λi,j = LFi(xj), we define two types of factors representing labeling propensity and accuracy: φLab i,j (Λ, Y ) = 1{Λi,j ̸= 0} (1) φAcc i,j (Λ, Y ) = 1{Λi,j = yj}. (2) Denoting the vector of factors pertaining to a given data point xj as φj(Λ, Y ) ∈Rm, define the model: pw(Λ, Y ) = Z−1 w exp  n X j=1 w · φj(Λ, Y )  , (3) 1888 They include Joan Ridsdale, a 62-year-old payroll administrator from County Durham who was hit with a €16,000 tax bill when her husband Gordondied. Spouse Disease Protein Example Explanation True, because the phrase “her husband” is within three words of person 2. Example Explanation Young women on replacement estrogens for ovarian failure after cancer therapy may also have increased risk of endometrial carcinoma and should be examined periodically. (person 1, person 2) (chemical, disease) (protein, kinase) True, because “risk of” comes before the disease. Here we show that c-Jun N-terminal kinases JNK1, JNK2 and JNK3 phosphorylate tauat many serine/threonine-prolines, as assessed by the generation of the epitopes of phosphorylation-dependent anti-tau antibodies. Example Explanation True, because at least one of the words 'phosphorylation', 'phosphorylate', 'phosphorylated', 'phosphorylates' is found in the sentence and the number of words between the protein and kinase is smaller than 8." Figure 4: An example and explanation for each of the three datasets. 
where w ∈R2m is the weight vector and Zw is the normalization constant. To learn this model without knowing the true labels Y , we minimize the negative log marginal likelihood given the observed labels Λ: ˆw = arg min w −log X Y pw(Λ, Y ) (4) using SGD and Gibbs sampling for inference, and then use the marginals p ˆw(Y | Λ) as probabilistic training labels. Intuitively, we infer accuracies of the LFs based on the way they overlap and conflict with one another. Since noisier LFs are more likely to have high conflict rates with others, their corresponding accuracy weights in w will be smaller, reducing their influence on the aggregated labels. 2.5 Discriminative Model The noisily-labeled training set that the label aggregator outputs is used to train an arbitrary discriminative model. One advantage of training a discriminative model on the task instead of using the label aggregator as a classifier directly is that the label aggregator only takes into account those signals included in the LFs. A discriminative model, on the other hand, can incorporate features that were not identified by the user but are nevertheless informative.2 Consequently, even examples for which all LFs abstained can still be classified correctly. On the three tasks we evaluate, using the discriminative model averages 4.3 F1 points higher than using the label aggregator directly. For the results reported in this paper, our discriminative model is a simple logistic regression classifier with generic features defined over dependency paths.3 These features include unigrams, 2We give an example of two such features in Section 4.3. 3https://github.com/HazyResearch/treedlib Task Train Dev Test % Pos. Spouse 22195 2796 2697 8% Disease 6667 773 4101 20% Protein 5546 1011 1058 22% Table 2: The total number of unlabeled training examples (a pair of annotated entities in a sentence), labeled development examples (for hyperparameter tuning), labeled test examples (for assessment), and the fraction of positive labels in the test split. bigrams, and trigrams of lemmas, dependency labels, and part of speech tags found in the siblings, parents, and nodes between the entities in the dependency parse of the sentence. We found this to perform better on average than a biLSTM, particularly for the traditional supervision baselines with small training set sizes; it also provided easily interpretable features for analysis. 3 Experimental Setup We evaluate the accuracy of BabbleLabble on three relation extraction tasks, which we refer to as Spouse, Disease, and Protein. The goal of each task is to train a classifier for predicting whether the two entities in an example are participating in the relationship of interest, as described below. 3.1 Datasets Statistics for each dataset are reported in Table 2, with one example and one explanation for each given in Figure 4 and additional explanations shown in Appendix B. In the Spouse task, annotators were shown a sentence with two highlighted names and asked to label whether the sentence suggests that the two people are spouses. 
Sentences were pulled from the Signal Media dataset of news articles (Corney 1889 BL TS # Inputs 30 30 60 150 300 1,000 3,000 10,000 Spouse 50.1 15.5 15.9 16.4 17.2 22.8 41.8 55.0 Disease 42.3 32.1 32.6 34.4 37.5 41.9 44.5 Protein 47.3 39.3 42.1 46.8 51.0 57.6 Average 46.6 28.9 30.2 32.5 35.2 40.8 43.2 55.0 Table 3: F1 scores obtained by a classifier trained with BabbleLabble (BL) using 30 explanations or with traditional supervision (TS) using the specified number of individually labeled examples. BabbleLabble achieves the same F1 score as traditional supervision while using fewer user inputs by a factor of over 5 (Protein) to over 100 (Spouse). et al., 2016). Ground truth data was collected from Amazon Mechanical Turk workers, accepting the majority label over three annotations. The 30 explanations we report on were sampled randomly from a pool of 200 that were generated by 10 graduate students unfamiliar with BabbleLabble. In the Disease task, annotators were shown a sentence with highlighted names of a chemical and a disease and asked to label whether the sentence suggests that the chemical causes the disease. Sentences and ground truth labels came from a portion of the 2015 BioCreative chemical-disease relation dataset (Wei et al., 2015), which contains abstracts from PubMed. Because this task requires specialized domain expertise, we obtained explanations by having someone unfamiliar with BabbleLabble translate from Python to natural language labeling functions from an existing publication that explored applying weak supervision to this task (Ratner et al., 2018). The Protein task was completed in conjunction with OccamzRazor, a neuroscience company targeting biological pathways of Parkinson’s disease. For this task, annotators were shown a sentence from the relevant biomedical literature with highlighted names of a protein and a kinase and asked to label whether or not the kinase influences the protein in terms of a physical interaction or phosphorylation. The annotators had domain expertise but minimal programming experience, making BabbleLabble a natural fit for their use case. 3.2 Experimental Settings Text documents are tokenized with spaCy.4 The semantic parser is built on top of the Python-based 4https://github.com/explosion/spaCy implementation SippyCup.5 On a single core, parsing 360 explanations takes approximately two seconds. We use existing implementations of the label aggregator, feature library, and discriminative classifier described in Sections 2.4–2.5 provided by the open-source project Snorkel (Ratner et al., 2018). Hyperparameters for all methods we report were selected via random search over thirty configurations on the same held-out development set. We searched over learning rate, batch size, L2 regularization, and the subsampling rate (for improving balance between classes).6 All reported F1 scores are the average value of 40 runs with random seeds and otherwise identical settings. 4 Experimental Results We evaluate the performance of BabbleLabble with respect to its rate of improvement by number of user inputs, its dependence on correctly parsed logical forms, and the mechanism by which it utilizes logical forms. 4.1 High Bandwidth Supervision In Table 3 we report the average F1 score of a classifier trained with BabbleLabble using 30 explanations or traditional supervision with the indicated number of labels. 
On average, it took the same amount of time to collect 30 explanations as 60 labels.7 We observe that in all three tasks, BabbleLabble achieves a given F1 score with far fewer user inputs than traditional supervision, by 5https://github.com/wcmac/sippycup 6Hyperparameter ranges: learning rate (1e-2 to 1e-4), batch size (32 to 128), L2 regularization (0 to 100), subsampling rate (0 to 0.5) 7Zaidan and Eisner (2008) also found that collecting annotator rationales in the form of highlighted substrings from the sentence only doubled annotation time. 1890 Pre-filters Discarded Post-filters LFs Correct Sem. Prag. LFs Correct Spouse 153 10% 19 115 19 84% Disease 104 23% 41 36 27 89% Protein 122 14% 44 58 20 85% Table 4: The number of LFs generated from 30 explanations (pre-filters), discarded by the filter bank, and remaining (post-filters), along with the percentage of LFs that were correctly parsed from their corresponding explanations. as much as 100 times in the case of the Spouse task. Because explanations are applied to many unlabeled examples, each individual input from the user can implicitly contribute many (noisy) labels to the learning algorithm. We also observe, however, that once the number of labeled examples is sufficiently large, traditional supervision once again dominates, since ground truth labels are preferable to noisy ones generated by labeling functions. However, in domains where there is much more unlabeled data available than labeled data (which in our experience is most domains), we can gain in supervision efficiency from using BabbleLabble. Of those explanations that did not produce a correct LF, 4% were caused by the explanation referring to unsupported concepts (e.g., one explanation referred to “the subject of the sentence,” which our simple parser doesn’t support). Another 2% were caused by human errors (the correct LF for the explanation was inconsistent with the example). The remainder were due to unrecognized paraphrases (e.g., the explanation said “the order of appearance is X, Y” instead of a supported phrasing like “X comes before Y”). 4.2 Utility of Incorrect Parses In Table 4, we report LF summary statistics before and after filtering. LF correctness is based on exact match with a manually generated parse for each explanation. Surprisingly, the simple heuristic-based filter bank successfully removes over 95% of incorrect LFs in all three tasks, resulting in final LF sets that are 86% correct on average. Furthermore, among those LFs that pass through the filter bank, we found that the average difference in end-task accuracy between correct and incorrect parses is less than 2.5%. Intuitively, the filters are effective because it is quite difficult for an LF to be parsed from the explanaBL-FB BL BL+PP Spouse 15.7 50.1 49.8 Disease 39.8 42.3 43.2 Protein 38.2 47.3 47.4 Average 31.2 46.6 46.8 Table 5: F1 scores obtained using BabbleLabble with no filter bank (BL-FB), as normal (BL), and with a perfect parser (BL+PP) simulated by hand. tion, label its own example correctly (passing the semantic filter), and not label all examples in the training set with the same label or identically to another LF (passing the pragmatic filter). We went one step further: using the LFs that would be produced by a perfect semantic parser as starting points, we searched for “nearby” LFs (LFs differing by only one predicate) with higher endtask accuracy on the test set and succeeded 57% of the time (see Figure 5 for an example). 
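The search over "nearby" LFs is conceptually simple; a hedged sketch follows. The perturbation and scoring interfaces (perturb_one_predicate, lf_accuracy) are illustrative assumptions rather than the code used for the experiments.

```python
def lf_accuracy(lf, labeled_examples):
    """End-task accuracy of a single LF, ignoring examples where it abstains (0)."""
    votes = [(lf(x), y) for x, y in labeled_examples]
    scored = [(v, y) for v, y in votes if v != 0]
    return sum(v == y for v, y in scored) / max(len(scored), 1)

def best_nearby_lf(lf, perturb_one_predicate, labeled_examples):
    """Return the most accurate LF among `lf` and its one-predicate perturbations.

    perturb_one_predicate(lf) should yield candidate LFs differing from `lf`
    in exactly one predicate (e.g. left -> right, person1 -> person2).
    """
    candidates = [lf] + list(perturb_one_predicate(lf))
    return max(candidates, key=lambda f: lf_accuracy(f, labeled_examples))
```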
In other words, when users provide explanations, the signals they describe provide good starting points, but they are actually unlikely to be optimal. This observation is further supported by Table 5, which shows that the filter bank is necessary to remove clearly irrelevant LFs, but with that in place, the simple rule-based semantic parser and a perfect parser have nearly identical average F1 scores. 4.3 Using LFs as Functions or Features Once we have relevant logical forms from userprovided explanations, we have multiple options for how to use them. Srivastava et al. (2017) propose using these logical forms as features in a linear classifier. We choose instead to use them as functions for weakly supervising the creation of a larger training set via data programming (Ratner et al., 2016). In Table 6, we compare the two approaches directly, finding that the the data programming approach outperforms a feature-based one by 9.5 F1 points with the rule-based parser, and by 4.5 points with a perfect parser. We attribute this difference primarily to the ability of data programming to utilize unlabeled data. In Figure 6, we show how the data programming approach improves with the number of unlabeled examples, even as the number of LFs remains constant. We also observe qualitatively that data programming exposes the classifier to additional patterns that are correlated with our explanations but not mentioned directly. For example, in the Disease task, two of the features weighted most 1891 def LF_1a(x): return (-1 if any(w.startswith(“improv”) for w in left(x.person2)) else 0) Correct False, because a word starting with “improve” appears before the chemical. Incorrect Explanation Labeling Function Correctness Accuracy def LF_1b(x): return (-1 if “improv” in left(x.person2)) else 0) 84.6% 84.6% def LF_2a(x): return (1 if “husband” in left(x.person1, dist==1) else 0) Correct True, because “husband” occurs right before the person1. Incorrect 13.6% 66.2% def LF_2b(x): return (1 if “husband” in left(x.person2, dist==1) else 0) Figure 5: Incorrect LFs often still provide useful signal. On top is an incorrect LF produced for the Disease task that had the same accuracy as the correct LF. On bottom is a correct LF from the Spouse task and a more accurate incorrect LF discovered by randomly perturbing one predicate at a time as described in Section 4.2. (Person 2 is always the second person in the sentence). 0 1000 2000 3000 4000 5000 Unlabeled Examples 0.15 0.20 0.25 0.30 0.35 0.40 0.45 F1 Score Spouse (BL) Spouse (Feat) Disease (BL) Disease (Feat) Protein (BL) Protein (Feat) Figure 6: When logical forms of natural language explanations are used as functions for data programming (as they are in BabbleLabble), performance can improve with the addition of unlabeled data, whereas using them as features does not benefit from unlabeled data. highly by the discriminative model were the presence of the trigrams “could produce a” or “support diagnosis of” between the chemical and disease, despite none of these words occurring in the explanations for that task. In Table 6 we see a 4.3 F1 point improvement (10%) when we use the discriminative model that can take advantage of these features rather than applying the LFs directly to the test set and making predictions based on the output of the label aggregator. 5 Related Work and Discussion Our work has two themes: modeling natural language explanations/instructions and learning from weak supervision. 
The closest body of work is on “learning from natural language.” As mentioned earlier, Srivastava et al. (2017) convert natural language explanations into classifier features (whereas we convert them into labeling functions). Goldwasser and Roth (2011) convert natural lanBL-DM BL BL+PP Feat Feat+PP Spouse 46.5 50.1 49.8 33.9 39.2 Disease 39.7 42.3 43.2 40.8 43.8 Protein 40.6 47.3 47.4 36.7 44.0 Average 42.3 46.6 46.8 37.1 42.3 Table 6: F1 scores obtained using explanations as functions for data programming (BL) or features (Feat), optionally with no discriminative model (-DM) or using a perfect parser (+PP). guage into concepts (e.g., the rules of a card game). Ling and Fidler (2017) use natural language explanations to assist in supervising an image captioning model. Weston (2016); Li et al. (2016) learn from natural language feedback in a dialogue. Wang et al. (2017) convert natural language definitions to rules in a semantic parser to build up progressively higher-level concepts. We lean on the formalism of semantic parsing (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Liang, 2016). One notable trend is to learn semantic parsers from weak supervision (Clarke et al., 2010; Liang et al., 2011), whereas our goal is to obtain weak supervision signal from semantic parsers. The broader topic of weak supervision has received much attention; we mention some works most related to relation extraction. In distant supervision (Craven et al., 1999; Mintz et al., 2009) and multi-instance learning (Riedel et al., 2010; Hoffmann et al., 2011), an existing knowledge base is used to (probabilistically) impute a training set. Various extensions have focused on aggregating a variety of supervision sources by learning generative models from noisy labels (Alfonseca et al., 2012; Takamatsu et al., 2012; Roth and Klakow, 2013; Ratner et al., 2016; Varma et al., 2017). 1892 Finally, while we have used natural language explanations as input to train models, they can also be output to interpret models (Krening et al., 2017; Lei et al., 2016). More generally, from a machine learning perspective, labels are the primary asset, but they are a low bandwidth signal between annotators and the learning algorithm. Natural language opens up a much higher-bandwidth communication channel. We have shown promising results in relation extraction (where one explanation can be “worth” 100 labels), and it would be interesting to extend our framework to other tasks and more interactive settings. Reproducibility The code, data, and experiments for this paper are available on the CodaLab platform at https: //worksheets.codalab.org/worksheets/ 0x900e7e41deaa4ec5b2fe41dc50594548/. Acknowledgments We gratefully acknowledge the support of the following organizations: DARPA under No. N66001-15-C-4043 (SIMPLEX), No. FA8750-17-2-0095 (D3M), No. FA8750-122-0335 (XDATA), and No. FA8750-13-2-0039 (DEFT), DOE under No. 108845, NIH under No. U54EB020405 (Mobilize), ONR under No. N000141712266 and No. N000141310129, AFOSR under No. 580K753, the Intel/NSF CPS Security grant No. 1505728, the Michael J. Fox Foundation for Parkinsons Research under Grant No. 14672, the Secure Internet of Things Project, Qualcomm, Ericsson, Analog Devices, the Moore Foundation, the Okawa Research Grant, American Family Insurance, Accenture, Toshiba, the National Science Foundation Graduate Research Fellowship under Grant No. DGE-114747, the Stanford Finch Family Fellowship, the Joseph W. 
and Hon Mai Goodman Stanford Graduate Fellowship, an NSF CAREER Award IIS-1552635, and the members of the Stanford DAWN project: Facebook, Google, Intel, Microsoft, NEC, Teradata, and VMware. We thank Alex Ratner and the developers of Snorkel for their assistance with data programming, as well as the many members of the Hazy Research group and Stanford NLP group who provided feedback and tested early prototyptes. Thanks as well to the OccamzRazor team: Tarik Koc, Benjamin Angulo, Katharina S. Volz, and Charlotte Brzozowski. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of DARPA, DOE, NIH, ONR, AFOSR, NSF, or the U.S. Government. References L. V. Ahn, R. Liu, and M. Blum. 2006. Peekaboom: a game for locating objects in images. In Conference on Human Factors in Computing Systems (CHI). pages 55–64. E. Alfonseca, K. Filippova, J. Delort, and G. Garrido. 2012. Pattern learning for relation extraction with a hierarchical topic model. In Association for Computational Linguistics (ACL). pages 54–59. S. Arora and E. Nyberg. 2009. Interactive annotation learning with indirect feature voting. In Association for Computational Linguistics (ACL). pages 55–60. J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world’s response. In Computational Natural Language Learning (CoNLL). pages 18–27. D. Corney, D. Albakour, M. Martinez-Alvarez, and S. Moussa. 2016. What do a million news articles look like? In NewsIR@ ECIR. pages 42–47. M. Craven, J. Kumlien, et al. 1999. Constructing biological knowledge bases by extracting information from text sources. In ISMB. pages 77–86. G. Druck, B. Settles, and A. McCallum. 2009. Active learning by labeling features. In Empirical Methods in Natural Language Processing (EMNLP). pages 81–90. D. Goldwasser and D. Roth. 2011. Learning from natural instructions. In International Joint Conference on Artificial Intelligence (IJCAI). pages 1794–1800. R. Hoffmann, C. Zhang, X. Ling, L. S. Zettlemoyer, and D. S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Association for Computational Linguistics (ACL). pages 541–550. S. Krening, B. Harrison, K. M. Feigh, C. L. Isbell, M. Riedl, and A. Thomaz. 2017. Learning from explanations using sentiment and advice in RL. IEEE Transactions on Cognitive and Developmental Systems 9(1):44–55. T. Lei, R. Barzilay, and T. Jaakkola. 2016. Rationalizing neural predictions. In Empirical Methods in Natural Language Processing (EMNLP). J. Li, A. H. Miller, S. Chopra, M. Ranzato, and J. Weston. 2016. Learning through dialogue interactions. arXiv preprint arXiv:1612.04936 . P. Liang. 2016. Learning executable semantic parsers for natural language understanding. Communications of the ACM 59. P. Liang, M. I. Jordan, and D. Klein. 2009. Learning from measurements in exponential families. In International Conference on Machine Learning (ICML). P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL). pages 590–599. H. Ling and S. Fidler. 2017. Teaching machines to describe images via natural language feedback. In Advances in Neural Information Processing Systems (NIPS). M. 
Mintz, S. Bills, R. Snow, and D. Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Association for Computational Linguistics (ACL). pages 1003–1011. H. Raghavan, O. Madani, and R. Jones. 2005. Interactive feature selection. In International Joint Conference on Artificial Intelligence (IJCAI). volume 5, pages 841–846. 1893 A. J. Ratner, S. H. Bach, H. Ehrenberg, J. Fries, S. Wu, and C. R’e. 2018. Snorkel: Rapid training data creation with weak supervision. In Very Large Data Bases (VLDB). A. J. Ratner, C. M. D. Sa, S. Wu, D. Selsam, and C. R’e. 2016. Data programming: Creating large training sets, quickly. In Advances in Neural Information Processing Systems (NIPS). pages 3567–3575. S. Riedel, L. Yao, and A. McCallum. 2010. Modeling relations and their mentions without labeled text. In Machine Learning and Knowledge Discovery in Databases (ECML PKDD). pages 148–163. B. Roth and D. Klakow. 2013. Combining generative and discriminative model scores for distant supervision. In Empirical Methods in Natural Language Processing (EMNLP). pages 24–29. S. Srivastava, I. Labutov, and T. Mitchell. 2017. Joint concept learning and semantic parsing from natural language explanations. In Empirical Methods in Natural Language Processing (EMNLP). pages 1528–1537. S. Takamatsu, I. Sato, and H. Nakagawa. 2012. Reducing wrong labels in distant supervision for relation extraction. In Association for Computational Linguistics (ACL). pages 721–729. P. Varma, B. He, D. Iter, P. Xu, R. Yu, C. D. Sa, and C. R’e. 2017. Socratic learning: Augmenting generative models to incorporate latent subsets in training data. arXiv preprint arXiv:1610.08123 . S. I. Wang, S. Ginn, P. Liang, and C. D. Manning. 2017. Naturalizing a programming language via interactive learning. In Association for Computational Linguistics (ACL). C. Wei, Y. Peng, R. Leaman, A. P. Davis, C. J. Mattingly, J. Li, T. C. Wiegers, and Z. Lu. 2015. Overview of the biocreative V chemical disease relation (cdr) task. In Proceedings of the fifth BioCreative challenge evaluation workshop. pages 154–166. J. E. Weston. 2016. Dialog-based language learning. In Advances in Neural Information Processing Systems (NIPS). pages 829–837. O. F. Zaidan and J. Eisner. 2008. Modeling annotators: A generative approach to learning from annotator rationales. In Empirical Methods in Natural Language Processing (EMNLP). M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence (AAAI). pages 1050–1055. L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI). pages 658–666. 1894 A Predicate Examples Below are the predicates in the rule-based semantic parser grammar, each of which may have many supported paraphrases, only one of which is listed here in a minimal example. 
Logic and: X is true and Y is true or: X is true or Y is true not: X is not true any: Any of X or Y or Z is true all: All of X and Y and Z are true none: None of X or Y or Z is true Comparison =: X is equal to Y ̸=: X is not Y <: X is smaller than Y ≤: X is no more than Y >: X is larger than Y ≥: X is at least Y Syntax lower: X is lowercase upper: X is upper case capital: X is capitalized all caps: X is in all caps starts with: X starts with "cardio" ends with: X ends with "itis" substring: X contains "-induced" Named-entity Tags person: A person is between X and Y location: A place is within two words of X date: A date is between X and Y number: There are three numbers in the sentence organization: An organization is right after X Lists list: (X, Y) is in Z set: X, Y, and Z are true count: There is one word between X and Y contains: X is in Y intersection: At least two of X are in Y map: X is at the start of a word in Y filter: There are three capitalized words to the left of X alias: A spouse word is in the sentence (“spouse” is a predefined list from the user) Position word distance: X is two words before Y char distance: X is twenty characters after Y left: X is before Y right: X is after Y between: X is between Y and Z within: X is within five words of Y 1895 B Sample Explanations The following are a sample of the explanations provided by users for each task. Spouse Users referred to the first person in the sentence as “X” and the second as “Y”. Label true because "and" occurs between X and Y and "marriage" occurs one word after person1. Label true because person Y is preceded by ‘beau’. Label false because the words "married", "spouse", "husband", and "wife" do not occur in the sentence. Label false because there are more than 2 people in the sentence and "actor" or "actress" is left of person1 or person2. Disease Label true because the disease is immediately after the chemical and ’induc’ or ’assoc’ is in the chemical name. Label true because a word containing ’develop’ appears somewhere before the chemical, and the word ’following’ is between the disease and the chemical. Label true because "induced by", "caused by", or "due to" appears between the chemical and the disease." Label false because "none", "not", or "no" is within 30 characters to the left of the disease. Protein Label true because "Ser" or "Tyr" are within 10 characters of the protein. Label true because the words "by" or "with" are between the protein and kinase and the words "no", "not" or "none" are not in between the protein and kinase and the total number of words between them is smaller than 10. Label false because the sentence contains "mRNA", "DNA", or "RNA". Label false because there are two "," between the protein and the kinase with less than 30 characters between them.
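For reference, the first Spouse explanation in this appendix would bottom out in an LF roughly like the sketch below. This is illustrative only, not a parse produced by the actual grammar of Section 2.2; the attribute names on x are assumptions.

```python
# Illustrative LF for: "Label true because 'and' occurs between X and Y
# and 'marriage' occurs one word after person1."

def lf_spouse_and_marriage(x):
    # x.sentence: raw text; x.person1_span / x.person2_span: (start, end) char spans
    between_text = x.sentence[x.person1_span[1]:x.person2_span[0]]
    word_after_p1 = x.sentence[x.person1_span[1]:].split()[:1]
    if "and" in between_text.split() and word_after_p1 == ["marriage"]:
        return 1    # label true
    return 0        # abstain otherwise
```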
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1896–1906 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1896 Did the Model Understand the Question? Pramod K. Mudrakarta University of Chicago [email protected] Ankur Taly Google Brain Mukund Sundararajan Google {ataly,mukunds,kedar}@google.com Kedar Dhamdhere Google Abstract We analyze state-of-the-art deep learning models for three tasks: question answering on (1) images, (2) tables, and (3) passages of text. Using the notion of attribution (word importance), we find that these deep networks often ignore important question terms. Leveraging such behavior, we perturb questions to craft a variety of adversarial examples. Our strongest attacks drop the accuracy of a visual question answering model from 61.1% to 19%, and that of a tabular question answering model from 33.5% to 3.3%. Additionally, we show how attributions can strengthen attacks proposed by Jia and Liang (2017) on paragraph comprehension models. Our results demonstrate that attributions can augment standard measures of accuracy and empower investigation of model performance. When a model is accurate but for the wrong reasons, attributions can surface erroneous logic in the model that indicates inadequacies in the test data. 1 Introduction Recently, deep learning has been applied to a variety of question answering tasks. For instance, to answer questions about images (e.g. (Kazemi and Elqursh, 2017)), tabular data (e.g. (Neelakantan et al., 2017)), and passages of text (e.g. (Yu et al., 2018)). Developers, end-users, and reviewers (in academia) would all like to understand the capabilities of these models. The standard way of measuring the goodness of a system is to evaluate its error on a test set. High accuracy is indicative of a good model only if the test set is representative of the underlying realworld task. Most tasks have large test and training sets, and it is hard to manually check that they are representative of the real world. In this paper, we propose techniques to analyze the sensitivity of a deep learning model to question words. We do this by applying attribution (as discussed in section 3), and generating adversarial questions. Here is an illustrative example: recall Visual Question Answering (Agrawal et al., 2015) where the task is to answer questions about images. Consider the question “how symmetrical are the white bricks on either side of the building?” (corresponding image in Figure 1). The system that we study gets the answer right (“very”). But, we find (using an attribution approach) that the system relies on only a few of the words like “how” and “bricks”. Indeed, we can construct adversarial questions about the same image that the system gets wrong. For instance, “how spherical are the white bricks on either side of the building?” returns the same answer (“very”). A key premise of our work is that most humans have expertise in question answering. Even if they cannot manually check that a dataset is representative of the real world, they can identify important question words, and anticipate their function in question answering. 1.1 Our Contributions We follow an analysis workflow to understand three question answering models. There are two steps. First, we apply Integrated Gradients (henceforth, IG) (Sundararajan et al., 2017) to attribute the systems’ predictions to words in the questions. 
We propose visualizations of attributions to make analysis easy. Second, we identify weaknesses (e.g., relying on unimportant words) in the networks’ logic as exposed by the attributions, and leverage them to craft adversarial questions. A key contribution of this work is an overstability test for question answering networks. Jia and Liang (2017) showed that reading comprehension networks are overly stable to semantics-altering edits to the passage. In this work, we find that 1897 such overstability also applies to questions. Furthermore, this behavior can be seen in visual and tabular question answering networks as well. We use attributions to a define a general-purpose test for measuring the extent of the overstability (sections 4.3 and 5.3). It involves measuring how a network’s accuracy changes as words are systematically dropped from questions. We emphasize that, in contrast to modelindependent adversarial techniques such as that of Jia and Liang (2017), our method exploits the strengths and weaknesses of the model(s) at hand. This allows our attacks to have a high success rate. Additionally, using insights derived from attributions we were able to improve the attack success rate of Jia and Liang (2017) (section 6.2). Such extensive use of attributions in crafting adversarial examples is novel to the best of our knowledge. Next, we provide an overview of our results. In each case, we evaluate a pre-trained model on new inputs. We keep the networks’ parameters intact. Visual QA (section 4): The task is to answer questions about images. We analyze the deep network in Kazemi and Elqursh (2017). We find that the network ignores many question words, relying largely on the image to produce answers. For instance, we show that the model retains more than 50% of its original accuracy even when every word that is not “color” is deleted from all questions in the validation set. We also show that the model under-relies on important question words (e.g. nouns) and attaching contentfree prefixes (e.g., “in not many words, . . .”) to questions drops the accuracy from 61.1% to 19%. QA on tables (section 5): We analyze a system called Neural Programmer (henceforth, NP) (Neelakantan et al., 2017) that answers questions on tabular data. NP determines the answer to a question by selecting a sequence of operations to apply on the accompanying table (akin to an SQL query; details in section 5). We find that these operation selections are more influenced by content-free words (e.g., “in”, “at”, “the”, etc.) in questions than important words such as nouns or adjectives. Dropping all content-free words reduces the validation accuracy of the network from 33.5%1 to 28.5%. Similar to Visual QA, we 1This is the single-model accuracy that we obtained on training the Neural Programmer network. The accuracy reported in the paper is 34.1%. show that attaching content-free phrases (e.g., “in not a lot of words”) to the question drops the network’s accuracy from 33.5% to 3.3%. We also find that NP often gets the answer right for the wrong reasons. For instance, for the question “which nation earned the most gold medals?”, one of the operations selected by NP is “first” (pick the first row of the table). Its answer is right only because the table happens to be arranged in order of rank. We quantify this weakness by evaluating NP on the set of perturbed tables generated by Pasupat and Liang (2016) and find that its accuracy drops from 33.5% to 23%. 
Finally, we show an extreme form of overstability where the table itself induces a large bias in the network regardless of the question. For instance, we found that in tables about Olympic medal counts, NP was predisposed to selecting the “prev” operator. Reading comprehension (Section 6): The task is to answer questions about paragraphs of text. We analyze the network by Yu et al. (2018). Again, we find that the network often ignores words that should be important. Jia and Liang (2017) proposed attacks wherein sentences are added to paragraphs that ought not to change the network’s answers, but sometimes do. Our main finding is that these attacks are more likely to succeed when an added sentence includes all the question words that the model found important (for the original paragraph). For instance, we find that attacks are 50% more likely to be successful when the added sentence includes top-attributed nouns in the question. This insight should allow the construction of more successful attacks and better training data sets. In summary, we find that all networks ignore important parts of questions. One can fix this by either improving training data, or introducing an inductive bias. Our analysis workflow is helpful in both cases. It would also make sense to expose end-users to attribution visualizations. Knowing which words were ignored, or which operations the words were mapped to, can help the user decide whether to trust a system’s response. 2 Related Work We are motivated by Jia and Liang (2017). As they discuss, “the extent to which [reading comprehension systems] truly understand language remains unclear”. The contrast between Jia and Liang 1898 (2017) and our work is instructive. Their main contribution is to fix the evaluation of reading comprehension systems by augmenting the test set with adversarially constructed examples. (As they point out in Section 4.6 of their paper, this does not necessarily fix the model; the model may simply learn to circumvent the specific attack underlying the adversarial examples.) Their method is independent of the specification of the model at hand. They use crowdsourcing to craft passage perturbations intended to fool the network, and then query the network to test their effectiveness. In contrast, we propose improving the analysis of question answering systems. Our method peeks into the logic of a network to identify highattribution question terms. Often there are several important question terms (e.g., nouns, adjectives) that receive tiny attribution. We leverage this weakness and perturb questions to craft targeted attacks. While Jia and Liang (2017) focus exclusively on systems for the reading comprehension task, we analyze one system each for three different tasks. Our method also helps improve the efficacy Jia and Liang (2017)’s attacks; see table 4 for examples. Our analysis technique is specific to deep-learning-based systems, whereas theirs is not. We could use many other methods instead of Integrated Gradients (IG) to attribute a deep network’s prediction to its input features (Baehrens et al., 2010; Simonyan et al., 2013; Shrikumar et al., 2016; Binder et al., 2016; Springenberg et al., 2014). One could also use model agnostic techniques like Ribeiro et al. (2016b). We choose IG for its ease and efficiency of implementation (requires just a few gradient-calls) and its axiomatic justification (see Sundararajan et al. (2017) for a detailed comparison with other attribution methods). 
Recently, there have been a number of techniques for crafting and defending against adversarial attacks on image-based deep learning models (cf. Goodfellow et al. (2015)). They are based on oversensitivity of models, i.e., tiny, imperceptible perturbations of the image that change a model's response. In contrast, our attacks are based on models' over-reliance on a few question words even when other words should matter. We discuss task-specific related work in the corresponding sections (sections 4 to 6).

3 Integrated Gradients (IG)
We employ an attribution technique called Integrated Gradients (IG) (Sundararajan et al., 2017) to isolate question words that a deep learning system uses to produce an answer. Formally, suppose a function $F: \mathbb{R}^n \to [0, 1]$ represents a deep network, and an input $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$. An attribution of the prediction at input $x$ relative to a baseline input $x'$ is a vector $A_F(x, x') = (a_1, \ldots, a_n) \in \mathbb{R}^n$, where $a_i$ is the contribution of $x_i$ to the prediction $F(x)$. One can think of $F$ as the probability of a specific response. $x_1, \ldots, x_n$ are the question words; to be precise, they are vector representations of these terms. The attributions $a_1, \ldots, a_n$ are the influences/blame-assignments to the variables $x_1, \ldots, x_n$ on the probability $F$. Notice that attributions are defined relative to a special, uninformative input called the baseline. In this paper, we use an empty question as the baseline, that is, a sequence of word embeddings corresponding to the padding value. Note that the context (image, table, or passage) of the baseline $x'$ is set to be that of $x$; only the question is set to empty. We now describe how IG produces attributions. Intuitively, as we interpolate between the baseline and the input, the prediction moves along a trajectory, from uncertainty to certainty (the final probability). At each point on this trajectory, one can use the gradient of the function $F$ with respect to the input to attribute the change in probability back to the input variables. IG simply aggregates the gradients of the probability with respect to the input along this trajectory using a path integral.

Definition 1 (Integrated Gradients) Given an input $x$ and baseline $x'$, the integrated gradient along the $i$th dimension is defined as follows:
$$\mathrm{IG}_i(x, x') ::= (x_i - x'_i) \times \int_{\alpha=0}^{1} \frac{\partial F\big(x' + \alpha \times (x - x')\big)}{\partial x_i}\, d\alpha$$
(here $\frac{\partial F(x)}{\partial x_i}$ is the gradient of $F$ along the $i$th dimension at $x$).

Sundararajan et al. (2017) discuss several properties of IG. Here, we informally mention a few desirable ones, deferring the reader to Sundararajan et al. (2017) for formal definitions. IG satisfies the condition that the attributions sum to the difference between the probabilities at the input and the baseline. We call a variable uninfluential if, all else fixed, varying it does not change the output probability. IG satisfies the property that uninfluential variables do not get any attribution. Conversely, influential variables always get some attribution. Attributions for a linear combination of two functions $F_1$ and $F_2$ are a linear combination of the attributions for $F_1$ and $F_2$. Finally, IG satisfies the condition that symmetric variables get equal attributions. In this work, we validate the use of IG empirically via question perturbations. We observe that perturbing high-attribution terms changes the networks' response (sections 4.4 and 5.5). Conversely, perturbing terms that receive a low attribution does not change the network's response (sections 4.3 and 5.3).
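As a concrete illustration of Definition 1, the following is a minimal NumPy sketch of the Riemann-sum approximation of IG that is typically used in practice. It is not the authors' implementation; grad_fn is a hypothetical stand-in for the gradient of F with respect to the question-word embeddings, and the toy "model" at the end exists only so the snippet runs.

```python
# A minimal sketch (not the authors' code) of the Riemann-sum approximation
# of Integrated Gradients. `grad_fn(embeds)` is a hypothetical stand-in that
# returns dF/d(embeds) for the answer class of interest; `x` holds the question
# word embeddings and `baseline` the padding ("empty question") embeddings.
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    # Interpolate between baseline and input: x' + alpha * (x - x')
    alphas = np.linspace(0.0, 1.0, steps + 1)[1:]          # skip alpha = 0
    total = np.zeros_like(x)
    for alpha in alphas:
        total += grad_fn(baseline + alpha * (x - baseline))
    avg_grad = total / steps
    return (x - baseline) * avg_grad                       # one score per dimension

def word_attributions(x, baseline, grad_fn):
    # Sum per-dimension attributions to get one scalar per question word.
    return integrated_gradients(x, baseline, grad_fn).sum(axis=-1)

# Toy usage with a fake differentiable "model" F(e) = sigmoid(sum(e * w)).
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))                                # 4 words, 8-dim embeddings
F = lambda e: 1.0 / (1.0 + np.exp(-(e * w).sum()))
grad_fn = lambda e: F(e) * (1.0 - F(e)) * w                # analytic gradient of F
x, baseline = rng.normal(size=(4, 8)), np.zeros((4, 8))
attr = word_attributions(x, baseline, grad_fn)
print(attr, attr.sum(), F(x) - F(baseline))                # sum approx. F(x) - F(baseline)
```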
We use these observations to craft attacks against the network by perturbing instances where generic words (e.g., "a", "the") receive high attribution or contentful words receive low attribution.

4 Visual Question Answering

4.1 Task, model, and data
The Visual Question Answering Task (Agrawal et al., 2015; Teney et al., 2017; Kazemi and Elqursh, 2017; Ben-younes et al., 2017; Zhu et al., 2016) requires a system to answer questions about images (fig. 1). We analyze the deep network from Kazemi and Elqursh (2017). It achieves 61.1% accuracy on the validation set (the state of the art (Fukui et al., 2016) achieves 66.7%). We chose this model for its easy reproducibility. The VQA 1.0 dataset (Agrawal et al., 2015) consists of 614,163 questions posed over 204,721 images (3 questions per image). The images were taken from COCO (Lin et al., 2014), and the questions and answers were crowdsourced. The network in Kazemi and Elqursh (2017) treats question answering as a classification task wherein the classes are the 3,000 most frequent answers in the training data. The input question is tokenized, embedded and fed to a multi-layer LSTM. The states of the LSTM attend to a featurized version of the image, and ultimately produce a probability distribution over the answer classes.

Figure 1: Visual QA (Kazemi and Elqursh, 2017): Visualization of attributions (word importances) for a question that the network gets right. Question: "how symmetrical are the white bricks on either side of the building"; prediction: "very"; ground truth: "very". Red indicates high attribution, blue negative attribution, and gray near-zero attribution. The colors are determined by attributions normalized w.r.t. the maximum magnitude of attributions among the question's words.

4.2 Observations
We applied IG and attributed the top selected answer class to input question words. The baseline for a given input instance is the image and an empty question (we do not black out the image in the baseline, as our objective is to study the influence of just the question words for a given image). We omit instances where the top answer class predicted by the network remains the same even when the question is emptied (i.e., the baseline input). This is because IG attributions are not informative when the input and the baseline have the same prediction. A visualization of the attributions is shown in fig. 1. Notice that very few words have high attribution. We verified that altering the low-attribution words in the question does not change the network's answer. For instance, the following questions still return "very" as the answer: "how spherical are the white bricks on either side of the building", "how soon are the bricks fading on either side of the building", "how fast are the bricks speaking on either side of the building". On analyzing attributions across examples, we find that most of the highly attributed words are words such as "there", "what", "how", "doing"; they are usually the less important words in questions. In section 4.3 we describe a test to measure the extent to which the network depends on such words. We also find that informative words in the question (e.g., nouns) often receive very low attribution, indicating a weakness on the part of the network. In Section 4.4, we describe various attacks that exploit this weakness.

4.3 Overstability test
To determine the set of question words that the network finds most important, we isolate words that most frequently occur as top attributed words in questions. We then drop all words except these and compute the accuracy. Figure 2 shows how the accuracy changes as the size of this isolated set is varied from 0 to 5305.
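Before turning to the results, here is a minimal sketch of this overstability test. It is not the authors' code: attribute_fn and predict_fn are hypothetical stand-ins for the attribution computation and the VQA model, and the example triples (question, image, gold answer) are assumed to be available.

```python
# A minimal sketch (assumed interfaces, not the authors' code) of the
# overstability test: keep only the top-k most frequently top-attributed
# words in each question, delete everything else, and re-measure accuracy.
from collections import Counter

def top_attribution_vocab(examples, attribute_fn, k):
    # attribute_fn(question, context) -> list of (word, attribution) pairs.
    counts = Counter()
    for question, context, _ in examples:
        scores = attribute_fn(question, context)
        if not scores:
            continue
        top_word = max(scores, key=lambda pair: pair[1])[0]
        counts[top_word.lower()] += 1
    return {w for w, _ in counts.most_common(k)}

def restricted_accuracy(examples, predict_fn, vocab):
    correct = 0
    for question, context, gold in examples:
        kept = [w for w in question.split() if w.lower() in vocab]
        if predict_fn(" ".join(kept), context) == gold:
            correct += 1
    return correct / max(len(examples), 1)

# Sweep the isolated-vocabulary size, as in Figure 2 (dev_examples, attribute_fn
# and predict_fn are placeholders for a real dataset and model):
# for k in (0, 1, 2, 5, 10, 100, 1000):
#     vocab = top_attribution_vocab(dev_examples, attribute_fn, k)
#     print(k, restricted_accuracy(dev_examples, predict_fn, vocab))
```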
We find that just one word is enough for the model to achieve more than 50% of its final accuracy. That word is "color".

Figure 2: VQA network (Kazemi and Elqursh, 2017): Accuracy as a function of vocabulary size, relative to its original accuracy. Words are chosen in descending order of how frequently they appear as top attributions. The X-axis is on log scale, except near zero where it is linear.

Note that even when empty questions are passed as input to the network, its accuracy remains at about 44.3% of its original accuracy. This shows that the model is largely reliant on the image for producing the answer. The accuracy increases (almost) monotonically with the size of the isolated set. The top 6 words in the isolated set are "color", "many", "what", "is", "there", and "how". We suspect that generic words like these are used to determine the type of the answer. The network then uses the type to choose between a few answers it can give for the image.

4.4 Attacks
Attributions reveal that the network relies largely on generic words in answering questions (section 4.3). This is a weakness in the network's logic. We now describe a few attacks against the network that exploit this weakness.

Subject ablation attack
In this attack, we replace the subject of a question with a specific noun that consistently receives low attribution across questions. We then determine, among the questions that the network originally answered correctly, what percentage result in the same answer after the ablation. We repeat this process for different nouns; specifically, "fits", "childhood", "copyrights", "mornings", "disorder", "importance", "topless", "critter", "jumper", "tweet", and average the result. We find that, among the set of questions that the network originally answered correctly, 75.6% of the questions return the same answer despite the subject replacement.

Prefix attack
In this attack, we attach content-free phrases to questions. The phrases are manually crafted using generic words that the network finds important (section 4.3). Table 1 (top half) shows the resulting accuracy for three prefixes: "in not a lot of words", "what is the answer to", and "in not many words". All of these phrases nearly halve the model's accuracy. The union of the three attacks drops the model's accuracy from 61.1% to 19%. We note that the attributions computed for the network were crucial in crafting the prefixes. For instance, we find that other prefixes like "tell me", "answer this" and "answer this for me" do not drop the accuracy by much; see table 1 (bottom half). The union of these three ineffective prefixes drops the accuracy from 61.1% to only 46.9%. Per attributions, words present in these prefixes are not deemed important by the network.

Table 1: VQA network (Kazemi and Elqursh, 2017): Accuracy for prefix attacks; original accuracy is 61.1%.
Prefix                     | Accuracy
in not a lot of words      | 35.5%
in not many words          | 32.5%
what is the answer to      | 31.7%
Union of all three         | 19%
Baseline prefixes          |
tell me                    | 51.3%
answer this                | 55.7%
answer this for me         | 49.8%
Union of baseline prefixes | 46.9%
Our stability analysis in section 4.3 explains, and intuitively subsumes this; indeed, several of the top attributed words appear in the prefix, while important words like “color” often occur in the middle of the question. Our analysis enables additional attacks, for instance, replacing question subject with low attri1901 bution nouns. Ribeiro et al. (2016a) use a model explanation technique to illustrate overstability for two examples. They do not quantify their analysis at scale. Kafle and Kanan (2017); Zhang et al. (2016) examine the VQA data, identify deficiencies, and propose data augmentation to reduce over-representation of certain question/answer types. Goyal et al. (2016) propose the VQA 2.0 dataset, which has pairs of similar images that have different answers on the same question. We note that our method can be used to improve these datasets by identifying inputs where models ignore several words. Huang et al. (2017) evaluate robustness of VQA models by appending questions with semantically similar questions. Our prefix attacks in section 4.4 are in a similar vein and perhaps a more natural and targeted approach. Finally, Fong and Vedaldi (2017) use saliency methods to produce image perturbations as adversarial examples; our attacks are on the question. 5 Question Answering over Tables 5.1 Task, model, and data We now analyze question answering over tables based on the WikiTableQuestions benchmark dataset (Pasupat and Liang, 2015). The dataset has 22033 questions posed over 2108 tables scraped from Wikipedia. Answers are either contents of table cells or some table aggregations. Models performing QA on tables translate the question into a structured program (akin to an SQL query) which is then executed on the table to produce the answer. We analyze a model called Neural Programmer (NP) (Neelakantan et al., 2017). NP is the state of the art among models that are weakly supervised, i.e., supervised using the final answer instead of the correct structured program. It achieves 33.5% accuracy on the validation set. NP translates the input into a structured program consisting of four operator and table column selections. An example of such a program is “reset (score), reset (score), min (score), print (name)”, where the output is the name of the person who has the lowest score. 5.2 Observations We applied IG to attribute operator and column selection to question words. NP preprocesses inputs and whenever applicable, appends symbols tm token, cm token to questions that signify matches between a question and the accompanying table. These symbols are treated the same as question words. NP also computes priors for column selection using question-table matches. These vectors, tm and cm, are passed as additional inputs to the neural network. In the baseline for IG, we use an empty question, and zero vectors for column selection priors3. Figure 3: Visualization of attributions. Question words, preprocessing tokens and column selection priors on the Yaxis. Along the X-axis are operator and column selections with their baseline counterparts in parentheses. Operators and columns not affecting the final answer, and those which are same as their baseline counterparts, are given zero attribution. We visualize the attributions using an alignment matrix; they are commonly used in the analysis of translation models (fig. 3). Observe that the operator “first” is used when the question is asking for a superlative. Further, we see that the word “gold” is a trigger for this operator. 
We investigate implications of this behavior in the following sections.

5.3 Overstability test
Similar to the test we did for Visual QA (section 4.3), we check for overstability in NP by looking at accuracy as a function of the vocabulary size. We treat the table match annotations tm token, cm token and the out-of-vocab token (unk) as part of the vocabulary. The results are in fig. 4. We see that the curve is similar to that of Visual QA (fig. 2). Just 5 words (along with the column selection priors) are sufficient for the model to reach more than 50% of its final accuracy on the validation set. These five words are: "many", "number", "tm token", "after", and "total".

Figure 4: Accuracy as a function of vocabulary size. The words are chosen in descending order of how frequently they appear as top attributions to question terms. The X-axis is on log scale, except near zero where it is linear. Note that just 5 words are necessary for the network to reach more than 50% of its final accuracy.

3Note that the table is left intact in the baseline.

5.4 Table-specific default programs
We saw in the previous section that the model relies on only a few words in producing correct answers. An extreme case of overstability is when the operator sequences produced by the model are independent of the question. We find that if we supply an empty question as an input, i.e., the output is a function only of the table, then the distribution over programs is quite skewed. We call these programs table-specific default programs. On average, about 36.9% of the selected operators match their table-default counterparts, indicating that the model relies significantly on the table for producing an answer. For each default program, we used IG to attribute operator and column selections to column names and show the ten most frequently occurring ones across tables in the validation set (table 2).

Table 2: Attributions to column names for table-specific default programs (programs returned by NP on empty input questions). See supplementary material, table 6 for the full list. These results are an indication that the network is predisposed towards picking certain operators solely based on the table.
Operator sequence          | #   | Triggers                                                                         | Insights
reset, reset, max, print   | 109 | [unk, date, position, points, name, competition, notes, no, year, venue]         | sports
reset, prev, max, print    | 68  | [unk, rank, total, bronze, gold, silver, nation, name, date, no]                 | medal tallies
reset, reset, first, print | 29  | [name, unk, notes, year, nationality, rank, location, date, comments, hometown]  | player rankings
reset, mfe, first, print   | 25  | [notes, date, title, unk, role, genre, year, score, opponent, event]             | awards
reset, reset, min, print   | 17  | [year, height, unk, name, position, floors, notes, jan, jun, may]                | building info.
reset, mfe, max, print     | 14  | [opponent, date, result, location, rank, site, attendance, notes, city, listing] | politics
reset, next, first, print  | 10  | [unk, name, year, edition, birth, death, men, time, women, type]                 | census

Here is an insight from this analysis: NP uses the combination "reset, prev" to exclude the last row of the table from answer computation. The default program corresponding to "reset, prev, max, print" has attributions to column names such as "rank", "gold", "silver", "bronze", "nation", "year". These column names indicate medal tallies and usually have a "total" row.
If the table happens not to have a "total" row, the model may produce an incorrect answer. We now describe attacks that add or drop content-free words from the question and cause NP to produce the wrong answer. These attacks leverage the attribution analysis.

5.5 Attacks

Question concatenation attacks
In these attacks, we either suffix or prefix content-free phrases to questions. The phrases are crafted using irrelevant trigger words for operator selections (supplementary material, table 5). We manually ensure that the phrases are content-free. Table 3 describes our results. The first 4 phrases use irrelevant trigger words and result in a large drop in accuracy. For instance, the first phrase uses "not", which is a trigger for "next", "last", and "min", and the second uses "same", which is a trigger for "next" and "mfe". The four phrases combined result in the model's accuracy going down from 33.5% to 3.3%. The first two phrases alone drop the accuracy to 5.6%. The next set of phrases uses words that receive low attribution across questions, and are hence non-triggers for any operator. The resulting drop in accuracy on using these phrases is relatively low. Combined, they result in the model's accuracy dropping from 33.5% to 27.1%.

Table 3: Neural Programmer (Neelakantan et al., 2017): Left: Validation accuracy when attack phrases are concatenated to the question. (Original: 33.5%)
Attack phrase              | Prefix | Suffix
in not a lot of words      | 20.6%  | 10.0%
if its all the same        | 21.8%  | 18.7%
in not many words          | 15.6%  | 11.2%
one way or another         | 23.5%  | 20.0%
Union of above attacks     | 3.3%   |
Baseline                   |        |
please answer              | 32.3%  | 30.7%
do you know                | 31.2%  | 29.5%
Union of baseline prefixes | 27.1%  |
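For concreteness, a minimal sketch of the concatenation attack evaluation is given below. It is not the authors' code; predict_fn is a hypothetical stand-in for Neural Programmer, and the "union" accuracy follows one reasonable reading of the term (an example counts as correct only if it survives every attack phrase in both positions).

```python
# A minimal sketch (not the authors' code) of the question-concatenation
# attack: attach a content-free phrase as a prefix or suffix and re-measure
# accuracy. `predict_fn(question, table)` is a hypothetical stand-in for NP.
ATTACK_PHRASES = [
    "in not a lot of words",
    "if its all the same",
    "in not many words",
    "one way or another",
]

def concat_attack_accuracy(examples, predict_fn, phrase, mode="prefix"):
    correct = 0
    for question, table, gold in examples:
        attacked = (phrase + " " + question) if mode == "prefix" else (question + " " + phrase)
        if predict_fn(attacked, table) == gold:
            correct += 1
    return correct / max(len(examples), 1)

def union_attack_accuracy(examples, predict_fn, phrases):
    # One reading of "union": an example counts as correct only if the model
    # survives every phrase in both prefix and suffix position.
    correct = 0
    for question, table, gold in examples:
        survives = all(
            predict_fn(p + " " + question, table) == gold
            and predict_fn(question + " " + p, table) == gold
            for p in phrases
        )
        correct += int(survives)
    return correct / max(len(examples), 1)

# Usage with a real dataset/model (placeholders):
# print(union_attack_accuracy(dev_examples, predict_fn, ATTACK_PHRASES))
```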
We found that NP has only 23% accuracy on it, in constrast to an accuracy of 33.5% on the original validation dataset. One approach to making the network robust to row-reordering attacks is to train against perturbed tables. This may also help the model generalize 4We avoided standard stop word lists (e.g. NLTK) as they contain contentful words (e.g “after”) which may be important in some questions (e.g. “who ranked right after turkey?”) 5based on data at https://nlp.stanford.edu/ software/sempre/wikitable/dpd/ better. Indeed, Mudrakarta et al. (2018) note that the state-of-the-art strongly supervised6 model on WikiTableQuestions (Krishnamurthy et al., 2017) enjoys a 7% gain in its final accuracy by leveraging perturbed tables during training. 6 Reading Comprehension 6.1 Task, model, and data The reading comprehension task involves identifying a span from a context paragraph as an answer to a question. The SQuAD dataset (Rajpurkar et al., 2016) for machine reading comprehension contains 107.7K query-answer pairs, with 87.5K for training, 10.1K for validation, and another 10.1K for testing. Deep learning methods are quite successful on this problem, with the state-of-the-art F1 score at 84.6 achieved by Yu et al. (2018); we analyze their model. 6.2 Analyzing adversarial examples Recall the adversarial attacks proposed by Jia and Liang (2017) for reading comprehension systems. Their attack ADDSENT appends sentences to the paragraph that resemble an answer to the question without changing the ground truth. See the second column of table 4 for a few examples. We investigate the effectiveness of their attacks using attributions. We analyze 100 examples generated by the ADDSENT method in Jia and Liang (2017), and find that an adversarial sentence is successful in fooling the model in two cases: First, a contentful word in the question gets low/zero attribution and the adversarially added sentence modifies that word. E.g. in the question, “Who did Kubiak take the place of after Super Bowl XXIV?”, the word “Super” gets low attribution. Adding “After Champ Bowl XXV, Crowton took the place of Jeff Dean” changes the prediction for the model. Second, a contentful word in the question that is not present in the context. For e.g. in the question “Where hotel did the Panthers stay at?”, “hotel”, is not present in the context. Adding “The Vikings stayed at Chicago hotel.” changes the prediction for the model. On the flip side, an adversarial sentence is unsuccessful when a contentful word in the question having high attribution is not present in the added sentence. E.g. for “Where according to gross state product does Victoria rank in Australia?”, “Australia” receives high attribution. Adding “Accord6supervised on the structured program 1904 Question ADDSENT attack that does not work Attack that works Who was Count of Melfi Jeff Dean was the mayor of Bracco. Jeff Dean was the mayor of Melfi. What country was Abhisit Vejjajiva prime minister of , despite having been born in Newcastle ? Samak Samak was prime minister of the country of Chicago, despite having been born in Leeds. Abhisit Vejjajiva was chief minister of the country of Chicago, despite having been born in Leeds. Where according to gross state product does Victoria rank in Australia ? According to net state product, Adelaide ranks 7 in New Zealand According to net state product, Adelaide ranked 7 in Australia. (as a prefix) When did the Methodist Protestant Church split from the Methodist Episcopal Church ? 
The Presbyterian Catholics split from the Presbyterian Anglican in 1805. The Methodist Protestant Church split from the Presbyterian Anglican in 1805. (as a prefix) What period was 2.5 million years ago ? The period of Plasticean era was 2.5 billion years ago. The period of Plasticean era was 1.5 billion years ago. (as a prefix) Table 4: ADDSENT attacks that failed to fool the model. With modifications to preserve nouns with high attributions, these are successful in fooling the model. Question words that receive high attribution are colored red (intensity indicates magnitude). ing to net state product, Adelaide ranks 7 in New Zealand.” does not fool the model. However, retaining “Australia” in the adversarial sentence does change the model’s prediction. 6.3 Predicting the effectiveness of attacks Next we correlate attributions with efficacy of the ADDSENT attacks. We analyzed 1000 (question, attack phrase) instances7 where Yu et al. (2018) model has the correct baseline prediction. Of the 1000 cases, 508 are able to fool the model, while 492 are not. We split the examples into two groups. The first group has examples where a noun or adjective in the question has high attribution, but is missing from the adversarial sentence and the rest are in the second group. Our attribution analysis suggests that we should find more failed examples in the first group. That is indeed the case. The first group has 63% failed examples, while the second has only 40%. Recall that the attack sentences were constructed by (a) generating a sentence that answers the question, (b) replacing all the adjectives and nouns with antonyms, and named entities by the nearest word in GloVe word vector space (Pennington et al., 2014) and (c) crowdsourcing to check that the new sentence is grammatically correct. This suggests a use of attributions to improve the effectiveness of the attacks, namely ensuring that question words that the model thinks are important are left untouched in step (b) (we note that other changes in should be carried out). In table 4, 7data sourced from https:// worksheets.codalab.org/worksheets/ 0xc86d3ebe69a3427d91f9aaa63f7d1e7d/ we show a few examples where an original attack did not fool the model, but preserving a noun with high attribution did. 7 Conclusion We analyzed three question answering models using an attribution technique. Attributions helped us identify weaknesses of these models more effectively than conventional methods (based on validation sets). We believe that a workflow that uses attributions can aid the developer in iterating on model quality more effectively. While the attacks in this paper may seem unrealistic, they do expose real weaknesses that affect the usage of a QA product. Under-reliance on important question terms is not safe. We also believe that other QA models may share these weaknesses. Our attribution-based methods can be directly used to gauge the extent of such problems. Additionally, our perturbation attacks (sections 4.4 and 5.5) serve as empirical validation of attributions. Reproducibility Code to generate attributions and reproduce our results is freely available at https://github. com/pramodkaushik/acl18_results. Acknowledgments We thank the anonymous reviewers and Kevin Gimpel for feedback on our work, and David Dohan for helping with the reading comprehension network. We are grateful to Jiˇr´ı ˇSimˇsa for helpful comments on drafts of this paper. 1905 References Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. 2016. 
Analyzing the behavior of visual question answering models. arXiv preprint arXiv:1606.07356. Aishwarya Agrawal, Jiasen Lu, Stanislaw Antol, Margaret Mitchell, C Lawrence Zitnick, Dhruv Batra, and Devi Parikh. 2015. Vqa: Visual question answering. arXiv preprint arXiv:1505.00468. David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and KlausRobert M¨uller. 2010. How to explain individual classification decisions. Journal of Machine Learning Research, pages 1803–1831. Hedi Ben-younes, R´emi Cadene, Matthieu Cord, and Nicolas Thome. 2017. Mutan: Multimodal tucker fusion for visual question answering. arXiv preprint arXiv:1705.06676. Alexander Binder, Gr´egoire Montavon, Sebastian Bach, Klaus-Robert M¨uller, and Wojciech Samek. 2016. Layer-wise relevance propagation for neural networks with local renormalization layers. CoRR. Ruth C Fong and Andrea Vedaldi. 2017. Interpretable explanations of black boxes by meaningful perturbation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3429–3437. Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847. Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In International Conference on Learning Representations. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2016. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. arXiv preprint arXiv:1612.00837. Jia-Hong Huang, Cuong Duc Dao, Modar Alfadly, and Bernard Ghanem. 2017. A novel framework for robustness analysis of visual qa models. arXiv preprint arXiv:1711.06232. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017. Kushal Kafle and Christopher Kanan. 2017. An analysis of visual question answering algorithms. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 1983–1991. IEEE. Vahid Kazemi and Ali Elqursh. 2017. Show, ask, attend, and answer: A strong baseline for visual question answering. arXiv preprint arXiv:1704.03162. Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1516–1526. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer. Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere. 2018. It was the training data pruning too! arXiv preprint arXiv:1803.04579. Arvind Neelakantan, Quoc V. Le, Mart´ın Abadi, Andrew McCallum, and Dario Amodei. 2017. Learning a natural language interface with neural programmer. Arvind Neelakantan, Quoc V Le, and Ilya Sutskever. 2016. Neural programmer: Inducing latent programs with gradient descent. In International Conference on Learning Representations ICLR. Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. 
In In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Panupong Pasupat and Percy Liang. 2016. Inferring logical forms from denotations. arXiv preprint arXiv:1606.06900. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532–1543. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2383–2392. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016a. Nothing else matters: modelagnostic explanations by identifying prediction invariance. arXiv preprint arXiv:1611.05817. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016b. Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144. ACM. 1906 Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, and Anshul Kundaje. 2016. Not just a black box: Learning important features through propagating activation differences. CoRR. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. CoRR. Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. 2014. Striving for simplicity: The all convolutional net. CoRR. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 3319–3328. Damien Teney, Peter Anderson, Xiaodong He, and Anton van den Hengel. 2017. Tips and tricks for visual question answering: Learnings from the 2017 challenge. arXiv preprint arXiv:1708.02711. Adams Wei Yu, David Dohan, Quoc Le, Thang Luong, Rui Zhao, and Kai Chen. 2018. Fast and accurate reading comprehension by combining self-attention and convolution. In International Conference on Learning Representations. Peng Zhang, Yash Goyal, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2016. Yin and yang: Balancing and answering binary visual questions. In Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, pages 5014–5022. IEEE. Yuke Zhu, Oliver Groth, Michael Bernstein, and Li FeiFei. 2016. Visual7w: Grounded question answering in images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4995–5004.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1907–1917 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1907 Harvesting Paragraph-Level Question-Answer Pairs from Wikipedia Xinya Du and Claire Cardie Department of Computer Science Cornell University Ithaca, NY, 14853, USA {xdu, cardie}@cs.cornell.edu Abstract We study the task of generating from Wikipedia articles question-answer pairs that cover content beyond a single sentence. We propose a neural network approach that incorporates coreference knowledge via a novel gating mechanism. Compared to models that only take into account sentence-level information (Heilman and Smith, 2010; Du et al., 2017; Zhou et al., 2017), we find that the linguistic knowledge introduced by the coreference representation aids question generation significantly, producing models that outperform the current state-of-theart. We apply our system (composed of an answer span extraction system and the passage-level QG system) to the 10,000 top-ranking Wikipedia articles and create a corpus of over one million questionanswer pairs. We also provide a qualitative analysis for this large-scale generated corpus from Wikipedia. 1 Introduction Recently, there has been a resurgence of work in NLP on reading comprehension (Hermann et al., 2015; Rajpurkar et al., 2016; Joshi et al., 2017) with the goal of developing systems that can answer questions about the content of a given passage or document. Large-scale QA datasets are indispensable for training expressive statistical models for this task and play a critical role in advancing the field. And there have been a number of efforts in this direction. Miller et al. (2016), for example, develop a dataset for open-domain question answering; Rajpurkar et al. (2016) and Joshi et al. (2017) do so for reading comprehension (RC); and Hill et al. (2015) and Hermann Paragraph: (1)Tesla was renowned for his achievements and showmanship, eventually earning him a reputation in popular culture as an archetypal "mad scientist". (2)His patents earned him a considerable amount of money, much of which was used to finance his own projects with varying degrees of success. (3)He lived most of his life in a series of New York hotels, through his retirement. (4)Tesla died on 7 January 1943. ... Questions: – What was Tesla’s reputation in popular culture? mad scientist – How did Tesla finance his work? patents – Where did Tesla live for much of his life? New York hotels Figure 1: Example input from the fourth paragraph of a Wikipedia article on Nikola Tesla, along with the natural questions and their answers from the SQuAD (Rajpurkar et al., 2016) dataset. We show in italics the set of mentions that refer to Nikola Tesla — Tesla, him, his, he, etc. et al. (2015), for the related task of answering cloze questions (Winograd, 1972; Levesque et al., 2011). To create these datasets, either crowdsourcing or (semi-)synthetic approaches are used. The (semi-)synthetic datasets (e.g., Hermann et al. (2015)) are large in size and cheap to obtain; however, they do not share the same characteristics as explicit QA/RC questions (Rajpurkar et al., 2016). In comparison, high-quality crowdsourced datasets are much smaller in size, and the annotation process is quite expensive because the labeled examples require expertise and careful design (Chen et al., 2016). 
1908 Thus, there is a need for methods that can automatically generate high-quality question-answer pairs. Serban et al. (2016) propose the use of recurrent neural networks to generate QA pairs from structured knowledge resources such as Freebase. Their work relies on the existence of automatically acquired KBs, which are known to have errors and suffer from incompleteness. They are also nontrivial to obtain. In addition, the questions in the resulting dataset are limited to queries regarding a single fact (i.e., tuple) in the KB. Motivated by the need for large scale QA pairs and the limitations of recent work, we investigate methods that can automatically “harvest” (generate) question-answer pairs from raw text/unstructured documents, such as Wikipediatype articles. Recent work along these lines (Du et al., 2017; Zhou et al., 2017) (see Section 2) has proposed the use of attention-based recurrent neural models trained on the crowdsourced SQuAD dataset (Rajpurkar et al., 2016) for question generation. While successful, the resulting QA pairs are based on information from a single sentence. As described in Du et al. (2017), however, nearly 30% of the questions in the human-generated questions of SQuAD rely on information beyond a single sentence. For example, in Figure 1, the second and third questions require coreference information (i.e., recognizing that “His” in sentence 2 and “He” in sentence 3 both corefer with “Tesla” in sentence 1) to answer them. Thus, our research studies methods for incorporating coreference information into the training of a question generation system. In particular, we propose gated Coreference knowledge for Neural Question Generation (CorefNQG), a neural sequence model with a novel gating mechanism that leverages continuous representations of coreference clusters — the set of mentions used to refer to each entity — to better encode linguistic knowledge introduced by coreference, for paragraph-level question generation. In an evaluation using the SQuAD dataset, we find that CorefNQG enables better question generation. It outperforms significantly the baseline neural sequence models that encode information from a single sentence, and a model that encodes all preceding context and the input sentence itself. When evaluated on only the portion of SQuAD that requires coreference resolution, the gap between our system and the baseline systems is even larger. By applying our approach to the 10,000 topranking Wikipedia articles, we obtain a question answering/reading comprehension dataset with over one million QA pairs; we provide a qualitative analysis in Section 6. The dataset and the source code for the system are available at https://github.com/xinyadu/ HarvestingQA. 2 Related Work 2.1 Question Generation Since the work by Rus et al. (2010), question generation (QG) has attracted interest from both the NLP and NLG communities. Most early work in QG employed rule-based approaches to transform input text into questions, usually requiring the application of a sequence of well-designed general rules or templates (Mitkov and Ha, 2003; Labutov et al., 2015). Heilman and Smith (2010) introduced an overgenerate-and-rank approach: their system generates a set of questions and then ranks them to select the top candidates. Apart from generating questions from raw text, there has also been research on question generation from symbolic representations (Yao et al., 2012; Olney et al., 2012). 
With the recent development of deep representation learning and large QA datasets, there has been research on recurrent neural network based approaches for question generation. Serban et al. (2016) used the encoder-decoder framework to generate QA pairs from knowledge base triples; Reddy et al. (2017) generated questions from a knowledge graph; Du et al. (2017) studied how to generate questions from sentences using an attention-based sequence-to-sequence model and investigated the effect of exploiting sentencevs. paragraph-level information. Du and Cardie (2017) proposed a hierarchical neural sentencelevel sequence tagging model for identifying question-worthy sentences in a text passage. Finally, Duan et al. (2017) investigated how to use question generation to help improve question answering systems on the sentence selection subtask. In comparison to the related methods from above that generate questions from raw text, our method is different in its ability to take into account contextual information beyond the sentencelevel by introducing coreference knowledge. 1909 2.2 Question Answering Datasets and Creation Recently there has been an increasing interest in question answering with the creation of many datasets. Most are built using crowdsourcing; they are generally comprised of fewer than 100,000 QA pairs and are time-consuming to create. WebQuestions (Berant et al., 2013), for example, contains 5,810 questions crawled via the Google Suggest API and is designed for knowledge base QA with answers restricted to Freebase entities. To tackle the size issues associated with WebQuestions, Bordes et al. (2015) introduce SimpleQuestions, a dataset of 108,442 questions authored by English speakers. SQuAD (Rajpurkar et al., 2016) is a dataset for machine comprehension; it is created by showing a Wikipedia paragraph to human annotators and asking them to write questions based on the paragraph. TriviaQA (Joshi et al., 2017) includes 95k question-answer authored by trivia enthusiasts and corresponding evidence documents. (Semi-)synthetic generated datasets are easier to build to large-scale (Hill et al., 2015; Hermann et al., 2015). They usually come in the form of cloze-style questions. For example, Hermann et al. (2015) created over a million examples by pairing CNN and Daily Mail news articles with their summarized bullet points. Chen et al. (2016) showed that this dataset is quite noisy due to the method of data creation and concluded that performance of QA systems on the dataset is almost saturated. Closest to our work is that of Serban et al. (2016). They train a neural triple-to-sequence model on SimpleQuestions, and apply their system to Freebase to produce a large collection of human-like question-answer pairs. 3 Task Definition Our goal is to harvest high quality questionanswer pairs from the paragraphs of an article of interest. In our task formulation, this consists of two steps: candidate answer extraction and answer-specific question generation. Given an input paragraph, we first identify a set of question-worthy candidate answers ans = (ans1, ans2, ..., ansl), each a span of text as denoted in color in Figure 1. For each candidate answer ansi, we then aim to generate a question Q — a sequence of tokens y1, ..., yN — based on the sentence S that contains candidate ansi such that: • Q asks about an aspect of ansi that is of potential interest to a human; • Q might rely on information from sentences that precede S in the paragraph. 
Mathematically then,
$$Q = \arg\max_{Q} P(Q \mid S, C) \quad (1)$$
where $P(Q \mid S, C) = \prod_{n=1}^{N} P(y_n \mid y_{<n}, S, C)$ and $C$ is the set of sentences that precede $S$ in the paragraph.

4 Methodology
In this section, we introduce our framework for harvesting the question-answer pairs. As described above, it consists of the question generator CorefNQG (Figure 2) and a candidate answer extraction module. During test/generation time, we (1) run the answer extraction module on the input text to obtain answers, and then (2) run the question generation module to obtain the corresponding questions.

4.1 Question Generation
As shown in Figure 2, our generator prepares the feature-rich input embedding: a concatenation of (a) a refined coreference position feature embedding, (b) an answer feature embedding, and (c) a word embedding, each of which is described below. It then encodes the textual input using an LSTM unit (Hochreiter and Schmidhuber, 1997). Finally, an attention-copy equipped decoder is used to decode the question.

Figure 2: The gated Coreference knowledge for Neural Question Generation (CorefNQG) Model.

More specifically, given the input sentence S (containing an answer span) and the preceding context C, we first run a coreference resolution system to get the coref-clusters for S and C and use them to create a coreference transformed input sentence: for each pronoun, we append its most representative non-pronominal coreferent mention. Specifically, we apply the simple feedforward network based mention-ranking model of Clark and Manning (2016) to the concatenation of C and S to get the coref-clusters for all entities in C and S. The C&M model produces a score/representation s for each mention pair $(m_1, m_2)$,
$$s(m_1, m_2) = W_m h_m(m_1, m_2) + b_m \quad (2)$$
where $W_m$ is a $1 \times d$ weight matrix and $b_m$ is the bias; $h_m(m_1, m_2)$ is the representation of the last hidden layer of the three-layer feedforward neural network. For each pronoun in S, we then heuristically identify the most "representative" antecedent from its coref-cluster. (Proper nouns are preferred.) We append the new mention after the pronoun. For example, in Table 1, "the panthers" is the most representative mention in the coref-cluster for "they". The new sentence with the appended coreferent mention is our coreference transformed input sentence S′ (see Figure 2).

Table 1: Example input sentence with coreference and answer position features. The corresponding gold question is "What team did the Panthers defeat in the NFC championship game ?"
word           | they  | the   | panthers | defeated | the   | arizona | cardinals | 49 | – | 15 | ...
ans. feature   | O     | O     | O        | O        | B_ANS | I_ANS   | I_ANS     | O  | O | O  | ...
coref. feature | B_PRO | B_ANT | I_ANT    | O        | O     | O       | O         | O  | O | O  | ...

Coreference Position Feature Embedding
For each token in S′, we also maintain one position feature $f_c = (c_1, \ldots, c_n)$ to denote pronouns (e.g., "they") and antecedents (e.g., "the panthers"). We use the BIO tagging scheme to label the associated spans in S′. "B_ANT" denotes the start of an antecedent span, tag "I_ANT" continues the antecedent span, and tag "O" marks tokens that do not form part of a mention span. Similarly, tags "B_PRO" and "I_PRO" denote the pronoun span. (See Table 1, "coref. feature".)
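The following minimal Python sketch (our own illustration, not the authors' pipeline) reproduces the coreference-transformed sentence and the two BIO feature sequences for the example in Table 1. In a real system the pronoun and antecedent spans would come from the coreference resolver; here they are given by hand.

```python
# A minimal sketch (assumptions, not the authors' pipeline) of building the
# coreference-transformed sentence S' and its BIO feature sequences from
# Table 1. A real system would get pronoun/antecedent spans from a
# coreference resolver; here they are supplied by hand.
def coref_transform(tokens, pronoun_idx, antecedent_tokens, answer_span):
    new_tokens, coref_tags, ans_tags = [], [], []
    for i, tok in enumerate(tokens):
        new_tokens.append(tok)
        coref_tags.append("B_PRO" if i == pronoun_idx else "O")
        ans_tags.append("O")
        if i == pronoun_idx:
            # Append the most representative antecedent right after the pronoun.
            for j, ant in enumerate(antecedent_tokens):
                new_tokens.append(ant)
                coref_tags.append("B_ANT" if j == 0 else "I_ANT")
                ans_tags.append("O")
    # Mark the answer span (indices are into the transformed sentence).
    start, end = answer_span
    for k in range(start, end):
        ans_tags[k] = "B_ANS" if k == start else "I_ANS"
    return new_tokens, coref_tags, ans_tags

sent = "They defeated the Arizona Cardinals 49 - 15".split()
toks, coref, ans = coref_transform(
    sent, pronoun_idx=0, antecedent_tokens=["the", "Panthers"], answer_span=(4, 7))
for row in (toks, coref, ans):
    print(" ".join(f"{x:<10}" for x in row))
```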
Refined Coref. Position Feature Embedding Inspired by the success of gating mechanisms for controlling information flow in neural networks (Hochreiter and Schmidhuber, 1997; Dauphin et al., 2017), we propose to use a gating network here to obtain a refined representation of the coreference position feature vectors f_c = (c_1, ..., c_n). The main idea is to utilize the mention-pair score (see Equation 2) to help the neural network learn the importance of the coreferent phrases. We compute the refined (gated) coreference position feature vector f_d = (d_1, ..., d_n) as follows,

g_i = \mathrm{ReLU}(W_a c_i + W_b \mathit{score}_i + b)
d_i = g_i \odot c_i    (3)

where \odot denotes an element-wise product between two vectors and ReLU is the rectified linear activation function. score_i denotes the mention-pair score for each antecedent token (e.g., "the" and "panthers") with the pronoun (e.g., "they"); score_i is obtained from the trained model (Equation 2) of C&M. If token i is not added later as an antecedent token, score_i is set to zero. W_a, W_b are weight matrices and b is the bias vector. Answer Feature Embedding We also include an answer position feature embedding to generate answer-specific questions; we denote the answer span with the usual BIO tagging scheme (see, e.g., "the arizona cardinals" in Table 1). During training and testing, the answer span feature (i.e., "B_ANS", "I_ANS" or "O") is mapped to its feature embedding space: f_a = (a_1, ..., a_n). Word Embedding To obtain the word embedding for the tokens themselves, we just map the tokens to the word embedding space: x = (x_1, ..., x_n). Final Encoder Input As noted above, the final input to the LSTM-based encoder is a concatenation of (1) the refined coreference position feature embedding, (2) the answer position feature embedding, and (3) the word embedding for the token (see Figure 2),

e_i = \mathrm{concat}(d_i, a_i, x_i)    (4)

Encoder As for the encoder itself, we use bidirectional LSTMs to read the input e = (e_1, ..., e_n) in both the forward and backward directions. After encoding, we obtain two sequences of hidden vectors, namely, \overrightarrow{h} = (\overrightarrow{h_1}, ..., \overrightarrow{h_n}) and \overleftarrow{h} = (\overleftarrow{h_1}, ..., \overleftarrow{h_n}). The final output state of the encoder is the concatenation of \overrightarrow{h} and \overleftarrow{h}, where

h_i = \mathrm{concat}(\overrightarrow{h_i}, \overleftarrow{h_i})    (5)

Question Decoder with Attention & Copy On top of the feature-rich encoder, we use LSTMs with attention (Bahdanau et al., 2015) as the decoder for generating the question y_1, ..., y_m one token at a time. To deal with rare/unknown words, the decoder also allows directly copying words from the source sentence via pointing (Vinyals et al., 2015). At each time step t, the decoder LSTM reads the previous word embedding w_{t-1} and previous hidden state s_{t-1} to compute the new hidden state,

s_t = \mathrm{LSTM}(w_{t-1}, s_{t-1})    (6)

Then we calculate the attention distribution \alpha_t as in Bahdanau et al. (2015),

e_{t,i} = h_i^{\top} W_c s_{t-1}
\alpha_t = \mathrm{softmax}(e_t)    (7)

where W_c is a weight matrix and the attention distribution \alpha_t is a probability distribution over the source sentence words. With \alpha_t, we can obtain the context vector h^*_t,

h^*_t = \sum_{i=1}^{n} \alpha_t^i h_i    (8)

Then, using the context vector h^*_t and hidden state s_t, the probability distribution over the target (question) side vocabulary is calculated as,

P_{\mathrm{vocab}} = \mathrm{softmax}(W_d \,\mathrm{concat}(h^*_t, s_t))    (9)

Instead of directly using P_vocab for training/generating with the fixed target side vocabulary, we also consider copying from the source sentence.
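Before the copy distribution is spelled out below, a minimal PyTorch-style sketch of the gated refinement in Equation (3) may help; it is illustrative only — module names, dimensions, and the batching convention are assumptions rather than the authors' implementation:

```python
# Sketch of the gating network: mention-pair scores modulate the embedded
# coreference position features c_i before concatenation into the encoder input.
import torch
import torch.nn as nn

class CorefGate(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        self.w_a = nn.Linear(feat_dim, feat_dim, bias=False)  # W_a
        self.w_b = nn.Linear(1, feat_dim, bias=True)          # W_b and bias b

    def forward(self, coref_feat, mention_scores):
        # coref_feat:     (batch, seq_len, feat_dim)  embedded BIO coref features c_i
        # mention_scores: (batch, seq_len)  C&M mention-pair score; 0 for non-antecedent tokens
        g = torch.relu(self.w_a(coref_feat) + self.w_b(mention_scores.unsqueeze(-1)))
        return g * coref_feat                                  # d_i = g_i ⊙ c_i

# Usage: the refined feature d is concatenated with the answer feature and word
# embeddings (Equation (4)) and fed to the BiLSTM encoder.
gate = CorefGate(feat_dim=16)
d = gate(torch.randn(2, 10, 16), torch.zeros(2, 10))
```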
The copy probability is based on the context vector h^*_t and hidden state s_t,

\lambda^{\mathrm{copy}}_t = \sigma(W_e h^*_t + W_f s_t)    (10)

and the probability distribution over the source sentence words is the sum of the attention scores of the corresponding words,

P_{\mathrm{copy}}(w) = \sum_{i=1}^{n} \alpha_t^i \cdot \mathbb{1}\{w = w_i\}    (11)

Finally, we obtain the probability distribution over the dynamic vocabulary (i.e., the union of the original target side and source sentence vocabulary) by summing over P_copy and P_vocab,

P(w) = \lambda^{\mathrm{copy}}_t P_{\mathrm{copy}}(w) + (1 - \lambda^{\mathrm{copy}}_t) P_{\mathrm{vocab}}(w)    (12)

where \sigma is the sigmoid function, and W_d, W_e, W_f are weight matrices. 4.2 Answer Span Identification We frame the problem of identifying candidate answer spans from a paragraph as a sequence labeling task and base our model on the BiLSTM-CRF approach for named entity recognition (Huang et al., 2015). Given a paragraph of n tokens, instead of directly feeding the sequence of word vectors x = (x_1, ..., x_n) to the LSTM units, we first construct a feature-rich embedding x′ for each token, which is the concatenation of the word embedding, an NER feature embedding, and a character-level representation of the word (Lample et al., 2016). We use the concatenated vector as the "final" embedding x′ for the token,

x'_i = \mathrm{concat}(x_i, \mathrm{CharRep}_i, \mathrm{NER}_i)    (13)

where CharRep_i is the concatenation of the last hidden states of a character-based BiLSTM. The intuition behind the use of NER features is that SQuAD answer spans contain a large number of named entities, numeric phrases, etc. Then a multi-layer bidirectional LSTM is applied to (x'_1, ..., x'_n) and we obtain the output state z_t for time step t by concatenating the hidden states (forward and backward) at time step t from the last layer of the BiLSTM. We apply the softmax to (z_1, ..., z_n) to get the normalized score representation for each token, which is of size n × k, where k is the number of tags. Instead of using a softmax training objective that minimizes the cross-entropy loss for each individual word, the model is trained with a CRF (Lafferty et al., 2001) objective, which minimizes the negative log-likelihood for the entire correct sequence, -\log(p_y), where

p_y = \frac{\exp(q(x', y))}{\sum_{y' \in Y'} \exp(q(x', y'))}    (14)

with q(x', y) = \sum_{t=1}^{n} P_{t, y_t} + \sum_{t=0}^{n-1} A_{y_t, y_{t+1}}. Here P_{t, y_t} is the score of assigning tag y_t to the t-th token, A_{y_t, y_{t+1}} is the transition score from tag y_t to y_{t+1}, and the scoring matrix A is to be learned. Y' represents all the possible tagging sequences. 5 Experiments 5.1 Dataset We use the SQuAD dataset (Rajpurkar et al., 2016) to train our models. It is one of the largest general purpose QA datasets derived from Wikipedia, with over 100k questions posed by crowdworkers on a set of Wikipedia articles. The answer to each question is a segment of text from the corresponding Wiki passage. The crowdworkers were users of Amazon's Mechanical Turk located in the US or Canada. To obtain high-quality articles, the authors sampled 500 articles from the top 10,000 articles obtained by Nayuki's Wikipedia's internal PageRanks. The question-answer pairs were generated by annotators from a paragraph; although the dataset is typically used to evaluate reading comprehension, it has also been used in an open domain QA setting (Chen et al., 2017; Wang et al., 2018). For training/testing answer extraction systems, we pair each paragraph in the dataset with the gold answer spans that it contains. For the question generation system, we pair each sentence that contains an answer span with the corresponding gold question, as in Du et al. (2017).
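Returning briefly to the decoder of Section 4.1, the following hedged sketch illustrates the copy distribution of Equations (10)-(12); tensor names and shapes are our own assumptions rather than the paper's implementation:

```python
# Sketch: a scalar gate mixes the fixed-vocabulary distribution with a copy
# distribution obtained by scattering attention weights onto source token ids.
import torch

def mixed_copy_distribution(p_vocab, attn, src_ids, copy_gate):
    # p_vocab:   (batch, vocab_size)  P_vocab from the decoder softmax
    # attn:      (batch, src_len)     attention weights alpha_t over source tokens
    # src_ids:   (batch, src_len)     source token ids in the (extended) vocabulary
    # copy_gate: (batch, 1)           lambda_t^copy = sigmoid(W_e h*_t + W_f s_t)
    p_copy = torch.zeros_like(p_vocab).scatter_add(1, src_ids, attn)  # Eq. (11)
    return copy_gate * p_copy + (1.0 - copy_gate) * p_vocab           # Eq. (12)

# Toy usage
batch, vocab, src_len = 2, 50, 7
p_vocab = torch.softmax(torch.randn(batch, vocab), dim=-1)
attn = torch.softmax(torch.randn(batch, src_len), dim=-1)
src_ids = torch.randint(0, vocab, (batch, src_len))
gate = torch.sigmoid(torch.randn(batch, 1))
p_final = mixed_copy_distribution(p_vocab, attn, src_ids, gate)
```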
To quantify the effect of using predicted (rather than gold standard) answer spans on question generation (e.g., predicted answer span boundaries can be inaccurate), we also train the models on an augmented "Training set w/ noisy examples" (see Table 2). This training set contains all of the original training examples plus new examples for predicted answer spans (from the top-performing answer extraction model, bottom row of Table 3) that overlap with a gold answer span. We pair the new training sentence (w/ predicted answer span) with the gold question. The added examples comprise 42.21% of the noisy example training set. For generation of our one million QA pair corpus, we apply our systems to the 10,000 top-ranking articles of Wikipedia. 5.2 Evaluation Metrics For question generation evaluation, we use BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014).[1] BLEU measures average n-gram precision against a set of reference questions and penalizes overly short sentences. METEOR is a recall-oriented metric that takes into account synonyms, stemming, and paraphrases. For answer candidate extraction evaluation, we use precision, recall and F-measure against the gold standard SQuAD answers. Since answer boundaries are sometimes ambiguous, we compute Binary Overlap and Proportional Overlap metrics in addition to Exact Match. Binary Overlap counts every predicted answer that overlaps with a gold answer span as correct, and Proportional Overlap gives partial credit proportional to the amount of overlap (Johansson and Moschitti, 2010; Irsoy and Cardie, 2014).

[1] We use the evaluation scripts of Du et al. (2017).

Models                                                 | Training set            | Training set w/ noisy examples
                                                       | BLEU-3  BLEU-4  METEOR  | BLEU-3  BLEU-4  METEOR
Baseline (Du et al., 2017) (w/o answer)                | 17.50   12.28   16.62   | 15.81   10.78   15.31
Seq2seq + copy (w/ answer)                             | 20.01   14.31   18.50   | 19.61   13.96   18.19
ContextNQG: Seq2seq + copy (w/ full context + answer)  | 20.31   14.58   18.84   | 19.57   14.05   18.19
CorefNQG                                               | 20.90   15.16   19.12   | 20.19   14.52   18.59
  - gating                                             | 20.68   14.84   18.98   | 20.08   14.40   18.64
  - mention-pair score                                 | 20.56   14.75   18.85   | 19.73   14.13   18.38

Table 2: Evaluation results for question generation.

Models                     | Precision             | Recall                | F-measure
                           | Prop.  Bin.   Exact   | Prop.  Bin.   Exact   | Prop.  Bin.   Exact
NER                        | 24.54  25.94  12.77   | 58.20  67.66  38.52   | 34.52  37.50  19.19
BiLSTM                     | 43.54  45.08  22.97   | 28.43  35.99  18.87   | 34.40  40.03  20.71
BiLSTM w/ NER              | 44.35  46.02  25.33   | 33.30  40.81  23.32   | 38.04  43.26  24.29
BiLSTM-CRF w/ char         | 49.35  51.92  38.58   | 30.53  32.75  24.04   | 37.72  40.16  29.62
BiLSTM-CRF w/ char w/ NER  | 45.96  51.61  33.90   | 41.05  43.98  28.37   | 43.37  47.49  30.89

Table 3: Evaluation results of answer extraction systems.

5.3 Baselines and Ablation Tests For question generation, we compare to the state-of-the-art baselines and conduct ablation tests as follows: Du et al. (2017)'s model is an attention-based RNN sequence-to-sequence neural network (without using the answer location information feature). Seq2seq + copy (w/ answer) is the attention-based sequence-to-sequence model augmented with a copy mechanism, with answer features concatenated with the word embeddings during encoding. Seq2seq + copy (w/ full context + answer) is the same model as the previous one, but we allow access to the full context (i.e., all the preceding sentences and the input sentence itself); we denote it as ContextNQG henceforth for simplicity. CorefNQG is the coreference-based model proposed in this paper. CorefNQG–gating is an ablation test: the gating network is removed and the coreference position embedding is not refined.
CorefNQG–mention-pair score is also an ablation test where all mention-pair scorei are set to zero. For answer span extraction, we conduct experiments to compare the performance of an off-theshelf NER system and BiLSTM based systems. For training and implementation details, please see the Supplementary Material. 6 Results and Analysis 6.1 Automatic Evaluation Table 2 shows the BLEU-{3, 4} and METEOR scores of different models. Our CorefNQG outperforms the seq2seq baseline of Du et al. (2017) by a large margin. This shows that the copy mechanism, answer features and coreference resolution all aid question generation. In addition, CorefNQG outperforms both Seq2seq+Copy models significantly, whether or not they have access to the full context. This demonstrates that the coreference knowledge encoded with the gating network explicitly helps with the training and generation: it is more difficult for the neural sequence model to learn the coreference knowledge in a latent way. (See input 1 in Figure 3 for an example.) Building end-to-end models that take into account coreference knowledge in a latent way is an interesting direction to explore. In the ablation tests, the performance drop of CorefNQG–gating BLEU-3 BLEU-4 METEOR Seq2seq + copy (w/ ans.) 17.81 12.30 17.11 ContextNQG 18.05 12.53 17.33 CorefNQG 18.46 12.96 17.58 Table 4: Evaluation results for question generation on the portion that requires coreference knowledge (36.42% examples of the original test set). shows that the gating network is playing an important role for getting refined coreference position feature embedding, which helps the model learn the importance of an antecedent. The performance drop of CorefNQG–mention-pair score shows the mention-pair score introduced from the external system (Clark and Manning, 2016) helps the neural network better encode coreference knowledge. To better understand the effect of coreference resolution, we also evaluate our model and the baseline models on just that portion of the test set that requires pronoun resolution (36.42% of the examples) and show the results in Table 4. The gaps of performance between our model and the baseline models are still significant. Besides, we see that all three systems’ performance drop on this partial test set, which demonstrates the hardness of generating questions for the cases that require pronoun resolution (passage context). We also show in Table 2 the results of the QG models trained on the training set augmented with noisy examples with predicted answer spans. 1914 Input 1: The elizabethan navigator, sir francis drake was born in the nearby town of tavistock and was the mayor of plymouth. ... . :: he ::: died::of:::::::: dysentery:: in:::: 1596 :: off::: the :::: coast:: of ::::: puerto::: rico. Human: In what year did Sir Francis Drake die ? ContextNQG: When did he die ? CorefNQG: When did sir francis drake die ? Input 2: american idol is an american singing competition ... . :it::::: began::::: airing :: on::: fox::: on:june 11 , 2002, :as::: an:::::: addition::to::: the:::: idols:::::: format ::::: based :: on::: the::::: british :::: series::: pop:::: idol::: and::: has:::: since::::::: become ::: one::of::: the:::: most ::::::: successful::::: shows::in::: the ::::: history::of::::::: american:::::::: television. Human: When did american idol first air on tv ? ContextNQG: When did fox begin airing ? CorefNQG: When did american idol begin airing ? Input 3: ... the a38 dual-carriageway runs from east to west across the north of the city . 
::::: within ::: the ::: city :it::is:::::::: designated::as::‘ the parkway :’::: and:::::::: represents::: the ::::::: boundary :::::: between::: the::::: urban:::: parts::: of :: the:::: city::: and::: the ::::::: generally :::: more::::: recent ::::::: suburban :::: areas:. Human: What is the a38 called inside the city ? ContextNQG: What is another name for the city ? CorefNQG: What is the city designated as ? Figure 3: Example questions (with answers highlighted) generated by human annotators (ground truth questions), by our system CorefNQG, and by the Seq2seq+Copy model trained with full context (i.e., ContextNQG). There is a consistent but acceptable drop for each model on this new training set, given the inaccuracy of predicted answer spans. We see that CorefNQG still outperforms the baseline models across all metrics. Figure 3 provides sample output for input sentences that require contextual coreference knowledge. We see that ContextNQG fails in all cases; our model misses only the third example due to an error introduced by coreference resolution — the “city” and “it” are considered coreferent. We can also see that human-generated questions are more natural and varied in form with better paraphrasing. In Table 3, we show the evaluation results for different answer extraction models. First we see that all variants of BiLSTM models outperform the off-the-shelf NER system (that proposes all NEs as answer spans), though the NER system has a higher recall. The BiLSTM-CRF that encodes the character-level and NER features for each token performs best in terms of F-measure. 6.2 Human Study We hired four native speakers of English to rate the systems’ outputs. Detailed guidelines for the raters are listed in the supplementary materials. Grammaticality Making Sense Answerability Avg. rank ContextNQG 3.793 3.836 3.892 1.768 CorefNQG 3.804* 3.847** 3.895* 1.762 Human 3.807 3.850 3.902 1.758 Table 5: Human evaluation results for question generation. “Grammaticality”, “Making Sense” and “Answerability” are rated on a 1–5 scale (5 for the best, see the supplementary materials for a detailed rating scheme), “Average rank” is rated on a 1–3 scale (1 for the most preferred, ties are allowed.) Two-tailed t-test results are shown for our method compared to ContextNQG (stat. significance is indicated with ∗(p < 0.05), ∗∗(p < 0.01).) The evaluation can also be seen as a measure of the quality of the generated dataset (Section 6.3). We randomly sampled 11 passages/paragraphs from the test set; there are in total around 70 questionanswer pairs for evaluation. We consider three metrics — “grammaticality”, “making sense” and “answerability”. The evaluators are asked to first rate the grammatical correctness of the generated question (before being shown the associated input sentence or any other textual context). Next, we ask them to rate the degree to which the question “makes sense” given the input sentence (i.e., without considering the correctness of the answer span). Finally, evaluators rate the “answerability” of the question given the full context. Table 5 shows the results of the human evaluation. Bold indicates top scores. We see that the original human questions are preferred over the two NQG systems’ outputs, which is understandable given the examples in Figure 3. The humangenerated questions make more sense and correspond better with the provided answers, particularly when they require information in the preceding context. 
How exactly to capture the preceding context so as to ask better and more diverse questions is an interesting future direction for research. In terms of grammaticality, however, the neural models do quite well, achieving very close to human performance. In addition, we see that our method (CorefNQG) performs statistically significantly better across all metrics in comparison to the baseline model (ContextNQG), which has access to the entire preceding context in the passage. 6.3 The Generated Corpus Our system generates in total 1,259,691 questionanswer pairs, nearly 126 questions per article. Figure 5 shows the distribution of different types of 1915 Exact Match F-1 Dev Test Dev Test DocReader (Chen et al., 2017) 82.33 81.65 88.20 87.79 Table 6: Performance of the neural machine reading comprehension model (no initialization with pretrained embeddings) on our generated corpus. The United States of America (USA), commonly referred to as the United States (U.S.) or America, is a federal republic composed of states, a federal district, five major self-governing territories, and various possessions. ... . The territories are scattered about the Pacific Ocean and the Caribbean Sea. Nine time zones are covered. The geography, climate and wildlife of the country are extremely diverse. Q1: What is another name for the united states of america ? Q2: How many major territories are in the united states? Q3: What are the territories scattered about ? Figure 4: Example question-answer pairs from our generated corpus. questions in our dataset vs. the SQuAD training set. We see that the distribution for “In what”, “When”, “How long”, “Who”, “Where”, “What does” and “What do” questions in the two datasets is similar. Our system generates more “What is”, “What was” and “What percentage” questions, while the proportions of “What did”, “Why” and “Which” questions in SQuAD are larger than ours. One possible reason is that the “Why”, “What did” questions are more complicated to ask (sometimes involving world knowledge) and the answer spans are longer phrases of various types that are harder to identify. “What is” and “What was” questions, on the other hand, are often safer for the neural networks systems to ask. In Figure 4, we show some examples of the generated question-answer pairs. The answer extractor identifies the answer span boundary well and all three questions correspond to their answers. Q2 is valid but not entirely accurate. For more examples, please refer to our supplementary materials. Table 6 shows the performance of a topperforming system for the SQuAD dataset (Document Reader (Chen et al., 2017)) when applied to the development and test set portions of our generated dataset. The system was trained on the training set portion of our dataset. We use the SQuAD evaluation scripts, which calculate exact match (EM) and F-1 scores.2 Performance of the 2F-1 measures the average overlap between the predicted answer span and ground truth answer (Rajpurkar et al., 2016). 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 20.0 in which what has how did what can how long what year what were what do why how much what type what percentage what does what did which what are in what where how many when what was who what is SQuAD Our corpus Figure 5: Distribution of question types of our corpus and SQuAD training set. The categories are the ones used in Wang et al. (2016), we add one more category: “what percentage”. neural machine reading model is reasonable. 
We also train the DocReader on our training set and test the models’ performance on the original dev set of SQuAD; for this, the performance is around 45.2% on EM and 56.7% on F-1 metric. DocReader trained on the original SQuAD training set achieves 69.5% EM, 78.8% F-1 indicating that our dataset is more difficult and/or less natural than the crowd-sourced QA pairs of SQuAD. 7 Conclusion We propose a new neural network model for better encoding coreference knowledge for paragraphlevel question generation. Evaluations with different metrics on the SQuAD machine reading dataset show that our model outperforms state-ofthe-art baselines. The ablation study shows the effectiveness of different components in our model. Finally, we apply our question generation framework to produce a corpus of 1.26 million questionanswer pairs, which we hope will benefit the QA research community. It would also be interesting to apply our approach to incorporating coreference knowledge to other text generation tasks. Acknowledgments We thank the anonymous reviewers and members of Cornell NLP group for helpful comments. 1916 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations Workshop (ICLR). Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1533–1544. http://www.aclweb.org/anthology/D13-1160. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075 . Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 2358–2367. http://www.aclweb.org/anthology/P16-1223. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1870–1879. https://doi.org/10.18653/v1/P17-1171. Kevin Clark and Christopher D. Manning. 2016. Improving coreference resolution by learning entitylevel distributed representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 643– 653. https://doi.org/10.18653/v1/P16-1061. Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In International Conference on Machine Learning. pages 933–941. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the Ninth Workshop on Statistical Machine Translation. Association for Computational Linguistics, Baltimore, Maryland, USA, pages 376– 380. http://www.aclweb.org/anthology/W14-3348. Xinya Du and Claire Cardie. 2017. Identifying where to focus in reading comprehension for neural question generation. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 2067–2073. http://aclweb.org/anthology/D17-1219. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1342–1352. https://doi.org/10.18653/v1/P17-1123. Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 866–874. http://aclweb.org/anthology/D17-1090. Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Los Angeles, California, pages 609–617. http://www.aclweb.org/anthology/N10-1086. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. pages 1693– 1701. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301 . Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991 . Ozan Irsoy and Claire Cardie. 2014. Opinion mining with deep recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pages 720–728. https://doi.org/10.3115/v1/D14-1080. Richard Johansson and Alessandro Moschitti. 2010. Syntactic and semantic structure for opinion expression detection. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, pages 67–76. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1601–1611. https://doi.org/10.18653/v1/P17-1147. 1917 Igor Labutov, Sumit Basu, and Lucy Vanderwende. 2015. Deep questions without deep understanding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). volume 1, pages 889–898. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data . Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 
Association for Computational Linguistics, pages 260–270. https://doi.org/10.18653/v1/N16-1030. Hector J Levesque, Ernest Davis, and Leora Morgenstern. 2011. The winograd schema challenge. In Aaai spring symposium: Logical formalizations of commonsense reasoning. volume 46, page 47. Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1400–1409. https://doi.org/10.18653/v1/D16-1147. Ruslan Mitkov and Le An Ha. 2003. Computer-aided generation of multiple-choice tests. In Proceedings of the HLT-NAACL 03 workshop on Building educational applications using natural language processing-Volume 2. Association for Computational Linguistics, pages 17–22. Andrew M Olney, Arthur C Graesser, and Natalie K Person. 2012. Question generation from concept maps. Dialogue & Discourse 3(2):75–99. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Philadelphia, Pennsylvania, USA, pages 311–318. https://doi.org/10.3115/1073083.1073135. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Austin, Texas, pages 2383–2392. https://aclweb.org/anthology/D161264. Sathish Reddy, Dinesh Raghu, Mitesh M. Khapra, and Sachindra Joshi. 2017. Generating natural language question-answer pairs from a knowledge graph using a rnn based question generation model. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics, pages 376–385. http://aclweb.org/anthology/E17-1036. Vasile Rus, Brendan Wyse, Paul Piwek, Mihai Lintean, Svetlana Stoyanchev, and Cristian Moldovan. 2010. The first question generation shared task evaluation challenge. In Proceedings of the 6th International Natural Language Generation Conference. Association for Computational Linguistics, pages 251–257. Iulian Vlad Serban, Alberto García-Durán, Caglar Gulcehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, and Yoshua Bengio. 2016. Generating factoid questions with recurrent neural networks: The 30m factoid question-answer corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 588–598. http://www.aclweb.org/anthology/P16-1056. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems. pages 2692–2700. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2018. R3: Reinforced ranker-reader for open-domain question answering . Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211 . Terry Winograd. 1972. Understanding natural language. Cognitive psychology 3(1):1–191. 
Xuchen Yao, Gosse Bouma, and Yi Zhang. 2012. Semantics-based question generation and implementation. Dialogue & Discourse 3(2):11–42. Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. arXiv preprint arXiv:1704.01792 .
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1918–1927 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1918 Multi-Passage Machine Reading Comprehension with Cross-Passage Answer Verification Yizhong Wang1 *, Kai Liu2, Jing Liu2, Wei He2, Yajuan Lyu2, Hua Wu2, Sujian Li1 and Haifeng Wang2 1Key Laboratory of Computational Linguistics, Peking University, MOE, China 2Baidu Inc., Beijing, China {yizhong, lisujian}@pku.edu.cn, {liukai20, liujing46, hewei06, lvyajuan, wu hua, wanghaifeng}@baidu.com Abstract Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi-passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end-to-end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross-passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state-of-the-art performance on the English MS-MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real-world settings. 1 Introduction Machine reading comprehension (MRC), empowering computers with the ability to acquire knowledge and answer questions from textual data, is believed to be a crucial step in building a general intelligent agent (Chen et al., 2016). Recent years have seen rapid growth in the MRC community. With the release of various datasets, the MRC task has evolved from the early cloze-style test (Hermann et al., 2015; Hill et al., 2015) to answer extraction from a single passage (Rajpurkar et al., *This work was done while the first author was doing internship at Baidu Inc. 2016) and to the latest more complex question answering on web data (Nguyen et al., 2016; Dunn et al., 2017; He et al., 2017). Great efforts have also been made to develop models for these MRC tasks , especially for the answer extraction on single passage (Wang and Jiang, 2016; Seo et al., 2016; Pan et al., 2017). A significant milestone is that several MRC models have exceeded the performance of human annotators on the SQuAD dataset1 (Rajpurkar et al., 2016). However, this success on single Wikipedia passage is still not adequate, considering the ultimate goal of reading the whole web. Therefore, several latest datasets (Nguyen et al., 2016; He et al., 2017; Dunn et al., 2017) attempt to design the MRC tasks in more realistic settings by involving search engines. For each question, they use the search engine to retrieve multiple passages and the MRC models are required to read these passages in order to give the final answer. One of the intrinsic challenges for such multipassage MRC is that since all the passages are question-related but usually independently written, it’s probable that multiple confusing answer candidates (correct or incorrect) exist. Table 1 shows an example from MS-MARCO. We can see that all the answer candidates have semantic matching with the question while they are literally different and some of them are even incorrect. 
As is shown by Jia and Liang (2017), these confusing answer candidates could be quite difficult for MRC models to distinguish. Therefore, special consideration is required for such multi-passage MRC problem. In this paper, we propose to leverage the answer candidates from different passages to verify the final correct answer and rule out the noisy incorrect answers. Our hypothesis is that the cor1https://rajpurkar.github.io/SQuAD-explorer/ 1919 Question: What is the difference between a mixed and pure culture? Passages: [1] A culture is a society’s total way of living and a society is a group that live in a defined territory and participate in common culture. While the answer given is in essence true, societies originally form for the express purpose to enhance . . . [2] . . . There has been resurgence in the economic system known as capitalism during the past two decades. 4. The mixed economy is a balance between socialism and capitalism. As a result, some institutions are owned and maintained by . .. [3] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies. Culture on the other hand, is the lifestyle that the people in the country . . . [4] Best Answer: A pure culture comprises a single species or strains. A mixed culture is taken from a source and may contain multiple strains or species. A contaminated culture contains organisms that derived from some place . . . [5] . . . It will be at that time when we can truly obtain a pure culture. A pure culture is a culture consisting of only one strain. You can obtain a pure culture by picking out a small portion of the mixed culture . . . [6] A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies. A pure culture is a culture consisting of only one strain. . . . · · · · · · Reference Answer: A pure culture is one in which only one kind of microbial species is found whereas in mixed culture two or more microbial species formed colonies. Table 1: An example from MS-MARCO. The text in bold is the predicted answer candidate from each passage according to the boundary model. The candidate from [1] is chosen as the final answer by this model, while the correct answer is from [6] and can be verified by the answers from [3], [4], [5]. rect answers could occur more frequently in those passages and usually share some commonalities, while incorrect answers are usually different from one another. The example in Table 1 demonstrates this phenomenon. We can see that the answer candidates extracted from the last four passages are all valid answers to the question and they are semantically similar to each other, while the answer candidates from the other two passages are incorrect and there is no supportive information from other passages. As human beings usually compare the answer candidates from different sources to deduce the final answer, we hope that MRC model can also benefit from the cross-passage answer verification process. The overall framework of our model is demonstrated in Figure 1 , which consists of three modules. First, we follow the boundary-based MRC models (Seo et al., 2016; Wang and Jiang, 2016) to find an answer candidate for each passage by identifying the start and end position of the answer (Figure 2). 
Second, we model the meanings of the answer candidates extracted from those passages and use the content scores to measure the quality of the candidates from a second perspective. Third, we conduct the answer verification by enabling each answer candidate to attend to the other candidates based on their representations. We hope that the answer candidates can collect supportive information from each other according to their semantic similarities and further decide whether each candidate is correct or not. Therefore, the final answer is determined by three factors: the boundary, the content and the cross-passage answer verification. The three steps are modeled using different modules, which can be jointly trained in our end-to-end framework. We conduct extensive experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets. The results show that our answer verification MRC model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on both datasets. 2 Our Approach Figure 1 gives an overview of our multi-passage MRC model, which is mainly composed of three modules: answer boundary prediction, answer content modeling and answer verification. First of all, we need to model the question and passages. Following Seo et al. (2016), we compute the question-aware representation for each passage (Section 2.1). Based on this representation, we employ a Pointer Network (Vinyals et al., 2015) to predict the start and end position of the answer in the module of answer boundary prediction (Section 2.2). At the same time, with the answer content model (Section 2.3), we estimate whether each word should be included in the answer and thus obtain the answer representations. Next, in the answer verification module (Section 2.4), each answer candidate can attend to the other answer candidates to collect supportive information, and we compute one score for each candidate to indicate whether it is correct or not according to the verification. The final answer is determined by not only the boundary but also the answer content and its verification score (Section 2.5).

[Figure 1: Overview of our method for multi-passage machine reading comprehension. Each passage is encoded and matched against the question; per-passage start/end and content probabilities yield an answer candidate and its representation (a content-weighted sum), the candidate representations attend to one another in the answer verification module, and the three scores are combined to select the final answer.]

2.1 Question and Passage Modeling Given a question Q and a set of passages {P_i} retrieved by search engines, our task is to find the best concise answer to the question. First, we formally present the details of modeling the question and passages. Encoding We first map each word into the vector space by concatenating its word embedding and the sum of its character embeddings. Then we employ bi-directional LSTMs (BiLSTM) to encode the question Q and passages {P_i} as follows:

u^Q_t = \mathrm{BiLSTM}_Q(u^Q_{t-1}, [e^Q_t, c^Q_t])    (1)
u^{P_i}_t = \mathrm{BiLSTM}_P(u^{P_i}_{t-1}, [e^{P_i}_t, c^{P_i}_t])    (2)

where e^Q_t, c^Q_t, e^{P_i}_t, c^{P_i}_t are the word-level and character-level embeddings of the t-th word. u^Q_t and u^{P_i}_t are the encoding vectors of the t-th words in Q and P_i respectively.
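A hedged PyTorch-style sketch of this encoding step follows; hyper-parameters and module names are illustrative assumptions, not the released code:

```python
# Sketch of Equations (1)-(2): word embeddings are concatenated with a summed
# character embedding and fed to a bidirectional LSTM.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, char_vocab_size, word_dim=300, char_dim=30, hidden=150):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.char_emb = nn.Embedding(char_vocab_size, char_dim)
        self.bilstm = nn.LSTM(word_dim + char_dim, hidden,
                              batch_first=True, bidirectional=True)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq_len); char_ids: (batch, seq_len, max_chars)
        w = self.word_emb(word_ids)
        c = self.char_emb(char_ids).sum(dim=2)        # sum of character embeddings
        u, _ = self.bilstm(torch.cat([w, c], dim=-1))
        return u                                       # (batch, seq_len, 2*hidden)

# The same kind of encoder (one for the question, one for the passages, as in
# Equations (1)-(2)) is applied to the question and to each passage independently.
enc = Encoder(vocab_size=10000, char_vocab_size=100)
u_q = enc(torch.randint(0, 10000, (2, 12)), torch.randint(0, 100, (2, 12, 8)))
```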
Unlike previous work (Wang et al., 2017c) that simply concatenates all the passages, we process the passages independently at the encoding and matching steps. Q-P Matching One essential step in MRC is to match the question with passages so that important information can be highlighted. We use the Attention Flow Layer (Seo et al., 2016) to conduct the Q-P matching in two directions. The similarity matrix S \in \mathbb{R}^{|Q| \times |P_i|} between the question and passage i is changed to a simpler version, where the similarity between the t-th word in the question and the k-th word in passage i is computed as:

S_{t,k} = {u^Q_t}^{\top} \cdot u^{P_i}_k    (3)

Then the context-to-question attention and question-to-context attention are applied strictly following Seo et al. (2016) to obtain the question-aware passage representation \{\tilde{u}^{P_i}_t\}. We do not give the details here due to space limitation. Next, another BiLSTM is applied in order to fuse the contextual information and get the new representation for each word in the passage, which is regarded as the match output:

v^{P_i}_t = \mathrm{BiLSTM}_M(v^{P_i}_{t-1}, \tilde{u}^{P_i}_t)    (4)

Based on the passage representations, we introduce the three main modules of our model. 2.2 Answer Boundary Prediction To extract the answer span from passages, mainstream studies try to locate the boundary of the answer, which is called the boundary model. Following Wang and Jiang (2016), we employ a Pointer Network (Vinyals et al., 2015) to compute the probability of each word being the start or end position of the span:

g^t_k = {w^a_1}^{\top} \tanh(W^a_2 [v^P_k, h^a_{t-1}])    (5)
\alpha^t_k = \exp(g^t_k) / \sum_{j=1}^{|P|} \exp(g^t_j)    (6)
c_t = \sum_{k=1}^{|P|} \alpha^t_k v^P_k    (7)
h^a_t = \mathrm{LSTM}(h^a_{t-1}, c_t)    (8)

By utilizing the attention weights, the probability of the k-th word in the passage being the start and end position of the answer is obtained as \alpha^1_k and \alpha^2_k. It should be noted that the pointer network is applied to the concatenation of all passages, which is denoted as P, so that the probabilities are comparable across passages. This boundary model can be trained by minimizing the negative log probabilities of the true start and end indices:

L_{\mathrm{boundary}} = -\frac{1}{N} \sum_{i=1}^{N} (\log \alpha^1_{y^1_i} + \log \alpha^2_{y^2_i})    (9)

where N is the number of samples in the dataset and y^1_i, y^2_i are the gold start and end positions. 2.3 Answer Content Modeling Previous work employs the boundary model to find the text span with the maximum boundary score as the final answer. However, in our context, besides locating the answer candidates, we also need to model their meanings in order to conduct the verification. An intuitive method is to compute the representation of the answer candidates separately after extracting them, but it could be hard to train such a model end-to-end. Here, we propose a novel method that can obtain the representation of the answer candidates based on probabilities. Specifically, we change the output layer of the classic MRC model. Besides predicting the boundary probabilities for the words in the passages, we also predict whether each word should be included in the content of the answer. The content probability of the k-th word is computed as:

p^c_k = \mathrm{sigmoid}({w^c_1}^{\top} \mathrm{ReLU}(W^c_2 v^{P_i}_k))    (10)

Training this content model is also quite intuitive. We transform the boundary labels into a continuous segment, which means the words within the answer span will be labeled as 1 and other words will be labeled as 0.
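As a purely illustrative example of this transformation (hypothetical helper, not the paper's code), gold (start, end) boundaries become per-token 0/1 content labels:

```python
# Illustrative only: tokens inside the gold answer span get content label 1,
# all other tokens get 0; these labels supervise the per-token sigmoid above.
import torch

def content_labels(seq_len, start, end):
    labels = torch.zeros(seq_len)
    labels[start:end + 1] = 1.0
    return labels

labels = content_labels(seq_len=10, start=3, end=6)
# tensor([0., 0., 0., 1., 1., 1., 1., 0., 0., 0.])
```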
In this way, we define the loss function as the averaged cross entropy:

L_{\mathrm{content}} = -\frac{1}{N}\frac{1}{|P|} \sum_{i=1}^{N} \sum_{k=1}^{|P|} \left[ y^c_k \log p^c_k + (1 - y^c_k) \log(1 - p^c_k) \right]    (11)

The content probabilities provide another view to measure the quality of the answer in addition to the boundary. Moreover, with these probabilities, we can represent the answer from passage i as a weighted sum of all the word embeddings in this passage:

r^{A_i} = \frac{1}{|P_i|} \sum_{k=1}^{|P_i|} p^c_k [e^{P_i}_k, c^{P_i}_k]    (12)

2.4 Cross-Passage Answer Verification The boundary model and the content model focus on extracting and modeling the answer within a single passage respectively, with little consideration of the cross-passage information. However, as is discussed in Section 1, there could be multiple answer candidates from different passages and some of them may mislead the MRC model to make an incorrect prediction. It's necessary to aggregate the information from different passages and choose the best one from those candidates. Therefore, we propose a method to enable the answer candidates to exchange information and verify each other through the cross-passage answer verification process. Given the representation of the answer candidates from all passages \{r^{A_i}\}, each answer candidate then attends to other candidates to collect supportive information via attention mechanism:

s_{i,j} = \begin{cases} 0, & \text{if } i = j \\ {r^{A_i}}^{\top} \cdot r^{A_j}, & \text{otherwise} \end{cases}    (13)
\alpha_{i,j} = \exp(s_{i,j}) / \sum_{k=1}^{n} \exp(s_{i,k})    (14)
\tilde{r}^{A_i} = \sum_{j=1}^{n} \alpha_{i,j} r^{A_j}    (15)

Here \tilde{r}^{A_i} is the collected verification information from other passages based on the attention weights. Then we pass it together with the original representation r^{A_i} to a fully connected layer:

g^v_i = {w^v}^{\top} [r^{A_i}, \tilde{r}^{A_i}, r^{A_i} \odot \tilde{r}^{A_i}]    (16)

We further normalize these scores over all passages to get the verification score for answer candidate A_i:

p^v_i = \exp(g^v_i) / \sum_{j=1}^{n} \exp(g^v_j)    (17)

In order to train this verification model, we take the answer from the gold passage as the gold answer. And the loss function can be formulated as the negative log probability of the correct answer:

L_{\mathrm{verify}} = -\frac{1}{N} \sum_{i=1}^{N} \log p^v_{y^v_i}    (18)

where y^v_i is the index of the correct answer in all the answer candidates of the i-th instance. 2.5 Joint Training and Prediction As is described above, we define three objectives for the reading comprehension model over multiple passages: 1. finding the boundary of the answer; 2. predicting whether each word should be included in the content; 3. selecting the best answer via cross-passage answer verification. According to our design, these three tasks can share the same embedding, encoding and matching layers. Therefore, we propose to train them together as multi-task learning (Ruder, 2017). The joint objective function is formulated as follows:

L = L_{\mathrm{boundary}} + \beta_1 L_{\mathrm{content}} + \beta_2 L_{\mathrm{verify}}    (19)

where \beta_1 and \beta_2 are two hyper-parameters that control the weights of those tasks. When predicting the final answer, we take the boundary score, content score and verification score into consideration. We first extract the answer candidate A_i that has the maximum boundary score from each passage i. This boundary score is computed as the product of the start and end probability of the answer span. Then for each answer candidate A_i, we average the content probabilities of all its words as the content score of A_i. And we can also predict the verification score for A_i using the verification model. Therefore, the final answer can be selected from all the answer candidates according to the product of these three scores.
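A hedged PyTorch-style sketch of the cross-passage answer verification in Equations (13)-(17) is given below: each candidate representation attends to the others (with s_ii fixed to 0 as in Equation (13)), the attended evidence is fused with the original representation, and the fused scores are normalized across passages. Shapes and module names are our assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class AnswerVerifier(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(3 * dim, 1, bias=False)   # w_v in Eq. (16)

    def forward(self, r):                                # r: (n_passages, dim)
        s = r @ r.t()                                    # dot-product similarities
        s.fill_diagonal_(0.0)                            # Eq. (13): s_ii = 0
        alpha = torch.softmax(s, dim=-1)                 # Eq. (14)
        r_tilde = alpha @ r                              # Eq. (15): attended evidence
        fused = torch.cat([r, r_tilde, r * r_tilde], dim=-1)
        g = self.score(fused).squeeze(-1)                # Eq. (16)
        return torch.softmax(g, dim=-1)                  # Eq. (17): p_v over candidates

verifier = AnswerVerifier(dim=32)
p_verify = verifier(torch.randn(6, 32))                  # one score per passage's candidate
```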
3 Experiments To verify the effectiveness of our model on multipassage machine reading comprehension, we conduct experiments on the MS-MARCO (Nguyen et al., 2016) and DuReader (He et al., 2017) datasets. Our method achieves the state-of-the-art performance on both datasets. 3.1 Datasets We choose the MS-MARCO and DuReader datasets to test our method, since both of them are MS-MARCO DuReader Multiple Answers 9.93% 67.28% Multiple Spans 40.00% 56.38% Table 2: Percentage of questions that have multiple valid answers or answer spans designed from real-world search engines and involve a large number of passages retrieved from the web. One difference of these two datasets is that MS-MARCO mainly focuses on the English web data, while DuReader is designed for Chinese MRC. This diversity is expected to reflect the generality of our method. In terms of the data size, MS-MARCO contains 102023 questions, each of which is paired up with approximately 10 passages for reading comprehension. As for DuReader, it keeps the top-5 search results for each question and there are totally 201574 questions. One prerequisite for answer verification is that there should be multiple correct answers so that they can verify each other. Both the MS-MARCO and DuReader datasets require the human annotators to generate multiple answers if possible. Table 2 shows the proportion of questions that have multiple answers. However, the same answer that occurs many times is treated as one single answer here. Therefore, we also report the proportion of questions that have multiple answer spans to match with the human-generated answers. A span is taken as valid if it can achieve F1 score larger than 0.7 compared with any reference answer. From these statistics, we can see that the phenomenon of multiple answers is quite common for both MS-MARCO and DuReader. These answers will provide strong signals for answer verification if we can leverage them properly. 3.2 Implementation Details For MS-MARCO, we preprocess the corpus with the reversible tokenizer from Stanford CoreNLP (Manning et al., 2014) and we choose the span that achieves the highest ROUGE-L score with the reference answers as the gold span for training. We employ the 300-D pre-trained Glove embeddings (Pennington et al., 2014) and keep it fixed during training. The character embeddings are randomly initialized with its dimension as 30. For DuReader, we follow the preprocessing described in He et al. (2017). We tune the hyper-parameters according to the 1923 Model ROUGE-L BLEU-1 FastQA Ext (Weissenborn et al., 2017) 33.67 33.93 Prediction (Wang and Jiang, 2016) 37.33 40.72 ReasoNet (Shen et al., 2017) 38.81 39.86 R-Net (Wang et al., 2017c) 42.89 42.22 S-Net (Tan et al., 2017) 45.23 43.78 Our Model 46.15 44.47 S-Net (Ensemble) 46.65 44.78 Our Model (Ensemble) 46.66 45.41 Human 47 46 Table 3: Performance of our method and competing models on the MS-MARCO test set validation performance on the MS-MARCO development set. The hidden size is set to be 150 and we apply L2 regularization with its weight as 0.0003. The task weights β1, β2 are both set to be 0.5. To train our model, we employ the Adam algorithm (Kingma and Ba, 2014) with the initial learning rate as 0.0004 and the mini-batch size as 32. Exponential moving average is applied on all trainable variables with a decay rate 0.9999. Two simple yet effective technologies are employed to improve the final performance on these two datasets respectively. 
For MS-MARCO, approximately 8% of questions have "Yes" or "No" as their answers, which usually cannot be solved by an extractive approach (Tan et al., 2017). We address this problem by training a simple Yes/No classifier for those questions with certain patterns (e.g., starting with "is"). Concretely, we simply change the output layer of the basic boundary model so that it can predict whether the answer is "Yes" or "No". For DuReader, the retrieved document usually contains a large number of paragraphs that cannot be fed into MRC models directly (He et al., 2017). The original paper employs a simple heuristic strategy to select a representative paragraph for each document, while we train a paragraph ranking model for this. We will demonstrate the effects of these two technologies later. 3.3 Results on MS-MARCO Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set. We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002). As we can see, for both metrics, our single model outperforms all the other competing models by an evident margin, which is extremely hard considering the near-human performance. If we ensemble the models trained with different random seeds and hyper-parameters, the results can be further improved and outperform the ensemble model in Tan et al. (2017), especially in terms of BLEU-1. 3.4 Results on DuReader The results of our model and several baseline systems on the test set of DuReader are shown in Table 4. The BiDAF and Match-LSTM models are provided as two baseline systems (He et al., 2017). Based on BiDAF, as is described in Section 3.2, we tried a new paragraph selection strategy by employing a paragraph ranking (PR) model. We can see that this paragraph ranking can boost the BiDAF baseline significantly. Finally, we implement our system based on this new strategy, and our system (single model) achieves further improvement by a large margin.

Model        | BLEU-4 | ROUGE-L
Match-LSTM   | 31.8   | 39.0
BiDAF        | 31.9   | 39.2
PR + BiDAF   | 37.55  | 41.81
Our Model    | 40.97  | 44.18
Human        | 56.1   | 57.4

Table 4: Performance on the DuReader test set.

Model                  | ROUGE-L | ∆
Complete Model         | 45.65   |
Answer Verification    | 44.38   | -1.27
Content Modeling       | 44.27   | -1.38
Joint Training         | 44.12   | -1.53
YesNo Classification   | 41.87   | -3.78
Boundary Baseline      | 38.95   | -6.70

Table 5: Ablation study on the MS-MARCO development set.

Question: What is the difference between a mixed and pure culture

Answer Candidates:                                                      | Boundary    | Content     | Verification
[1] A culture is a society's total way of living and a society is ...  | 1.0 × 10^-2 | 1.0 × 10^-1 | 1.1 × 10^-1
[2] The mixed economy is a balance between socialism and capitalism.   | 1.0 × 10^-4 | 4.0 × 10^-2 | 3.2 × 10^-2
[3] A pure culture is one in which only one kind of microbial ...      | 5.5 × 10^-3 | 7.7 × 10^-2 | 1.2 × 10^-1
[4] A pure culture comprises a single species or strains. A mixed ...  | 2.7 × 10^-3 | 8.1 × 10^-2 | 1.3 × 10^-1
[5] A pure culture is a culture consisting of only one strain.         | 5.8 × 10^-4 | 7.9 × 10^-2 | 5.1 × 10^-2
[6] A pure culture is one in which only one kind of microbial ...      | 5.8 × 10^-3 | 9.1 × 10^-2 | 2.7 × 10^-1
...

Table 6: Scores predicted by our model for the answer candidates shown in Table 1. Although candidate [1] gets high boundary and content scores, the correct answer [6] is preferred by the verification model and is chosen as the final answer.
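The final choice in Table 6 corresponds to the prediction rule of Section 2.5 (product of boundary, content and verification scores). A toy illustration — our own sketch, using two rows of scores taken from Table 6 — is shown below:

```python
# Toy illustration: the candidate with the largest product of the three scores
# is selected as the final answer.
def pick_final_answer(candidates):
    """candidates: list of dicts with 'boundary', 'content', 'verification' scores."""
    scored = [(c["boundary"] * c["content"] * c["verification"], i)
              for i, c in enumerate(candidates)]
    best_score, best_idx = max(scored)
    return best_idx, best_score

candidates = [
    {"boundary": 1.0e-2, "content": 1.0e-1, "verification": 1.1e-1},  # candidate [1]
    {"boundary": 5.8e-3, "content": 9.1e-2, "verification": 2.7e-1},  # candidate [6]
]
print(pick_final_answer(candidates))  # candidate [6] wins despite a lower boundary score
```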
4 Analysis and Discussion 4.1 Ablation Study To get better insight into our system, we conduct in-depth ablation study on the development set of MS-MARCO, which is shown in Table 5. Following Tan et al. (2017), we mainly focus on the ROUGE-L score that is averaged case by case. We first evaluate the answer verification by ablating the cross-passage verification model so that the verification loss and verification score will not be used during training and testing. Then we remove the content model in order to test the necessity of modeling the content of the answer. Since we don’t have the content scores, we use the boundary probabilities instead to compute the answer representation for verification. Next, to show the benefits of joint training, we train the boundary model separately from the other two models. Finally, we remove the yes/no classification in order to show the real improvement of our end-toend model compared with the baseline method that predicts the answer with only the boundary model. From Table 5, we can see that the answer verification makes a great contribution to the overall improvement, which confirms our hypothesis that cross-passage answer verification is useful for the multi-passage MRC. For the ablation of the content model, we analyze that it will not only affect the content score itself, but also violate the verification model since the content probabilities are necessary for the answer representation, which will be further analyzed in Section 4.3. Another discovery is that jointly training the three models can provide great benefits, which shows that the three tasks are actually closely related and can boost each other with shared representations at bottom layers. At last, comparing our method with the baseline, we achieve an improvement of nearly 3 points without the yes/no classification. This significant improvement proves the effectiveness of our approach. 4.2 Case Study To demonstrate how each module of our model takes effect when predicting the final answer, we conduct a case study in Table 6 with the same example that we discussed in Section 1. For each answer candidate, we list three scores predicted by the boundary model, content model and verification model respectively. On the one hand, we can see that these three scores generally have some relevance. For example, the second candidate is given lowest scores by all the three models. We analyze that this is because the models share the same encoding and matching layers at bottom level and this relevance guarantees that the content and verification models will not violate the boundary model too much. On the other hand, we also see that the verification score can really make a difference here when the boundary model makes an incorrect decision among the confusing answer candidates ([1], [3], [4], [6]). Besides, as we expected, the verification model tends to give higher scores for those answers that have semantic commonality with each other ([3], [4], [6]), which are all valid answers in this case. By multiplying the three scores, our model finally predicts the answer correctly. 4.3 Necessity of the Content Model In our model, we compute the answer representation based on the content probabilities predicted by a separate content model instead of directly using the boundary probabilities. We argue that this content model is necessary for our answer verification process. 
Figure 2 plots the predicted content probabilities as well as the boundary probabilities for a passage.

Figure 2: The boundary probabilities and content probabilities for the words in a passage (curves shown: start probability, end probability, content probability).

We can see that the boundary and content probabilities capture different aspects of the answer. Since answer candidates usually have similar boundary words, if we compute the answer representation based on the boundary probabilities, it is difficult to model the real difference among different answer candidates. On the contrary, with the content probabilities, we pay more attention to the content part of the answer, which provides more distinguishable information for verifying the correct answer. Furthermore, the content probabilities can also adjust the weights of the words within the answer span so that unimportant words (e.g. “and” and “.”) get lower weights in the final answer representation. We believe that this refined representation is also good for the answer verification process.

5 Related Work

Machine reading comprehension has made rapid progress in recent years, especially for the single-passage MRC task, such as SQuAD (Rajpurkar et al., 2016). Mainstream studies (Seo et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016) treat reading comprehension as extracting an answer span from the given passage, which is usually achieved by predicting the start and end positions of the answer. We implement our boundary model similarly by employing the boundary-based pointer network (Wang and Jiang, 2016). Another inspiring work is from Wang et al. (2017c), where the authors propose to match the passage against itself so that the representation can aggregate evidence from the whole passage. Our verification model adopts a similar idea. However, we collect information across passages, and our attention is based on the answer representation, which is much more efficient than attention over all passages. For model training, Xiong et al. (2017) argue that the boundary loss encourages exact answers at the cost of penalizing overlapping answers. Therefore they propose a mixed objective that incorporates rewards derived from word overlap. Our joint training approach has a similar function: by taking the content and verification losses into consideration, our model assigns a smaller loss to overlapping answers than to unmatched answers, and our loss function is fully differentiable. Recently, there has also been growing interest in multi-passage MRC from both the academic (Dunn et al., 2017; Joshi et al., 2017) and industrial communities (Nguyen et al., 2016; He et al., 2017). Early studies (Shen et al., 2017; Wang et al., 2017c) usually concatenate the passages and employ the same models designed for single-passage MRC. However, more recent studies design specific methods to read multiple passages more effectively. For passage selection, Wang et al. (2017a) introduce a pipelined approach that ranks the passages first and then reads the selected passages to answer questions. Tan et al. (2017) treat passage ranking as an auxiliary task that can be trained jointly with the reading comprehension model.
Actually, the target of our answer verification is very similar to that of the passage selection, while we pay more attention to the answer content and the answer verification process. Speaking of the answer verification, Wang et al. (2017b) has a similar motivation to ours. They attempt to aggregate the evidence from different passages and choose the final answer from n-best candidates. However, they implement their idea as a separate reranking step after reading comprehension, while our answer verification is a component of the whole model that can be trained end-to-end. 6 Conclusion In this paper, we propose an end-to-end framework to tackle the multi-passage MRC task . We 1926 creatively design three different modules in our model, which can find the answer boundary, model the answer content and conduct cross-passage answer verification respectively. All these three modules can be trained with different forms of the answer labels and training them jointly can provide further improvement. The experimental results demonstrate that our model outperforms the baseline models by a large margin and achieves the state-of-the-art performance on two challenging datasets, both of which are designed for MRC on real web data. Acknowledgments This work is supported by the National Basic Research Program of China (973 program, No. 2014CB340505) and Baidu-Peking University Joint Project. We thank the Microsoft MSMARCO team for evaluating our results on the anonymous test set. We also thank Ying Chen, Xuan Liu and the anonymous reviewers for their constructive criticism of the manuscript. References Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur G¨uney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179 . Wei He, Kai Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, and Haifeng Wang. 2017. Dureader: a chinese machine reading comprehension dataset from real-world applications. arXiv preprint arXiv:1711.05073 . Karl Moritz Hermann, Tom´as Kocisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301 . Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017. pages 2021–2031. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. volume 1, pages 1601–1611. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Chin-Yew Lin. 2004. 
Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out . Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations. pages 55–60. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016 colocated with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016). Boyuan Pan, Hao Li, Zhou Zhao, Bin Cao, Deng Cai, and Xiaofei He. 2017. Memen: Multi-layer embedding with memory networks for machine comprehension. arXiv preprint arXiv:1707.09098 . Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA.. pages 311–318. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532– 1543. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016. Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098 . 1927 Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603 . Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2017. Reasonet: Learning to stop reading in machine comprehension. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, August 13 - 17, 2017. pages 1047– 1055. Chuanqi Tan, Furu Wei, Nan Yang, Weifeng Lv, and Ming Zhou. 2017. S-net: From answer extraction to answer generation for machine reading comprehension. arXiv preprint arXiv:1706.04815 . Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada. pages 2692–2700. Shuohang Wang and Jing Jiang. 2016. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905 . Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2017a. R$ˆ3$: Reinforced reader-ranker for open-domain question answering. arXiv preprint arXiv:1709.00023 . Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. 2017b. Evidence aggregation for answer re-ranking in open-domain question answering. arXiv preprint arXiv:1711.05116 . Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017c. Gated self-matching networks for reading comprehension and question answering. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers. Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making neural QA as simple as possible but not simpler. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), Vancouver, Canada, August 3-4, 2017. pages 271–280. Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604 . Caiming Xiong, Victor Zhong, and Richard Socher. 2017. DCN+: mixed objective and deep residual coattention for question answering. arXiv preprint arXiv:1711.00106 .
2018
178
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1928–1937 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1928 Language Generation via DAG Transduction Yajie Ye, Weiwei Sun and Xiaojun Wan Institute of Computer Science and Technology, Peking University The MOE Key Laboratory of Computational Linguistics, Peking University {yeyajie,ws,wanxiaojun}@pku.edu.cn Abstract A DAG automaton is a formal device for manipulating graphs. By augmenting a DAG automaton with transduction rules, a DAG transducer has potential applications in fundamental NLP tasks. In this paper, we propose a novel DAG transducer to perform graph-to-program transformation. The target structure of our transducer is a program licensed by a declarative programming language rather than linguistic structures. By executing such a program, we can easily get a surface string. Our transducer is designed especially for natural language generation (NLG) from type-logical semantic graphs. Taking Elementary Dependency Structures, a format of English Resource Semantics, as input, our NLG system achieves a BLEU-4 score of 68.07. This remarkable result demonstrates the feasibility of applying a DAG transducer to resolve NLG, as well as the effectiveness of our design. 1 Introduction The recent years have seen an increased interest as well as rapid progress in semantic parsing and surface realization based on graph-structured semantic representations, e.g. Abstract Meaning Representation (AMR; Banarescu et al., 2013), Elementary Dependency Structure (EDS; Oepen and Lønning, 2006) and Depedendency-based Minimal Recursion Semantics (DMRS; Copestake, 2009). Still underexploited is a formal framework for manipulating graphs that parallels automata, tranducers or formal grammars for strings and trees. Two such formalisms have recently been proposed and applied for NLP. One is graph grammar, e.g. Hyperedge Replacement Grammar (HRG; Ehrig et al., 1999). The other is DAG automata, originally studied by Kamimura and Slutzki (1982) and extended by Chiang et al. (2018). In this paper, we study DAG transducers in depth, with the goal of building accurate, efficient yet robust natural language generation (NLG) systems. The meaning representation studied in this work is what we call type-logical semantic graphs, i.e. semantic graphs grounded under type-logical semantics (Carpenter, 1997), one dominant theoretical framework for modeling natural language semantics. In this framework, adjuncts, such as adjective and adverbal phrases, are analyzed as (higher-order) functors, the function of which is to consume complex arguments (Kratzer and Heim, 1998). In the same spirit, generalized quantifiers, prepositions and function words in many languages other than English are also analyzed as higher-order functions. Accordingly, all the linguistic elements are treated as roots in type-logical semantic graphs, such as EDS and DMRS. This makes the typological structure quite flat rather than hierachical, which is an essential distinction between natural language semantics and syntax. To the best of our knowledge, the only existing DAG transducer for NLG is the one proposed by Quernheim and Knight (2012). Quernheim and Knight introduced a DAG-to-tree transducer that can be applied to AMR-to-text generation. 
This transducer is designed to handle hierarchical structures with limited reentrencies, and it is unsuitable for meaning graphs transformed from type-logical semantics. Furthermore, Quernheim and Knight did not describe how to acquire graph recognition and transduction rules from linguistic data, and reported no result of practical generation. It is still unknown to what extent a DAG transducer suits realistic NLG. The design for string and tree transducers 1929 (Comon et al., 1997) focuses on not only the logic of the computation for a new data structure, but also the corresponding control flow. This is very similar the imperative programming paradigm: implementing algorithms with exact details in explicit steps. This design makes it very difficult to transform a type-logical semantic graph into a string, due to the fact their internal structures are highly diverse. We borrow ideas from declarative programming, another programming paradigm, which describes what a program must accomplish, rather than how to accomplish it. We propose a novel DAG transducer to perform graphto-program transformation (§3). The input of our transducer is a semantic graph, while the output is a program licensed by a declarative programming language rather than linguistic structures. By executing such a program, we can easily get a surface string. This idea can be extended to other types of linguistic structures, e.g. syntactic trees or semantic representations of another language. We conduct experiments on richly detailed semantic annotations licensed by English Resource Grammar (ERG; Flickinger, 2000). We introduce a principled method to derive transduction rules from DeepBank (Flickinger et al., 2012). Furthermore, we introduce a fine-to-coarse strategy to ensure that at least one sentence is generated for any input graph. Taking EDS graphs, a variable-free ERS format, as input, our NLG system achieves a BLEU-4 score of 68.07. On average, it produces more than 5 sentences in a second on an x86 64 GNU/Linux platform with two Intel Xeon E5-2620 CPUs. Since the data for experiments is newswire data, i.e. WSJ sentences from PTB (Marcus et al., 1993), the input graphs are quite large on average. The remarkable accuracy, efficiency and robustness demonstrate the feasibility of applying a DAG transducer to resolve NLG, as well as the effectiveness of our transducer design. 2 Previous Work and Challenges 2.1 Preliminaries A node-labeled simple graph over alphabet Σ is a triple G = (V, E, ℓ), where V is a finite set of nodes, E ⊆V × V is an finite set of edges and ℓ: V →Σ is a labeling function. For a node v ∈V , sets of its incoming and outgoing edges are denoted by in(v) and out(v) respectively. For an edge e ∈E, its source node and target node are denoted by src(e) and tar(e) respectively. Generally speaking, a DAG is a directed acyclic simple graph. Different from trees, a DAG allows nodes to have multiple incoming edges. In this paper, we only consider DAGs that are unordered, node-labeled, multi-rooted1 and connected. Conceptual graphs, including AMR and EDS, are both node-labeled and edge-labeled. It seems that without edge labels, a DAG is inadequate, but this problem can be solved easily by using the strategies introduced in (Chiang et al., 2018). Take a labeled edge proper q BV −→named for example2. We can represent the same information by replacing it with two unlabeled edges and a new labeled node: proper q →BV →named. 
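The preliminaries above translate directly into a small data structure. The class below is only a sketch: the container choices, the method names and the way fresh nodes are named when edge labels are pushed into nodes are implementation decisions, not part of the formal definition.

from collections import defaultdict

class NodeLabeledGraph:
    """A node-labeled simple graph G = (V, E, l) with access to in(v) and out(v)."""

    def __init__(self):
        self.labels = {}                    # v -> label in Sigma
        self.in_edges = defaultdict(set)    # v -> set of edges (u, v)
        self.out_edges = defaultdict(set)   # v -> set of edges (v, u)

    def add_node(self, v, label):
        self.labels[v] = label

    def add_edge(self, src, tar):
        self.out_edges[src].add((src, tar))
        self.in_edges[tar].add((src, tar))

    def add_labeled_edge(self, src, edge_label, tar):
        # Remove the edge label by introducing a fresh labeled node, e.g. the labeled
        # edge proper_q -BV-> named becomes proper_q -> BV -> named.
        mid = (src, edge_label, tar)        # any fresh identifier would do
        self.add_node(mid, edge_label)
        self.add_edge(src, mid)
        self.add_edge(mid, tar)

# The running example used later in Section 3.3 ("John wants to go"), with edge labels removed:
graph = NodeLabeledGraph()
for v, label in [("p", "proper_q"), ("j", "named(John)"),
                 ("w", "want_v_1"), ("go", "go_v_1")]:
    graph.add_node(v, label)
for src, tar in [("w", "j"), ("w", "go"), ("go", "j"), ("p", "j")]:
    graph.add_edge(src, tar)
print(len(graph.in_edges["j"]))   # 3: named(John) has three incoming edges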
2.2 Previous Work DAG automata are the core engines of graph transducers (Bohnet and Wanner, 2010; Quernheim and Knight, 2012). In this work, we adopt Chiang et al. (2018)’s design and define a weighted DAG automaton as a tuple M = ⟨Σ, Q, δ, K⟩: • Σ is an alphabet of node labels. • Q is a finite set of states. • (K, ⊕, ⊗, 0, 1) is a semiring of weights. • δ : Θ →K\{0} is a weight function that assigns nonzero weights to a finite transition set Θ. Every transition t ∈Θ is of the form {q1, · · · , qm} σ−→{r1, · · · , rn} where qi and rj are states in Q. A transition t gets m states on the incoming edges of a node and puts n states on the outgoing edges. A transition that does not belong to Θ recieves a weight of zero. A run of M on a DAG D = ⟨V, E, ℓ⟩is an edge labeling function ρ : E →Q. The weight of a run ρ (denoted as δ′(ρ)) is the product of all weights of local transitions: δ′(ρ) = ⊗ v∈V δ ( ρ(in(v)) ℓ(v) −−→ρ(out(v)) ) Here, for a function f, we use f({a1, · · · , an}) to represent {f(a1), · · · , f(an)}. If K is a boolean semiring, the automata fall backs to an unweighted 1A node without incoming edges is called root and a node without outgoing edges is called leaf. 2 proper q and named are node labels, while BV is the edge label. 1930 DAG automata or DAG acceptor. A accepting run or recognition is a run, the weight of which is 1, meaning true. 2.3 Challenges The DAG automata defined above can only be used for recognition. In order to generate sentences from semantic graphs, we need DAG transducers. A DAG transducer is a DAG automata-augmented computation model for transducing well-formed DAGs to other data structures. Quernheim and Knight (2012) focused on feature structures and introduced a DAG-to-Tree transducer to perform graph-to-tree transformation. The input of their transducer is limited to single-rooted DAGs. When the labels of the leaves of an output tree in order are interpreted as words, this transducer can be applied to generate natural language sentences. When applying Quernheim and Knight’s DAGto-Tree transducer on type-logic semantic graphs, e.g. ERS, there are some significant problems. First, it lacks the ability to reverse the direction of edges during transduction because it is difficult to keep acyclicy anymore if edge reversing is allowed. Second, it cannot handle multiple roots. But we have discussed and reached the conclusion that multi-rootedness is a necessary requirement for representing type-logical semantic graphs. It is difficult to decide which node should be the tree root during a ‘top-down’ transduction and it is also difficult to merge multiple unconnected nodes into one during a ‘bottom-up’ transduction. At the risk of oversimplifying, we argue that the function of the existing DAG-to-Tree transducer is to transform a hierachical structure into another hierarchical structure. Since the type-local semantic graphs are so flat, it is extremely difficult to adopt Quernheim and Knight’s design to handle such graphs. Third, there are unconnected nodes with direct dependencies, meaning that their correpsonding surface expressions appear to be very close. The conceptual nodes even x deg and steep a 1 in Figure 4 are an example. It is extremely difficult for the DAG-to-Tree transducer to handle this situation. 3 A New DAG Transducer 3.1 Basic Idea In this paper, we introduce a design of transducers that can perform structure transformation towards many data structures, including but not limited to trees. 
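Before turning to that design, the weight of a run from Section 2.2 can be made concrete with a short function over the graph class sketched earlier. Encoding the multisets of edge states as sorted tuples and passing the semiring product in as a parameter are choices of this sketch, not part of the formal definition.

def run_weight(graph, rho, delta, times=lambda a, b: a * b, one=1.0, zero=0.0):
    # rho maps every edge to a state; delta maps (incoming-state-multiset, node label,
    # outgoing-state-multiset) to a weight, with multisets encoded as sorted tuples.
    # The weight of the run is the product over all nodes of the local transition
    # weights; transitions missing from delta receive the zero weight. With a boolean
    # semiring this reduces to checking whether the run is accepting.
    weight = one
    for v, label in graph.labels.items():
        q_in = tuple(sorted(rho[e] for e in graph.in_edges[v]))
        q_out = tuple(sorted(rho[e] for e in graph.out_edges[v]))
        weight = times(weight, delta.get((q_in, label, q_out), zero))
    return weight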
The basic idea is to give up the rewriting method that directly generates a new data structure piece by piece while recognizing an input DAG. Instead, our transducer obtains target structures as side effects of DAG recognition. The output of our transducer is no longer the target data structure itself, e.g. a tree or another DAG, but a program, i.e. a set of statements licensed by a particular declarative programming language. The target structures are constructed by executing such programs. Since the main concern of this paper is natural language generation, we take strings, namely sequences of words, as our target structures. In this section, we introduce an extremely simple programming language for string concatenation and then describe how to leverage the power of declarative programming to perform DAG-to-string transformation.

3.2 A Declarative Programming Language

The syntax of our declarative programming language for string calculation, denoted as Lc, is given in BNF as:

⟨program⟩   ::= ⟨statement⟩*
⟨statement⟩ ::= ⟨variable⟩ = ⟨expr⟩
⟨expr⟩      ::= ⟨variable⟩ | ⟨string⟩ | ⟨expr⟩ + ⟨expr⟩

Here a string is a sequence of characters selected from an alphabet (denoted as Σout) and can be empty (denoted as ϵ). The semantics of ‘=’ is value assignment, while the semantics of ‘+’ is string concatenation. The values of variables are strings. For every statement, the left-hand side is a variable and the right-hand side is a sequence of string literals and variables combined through ‘+’. Equation (1) presents an example program licensed by this language.

S   = x21 + want + x11
x11 = to + go
x21 = x41 + John
x41 = ϵ                                  (1)

After solving these statements, we can query the values of all variables. In particular, we are interested in S, which corresponds to the desired natural language expression John want to go. (The expression is a sequence of lemmas rather than inflected words; refer to §4 for more details.) Using the relations between the variables, we can easily convert the statements in (1) to a rooted tree. The result is shown in Figure 1. This tree is significantly different from the target structures discussed by Quernheim and Knight (2012) or other standard tree transducers (Comon et al., 1997): it represents the calculation needed to solve the program. Constructing such internal trees is an essential function of the compiler of our programming language.

Figure 1: Variable relation tree.
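To make the semantics of Lc concrete, here is a minimal interpreter for programs such as (1). The Var wrapper, the fixed-point evaluation loop and the use of spaces to join word tokens are choices of this sketch rather than anything prescribed by the language definition; the only property exploited here is that the variable relation forms a rooted tree, so a bottom-up pass terminates.

from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

EPSILON = ""   # the empty string from the grammar

def solve(statements):
    # statements: dict mapping a variable name to a list of tokens; Var(...) tokens are
    # variables, plain strings are literals. Repeatedly evaluate every statement whose
    # right-hand side is fully known until all variables have values.
    values, pending = {}, dict(statements)
    while pending:
        progressed = False
        for lhs, rhs in list(pending.items()):
            if all(not isinstance(t, Var) or t.name in values for t in rhs):
                parts = [values[t.name] if isinstance(t, Var) else t for t in rhs]
                values[lhs] = " ".join(p for p in parts if p != EPSILON)
                del pending[lhs]
                progressed = True
        if not progressed:
            raise ValueError("program is cyclic or references an undefined variable")
    return values

# The program in Equation (1):
program = {
    "S":   [Var("x21"), "want", Var("x11")],
    "x11": ["to", "go"],
    "x21": [Var("x41"), "John"],
    "x41": [EPSILON],
}
print(solve(program)["S"])   # John want to go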
3.3 Informal Illustration

We introduce our DAG transducer using a simple example. Figure 2 shows the original input graph D = (V, E, ℓ); without any loss of generality, we remove edge labels.

Figure 2: An input graph over the nodes named(John), want_v_1, go_v_1 and proper_q. The intended reading is John wants to go.

Table 1 lists the rule set R for this example. Every row represents an applicable transduction rule that consists of two parts. The left column is the recognition part, displayed in the form I --σ--> O, where I, O and σ denote the state set of incoming edges, the state set of outgoing edges and the node label, respectively. The right column is the generation part, which consists of (possibly multiple) templates of statements licensed by the programming language defined in the previous section. In practice, two different rules may have the same recognition part but different generation parts. Every state q is of the form l(n, d), where l is the finite state label, n is the count of possible variables related to q, and d denotes the direction. The value of d can only be r (reversed), u (unchanged) or e (empty). Variable v_l(j,d) represents the j-th (1 ≤ j ≤ n) variable related to state q. For example, v_X(2,r) means the second variable of state X(3,r). There are two special variables: S, which corresponds to the whole sentence, and L, which corresponds to the output string associated with the current node label. It is reasonable to assume that there exists a function ψ : Σ → Σ*out that maps a particular node label, i.e. concept, to a surface string; therefore L is determined by ψ.

Q = {DET(1,r), Empty(0,e), VP(1,u), NP(1,u)}

Rule   For Recognition                              For Generation
1      {} --proper_q--> {DET(1,r)}                  v_DET(1,r) = ϵ
2      {} --want_v_1--> {VP(1,u), NP(1,u)}          S = v_NP(1,u) + L + v_VP(1,u)
3      {VP(1,u)} --go_v_1--> {Empty(0,e)}           v_VP(1,u) = to + L
4      {NP(1,u), DET(1,r)} --named--> {}            v_NP(1,u) = v_DET(1,r) + L
Table 1: Sets of states (Q) and rules (R) that can be used to process the graph in Figure 2.

Now we are ready to apply transduction rules to translate D into a string. The transduction consists of two steps.

Recognition. The goal of this step is to find an edge labeling function ρ : E → Q such that for every node v, ρ(in(v)) --ℓ(v)--> ρ(out(v)) matches the recognition part of a rule in R. The recognition result is shown in Figure 3.

Figure 3: A run of the graph in Figure 2, with the edges e1, e2, e3 and e4 labeled by the states VP(1,u), NP(1,u), Empty(0,e) and DET(1,r), respectively.

The red dashed edges in Figure 3 make up an intermediate graph T(ρ), which is a subgraph of D if edge direction is not taken into account. Sometimes, T(ρ) parallels the syntactic structure of an output sentence. For a labeling function ρ, we can construct the intermediate graph T(ρ) by checking the direction parameter of every edge state: for an edge e = (u, v) ∈ E, if the direction of ρ(e) is r, then (v, u) is in T(ρ); if the direction is u, then (u, v) is in T(ρ); if the direction is e, neither (u, v) nor (v, u) is included. The recognition process is slightly different from the one in Chiang et al. (2018): since incoming edges with an Empty(0,e) state carry no semantic information, they are ignored during recognition. For example, in Figure 3, we only use e2 and e4 to match transduction rules for node named(John).

Instantiation. We use rule(v) to denote the rule used on node v. Assume s is the generation part of rule(v). For every edge ei adjacent to v, assume ρ(ei) = l(n, d). We replace L with ψ(ℓ(v)) and replace every occurrence of v_l(j,d) in s with a new variable xij (1 ≤ j ≤ n). Then we get a newly generated expression for v. For example, node want_v_1 is recognized using Rule 2, so we replace v_NP(1,u) with x21, v_VP(1,u) with x11 and L with want. After instantiation, we get all the statements in Equation (1).

Our transducer is well suited to type-logical semantic graphs because declarative programming brings more freedom to graph transduction: we can arrange the variables in almost any order, without regard to the edge directions in the original graph. Meanwhile, the multi-rooted problem can be solved easily because generation is based on side effects; we do not need to decide which node is the tree root.
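The instantiation step can be spelled out in a few lines. In the sketch below, a state is a (label, n, direction) tuple, a placeholder ('v', state, j) stands for v_l(j,d), and ψ is passed in as a function; this encoding, and the fixed ordering of adjacent edges e1, e2, ..., are assumptions made for illustration only.

def instantiate(generation_part, node_label, edge_states, psi):
    # edge_states: the states of the edges e1 ... ek adjacent to the node, in order.
    # Each placeholder ('v', state, j) is rewritten to the fresh variable xij, where i is
    # the index of the adjacent edge carrying that state, and 'L' becomes psi(node_label).
    fresh = {}
    for i, state in enumerate(edge_states, start=1):
        _label, n, _direction = state
        for j in range(1, n + 1):
            fresh[(state, j)] = f"x{i}{j}"

    def rewrite(token):
        if token == "L":
            return psi(node_label)
        if isinstance(token, tuple) and token[0] == "v":
            _, state, j = token
            return fresh[(state, j)]
        return token

    return [(rewrite(lhs), [rewrite(t) for t in rhs]) for lhs, rhs in generation_part]

# Rule 2 applied at node want_v_1, whose adjacent edges e1 and e2 carry VP(1,u) and NP(1,u):
VP, NP = ("VP", 1, "u"), ("NP", 1, "u")
rule2 = [("S", [("v", NP, 1), "L", ("v", VP, 1)])]
print(instantiate(rule2, "want_v_1", [VP, NP], lambda label: "want"))
# [('S', ['x21', 'want', 'x11'])], i.e. the first statement of Equation (1)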
3.4 Definition

The formal definition of our DAG transducer is a tuple M = (Σ, Q, R, w, V, S) where:
• Σ is an alphabet of node labels.
• Q is a finite set of edge states. Every state q ∈ Q is of the form l(n, d), where l is the state label, n is the variable count and d is the direction of the state, which can be r, u or e.
• R is a finite set of rules. Every rule is of the form I --σ--> ⟨O, E⟩. E can be any kind of statement in a declarative programming language; it is called the generation part. I, σ and O have the same meanings as in the previous section and together form the recognition part.
• w is a score function. Given a particular run and an anchor node, w assigns a score that measures the preference for a particular rule at this anchor node.
• V is the set of parameterized variables that can be used in every expression.
• S ∈ V is a distinguished, global variable. It is like the ‘goal’ of a program.

4 DAG Transduction-based NLG

Different languages exhibit different morpho-syntactic and syntactico-semantic properties. For example, Russian and Arabic are morphologically rich languages and heavily utilize grammatical markers to indicate grammatical as well as semantic functions. In contrast, Chinese, as an analytic language, encodes grammatical and semantic information in a highly configurational rather than inflectional or derivational way. Such differences affect NLG significantly. When generating Chinese sentences, it seems sufficient to employ our DAG transducer to obtain a sequence of lemmas, since no morphological production is needed. But for morphologically rich languages, we do need to model complex morphological changes. To provide a unified framework for DAG transduction-based NLG, we propose a two-step strategy to achieve meaning-to-text transformation.
• In the first phase, we are concerned with syntactico-semantic properties and utilize our DAG transducer to translate a semantic graph into sequential lemmas. Information such as tense, aspect, gender, etc. is attached to anchor lemmas. In practice, our transducer generates “want.PRES” rather than “wants”; here, “PRES” indicates a particular tense.
• In the second phase, we are concerned with morpho-syntactic properties and utilize a neural sequence-to-sequence model to obtain final surface strings from the outputs of the DAG transducer.

5 Inducing Transduction Rules

We present an empirical study on the feasibility of DAG transduction-based NLG. We focus on variable-free MRS representations, namely EDS (Oepen and Lønning, 2006). The data set used in this work is DeepBank 1.1 (Flickinger et al., 2012).

Figure 4: An example graph. The intended reading is “the decline is even steeper than in September”, he said. Original edge labels are removed for clarity. Every edge is associated with a span list, and spans are written in the form label<begin:end>. The red dashed edges belong to the intermediate graph T.

5.1 EDS-specific Constraints

In order to generate reasonable strings, three constraints must be kept during transduction. First, for a rule I --σ--> ⟨O, E⟩, a state with direction u in I or a state with direction r in O is called a head state, and its variables are called head variables. For example, the head state of rule 3 in Table 1 is VP(1,u) and the head state of rule 1 is DET(1,r). There is at most one head state in a rule, and only head variables or S can be the left sides of statements. If there is no head state, we assign the global S as its head. Otherwise, the number of statements is equal to the number of head variables and each statement has a distinguished left-side variable.
An empty state does not have any variables. Second, every rule has no-copying, no-deleting statements. In other words, all variables must be used exactly once in a statement. Third, during recognition, a labeling function ρ is valid only if T(ρ) is a rooted tree. After transduction, we get result ρ∗. The first and second constraints ensure that for all nodes, there is at most one incoming red dashed edge in T(ρ∗) and ‘data’ carried by variables of the only incoming red dashed edge or S is separated into variables of outgoing red dashed edges. The last constraint ensures that we can solve all statements by a bottom-up process on tree T(ρ∗). 5.2 Fine-to-Coarse Transduction Almost all NLG systems that heavily utilize a symbolic system to encode deep syntacticosemantic information lack some robustness, meaning that some input graphs may not be successfully processed. There are two reasons: (1) some explicit linguistic constraints are not included; (2) exact decoding is too time-consuming while inexact decoding cannot cover the whole search space. To solve the robustness problem, we introduce a fine-to-coarse strategy to ensure that at least one sentence is generated for any input graph. There are three types of rules in our system, namely induced rules, extended rules and dynamic rules. The most fine-grained rules are applied to bring us precision, while the most coarse-grained rules are for robustness. In order to extract reasonable rules, we will use both EDS graphs and the corresponding derivation trees provided by ERG. The details will be described step-by-step in the following sections. 5.3 Induced Rules Figure 4 shows an example for obtaining induced rules. The induced rules are directly obtained by following three steps: Finding intermediate tree T EDS graphs are highly regular semantic graphs. It is not difficult to generate T based on a highly customized ‘breadthfirst’ search. The generation starts from the ‘top’ node ( say v to in Figure 4) given by the EDS graph and traverse the whole graph. No more than thirty heuristic rules are used to decide the visiting order of nodes. 1934 Assigning states EDS graphs also provide span information for nodes. We select a group of lexical nodes which have corresponding substrings in the original sentence. In Figure 4, these nodes are in bold font and directly followed by a span. Then we merge spans from the bottom of T to the top to assign each red edge a span list. For each node n in T, we collect spans of every outgoing dashed edge of n into a list s. Some additional spans may be inserted into s. These spans do not occur in the EDS graph but they do occur in the sentence, i.e. than<29:33>. Then we merge continuous spans in s and assign the remaining spans in s to the incoming red dashed edge of n. We apply a similar method to the derivation tree. As a result, every inner node of the derivation tree is associated with a span. Then we align the edges in T to nodes of the inner derivation tree by comparing their spans. Finally edge labels in Figure 4 are generated. We use the concatenation of the edge labels in a span list as the state label. The edge labels are joined in order with ‘ ’. Empty(0,e) is the state of the edges that do not belong to T (ignoring direction), such as e12. The variable count of a state is equal to the size of the span list and the direction of a state is decided by whether the edge in T related to the state and its corresponding edge in D have different directions. For example, the state of e5 should be ADV PP(2,r). 
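The span-merging operation just described can be sketched as follows. Treating two spans as continuous when the gap between their character offsets is at most one, and joining labels with an underscore, are assumptions of this illustration; the text above does not spell out these details.

def merge_continuous_spans(spans, max_gap=1):
    # spans: (label, begin, end) character ranges as in Figure 4, e.g. ("PP", 29, 48).
    # Continuous spans are merged; whatever cannot be merged stays as a separate entry
    # in the edge's span list.
    merged = []
    for label, begin, end in sorted(spans, key=lambda s: (s[1], s[2])):
        if merged and begin - merged[-1][2] <= max_gap:
            prev_label, prev_begin, prev_end = merged[-1]
            merged[-1] = (prev_label + "_" + label, prev_begin, max(prev_end, end))
        else:
            merged.append((label, begin, end))
    return merged

print(merge_continuous_spans([("DET", 0, 4), ("NP", 5, 12)]))
# [('DET_NP', 0, 12)]: continuous spans collapse into one
print(merge_continuous_spans([("ADV", 16, 20), ("PP", 29, 48)]))
# two entries remain, as with edge e5, whose state therefore has two variables: ADV_PP(2,r)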
Generating statements After the above two steps, we are ready to generate statements according to how spans are merged. For all nodes, spans of the incoming edges represent the left hand side and the outgoing edges represent the right hand side. For example, the rule for node comp will be: {ADV(1,r)} comp −−−→{PP(1,u), ADV PP(2,r)} vADV PP(1,r) = vADV(1,r) vADV PP(2,r) = than + vPP(1,u) 5.4 Extended Rules Extended rules are used when no induced rules can cover a given node. In theory, there can be unlimited modifier nodes pointing to a given node, such as PP and ADJ. We use some manually written rules to slightly change an induced rule (prototype) by addition or deletion to generate a group of extended rules. The motivation here is to deal with the data sparseness problem. For a group of selected non-head states in I, such as PP and ADJ. We can produce new rules by removing or duplicating more of them. For example: {NP(1,u), ADJ(1,r)} X n 1 −−−−→{} vNP(1,u) = vADJ(1,r) + L As a result, we get the two rules below: {NP(1,u)} X n 1 −−−−→{} vNP(1,u) = L {NP(1,u), ADJ(1,r)1, ADJ(1,r)2} X n 1 −−−−→{} vNP(1,u) = vADJ(1,r)1 + vADJ(1,r)2 + L 5.5 Dynamic Rules During decoding, when neither induced nor extended rule is applicable, we create a dynamic rule on-the-fly. Our rule creator builds a new rule following the Markov assumption: P(O|C) = P(q1|C) n ∏ i=2 P(qi|C)P(qi|qi−1, C) C = ⟨I, D⟩represents the context. O = {q1, · · · , qn} denotes the outgoing states and I, D have the same meaning as before. Though they are unordered multisets, we can give them an explicit alphabet order by their edge labels. There is also a group of hard constraints to make sure that the predicted rules are well-formed as the definition in §5 requires. This Markovization strategy is widely utilized by lexicalized and unlexicalized PCFG parsers (Collins, 1997; Klein and Manning, 2003). For a dynamic rule, all variables in this rule will appear in the statement. We use a simple perceptron-based scorer to assign every variable a score and arrange them in an decreasing order. 6 Evaluation and Analysis 6.1 Set-up We use DeepBank 1.1 (Flickinger et al., 2012), i.e. gold-standard ERS annotations, as our main experimental data set to train a DAG transducer as well as a sequence-to-sequence morpholyzer, and wikiwoods (Flickinger et al., 2010), i.e. automatically-generated ERS annotations by ERG, as additional data set to enhance the sequence-to-sequence morpholyzer. The training, 1935 development and test data sets are from DeepBank and split according to DeepBank’s recommendation. There are 34,505, 1,758 and 1,444 sentences (all disconnected graphs as well as their associated sentences are removed) in the training, development and test data sets. We use a small portion of wikiwoods data, c.a. 300K sentences, for experiments. 37,537 induced rules are directly extracted from the training data set, and 447,602 extended rules are obtained. For DAG recognition, at one particular position, there may be more than one rule applicable. In this case, we need a disambiguation model as well as a decoder to search for a globally optimal solution. In this work, we train a structured perceptron model (Collins, 2002) for disambiguation and employ a beam decoder. The perceptron model used by our dynamic rule generator are trained with the induced rules. To get a sequence-to-sequence model, we use the open source tool—OpenNMT4. 6.2 The Decoder We implement a fine-to-coarse beam search decoder. 
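Before looking at the decoder in detail, the Markov factorization used by the dynamic rule creator in Section 5.5 can be scored as below. Working in log space and taking the state and bigram probability estimators as callables are choices of this sketch; how those probabilities are estimated is left to the model.

import math

def score_outgoing_states(states, context, p_state, p_bigram):
    # Score a candidate set of outgoing states O = {q1, ..., qn} for a dynamic rule under
    # P(O|C) = P(q1|C) * prod_{i>1} P(qi|C) * P(qi|q_{i-1}, C), with the states put into a
    # canonical alphabetical order as described above. Returns a log probability.
    states = sorted(states)
    if not states:
        return 0.0
    logp = math.log(p_state(states[0], context))
    for prev, cur in zip(states, states[1:]):
        logp += math.log(p_state(cur, context)) + math.log(p_bigram(cur, prev, context))
    return logp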
Given a DAG D, our goal is to find the highest scored labeling function ρ: ρ = arg max ρ n ∏ i=1 ∑ j wj · fj(rule(vi), D) s.t. rule(vi) = ρ(in(vi)) ℓ(vi) −−−→⟨ρ(out(vi)), Ei⟩ where n is the node count and fj(·, ·) and wj represent a feature and the corresponding weight, respectively. The features are chosen from the context of the given node vi. We perform ‘topdown’ search to translate an input DAG into a morphology-function-enhanced lemma sequence. Each hypothesis consists of the current DAG graph, the partial labeling function, the current hypothesis score and other graph information used to perform rule selection. The decoder will keep the corresponding partial intermediate graph T acyclic when decoding. The algorithm used by our decoder is displayed in Algorithm 1. Function FindRules(h, n, R) will use hard constraints to select rules from the rule set R according to the contextual information. It will also perform an acyclic check on T. Function Insert(h, r, n, B) will create and score a new hypothesis made from the given context and then insert it into beam B. 4https://github.com/OpenNMT/OpenNMT/ After we get the edge labeling function ρ, we use a simple linear equation solver to convert all statements to a sequence of lemmas. Algorithm 1: Algorithm for our decoder. Input: D is the EDS graph. RI and RE are induced-rules and extended-rules respectively. Result: The edge labeling function ρ. 1 Q ←all the roots in D 2 B1 ←empty beam 3 E ←∅ 4 Insert initial hypothesis into B1 5 while Q is not empty: 6 B2 ←empty beam 7 n ←dequeue a node from Q 8 for h ∈B1: 9 rules ←FindRules(h, n, RI) 10 if rules is not empty: 11 for r ∈rules: 12 Insert(h, r, n, B2) else: 13 rules ←FindRules(h, n, RE) 14 for r ∈rules: 15 Insert(h, r, n, B2) 16 if B2 is still empty: 17 for h ∈B1: 18 r ←RuleGenerator(h, n) 19 Insert(h, r, n, B2) 20 B1 ←B2 21 for e ∈out(n): 22 E ←E ∪{e} 23 if in(tar(e)) ⊆E: 24 Q ←Q ∪{tar(e)} 25 Extract ρ from best hypothesis in B1 6.3 Accuracy In order to evaluate the effectiveness of our transducer for NLG, we try a group of tests showed in Table 2. All sequence-to-sequence models (either from lemma sequences to lemma sequences or lemma sequences to sentences) are trained on DeepBank and wikiwoods data set and tuned on the development data. The second column shows the BLEU-4 scores between generated lemma sequences and golden sequences of lemmas. The third column shows the BLEU-4 scores between generated sentences and golden sentences. The fourth column shows the fraction of graphs in the test data set that can reach output sentences. 1936 Transducer Lemmas Sentences Coverage I 89.44 74.94 67% I+E 88.41 74.03 77% I+E+D 82.04 68.07 100% DFS-NN 50.45 100% AMR-NN 33.8 100% AMR-NRG 25.62 100% Table 2: Accuracy (BLEU-4 score) and coverage of different systems. I denotes transduction only using induced rules; I+E denotes transduction using both induced and extended rules; I+E+D denotes transduction using all kinds of rules. DFSNN is a rough implementation of Konstas et al. (2017) but with the EDS data, while AMR-NN includes the results originally reported by Konstas et al., which are evaluated on the AMR data. AMR-NRG includes the results obtained by a synchronous graph grammar (Song et al., 2017). The graphs that cannot received any natural language sentences are removed while conducting the BLEU evaluation. As we can conclude from Table 2, using only induced rules achieves the highest accuracy but the coverage is not satisfactory. 
Extended rules lead to a slight accuracy drop but with a great improvement of coverage (c.a. 10%). Using dynamic rules, we observe a significant accuracy drop. Nevertheless, we are able to handle all EDS graphs. The full-coverage robustness may benefit many NLP applications. The lemma sequences generated by our transducer are really close to the golden one. This means that our model actually works and most reordering patterns are handled well by induced rules. Compared to the AMR generation task, our transducer on EDS graphs achieves much higher accuracies. To make clear how much improvement is from the data and how much is from our DAG transducer, we implement a purely neural baseline. The baseline converts a DAG into a concept sequence by a pre-order DFS traversal on the intermediate tree of this DAG. Then we use a sequenceto-sequence model to transform this concept sequence to the lemma sequence for comparison. This is a kind of implementation of Konstas et al.’s model but evaluated on the EDS data. We can see that on this task, our transducer is much better than a pure sequence-to-sequence model on DeepBank data. Transducer Average (s) Maximal (s) I 0.090 0.40 I+E 0.093 0.46 I+E+D 0.18 3.2 Table 3: Efficiency of our NL generator. 6.4 Efficiency Table 3 shows the efficiency of the beam search decoder with a beam size of 128. The platform for this experiment is x86 64 GNU/Linux with two Intel Xeon E5-2620 CPUs. The second and third columns represent the average and the maximal time (in seconds) to translate an EDS graph. Using dynamic rules slow down the decoder to a great degree. Since the data for experiments is newswire data, i.e. WSJ sentences from PTB (Marcus et al., 1993), the input graphs are quite large on average. On average, it produces more than 5 sentences per second on CPU. We consider this a promising speed. 7 Conclusion We extend the work on DAG automata in Chiang et al. (2018) and propose a general method to build flexible DAG transducer. The key idea is to leverage a declarative programming language to minimize the computation burden of a graph transducer. We think may NLP tasks that involve graph manipulation may benefit from this design. To exemplify our design, we develop a practical system for the semantic-graph-to-string task. Our system is accurate (BLEU 68.07), efficient (more than 5 sentences per second on a CPU) and robust (fullcoverage). The empirical evaluation confirms the usefulness a DAG transducer to resolve NLG, as well as the effectiveness of our design. Acknowledgments This work was supported by the National Natural Science Foundation of China (61772036, 61331011) and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. Weiwei Sun is the corresponding author. 1937 References Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for Sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics. Bernd Bohnet and Leo Wanner. 2010. Open soucre graph transducer interpreter and grammar development environment. In LREC. B. Carpenter. 1997. Type-Logical Semantics. Bradford books. MIT Press. David Chiang, Frank Drewes, Daniel Gildea, Adam Lopez, and Giorgio Satta. 2018. 
Weighted DAG automata for semantic graphs. Computational Linguistics. To appear. Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, pages 16–23, Madrid, Spain. Association for Computational Linguistics. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 1–8. Association for Computational Linguistics. Hubert Comon, Max Dauchet, Florent Jacquemard, Denis Lugiez, Sophie Tison, and Marc Tommasi. 1997. Tree automata techniques and applications. Technical report. Ann Copestake. 2009. Invited Talk: slacker semantics: Why superficiality, dependency and avoidance of commitment can be the right way to go. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 1–9, Athens, Greece. Association for Computational Linguistics. H. Ehrig, H.-J. Kreowski, U. Montanari, and G. Rozenberg, editors. 1999. Handbook of Graph Grammars and Computing by Graph Transformation: Vol. 3: Concurrency, Parallelism, and Distribution. World Scientific Publishing Co., Inc., River Edge, NJ, USA. Dan Flickinger. 2000. On building a more efficient grammar by exploiting types. Nat. Lang. Eng., 6(1):15–28. Dan Flickinger, Stephan Oepen, and Gisle Ytrestl. 2010. Wikiwoods: Syntacto-semantic annotation for English wikipedia. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), Valletta, Malta. European Language Resources Association (ELRA). Daniel Flickinger, Yi Zhang, and Valia Kordoni. 2012. Deepbank: A dynamically annotated treebank of the wall street journal. In Proceedings of the Eleventh International Workshop on Treebanks and Linguistic Theories, pages 85–96. Tsutomu Kamimura and Giora Slutzki. 1982. Transductions of dags and trees. Mathematical Systems Theory, 15(3):225–249. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423–430, Sapporo, Japan. Association for Computational Linguistics. Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural amr: Sequence-to-sequence models for parsing and generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 146–157, Vancouver, Canada. Association for Computational Linguistics. Angelika Kratzer and Irene Heim. 1998. Semantics in generative grammar. Blackwell Oxford. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: the penn treebank. Computational Linguistics, 19(2):313–330. Stephan Oepen and Jan Tore Lønning. 2006. Discriminant-based mrs banking. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC-2006), Genoa, Italy. European Language Resources Association (ELRA). ACL Anthology Identifier: L06-1214. Daniel Quernheim and Kevin Knight. 2012. Towards probabilistic acceptors and transducers for feature structures. In Proceedings of the Sixth Workshop on Syntax, Semantics and Structure in Statistical Translation, SSST-6 ’12, pages 76–85, Stroudsburg, PA, USA. Association for Computational Linguistics. 
Linfeng Song, Xiaochang Peng, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2017. Amr-to-text generation with synchronous node replacement grammar. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 7–13, Vancouver, Canada. Association for Computational Linguistics.
2018
179
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 185–196 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 185 Comprehensive Supersense Disambiguation of English Prepositions and Possessives Nathan Schneider∗ Georgetown University Jena D. Hwang IHMC Vivek Srikumar University of Utah Jakob Prange Austin Blodgett Georgetown University Sarah R. Moeller University of Colorado Boulder Aviram Stern Adi Bitan Omri Abend Hebrew University of Jerusalem Abstract Semantic relations are often signaled with prepositional or possessive marking—but extreme polysemy bedevils their analysis and automatic interpretation. We introduce a new annotation scheme, corpus, and task for the disambiguation of prepositions and possessives in English. Unlike previous approaches, our annotations are comprehensive with respect to types and tokens of these markers; use broadly applicable supersense classes rather than fine-grained dictionary definitions; unite prepositions and possessives under the same class inventory; and distinguish between a marker’s lexical contribution and the role it marks in the context of a predicate or scene. Strong interannotator agreement rates, as well as encouraging disambiguation results with established supervised methods, speak to the viability of the scheme and task. 1 Introduction Grammar, as per a common metaphor, gives speakers of a language a shared toolbox to construct and deconstruct meaningful and fluent utterances. Being highly analytic, English relies heavily on word order and closed-class function words like prepositions, determiners, and conjunctions. Though function words bear little semantic content, they are nevertheless crucial to the meaning. Consider prepositions: they serve, for example, to convey place and time (We met at/in/outside the restaurant for/after an hour), to express configurational relationships like quantity, possession, part/whole, and membership (the coats of dozens of children in the class), and to indicate semantic roles in argument structure (Grandma cooked dinner for the children ∗[email protected] (1) I was booked for/DURATION 2 nights at/LOCUS this hotel in/TIME Oct 2007 . (2) I went to/GOAL ohm after/EXPLANATION;TIME reading some of/QUANTITY;WHOLE the reviews . (3) It was very upsetting to see this kind of/SPECIES behavior especially in_front_of/LOCUS my/SOCIALREL;GESTALT four year_old . Figure 1: Annotated sentences from our corpus. vs. Grandma cooked the children for dinner). Frequent prepositions like for are maddeningly polysemous, their interpretation depending especially on the object of the preposition—I rode the bus for 5 dollars/minutes—and the governor of the prepositional phrase (PP): I Ubered/asked for $5. Possessives are similarly ambiguous: Whistler’s mother/painting/hat/death. Semantic interpretation requires some form of sense disambiguation, but arriving at a linguistic representation that is flexible enough to generalize across usages and types, yet simple enough to support reliable annotation, has been a daunting challenge (§2). This work represents a new attempt to strike that balance. Building on prior work, we argue for an approach to describing English preposition and possessive semantics with broad coverage. Given the semantic overlap between prepositions and possessives (the hood of the car vs. 
the car’s hood or its hood), we analyze them using the same inventory of semantic labels.1 Our contributions include: • a new hierarchical inventory (“SNACS”) of 50 supersense classes, extensively documented in guidelines for English (§3); • a gold-standard corpus with comprehensive annotations: all types and tokens of prepositions and possessives are disambiguated (§4; example sentences appear in figure 1); • an interannotator agreement study that 1Some uses of certain other closed-class markers— intransitive particles, subordinators, infinitive to—are also included (§3.1). 186 shows the scheme is reliable and generalizes across genres—and for the first time demonstrating empirically that the lexical semantics of a preposition can sometimes be detached from the PP’s semantic role (§5); • disambiguation experiments with two supervised classification architectures to establish the difficulty of the task (§6). 2 Background: Disambiguation of Prepositions and Possessives Studies of preposition semantics in linguistics and cognitive science have generally focused on the domains of space and time (e.g., Herskovits, 1986; Bowerman and Choi, 2001; Regier, 1996; Khetarpal et al., 2009; Xu and Kemp, 2010; Zwarts and Winter, 2000) or on motivated polysemy structures that cover additional meanings beyond core spatial senses (Brugman, 1981; Lakoff, 1987; Tyler and Evans, 2003; Lindstromberg, 2010). Possessive constructions can likewise denote a number of semantic relations, and various factors—including semantics—influence whether attributive possession in English will be expressed with of, or with ’s and possessive pronouns (the ‘genitive alternation’; Taylor, 1996; Nikiforidou, 1991; Rosenbach, 2002; Heine, 2006; Wolk et al., 2013; Shih et al., 2015). Corpus-based computational work on semantic disambiguation specifically of prepositions and possessives2 falls into two categories: the lexicographic/word sense disambiguation approach (Litkowski and Hargraves, 2005, 2007; Litkowski, 2014; Ye and Baldwin, 2007; Saint-Dizier, 2006; Dahlmeier et al., 2009; Tratz and Hovy, 2009; Hovy et al., 2010, 2011; Tratz and Hovy, 2013), and the semantic class approach (Moldovan et al., 2004; Badulescu and Moldovan, 2009; O’Hara and Wiebe, 2009; Srikumar and Roth, 2011, 2013; Schneider et al., 2015, 2016; Hwang et al., 2017, see also Müller et al., 2012 for German). The lexicographic approach can capture finer-grained meaning distinctions, at a risk of relying upon idiosyncratic and potentially incomplete dictionary definitions. The semantic class approach, which we follow here, focuses on commonalities in meaning across multiple lexical items, and aims to general2Of course, meanings marked by prepositions/possessives are to some extent captured in predicate-argument or graphbased meaning representations (e.g., Palmer et al., 2005; Fillmore and Baker, 2009; Oepen et al., 2016; Banarescu et al., 2013) and domain-centric representations like TimeML and ISO-Space (Pustejovsky et al., 2003, 2012). ize more easily to new types and usages. The most recent class-based approach to prepositions was our initial framework of 75 preposition supersenses arranged in a multiple inheritance taxonomy (Schneider et al., 2015, 2016). It was based largely on relation/role inventories of Srikumar and Roth (2013) and VerbNet (Bonial et al., 2011; Palmer et al., 2017). The framework was realized in version 3.0 of our comprehensively annotated corpus, STREUSLE3 (Schneider et al., 2016). 
However, several limitations of our approach became clear to us over time. First, as pointed out by Hwang et al. (2017), the one-label-per-token assumption in STREUSLE is flawed because it in some cases puts into conflict the semantic role of the PP with respect to a predicate, and the lexical semantics of the preposition itself. Hwang et al. (2017) suggested a solution, discussed in §3.3, but did not conduct an annotation study or release a corpus to establish its feasibility empirically. We address that gap here. Second, 75 categories is an unwieldy number for both annotators and disambiguation systems. Some are quite specialized and extremely rare in STREUSLE 3.0, which causes data sparseness issues for supervised learning. In fact, the only published disambiguation system for preposition supersenses collapsed the distinctions to just 12 labels (Gonen and Goldberg, 2016). Hwang et al. (2017) remarked that solving the aforementioned problem could remove the need for many of the specialized categories and make the taxonomy more tractable for annotators and systems. We substantiate this here, defining a new hierarchy with just 50 categories (SNACS, §3) and providing disambiguation results for the full set of distinctions. Finally, given the semantic overlap of possessive case and the preposition of, we saw an opportunity to broaden the application of the scheme to include possessives. Our reannotated corpus, STREUSLE 4.0, thus has supersense annotations for over 1000 possessive tokens that were not semantically annotated in version 3.0. We include these in our annotation and disambiguation experiments alongside reannotated preposition tokens. 3 Annotation Scheme 3.1 Lexical Categories of Interest Apart from canonical prepositions and possessives, there are many lexically and semantically overlap3https://github.com/nert-gu/streusle/ 187 ping closed-class items which are sometimes classified as other parts of speech, such as adverbs, particles, and subordinating conjunctions. The Cambridge Grammar of the English Language (Huddleston and Pullum, 2002) argues for an expansive definition of ‘preposition’ that would encompass these other categories. As a practical measure, we decided to encourage annotators to focus on the semantics of these functional items rather than their syntax, so we take an inclusive stance. Another consideration is developing annotation guidelines that can be adapted for other languages. This includes languages which have postpositions, circumpositions, or inpositions rather than prepositions; the general term for such items is adpositions.4 English possessive marking (via ’s or possessive pronouns like my) is more generally an example of case marking. Note that prepositions (4a–4c) differ in word order from possessives (4d), though semantically the object of the preposition and the possessive nominal pattern together: (4) a. eat in a restaurant b. the man in a blue shirt c. the wife of the ambassador d. the ambassador’s wife Cross-linguistically, adpositions and case marking are closely related, and in general both grammatical strategies can express similar kinds of semantic relations. This motivates a common semantic inventory for adpositions and case. We also cover multiword prepositions (e.g., out_of, in_front_of), intransitive particles (He flew away), purpose infinitive clauses (Open the door to let in some air5), prepositions with clausal complements (It rained before the party started), and idiomatic prepositional phrases (at_large). 
Our annotation guidelines give further details. 3.2 The SNACS Hierarchy The hierarchy of preposition and possessive supersenses, which we call Semantic Network of Adposition and Case Supersenses (SNACS), is shown in figure 2. It is simpler than its predecessor— Schneider et al.’s (2016) preposition supersense hierarchy—in both size and structural complexity. 4In English, ago is arguably a postposition because it follows rather than precedes its complement: five minutes ago, not *ago five minutes. 5To can be rephrased as in_order_to and have prepositional counterparts like in Open the door for some air. Circumstance 77 Temporal 0 Time 371 StartTime 28 EndTime 31 Frequency 9 Duration 91 Interval 35 Locus 846 Source 189 Goal 419 Path 49 Direction 161 Extent 42 Means 17 Manner 140 Explanation 123 Purpose 401 Participant 0 Causer 15 Agent 170 Co-Agent 65 Theme 238 Co-Theme 14 Topic 296 Stimulus 123 Experiencer 107 Originator 134 Recipient 122 Cost 48 Beneficiary 110 Instrument 30 Configuration 0 Identity 85 Species 39 Gestalt 709 Possessor 492 Whole 250 Characteristic 140 Possession 21 PartPortion 57 Stuff 25 Accompanier 49 InsteadOf 10 ComparisonRef 215 RateUnit 5 Quantity 191 Approximator 76 SocialRel 240 OrgRole 103 Figure 2: SNACS hierarchy of 50 supersenses and their token counts in the annotated corpus described in §4. Counts are of direct uses of labels, excluding uses of subcategories. Role and function positions are not distinguished (so if a token has different role and function labels, it will count toward two supersense frequencies). SNACS has 50 supersenses at 4 levels of depth; the previous hierarchy had 75 supersenses at 7 levels. The top-level categories are the same: • CIRCUMSTANCE: Circumstantial information, usually non-core properties of events (e.g., location, time, means, purpose) • PARTICIPANT: Entity playing a role in an event • CONFIGURATION: Thing, usually an entity or property, involved in a static relationship to some other entity The 3 subtrees loosely parallel adverbial adjuncts, event arguments, and adnominal complements, respectively. The PARTICIPANT and CIRCUMSTANCE subtrees primarily reflect semantic relationships prototypical to verbal arguments/adjuncts and were inspired by VerbNet’s thematic role hierarchy (Palmer et al., 2017; Bonial et al., 2011). Many CIRCUMSTANCE subtypes, like LOCUS (the concrete or abstract location of something), can be governed by eventive and non-eventive nominals as well as verbs: eat in the restaurant, a party in the restaurant, a table in the restaurant. CONFIGURATION mainly encompasses non-spatiotemporal relations holding between entities, such as quantity, possession, and part/whole. Unlike the previous hierarchy, SNACS does not use multiple inheritance, so there is no overlap between the 3 regions. The supersenses can be understood as roles in fundamental types of scenes (or schemas) such as: LOCATION—THEME is located at LO188 CUS; MOTION—THEME moves from SOURCE along PATH to GOAL; TRANSITIVE ACTION— AGENT acts on THEME, perhaps using an INSTRUMENT; POSSESSION—POSSESSION belongs to POSSESSOR; TRANSFER—THEME changes possession from ORIGINATOR to RECIPIENT, perhaps with COST; PERCEPTION—EXPERIENCER is mentally affected by STIMULUS; COGNITION— EXPERIENCER contemplates TOPIC; COMMUNICATION—information (TOPIC) flows from ORIGINATOR to RECIPIENT, perhaps via an INSTRUMENT. For AGENT, CO-AGENT, EXPERIENCER, ORIGINATOR, RECIPIENT, BENEFICIARY, POSSESSOR, and SOCIALREL, the object of the preposition is prototypically animate. 
Because prepositions and possessives cover a vast swath of semantic space, limiting ourselves to 50 categories means we need to address a great many nonprototypical, borderline, and special cases. We have done so in a 75-page annotation manual with over 400 example sentences (Schneider et al., 2018). Finally, we note that the Universal Semantic Tagset (Abzianidze and Bos, 2017) defines a crosslinguistic inventory of semantic classes for content and function words. SNACS takes a similar approach to prepositions and possessives, which in Abzianidze and Bos’s (2017) specification are simply tagged REL, which does not disambiguate the nature of the relational meaning. Our categories can thus be understood as refinements to REL. 3.3 Adopting the Construal Analysis Hwang et al. (2017) have pointed out the perils of teasing apart and generalizing preposition semantics so that each use has a clear supersense label. One key challenge they identified is that the preposition itself and the situation as established by the verb may suggest different labels. For instance: (5) a. Vernon works at Grunnings. b. Vernon works for Grunnings. The semantics of the scene in (5a, 5b) is the same: it is an employment relationship, and the PP contains the employer. SNACS has the label ORGROLE for this purpose.6 At the same time, at in (5a) strongly suggests a locational relationship, which would correspond to the label LOCUS; consistent with this 6ORGROLE is defined as “Either a party in a relation between an organization/institution and an individual who has a stable affiliation with that organization, such as membership or a business relationship.” hypothesis, Where does Vernon work? is a perfectly good way to ask a question that could be answered by the PP. In this example, then, there is overlap between locational meaning and organizationalbelonging meaning. (5b) is similar except the for suggests a notion of BENEFICIARY: the employee is working on behalf of the employer. Annotators would face a conundrum if forced to pick a single label when multiple ones appear to be relevant. Schneider et al. (2016) handled overlap via multiple inheritance, but entertaining a new label for every possible case of overlap is impractical, as this would result in a proliferation of supersenses. Instead, Hwang et al. (2017) suggest a construal analysis in which the lexical semantic contribution, or henceforth the function, of the preposition itself may be distinct from the semantic role or relation mediated by the preposition in a given sentence, called the scene role. The notion of scene role is a widely accepted idea that underpins the use of semantic or thematic roles: semantics licensed by the governor7 of the prepositional phrase dictates its relationship to the prepositional phrase. The innovative claim is that, in addition to a preposition’s relationship with its head, the prepositional choice introduces another layer of meaning or construal that brings additional nuance, creating the difficulty we see in the annotation of (5a, 5b). Construal is notated by ROLE;FUNCTION. Thus, (5a) would be annotated ORGROLE;LOCUS and (5b) as ORGROLE;BENEFICIARY to expose their common truth-semantic meaning but slightly different portrayals owing to the different prepositions. Another useful application of the construal analysis is with the verb put, which can combine with any locative PP to express a destination: (6) Put it on/by/behind/on_top_of/. . . the door. 
GOAL;LOCUS I.e., the preposition signals a LOCUS, but the door serves as the GOAL with respect to the scene. This approach also allows for resolution of various se7By “governor” of the preposition or prepositional phrase, we mean the head of the phrase to which the PP attaches in a constituency representation. In a dependency representation, this would be the head of the preposition itself or of the object of the preposition depending on which convention is used for PP headedness: e.g., the preposition heads the PP in CoNLL and Stanford Dependencies whereas the object is the head in Universal Dependencies. The governor is most often a verb or noun. Where the PP is a predicate complement (e.g. Vernon is with Grunnings), there is no governor to specify the nature of the scene, so annotators must rely on world knowledge and context to determine the scene. 189 Train Dev Test Total Documents 347 192 184 723 Sentences 2,723 554 535 3,812 Tokens 44,804 5,394 5,381 55,579 Annotated targets 4,522 453 480 5,455 Role = function 3,101 291 310 3,702 P or PP 3,397 341 366 4,104 Multiword unit 256 25 24 305 Infinitive to 201 26 20 247 Genitive clitic (’s) 52 6 1 59 Possessive pronoun 872 80 93 1,045 Attested SNACS labels 47 46 44 47 Unique scene roles 46 43 41 47 Unique functions 41 38 37 41 Unique pairs 167 79 87 177 Role = function 41 33 34 41 Table 1: Counts for the data splits used in our experiments. mantic phenomena including perceptual scenes (e.g., I care about education, where about is both the topic of cogitation and perceptual stimulus of caring: STIMULUS;TOPIC), and fictive motion (Talmy, 1996), where static location is described using motion verbiage (as in The road runs through the forest: LOCUS;PATH). Both role and function slots are filled by supersenses from the SNACS hierarchy. Annotators have the option of using distinct supersenses for the role and function; in general it is not a requirement (though we stipulate that certain SNACS supersenses can only be used as the role). When the same label captures both role and function, we do not repeat it: Vernon lives in/LOCUS England. Figure 1 shows some real examples from our corpus. We apply the construal analysis in SNACS annotation of our corpus to test its feasibility. It has proved useful not only for prepositions, but also possessives, where the general sense of possession may overlap with other scene relations, like creator/initial-possessor (ORIGINATOR): Da Vinci’s/ORIGINATOR;POSSESSOR sculptures. 4 Annotated Reviews Corpus We applied the SNACS annotation scheme (§3) to prepositions and possessives in the STREUSLE corpus (§2), a collection of online consumer reviews taken from the English Web Treebank (Bies et al., 2012). The sentences from the English Web Treebank also comprise the primary reference treebank for English Universal Dependencies (UD; Nivre et al., 2016), and we bundle the UD version 2 syntax alongside our annotations. Table 1 shows the total number of tokens present and those that we annotated. Altogether, 5,455 tokens were annotated for scene role and function. Rank Role Function 1 LOCUS 636 LOCUS 780 2 POSSESSOR 381 GESTALT 699 ⋮ ⋮ ⋮ last DIRECTION 1 POSSESSION 2 Table 2: Most and least frequent role and function labels. The new hierarchy and annotation guidelines were developed by consensus. The original preposition supersense annotations were placed in a spreadsheet and discussed. While most tokens were unambiguously annotated, some cases required a new analysis throughout the corpus. 
For example, the functions of for were so broad that they needed to be (manually) clustered before mapping clusters onto hierarchy labels. Unusual or rare contexts also presented difficulties. Where the correct supersense remained unclear, specific instructions and examples were included in the guidelines. Possessives were not covered by the original preposition supersense annotations, and thus were annotated from scratch.8 Special labels were applied to tokens deemed not to be prepositions or possessives evoking semantic relations, including uses of the infinitive marker that do not fall within the scope of SNACS (487 tokens: a majority of infinitives) and preposition-initial discourse expressions (e.g. after_all) and coordinating conjunctions (as_well_as).9 Other tokens requiring special labels are the opaque possessive slot in a multiword idiom (12 tokens), and tokens where unintelligble, incomplete, marginal, or nonnative usage made it impossible to assign a supersense (48 tokens). Table 2 shows the most and least common labels occurring as scene role and function. Three labels never appear in the annotated corpus: TEMPORAL from the CIRCUMSTANCE hierarchy, and PARTICIPANT and CONFIGURATION which are both the highest supersense in their respective hierarchies. While all remaining supersenses are attested as scene roles, there are some that never occur as functions, such as ORIGINATOR, which is most often realized as POSSESSOR or SOURCE, and EXPERIENCER. It is interesting to note that every subtype of CIRCUMSTANCE (except TEMPORAL) appears as both scene role and function, whereas many of the subtypes of the other two hierarchies are lim8Blodgett and Schneider (2018) detail the extension of the scheme to possessives. 9In the corpus, lexical expression tokens appear alongside a lexical category indicating which inventory of supersenses, if any, applies. SNACS-annotated units are those with ADP (adposition), PP, PRON.POSS (possessive pronoun), etc., whereas DISC (discourse) and CCONJ expressions do not receive any supersense. Refer to the STREUSLE README for details. 190 ited to either role or function. This reflects our view that prepositions primarily capture circumstantial notions such as space and time, but have been extended to cover other semantic relations.10 5 Interannotator Agreement Study Because the online reviews corpus was so central to the development of our guidelines, we sought to estimate the reliability of the annotation scheme on a new corpus in a new genre. We chose SaintExupéry’s novella The Little Prince, which is readily available in many languages and has been annotated with semantic representations such as AMR (Banarescu et al., 2013). The genre is markedly different from online reviews—it is quite literary, and employs archaic or poetic figures of speech. It is also a translation from French, contributing to the markedness of the language. This text is therefore a challenge for an annotation scheme based on colloquial contemporary English. We addressed this issue by running 3 practice rounds of annotation on small passages from The Little Prince, both to assess whether the scheme was applicable without major guidelines changes and to prepare the annotators for this genre. For the final annotation study, we chose chapters 4 and 5, in which 242 markables of 52 types were identified heuristically (§6.2). The types of, to, in, as, from, and for, as well as possessives, occurred at least 10 times. 
Annotators had the option to mark units as false positives using special labels (see §4) in addition to expressing uncertainty about the unit. For the annotation process, we adapted the open source web-based annotation tool UCCAApp (Abend et al., 2017) to our workflow, by extending it with a type-sensitive ranking module for the list of categories presented to the annotators. Annotators. Five annotators (A, B, C, D, E), all authors of this paper, took part in this study. All are computational linguistics researchers with advanced training in linguistics. Their involvement in the development of the scheme falls on a spectrum, with annotator A being the most active figure in guidelines development, and annotator E not being 10All told, 41 supersenses are attested as both role and function for the same token, and there are 136 unique construal combinations where the role differs from the function. Only four supersenses are never found in such a divergent construal: EXPLANATION, SPECIES, STARTTIME, RATEUNIT. Except for RATEUNIT which occurs only 5 times, their narrow use does not arise because they are rare. EXPLANATION, for example, occurs over 100 times, more than many labels which often appear in construal. Labels Role Function Exact 47 74.4% 81.3% Depth-3 43 75.0% 81.8% Depth-2 26 79.9% 87.4% Depth-1 3 92.6% 93.9% Table 3: Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (“Exact” means no coarsening). “Labels” refers to the number of distinct labels that annotators could have provided at that level of coarsening. Excludes tokens where at least one annotator assigned a nonsemantic label. involved in developing the guidelines and learning the scheme solely from reading the manual. Annotators A, B, and C are native speakers of English, while Annotators D and E are nonnative but highly fluent speakers. Results. In the Little Prince sample, 40 out of 47 possible supersenses were applied at least once by some annotator; 36 were applied at least once by a majority of annotators; and 33 were applied at least once by all annotators. APPROXIMATOR, COTHEME, COST, INSTEADOF, INTERVAL, RATEUNIT, and SPECIES were not used by any annotator. To evaluate interannotator agreement, we excluded 26 tokens for which at least one annotator has assigned a non-semantic label, considering only the 216 tokens that were identified correctly as SNACS targets and were clear to all annotators. Despite varying exposure to the scheme, there is no obvious relationship between annotators’ backgrounds and their agreement rates.11 Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators. Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).12 All annotators agree on the role for 119, and on the function for 139 tokens. Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter. This is expected considering the definition of construal: the function of an adposition is more lexical and less contextdependent, whereas the role depends on the context (the scene) and can be highly idiomatic (§3.3). The supersense hierarchy allows us to analyze agreement at different levels of granularity (rows 11See table 7 in appendix A for a more detailed description of the annotators’ backgrounds and pairwise IAA results. 
12Average of pairwise Cohen’s k is 0.733 and 0.799 on, respectively, role and function, suggesting strong agreement. However, it is worth noting that annotators selected labels from a ranked list, with the ranking determined by preposition type. The model of chance agreement underlying k does not take the identity of the preposition into account, and thus likely underestimates the probability of chance agreement. 191 2–4 in table 3; see also confusion matrix in supplement). Coarser-grained analyses naturally give better agreement, with depth-1 coarsening into only 3 categories. Results show that most confusions are local with respect to the hierarchy. 6 Disambiguation Systems We now describe systems that identify and disambiguate SNACS-annotated prepositions and possessives in two steps. Target identification heuristics (§6.2) first determine which tokens (single-word or multiword) should receive a SNACS supersense. A supervised classifier then predicts a supersense analysis for each identified target. The research objectives are (a) to study the ability of statistical models to learn roles and functions of prepositions and possessives, and (b) to compare two different modeling strategies (feature-rich and neural), and the impact of syntactic parsing. 6.1 Experimental Setup Our experiments use the reviews corpus described in §4. We adopt the official training/development/ test splits of the Universal Dependencies (UD) project; their sizes are presented in table 1. All systems are trained on the training set only and evaluated on the test set; the development set was used for tuning hyperparameters. Gold tokenization was used throughout. Only targets with a semantic supersense analysis involving labels from figure 2 were included in training and evaluation—i.e., tokens with special labels (see §4) were excluded. To test the impact of automatic syntactic parsing, models in the auto syntax condition were trained and evaluated on automatic lemmas, POS tags, and Basic Universal Dependencies (according to the v1 standard) produced by Stanford CoreNLP version 3.8.0 (Manning et al., 2014).13 Named entity tags from the default 12-class CoreNLP model were used in all conditions. 6.2 Target Identification §3.1 explains that the categories in our scheme apply not only to (transitive) adpositions in a very narrow definition of the term, but also to lexical items that traditionally belong to variety of syntactic classes (such as adverbs and particles), as 13The CoreNLP parser was trained on all 5 genres of the English Web Treebank—i.e., a superset of our training set. Gold syntax follows the UDv2 standard, whereas the classifiers in the auto syntax conditions are trained and tested with UDv1 parses produced by CoreNLP. well as possessive case markers and multiword expressions. 61.2% of the units annotated in our corpus are adpositions according to gold POS annotation, 20.2% are possessives, and 18.6% belong to other POS classes. Furthermore, 14.1% of tokens labeled as adpositions or possessives are not annotated because they are part of a multiword expression (MWE). It is therefore neither obvious nor trivial to decide which tokens and groups of tokens should be selected as targets for SNACS annotation. To facilitate both manual annotation and automatic classification, we developed heuristics for identifying annotation targets. 
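Schematically, the target-identification pipeline has the shape sketched below; the concrete components, namely the MWE lists, the high-recall part-of-speech match, and the additional lexical filters, are all derived from the training data and are described next. The lists, POS set, and example sentence in the sketch are illustrative placeholders rather than the ones used in our experiments.

```python
# Schematic target-identification pipeline. The MWE lists and the POS-based
# candidate filter are illustrative placeholders; the real lists and the five
# additional lexical filters are learned from the STREUSLE training split.
MWE_WHITELIST = {("out", "of"), ("in", "front", "of")}   # prepositional MWEs
MWE_BLACKLIST = {("take", "care", "of")}                 # non-prepositional MWEs
CANDIDATE_POS = {"ADP", "PART", "SCONJ", "ADV"}          # high-recall POS match

def identify_targets(tokens):
    """tokens: list of (lemma, upos, is_possessive) triples.
    Returns annotation targets as (start, end) token offsets."""
    targets, i, n = [], 0, len(tokens)
    while i < n:
        matched = False
        # 1) Greedily match known multiword expressions (longest first).
        for mwe in sorted(MWE_WHITELIST | MWE_BLACKLIST, key=len, reverse=True):
            k = len(mwe)
            if tuple(t[0] for t in tokens[i:i + k]) == mwe:
                if mwe in MWE_WHITELIST:      # prepositional MWEs become targets
                    targets.append((i, i + k))
                i += k                         # blacklisted MWEs are skipped
                matched = True
                break
        if matched:
            continue
        # 2) Single-word candidates: possessives, or a high-recall POS match
        #    (the lexical filters described below would apply at this point).
        lemma, upos, is_poss = tokens[i]
        if is_poss or upos in CANDIDATE_POS:
            targets.append((i, i + 1))
        i += 1
    return targets

# Lemmatized example: "I went out of my way"
sent = [("I", "PRON", False), ("go", "VERB", False), ("out", "ADP", False),
        ("of", "ADP", False), ("my", "PRON", True), ("way", "NOUN", False)]
print(identify_targets(sent))   # -> [(2, 4), (4, 5)]
```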
The algorithm first scans the sentence for known multiword expressions, using a blacklist of non-prepositional MWEs that contain preposition tokens (e.g., take_care_of) and a whitelist of prepositional MWEs (multiword prepositions like out_of and PP idioms like in_town). Both lists were constructed from the training data. From segments unaffected by the MWE heuristics, single-word candidates are identified by matching a high-recall set of parts of speech, then filtered through 5 different heuristics for adpositions, possessives, subordinating conjunctions, adverbs, and infinitivals. Most of these filters are based on lexical lists learned from the training portion of the STREUSLE corpus, but there are some specific rules for infinitivals that handle forsubjects (I opened the door for Steve to take out the trash—to, but not for, should receive a supersense) and comparative constructions with too and enough (too short to ride). 6.3 Classification The next step of disambiguation is predicting the role and function labels. We explore two different modeling strategies. Feature-rich Model. Our first model is based on the features for preposition relation classification developed by Srikumar and Roth (2013), which were themselves extended from the preposition sense disambiguation features of Hovy et al. (2010). We briefly describe the feature set here, and refer the reader to the original work for further details. At a high level, it consists of features extracted from selected neighboring words in the dependency tree (i.e., heuristically identified governor and object) and in the sentence (previous verb, noun and adjective, and next noun). In addition, all these features are also conjoined with the lemma of the rightmost word in the preposition token to capture 192 target-specific interactions with the labels. The features extracted from each neighboring word are listed in the supplementary material. Using these features extracted from targets, we trained two multi-class SVM classifiers to predict the role and function labels using the LIBLINEAR library (Fan et al., 2008). Neural Model. Our second classifier is a multilayer perceptron (MLP) stacked on top of a BiLSTM. For every sentence, tokens are first embedded using a concatenation of fixed pre-trained word2vec (Mikolov et al., 2013) embeddings of the word and the lemma, and an internal embedding vector, which is updated during training.14 Token embeddings are then fed into a 2-layer BiLSTM encoder, yielding a list of token representations. For each identified target unit u, we extract its first token, and its governor and object headword. For each of these tokens, we construct a feature vector by concatenating its token representation with embeddings of its (1) language-specific POS tag, (2) UD dependency label, and (3) NER label. We additionally concatenate embeddings of u’s lexical category, a syntactic label indicating whether u is predicative/stranded/subordinating/none of these, and an indicator of whether either of the two tokens following the unit is capitalized. All these embeddings, as well as internal token embedding vectors, are considered part of the model parameters and are initialized randomly using the Xavier initialization (Glorot and Bengio, 2010). A NONE label is used when the corresponding feature is not given, both in training and at test time. 
The concatenated feature vector for u is fed into two separate 2-layered MLPs, followed by a separate softmax layer that yields the predicted probabilities for the role and function labels. We tuned hyperparameters on the development set to maximize F-score (see supplementary material). We used the cross-entropy loss function, optimizing with simple gradient ascent for 80 epochs with minibatches of size 20. Inverted dropout was used during training. The model is implemented with the DyNet library (Neubig et al., 2017). The model architecture is largely comparable to that of Gonen and Goldberg (2016), who experimented with a coarsened version of STREUSLE 3.0. The main difference is their use of unlabeled multilingual datasets to improve pre14Word2vec is pre-trained on the Google News corpus. Zero vectors are used where vectors are not available. Syntax P R F gold 88.8 89.6 89.2 auto 86.0 85.8 85.9 Table 4: Target identification results for disambiguation. diction by exploiting the differences in preposition ambiguities across languages. 6.4 Results & Analysis Following the two-stage disambiguation pipeline (i.e. target identification and classification), we separate the evaluation across the phases. Table 4 reports the precision, recall, and F-score (P/R/F) of the target identification heuristics. Table 5 reports the disambiguation performance of both classifiers with gold (left) and automatic target identification (right). We evaluate each classifier along three dimensions—role and function independently, and full (i.e. both role and function together). When we have the gold targets, we only report accuracy because precision and recall are equal. With automatically identified targets, we report P/R/F for each dimension. Both tables show the impact of syntactic parsing on quality. The rest of this section presents analyses of the results along various axes. Target identification. The identification heuristics described in §6.2 achieve an F1 score of 89.2% on the test set using gold syntax.15 Most false positives (47/54=87%) can be ascribed to tokens that are part of a (non-adpositional or larger adpositional) multiword expression. 9 of the 50 false negatives (18%) are rare multiword expressions not occurring in the training data and there are 7 partially identified ones, which are counted as both false positives and false negatives. Automatically generated parse trees slightly decrease quality (table 4). Target identification, being the first step in the pipeline, imposes an upper bound on disambiguation scores. We observe this degradation when we compare the Gold ID and the Auto ID blocks of table 5, where automatically identified targets decrease F-score by about 10 points in all settings.16 Classification. Along with the statistical classifier results in table 5, we also report performance 15Our evaluation script counts tokens that received special labels in the gold standard (see §4) as negative examples of SNACS targets, with the exception of the tokens labeled as unintelligible/nonnative/etc., which are not counted toward or against target ID performance. 16A variant of the target ID module, optimized for recall, is used as preprocessing for the agreement study discussed in §5. With this setting, the heuristic achieves an F1 score of 90.2% (P=85.3%, R=95.6%) on the test set. 193 Gold ID Auto ID Role Func. Full Role Func. Full Syntax Acc. Acc. Acc. 
P R F P R F P R F Most frequent N/A 40.6 53.3 37.9 37.0 37.3 37.1 49.8 50.2 50.0 34.3 34.6 34.4 Neural gold 71.7 82.5 67.5 62.0 62.5 62.2 73.1 73.8 73.4 58.7 59.2 58.9 Feature-rich gold 73.5 81.0 70.0 62.0 62.5 62.2 70.7 71.2 71.0 59.3 59.8 59.5 Neural auto 67.7 78.5 64.4 56.4 56.2 56.3 66.8 66.7 66.7 53.7 53.5 53.6 Feature-rich auto 67.9 79.4 65.2 58.2 58.1 58.2 66.8 66.7 66.7 55.7 55.6 55.7 Table 5: Overall performance of SNACS disambiguation systems on the test set. Results are reported for the role supersense (Role), the function supersense (Func.), and their conjunction (Full). All figures are percentages. Left: Accuracies with gold standard target identification (480 targets). Right: Precision, recall, and F1 with automatic target identification (§6.2 and table 4). for the most frequent baseline, which selects the most frequent role–function label pair given the (gold) lemma according to the training data. Note that all learned classifiers, across all settings, outperform the most frequent baseline for both role and function prediction. The feature-rich and the neural models perform roughly equivalently despite the significantly different modeling strategies. Function and scene role performance. Function prediction is consistently more accurate than role prediction, with roughly a 10-point gap across all systems. This mirrors a similar effect in the interannotator agreement scores (see §5), and may be due to the reduced ambiguity of functions compared to roles (as attested by the baseline’s higher accuracy for functions than roles), and by the more literal nature of function labels, as opposed to role labels that often require more context to determine. Impact of automatic syntax. Automatic syntactic analysis decreases scores by 4 to 7 points, most likely due to parsing errors which affect the identification of the preposition’s object and governor. In the auto ID/auto syntax condition, the worse target ID performance with automatic parses (noted above) contributes to lower classification scores. 6.5 Errors & Confusions We can use the structure of the SNACS hierarchy to probe classifier performance. As with the interannotator study, we evaluate the accuracy of predicted labels when they are coarsened post hoc by moving up the hierarchy to a specific depth. Table 6 shows this for the feature-rich classifier for different depths, with depth-1 representing the coarsening of the labels into the 3 root labels. Depth-4 (Exact) represents the full results in table 5. These results show that the classifiers often mistake a label for another that is nearby in the hierarchy. Examining the most frequent confusions of both models, we observe that LOCUS is overpredicted Labels Role Function Exact 47 67.9% 79.4% Depth-3 43 67.9% 79.6% Depth-2 26 76.2% 86.2% Depth-1 3 86.0% 93.8% Table 6: Accuracy of the feature-rich model (gold identification and syntax) on the test set (480 tokens) with different levels of hierarchy coarsening of its output. “Labels” refers to the number of labels in the training set after coarsening. (which makes sense as it is most frequent overall), and SOCIALROLE–ORGROLE and GESTALT– POSSESSOR are often confused (they are close in the hierarchy: one inherits from the other). 7 Conclusion This paper introduced a new approach to comprehensive analysis of the semantics of prepositions and possessives in English, backed by a thoroughly documented hierarchy and annotated corpus. We found good interannotator agreement and provided initial supervised disambiguation results. 
We expect that future work will develop methods to scale the annotation process beyond requiring highly trained experts; bring this scheme to bear on other languages; and investigate the relationship of our scheme to more structured semantic representations, which could lead to more robust models. Our guidelines, corpus, and software are available at https://github.com/nert-gu/streusle/ blob/master/ACL2018.md. Acknowledgments We thank Oliver Richardson, whose codebase we adapted for this project; Na-Rae Han, Archna Bhatia, Tim O’Gorman, Ken Litkowski, Bill Croft, and Martha Palmer for helpful discussions and support; and anonymous reviewers for useful feedback. This research was supported in part by DTRA HDTRA116-1-0002/Project #1553695, by DARPA 15-18CwC-FP-032, and by grant 2016375 from the United States–Israel Binational Science Foundation (BSF), Jerusalem, Israel. 194 References Omri Abend, Shai Yerushalmi, and Ari Rappoport. 2017. UCCAApp: Web-application for syntactic and semantic phrase-based annotation. In Proc. of ACL 2017, System Demonstrations, pages 109–114, Vancouver, Canada. Lasha Abzianidze and Johan Bos. 2017. Towards universal semantic tagging. In Proc. of IWCS, Montpellier, France. Adriana Badulescu and Dan Moldovan. 2009. A Semantic Scattering model for the automatic interpretation of English genitives. Natural Language Engineering, 15(2):215–239. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proc. of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Ann Bies, Justin Mott, Colin Warner, and Seth Kulick. 2012. English Web Treebank. Technical Report LDC2012T13, Linguistic Data Consortium, Philadelphia, PA. Austin Blodgett and Nathan Schneider. 2018. Semantic supersenses for English possessives. In Proc. of LREC, pages 1529–1534, Miyazaki, Japan. Claire Bonial, William Corvey, Martha Palmer, Volha V. Petukhova, and Harry Bunt. 2011. A hierarchical unification of LIRICS and VerbNet semantic roles. In Fifth IEEE International Conference on Semantic Computing, pages 483–489, Palo Alto, CA, USA. Melissa Bowerman and Soonja Choi. 2001. Shaping meanings for language: universal and languagespecific in the acquisition of spatial semantic categories. In Melissa Bowerman and Stephen Levinson, editors, Language Acquisition and Conceptual Development, pages 475–511. Cambridge University Press, Cambridge, UK. Claudia Brugman. 1981. The story of ‘over’: polysemy, semantics and the structure of the lexicon. MA thesis, University of California, Berkeley, Berkeley, CA. Published New York: Garland, 1981. Daniel Dahlmeier, Hwee Tou Ng, and Tanja Schultz. 2009. Joint learning of preposition senses and semantic roles of prepositional phrases. In Proc. of EMNLP, pages 450–458, Suntec, Singapore. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: a library for large linear classification. Journal of Machine Learning Research, 9(Aug):1871–1874. Charles J. Fillmore and Collin Baker. 2009. A frames approach to semantic analysis. In Bernd Heine and Heiko Narrog, editors, The Oxford Handbook of Linguistic Analysis, pages 791–816. Oxford University Press, Oxford, UK. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proc. 
of AISTATS, pages 249–256, Chia Laguna, Sardinia, Italy. Hila Gonen and Yoav Goldberg. 2016. Semi supervised preposition-sense disambiguation using multilingual data. In Proc. of COLING, pages 2718–2729, Osaka, Japan. Bernd Heine. 2006. Possession: Cognitive Sources, Forces, and Grammaticalization. Cambridge University Press, Cambridge, UK. Annette Herskovits. 1986. Language and spatial cognition: an interdisciplinary study of the prepositions in English. Cambridge University Press, Cambridge, UK. Dirk Hovy, Stephen Tratz, and Eduard Hovy. 2010. What’s in a preposition? Dimensions of sense disambiguation for an interesting word class. In Coling 2010: Posters, pages 454–462, Beijing, China. Dirk Hovy, Ashish Vaswani, Stephen Tratz, David Chiang, and Eduard Hovy. 2011. Models and training for unsupervised preposition sense disambiguation. In Proc. of ACL-HLT, pages 323–328, Portland, Oregon, USA. Rodney Huddleston and Geoffrey K. Pullum, editors. 2002. The Cambridge Grammar of the English Language. Cambridge University Press, Cambridge, UK. Jena D. Hwang, Archna Bhatia, Na-Rae Han, Tim O’Gorman, Vivek Srikumar, and Nathan Schneider. 2017. Double trouble: the problem of construal in semantic annotation of adpositions. In Proc. of *SEM, pages 178–188, Vancouver, Canada. Naveen Khetarpal, Asifa Majid, and Terry Regier. 2009. Spatial terms reflect near-optimal spatial categories. In Proc. of the 31st Annual Conference of the Cognitive Science Society, pages 2396–2401, Amsterdam. George Lakoff. 1987. Women, fire, and dangerous things: what categories reveal about the mind. University of Chicago Press, Chicago. Seth Lindstromberg. 2010. English Prepositions Explained, revised edition. John Benjamins, Amsterdam. Ken Litkowski. 2014. Pattern Dictionary of English Prepositions. In Proc. of ACL, pages 1274–1283, Baltimore, Maryland, USA. 195 Ken Litkowski and Orin Hargraves. 2005. The Preposition Project. In Proc. of the Second ACL-SIGSEM Workshop on the Linguistic Dimensions of Prepositions and their Use in Computational Linguistics Formalisms and Applications, pages 171–179, Colchester, Essex, UK. Ken Litkowski and Orin Hargraves. 2007. SemEval2007 Task 06: Word-Sense Disambiguation of Prepositions. In Proc. of SemEval, pages 24–29, Prague, Czech Republic. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proc. of ACL: System Demonstrations, pages 55–60, Baltimore, Maryland, USA. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. Dan Moldovan, Adriana Badulescu, Marta Tatu, Daniel Antohe, and Roxana Girju. 2004. Models for the semantic classification of noun phrases. In HLTNAACL 2004: Workshop on Computational Lexical Semantics, pages 60–67, Boston, Massachusetts, USA. Antje Müller, Claudia Roch, Tobias Stadtfeld, and Tibor Kiss. 2012. The annotation of preposition senses in German. In Britta Stolterfoht and Sam Featherston, editors, Empirical Approaches to Linguistic Theory: Studies in Meaning and Structure, pages 63–82. Walter de Gruyter, Berlin. 
Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. DyNet: The dynamic neural network toolkit. arXiv:1701.03980. Kiki Nikiforidou. 1991. The meanings of the genitive: a case study in semantic structure and semantic change. Cognitive Linguistics, 2(2):149–205. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajiˇc, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: a multilingual treebank collection. In Proc. of LREC, pages 1659– 1666, Portorož, Slovenia. Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkova, Dan Flickinger, Jan Hajic, Angelina Ivanova, and Zdenka Uresova. 2016. Towards comparability of linguistic graph banks for semantic parsing. In Proc. of LREC, pages 3991– 3995, Paris, France. Tom O’Hara and Janyce Wiebe. 2009. Exploiting semantic role resources for preposition disambiguation. Computational Linguistics, 35(2):151–184. Martha Palmer, Claire Bonial, and Jena D. Hwang. 2017. VerbNet: Capturing English verb behavior, meaning and usage. In Susan E. F. Chipman, editor, The Oxford Handbook of Cognitive Science, pages 315–336. Oxford University Press. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: an annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106. James Pustejovsky, José M. Castaño, Robert Ingria, Roser Saurí, Robert J. Gaizauskas, Andrea Setzer, Graham Katz, and Dragomir R. Radev. 2003. TimeML: Robust specification of event and temporal expressions in text. In IWCS-5, Fifth International Workshop on Computational Semantics, Tilburg, Netherlands. James Pustejovsky, Jessica Moszkowicz, and Marc Verhagen. 2012. A linguistically grounded annotation language for spatial information. TAL, 53(2):87– 113. Terry Regier. 1996. The human semantic potential: spatial language and constrained connectionism. MIT Press, Cambridge, MA. Anette Rosenbach. 2002. Genitive variation in English: conceptual factors in synchronic and diachronic studies. Mouton de Gruyter, Berlin. Patrick Saint-Dizier. 2006. PrepNet: a multilingual lexical description of prepositions. In Proc. of LREC, volume 6, pages 1021–1026, Genoa, Italy. Nathan Schneider, Jena D. Hwang, Archna Bhatia, NaRae Han, Vivek Srikumar, Tim O’Gorman, Sarah R. Moeller, Omri Abend, Austin Blodgett, and Jakob Prange. 2018. Adposition and Case Supersenses v2: Guidelines for English. arXiv:1704.02134. Nathan Schneider, Jena D. Hwang, Vivek Srikumar, Meredith Green, Abhijit Suresh, Kathryn Conger, Tim O’Gorman, and Martha Palmer. 2016. A corpus of preposition supersenses. In Proc. of LAW X – the 10th Linguistic Annotation Workshop, pages 99– 109, Berlin, Germany. Nathan Schneider, Vivek Srikumar, Jena D. Hwang, and Martha Palmer. 2015. A hierarchy with, of, and for preposition supersenses. In Proc. of The 9th Linguistic Annotation Workshop, pages 112–123, Denver, Colorado, USA. 196 Stephanie Shih, Jason Grafmiller, Richard Futrell, and Joan Bresnan. 2015. Rhythm’s role in genitive construction choice in spoken English. 
In Ralf Vogel and Ruben van de Vijver, editors, Rhythm in cognition and grammar: a Germanic perspective, pages 207–234. De Gruyter Mouton, Berlin. Vivek Srikumar and Dan Roth. 2011. A joint model for extended semantic role labeling. In Proc. of EMNLP, pages 129–139, Edinburgh, Scotland, UK. Vivek Srikumar and Dan Roth. 2013. Modeling semantic relations expressed by prepositions. Transactions of the Association for Computational Linguistics, 1:231–242. Leonard Talmy. 1996. Fictive motion in language and “ception”. In Paul Bloom, Mary A. Peterson, Nadel Lynn, and Merrill F. Garrett, editors, Language and Space, pages 211–276. MIT Press, Cambridge, MA. John R. Taylor. 1996. Possessives in English: An Exploration in Cognitive Grammar. Clarendon Press, Oxford, UK. Stephen Tratz and Dirk Hovy. 2009. Disambiguation of preposition sense using linguistically motivated features. In Proc. of NAACL-HLT Student Research Workshop and Doctoral Consortium, pages 96–100, Boulder, Colorado. Stephen Tratz and Eduard Hovy. 2013. Automatic interpretation of the English possessive. In Proc. of ACL, pages 372–381, Sofia, Bulgaria. Andrea Tyler and Vyvyan Evans. 2003. The Semantics of English Prepositions: Spatial Scenes, Embodied Meaning and Cognition. Cambridge University Press, Cambridge, UK. Christoph Wolk, Joan Bresnan, Anette Rosenbach, and Benedikt Szmrecsanyi. 2013. Dative and genitive variability in Late Modern English: Exploring crossconstructional variation and change. Diachronica, 30(3):382–419. Yang Xu and Charles Kemp. 2010. Constructing spatial concepts from universal primitives. In Proc. of CogSci, pages 346–351, Portland, Oregon. Patrick Ye and Timothy Baldwin. 2007. MELB-YB: Preposition sense disambiguation using rich semantic features. In Proc. of SemEval, pages 241–244, Prague, Czech Republic. Joost Zwarts and Yoad Winter. 2000. Vector space semantics: a model-theoretic analysis of locative prepositions. Journal of Logic, Language and Information, 9:169–211.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1938–1947 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1938 A Distributional and Orthographic Aggregation Model for English Derivational Morphology Daniel Deutsch∗, John Hewitt∗and Dan Roth Department of Computer and Information Science University of Pennsylvania {ddeutsch,johnhew,danroth}@seas.upenn.edu Abstract Modeling derivational morphology to generate words with particular semantics is useful in many text generation tasks, such as machine translation or abstractive question answering. In this work, we tackle the task of derived word generation. That is, given the word “run,” we attempt to generate the word “runner” for “someone who runs.” We identify two key problems in generating derived words from root words and transformations: suffix ambiguity and orthographic irregularity. We contribute a novel aggregation model of derived word generation that learns derivational transformations both as orthographic functions using sequence-to-sequence models and as functions in distributional word embedding space. Our best open-vocabulary model, which can generate novel words, and our best closed-vocabulary model, show 22% and 37% relative error reductions over current state-of-the-art systems on the same dataset. 1 Introduction The explicit modeling of morphology has been shown to improve a number of tasks (Seeker and C¸ etinoglu, 2015; Luong et al., 2013). In a large number of the world’s languages, many words are composed through morphological operations on subword units. Some languages are rich in inflectional morphology, characterized by syntactic transformations like pluralization. Similarly, languages like English are rich in derivational morphology, where the semantics of words are composed from ∗These authors contributed equally; listed alphabetically. Figure 1: Diagram depicting the flow of our aggregation model. Two models generate a hypothesis according to orthogonal information; then one is chosen as the final model generation. Here, the hypothesis from the distributional model is chosen. smaller parts. The AGENT derivational transformation, for example, answers the question, what is the word for ‘someone who runs’? with the answer, a runner.1 Here, AGENT is spelled out as suffixing -ner onto the root verb run. We tackle the task of derived word generation. In this task, a root word x and a derivational transformation t are given to the learner. The learner’s job is to produce the result of the transformation on the root word, called the derived word y. Table 1 gives examples of these transformations. Previous approaches to derived word generation model the task as a character-level sequenceto-sequence (seq2seq) problem (Cotterell et al., 2017b). The letters from the root word and some encoding of the transformation are given as input to a neural encoder, and the decoder is trained to produce the derived word, one letter at a time. We identify the following problems with these approaches: First, because these models are unconstrained, they can generate sequences of characters that do 1We use the verb run as a demonstrative example; the transformation can be applied to most verbs. 
1939 x t y wise ADVERB → wisely simulate RESULT → simulation approve RESULT → approval overstate RESULT → overstatement yodel AGENT → yodeler survive AGENT → survivor intense NOMINAL → intensity effective NOMINAL → effectiveness pessimistic NOMINAL → pessimism Table 1: The goal of derived word generation is to produce the derived word, y, given both the root word, x, and the transformation t, as demonstrated here with examples from the dataset. not form actual words. We argue that requiring the model to generate a known word is a reasonable constraint in the special case of English derivational morphology, and doing so avoids a large number of common errors. Second, sequence-based models can only generalize string manipulations (such as “add -ment”) if they appear frequently in the training data. Because of this, they are unable to generate derived words that do not follow typical patterns, such as generating truth as the nominative derivation of true. We propose to learn a function for each transformation in a low dimensional vector space that corresponds to mapping from representations of the root word to the derived word. This eliminates the reliance on orthographic information, unlike related approaches to distributional semantics, which operate at the suffix level (Gupta et al., 2017). We contribute an aggregation model of derived word generation that produces hypotheses independently from two separate learned models: one from a seq2seq model with only orthographic information, and one from a feed-forward network using only distributional semantic information in the form of pretrained word vectors. The model learns to choose between the hypotheses according to the relative confidence of each. This system can be interpreted as learning to decide between positing an orthographically regular form or a semantically salient word. See Figure 1 for a diagram of our model. We show that this model helps with two open problems with current state-of-the-art seq2seq derived word generation systems, suffix ambiguity and orthographic irregularity (Section 2). We also improve the accuracy of seq2seq-only derived word systems by adding external information through constrained decoding and hypothesis rescoring. These methods provide orthogonal gains to our main contribution. We evaluate models in two categories: open vocabulary models that can generate novel words unattested in a preset vocabulary, and closedvocabulary models, which cannot. Our best openvocabulary and closed-vocabulary models demonstrate 22% and 37% relative error reductions over the current state of the art. 2 Background: Derivational Morphology Derivational transformations generate novel words that are semantically composed from the root word and the transformation. We identify two unsolved problems in derived word transformation, each of which we address in Sections 3 and 4. First, many plausible choices of suffix for a single pair of root word and transformation. For example, for the verb ground, the RESULT transformation could plausibly take as many forms as2 (ground, RESULT) →grounding (ground, RESULT) →*groundation (ground, RESULT) →*groundment (ground, RESULT) →*groundal However, only one is correct, even though each suffix appears often in the RESULT transformation of other words. We will refer to this problem as “suffix ambiguity.” Second, many derived words seem to lack a generalizable orthographic relationship to their root words. For example, the RESULT of the verb speak is speech. 
It is unlikely, given an orthographically similar verb creak, that the RESULT be creech instead of, say, creaking. Seq2seq models must grapple with the problem of derived words that are the result of unlikely or potentially unseen string transformations. We refer to this problem as “orthographic irregularity.” 3 Sequence Models and Corpus Knowledge In this section, we introduce the prior state-of-theart model, which serves as our baseline system. Then we build on top of this system by incorporating a dictionary constraint and rescoring the 2The * indicates a non-word. 1940 model’s hypotheses with token frequency information to address the suffix ambiguity problem. 3.1 Baseline Architecture We begin by formalizing the problem and defining some notation. For source word x = x1, x2, . . . xm, a derivational transformation t, and target word y = y1, y2, . . . yn, our goal is to learn some function from the pair (x, t) to y. Here, xi and yj are the ith and jth characters of the input strings x and y. We will sometimes use x1:i to denote x1, x2, . . . xi, and similarly for y1:j. The current state-of-the-art model for derivedform generation approaches this problem by learning a character-level encoder-decoder neural network with an attention mechanism (Cotterell et al., 2017b; Bahdanau et al., 2014). The input to the bidirectional LSTM encoder (Hochreiter and Schmidhuber, 1997; Graves and Schmidhuber, 2005) is the sequence #, x1, x2, . . . xm, #, t, where # is a special symbol to denote the start and end of a word, and the encoding of the derivational transformation t is concatenated to the input characters. The model is trained to minimize the cross entropy of the training data. We refer to our reimplementation of this model as SEQ. For a more detailed treatment of neural sequenceto-sequence models with attention, we direct the reader to Luong et al. (2015). 3.2 Dictionary Constraint The suffix ambiguity problem poses challenges for models which rely exclusively on input characters for information. As previously demonstrated, words derived via the same transformation may take different suffixes, and it is hard to select among them based on character information alone. Here, we describe a process for restricting our inference procedure to only generate known English words, which we call a dictionary constraint. We believe that for English morphology, a large enough corpus will contain the vast majority of derived forms, so while this approach is somewhat restricting, it removes a significant amount of ambiguity from the problem. To describe how we implemented this dictionary constraint, it is useful first to discuss how decoding in a seq2seq model is equivalent to solving a shortest path problem. The notation is specific to our model, but the argument is applicable to seq2seq models in general. The goal of decoding is to find the most probable structure ˆy conditioned on some observation x and transformation t. That is, the problem is to solve ˆy = arg max y∈Y p(y | x, t) (1) = arg min y∈Y −log p(y | x, t) (2) where Y is the set of valid structures. Sequential models have a natural ordering y = y1, y2, . . . yn over which −log p(y | x, t) can be decomposed −log p(y | x, t) = n X t=1 −log p(yt | y1:t−1, x, t) (3) Solving Equation 2 can be viewed as solving a shortest path problem from a special starting state to a special ending state via some path which uniquely represents y. 
Each vertex in the graph represents some sequence y1:i, and the weight of the edge from y1:i to y1:i+1 is given by −log p(yi+1 | y1:i−1, x, t) (4) The weight of the path from the start state to the end state via the unique path that describes y is exactly equal to Equation 3. When the vocabulary size is too large, the exact shortest path is intractable, and approximate search methods, such as beam search, are used instead. In derived word generation, Y is an infinite set of strings. Since Y is unrestricted, almost all of the strings in Y are not valid words. Given a dictionary YD, the search space is restricted to only those words in the dictionary by searching over the trie induced from YD, which is a subgraph of the unrestricted graph. By limiting the search space to YD, the decoder is guaranteed to generate some known word. Models which use this dictionaryconstrained inference procedure will be labeled with +DICT. Algorithm 1 has the pseudocode for our decoding procedure. We discuss specific details of the search procedure and interesting observations of the search space in Section 6. Section 5.2 describes how we obtained the dictionary of valid words. 3.3 Word Frequency Knowledge through Rescoring We also consider the inclusion of explicit word frequency information to help solve suffix ambiguity, using the intuition that “real” derived words 1941 are likely to be frequently attested. This permits a high-recall, potentially noisy dictionary. We are motivated by very high top-10 accuracy compared to top-1 accuracy, even among dictionary-constrained models. By rescoring the hypotheses of a model using word frequency (a word-global signal) as a feature, attempt to recover a portion of this top-10 accuracy. When a model has been trained, we query it for its top-10 most likely hypotheses. The union of all hypotheses for a subset of the training observations forms the training set for a classifier that learns to predict whether a hypothesis generated by the model is correct. Each hypothesis is labelled with its correctness, a value in {±1}. We train a simple combination of two scores: the seq2seq model score for the hypothesis, and the log of the word frequency of the hypothesis. To permit a nonlinear combination of word frequency and model score, we train a small multilayer perceptron with the model score and the frequency of a derived word hypothesis as features. At testing time, the 10 hypotheses generated by a single seq2seq model for a single observation are rescored. The new model top-1 hypothesis, then, is the argmax over the 10 hypotheses according to the rescorer. In this way, we are able to incorporate word-global information, e.g. word frequency, that is ill-suited for incorporation at each character prediction step of the seq2seq model. We label models that are rescored in this way +FREQ. 4 Distributional Models So far, we have presented models that learn derivational transformations as orthographic operations. Such models struggle by construction with the orthographic irregularity problem, as they are trained to generalize orthographic information. However, the semantic relationships between root words and derived words are the same even when the orthography is dissimilar. It is salient, for example, that irregular word speech is related to its root speak in about the same way as how exploration is related to the word explore. 
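As a concrete illustration of the +FREQ rescoring described in Section 3.3 above, a minimal sketch follows. This is not the authors' released implementation: it uses scikit-learn rather than DyNet, and the freq_table argument and two-feature layout are our assumptions; the hidden size (4) and tanh activation follow the rescorer description in Section 9.

import math
from sklearn.neural_network import MLPClassifier

def features(model_score, word, freq_table):
    # Two features per hypothesis: the seq2seq model score and log word frequency.
    return [model_score, math.log(freq_table.get(word, 1))]

def train_rescorer(training_beams, freq_table):
    # training_beams: list of (hypotheses, gold) pairs, where hypotheses is the
    # model's top-10 list of (word, model_score); labels are +1/-1 correctness.
    X, y = [], []
    for hypotheses, gold in training_beams:
        for word, score in hypotheses:
            X.append(features(score, word, freq_table))
            y.append(1 if word == gold else -1)
    clf = MLPClassifier(hidden_layer_sizes=(4,), activation="tanh", max_iter=500)
    clf.fit(X, y)
    return clf

def rescore(clf, hypotheses, freq_table):
    # Return the hypothesis the rescorer judges most likely to be correct.
    probs = clf.predict_proba([features(s, w, freq_table) for w, s in hypotheses])
    positive = list(clf.classes_).index(1)
    best = max(range(len(hypotheses)), key=lambda i: probs[i][positive])
    return hypotheses[best][0]

Because word frequency is a property of a whole hypothesis, folding it in after decoding is simpler than trying to inject it at each character prediction step.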
We model distributional transformations as functions in dense distributional word embedding spaces, crucially learning a function per derivational transformation, not per suffix pair. In this way, we aim to explicitly model the semantic transformation, not the othographic information. 4.1 Feed-forward derivational transformations For all source words x and all target words y, we look up static distributional embeddings vx, vy ∈ Rd. For each derivational transformation t, we learn a function ft : Rd →Rd that maps vx to vy. ft is parametrized as two-layer perceptron, trained using a squared loss, L = bTb (5) b = ft(vx) −vy (6) We perform inference by nearest neighbor search in the embedding space. This inference strategy requires a subset of strings for our embedding dictionary, YV . Upon receiving (x, t) at test time, we compute ft(vx) and find the most similar embeddings in YV . Specifically, we find the top-k most similar embeddings, and take the most similar derived word that starts with the same 4 letters as the root word, and is not identical to it. This heuristic filters out highly implausible hypotheses. We use the single-word subset of the Google News vectors (Mikolov et al., 2013) as YV , so the size of the vocabulary is 929k words. 4.2 SEQ and DIST Aggregation The seq2seq and distributional models we have presented learn with disjoint information to solve separate problems. We leverage this intuition to build a model that chooses, for each observation, whether to generate according to orthographic information via the SEQ model, or produce a potentially irregular form via the DIST model. To train this model, we use a held-out portion of the training set, and filter it to only observations for which exactly one of the two models produces the correct derived form. Finally, we make the strong assumption that the probability of a derived form being generated correctly according to 1 model as opposed to the other is dependent only on the unnormalized model score from each. We model this as a logistic regression (t is omitted for clarity): P(·|yD, yS, x) = softmax(We [DIST(yD|x); SEQ(yS|x)] + be) where We and be are learned parameters, yD and yS are the hypotheses of the distributional and seq2seq models, and DIST(·) and SEQ(·) are the models’ likelihood functions. We denote this aggregate AGGR in our results. 1942 5 Datasets In this section we describe the derivational morphology dataset used in our experiments and how we collected the dictionary and token frequencies used in the dictionary constraint and rescorer. 5.1 Derivational Morphology In our experiments, we use the derived word generation derivational morphology dataset released in Cotterell et al. (2017b). The dataset, derived from NomBank (Meyers et al., 2004) , consists of 4,222 training, 905 validation, and 905 test triples of the form (x, t, y). The transformations are from the following categories: ADVERB (ADJ →ADV), RESULT (V →N), AGENT (V →N), and NOMINAL (ADJ →N). Examples from the dataset can be found in Table 1. 5.2 Dictionary and Token Frequency Statistics The dictionary and token frequency statistics used in the dictionary constraint and frequency reranking come from the Google Books NGram corpus (Michel et al., 2011). The unigram frequency counts were aggregated across years, and any tokens which appear fewer than approximately 2,000 times, do not end in a known possible suffix, or contain a character outside of our vocabulary were removed. 
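For concreteness, the feed-forward model of Section 4.1 might be sketched as follows. This is an illustrative PyTorch rendering rather than the authors' DyNet code; the 300-dimensional embeddings correspond to the Google News vectors, the hidden size and tanh follow Section 9, and the remaining names are placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DerivationTransform(nn.Module):
    # One such network is trained per derivational transformation t (Section 4.1):
    # a two-layer perceptron mapping the root embedding v_x to a predicted v_y.
    def __init__(self, dim=300, hidden=100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, v_x):
        return self.net(v_x)

def train_step(model, optimizer, v_x, v_y):
    # Squared loss between the predicted and observed derived-word embeddings.
    optimizer.zero_grad()
    b = model(v_x) - v_y
    loss = (b * b).sum()
    loss.backward()
    optimizer.step()
    return loss.item()

def predict(model, root, v_x, vocab, emb_matrix, k=20):
    # Nearest-neighbour inference with the prefix heuristic from Section 4.1:
    # among the k most similar vocabulary embeddings, return the first word that
    # shares the root's first four letters and is not the root itself.
    with torch.no_grad():
        query = model(v_x)
        sims = F.cosine_similarity(emb_matrix, query.unsqueeze(0), dim=1)
        for idx in torch.topk(sims, k).indices.tolist():
            word = vocab[idx]
            if word != root and word[:4] == root[:4]:
                return word
    return None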
The frequency threshold was determined using development data, optimizing for high recall. We collect a set of known suffixes from the training data by removing the longest common prefix between the source and target words from the target word. The result is a dictionary with frequency information for around 360k words, which covers 98% of the target words in the training data.3 6 Inference Procedure Discussion In many sequence models where the vocabulary size is large, exact inference by finding the true shortest path in the graph discussed in Section 3.2 is intractable. As a result, approximate inference techniques such as beam search are often used, or the size of the search space is reduced, for example, by using a Markov assumption. We, however, observed that exact inference via a shortest path algorithm is not only tractable in our model, but 3 The remaining 2% is mostly words with hyphens or mistakes in the dataset. Method Accuracy Avg. #States GREEDY 75.9 11.8 BEAM 76.2 101.2 SHORTEST 76.2 11.8 DICT+GREEDY 77.2 11.7 DICT+BEAM 82.6 91.2 DICT+SHORTEST 82.6 12.4 Table 2: The average accuracies and number of states explored in the search graph of 30 runs of the SEQ model with various search procedures. The BEAM models use a beam size of 10. only slightly more expensive than greedy search and significantly less expensive than beam search. To quantify this claim, we measured the accuracy and number of states explored by greedy search, beam search, and shortest path with and without a dictionary constraint on the development data. Table 2 shows the results averaged over 30 runs. As expected, beam search and shortest path have higher accuracies than greedy search and explore more of the search space. Surprisingly, beam search and shortest path have nearly identical accuracies, but shortest path explores significantly fewer hypotheses. At least two factors contribute to the tractability of exact search in our model. First, our characterlevel sequence model has a vocabulary size of 63, which is significantly smaller than token-level models, in which a vocabulary of 50k words is not uncommon. The search space of sequence models is dependent upon the size of the vocabulary, so the model’s search space is dramatically smaller than for a token-level model. Second, the inherent structure of the task makes it easy to eliminate large subgraphs of the search space. The first several characters of the input word and output word are almost always the same, so the model assigns very low probability to any sequence with different starting characters than the input. Then, the rest of the search procedure is dedicated to deciding between suffixes. Any suffix which does not appear frequently in the training data receives a low score, leaving the search to decide between a handful of possible options. The result is that the learned probability distribution is very spiked; it puts very high probability on just a few output sequences. It is empirically true that the top few most probable sequences have significantly higher scores than the next most probable sequences, which supports this hypothesis. In our subsequent experiments, we decode using 1943 Algorithm 1 The decoding procedure uses a shortest-path algorithm to find the most probable output sequence. The dictionary constraint is (optionally) implemented on line 9 by only considering prefixes that are contained in some trie T. 
1: procedure DECODE(x, t, V , T ) 2: H ←Heap() 3: H.insert(0, #) 4: while H is not empty do 5: y ←H.remove() 6: if y is a complete word then return y 7: for y ∈V do 8: y′ ←y + y 9: if y′ ∈T then 10: s ←FORWARD(x, t, y′) 11: H.insert(s, y′) exact inference by running a shortest path algorithm (see Algorithm 1). For reranking models, instead of typically using a beam of size k, we use the top k most probable sequences. 7 Results In all of our experiments, we use the training, development, and testing splits provided by Cotterell et al. (2017b) and average over 30 random restarts. Table 3 displays the accuracies and average edit distances on the test set of each of the systems presented in this work and the state-of-the-art model from Cotterell et al. (2017b). First, we observed that SEQ outperforms the results reported in Cotterell et al. (2017b) by a large margin, despite the fact that the model architectures are the same. We attribute this difference to better hyperparameter settings and improved learning rate annealing. Then, it is clear that the accuracy of the distributional model, DIST, is significantly lower than any seq2seq model. We believe the orthographyinformed models perform better because most observations in the dataset are orthographically regular, providing low-hanging fruit. Open-vocabulary models Our open-vocabulary aggregation model AGGR improves performance by 3.8 points accuracy over SEQ, indicating that the sequence models and the distributional model are contributing complementary signals. AGGR is an open-vocabulary model like Cotterell et al. (2017b) and improves upon it by 6.3 points, making it our best comparable model. We provide an inModel Accuracy Edit Cotterell et al. (2017b) 71.7 0.97 DIST 54.9 3.23 SEQ 74.2 0.88 AGGR 78.0 0.83 SEQ+FREQ 79.3 0.71 DUAL+FREQ 82.0 0.64 SEQ+DICT 80.4 0.72 AGGR+DICT 81.0 0.78 SEQ+FREQ+DICT 81.2 0.71 AGGR+FREQ+DICT 82.4 0.67 Table 3: The accuracies and edit distances of the models presented in this paper and prior work. For edit distance, lower is better. The dictionary-constrained models are on the lower half of the table. depth analysis of the strengths of SEQ and DIST in Section 7.1. Closed-vocabulary models We now consider closed-vocabulary models that improve upon the seq2seq model in AGGR. First, we see that restricting the decoder to only generate known words is extremely useful, with SEQ+DICT improving over SEQ by 6.2 points. Qualitatively, we note that this constraint helps solve the suffix ambiguity problem, since orthographically plausible incorrect hypotheses are pruned as non-words. See Table 6 for examples of this phenomenon. Additionally, we observe that the dictionary-constrained model outperforms the unconstrained model according to top-10 accuracy (see Table 5). Rescoring (+FREQ) provides further improvement of 0.8 points, showing that the decoding dictionary constraint provides a higher-quality beam that still has room for top-1 improvement. All together, AGGR+FREQ+DICT provides a 4.4 point improvement over the best open-vocabulary model, AGGR. This shows the disambiguating power of assuming a closed vocabulary. Edit Distance One interesting side effect of the dictionary constraint appears when comparing AGGR+FREQ with and without the dictionary constraint. Although the accuracy of the dictionaryconstrained model is better, the average edit distance is worse. 
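Algorithm 1 above translates almost directly into code. The sketch below is an illustrative Python rendering, not the released implementation: the neg_logprob callback stands in for the FORWARD scoring of the seq2seq model, and the trie T is represented by a set of word prefixes plus the dictionary itself.

import heapq

def decode(x, t, alphabet, prefixes, words, neg_logprob):
    # Best-first (shortest-path) search over output prefixes, as in Algorithm 1.
    # neg_logprob(x, t, y) returns the cumulative -log p(y | x, t) of prefix y.
    heap = [(0.0, "")]
    while heap:
        cost, y = heapq.heappop(heap)          # lowest-cost prefix so far
        if y.endswith("#"):
            return y[:-1]                      # "#" marks a completed word
        for ch in alphabet + ["#"]:
            cand = y + ch
            if ch == "#":
                ok = y in words                # may only terminate at a dictionary word
            else:
                ok = cand in prefixes          # may only extend along the trie
            if ok:                             # drop both checks for unconstrained search
                heapq.heappush(heap, (neg_logprob(x, t, cand), cand))
    return None

Because the prefix set is finite, the dictionary-constrained search always terminates; the unconstrained variant would additionally need a length cap.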
The unconstrained model is free to put invalid words which are orthographically similar to the target word in its top-k, however the constrained model can only choose valid words. This means it is easier for the unconstrained model to generate words which have a low edit distance to the ground truth, whereas the constrained model 1944 Cotterell et al. (2017b) AGGR AGGR+FREQ +DICT acc edit acc edit acc edit NOMINAL 35.1 2.67 68.0 1.32 62.1 1.40 RESULT 52.9 1.86 59.1 1.83 69.7 1.29 AGENT 65.6 0.78 73.5 0.65 79.1 0.57 ADVERB 93.3 0.18 94.0 0.18 95.0 0.22 Table 4: The accuracies and edit distances of our best open-vocabulary and closed-vocabulary models, AGGR and AGGR+FREQ+DICT compared to prior work, evaluated at the transformation level. For edit distance, lower is better. can only do that if such a word exists. The result is a more accurate, yet more orthographically diverse, set of hypotheses. Results by Transformation Next, we compare our best open vocabulary and closed vocabulary models to previous work across each derivational transformation. These results are in Table 4. The largest improvement over the baseline system is for NOMINAL transformations, in which the AGGR has a 49% reduction in error. We attribute most of this gain to the difficulty of this particular transformation. NOMINAL is challenging because there are several plausible endings (e.g. -ity, -ness, -ence) which occur at roughly the same rate. Additionally, NOMINAL examples are the least frequent transformation in the dataset, so it is challenging for a sequential model to learn to generalize. The distributional model, which does not rely on suffix information, does not have this same weakness, so the aggregation AGGR model has better results. The performance of AGGR+FREQ+DICT is worse than AGGR, however. This is surprising because, in all other transformations, adding dictionary information improves the accuracies. We believe this is due to the ambiguity of the ground truth: Many root words have seemingly multiple plausible nominal transformations, such as rigid →{rigidness, rigidity} and equivalent → {equivalence, equivalency}. The dictionary constraint produces a better set of hypotheses to rescore, as demonstrated in Table 5. Therefore, the dictionary-constrained model is likely to have more of these ambiguous cases, which makes the task more difficult. 7.1 Strengths of SEQ and DIST In this subsection we explore why AGGR improves consistently over SEQ even though it maintains an open vocabulary. We have argued that DIST is able to correctly produce derived words that are Cotterell et al. (2017b) SEQ SEQ+DICT top-10-acc top-10-acc top-10-acc NOMINAL 70.2 73.7 87.5 RESULT 72.6 79.9 90.4 AGENT 82.2 88.4 91.6 ADVERB 96.5 96.9 96.9 Table 5: The accuracies of the top-10 best outputs for the SEQ, SEQ+DICT, and prior work demonstrate that the dictionary constraint significantly improves the overall candidate quality. Figure 2: Aggregating across 30 random restarts, we tallied when SEQ and DIST correctly produced derived forms of each suffix. The y-axis shows the logarithm of the difference, per suffix, between the tally for DIST and the tally for SEQ. On the x-axis is the logarithm of the frequency of derived words with each suffix in the training data. A linear regression line is plotted to show the relationship between log suffix frequency and log difference in correct predictions. Suffixes that differ only by the first letter, as with -ger and -er, have been merged and represented by the more frequent of the two. 
orthographically irregular or infrequent in the training data. Figure 2 quantifies this phenomenon, analyzing the difference in accuracy between the two models, and plotting this in relationship to the frequency of the suffix in the training data. The plot shows that SEQ excels at generating derived words ending in -ly, -ion, and other suffixes that appeared frequently in the training data. DIST’s improvements over SEQ are generally much less frequent in the training data, or as in the case of -ment, are less frequent than other suffixes for the same transformation (like -ion.) By producing derived words whose suffixes show up rarely in the training data, DIST helps solve the orthographic irregularity problem. 8 Prior Work There has been much work on the related task of inflected word generation (Durrett and DeNero, 1945 x t DIST SEQ AGGR AGGR+DICT approve RESULT approval approvation approval approval bankrupt NOMINAL bankruptcy bankruption bankruptcy bankruptcy irretrievable ADVERB irreparably irretrievably irretrievably irretrievably connect RESULT connectivity connection connection connection stroll AGENT strolls stroller stroller stroller emigrate SUBJECT emigre emigrator emigrator emigrant ubiquitous NOMINAL ubiquity ubiquit ubiquit ubiquity hinder AGENT hinderer hinderer hinderer hinderer vacant NOMINAL vacance vacance vacance vacance Table 6: Sample output from a selection of models. The words in bold mark the correct derivations. “Hindrance” and “vacancy” are the expected derived words for the last two rows. 2013; Rastogi et al., 2016; Hulden et al., 2014). It is a structurally similar task to ours, but does not have the same difficulty of challenges (Cotterell et al., 2017a,b), which we have addressed in our work. The paradigm completion for derivational morphology dataset we use in this work was introduced in Cotterell et al. (2017b). They apply the model that won the 2016 SIGMORPHON shared task on inflectional morphology to derivational morphology (Kann and Sch¨utze, 2016; Cotterell et al., 2016). We use this as our baseline. Our implementation of the dictionary constraint is an example of a special constraint which can be directly incorporated into the inference algorithm at little additional cost. Roth and Yih (2004, 2007) propose a general inference procedure that naturally incorporates constraints through recasting inference as solving an integer linear program. Beam or hypothesis rescoring to incorporate an expensive or non-decomposable signal into search has a history in machine translation (Huang and Chiang, 2007). In inflectional morphology, Nicolai et al. (2015) use this idea to rerank hypotheses using orthographic features and Faruqui et al. (2016) use a character-level language model. Our approach is similar to Faruqui et al. (2016) in that we use statistics from a raw corpus, but at the token level. There have been several attempts to use distributional information in morphological generation and analysis. Soricut and Och (2015) collect pairs of words related by any morphological change in an unsupervised manner, then select a vector offset which best explains their observations. There has been subsequent work exploring the vector offset method, finding it unsuccessful in capturing derivational transformations (Gladkova et al., 2016). However, we use more expressive, nonlinear functions to model derivational transformations and report positive results. Gupta et al. (2017) then learn a linear transformation per orthographic rule to solve a word analogy task. 
Our distributional model learns a function per derivational transformation, not per orthographic rule, which allows it to generalize to unseen orthography. 9 Implementation Details Our models are implemented in Python using the DyNet deep learning library (Neubig et al., 2017). The code is freely available for download.4 Sequence Model The sequence-to-sequence model uses character embeddings of size 20, which are shared across the encoder and decoder, with a vocabulary size of 63. The hidden states of the LSTMs are of size 40. For training, we use Adam with an initial learning rate of 0.005, a batch size of 5, and train for a maximum of 30 epochs. If after one epoch of the training data, the loss on the validation set does not decrease, we anneal the learning rate by half and revert to the previous best model. During decoding, we find the top 1 most probable sequence as discussed in Section 6 unless rescoring is used, in which we use the top 10. Rescorer The rescorer is a 1-hidden-layer perceptron with a tanh nonlinearity and 4 hidden units. It is trained for a maximum of 5 epochs. Distributional Model The DIST model is a 1hidden-layer perceptron with a tanh nonlinearity 4https://github.com/danieldeutsch/ acl2018 1946 and 100 hidden units. It is trained for a maximum of 25 epochs. 10 Conclusion In this work, we present a novel aggregation model for derived word generation. This model learns to choose between the predictions of orthographicallyand distributionally-informed models. This ameliorates suffix ambiguity and orthographic irregularity, the salient problems of the generation task. Concurrently, we show that derivational transformations can be usefully modeled as nonlinear functions on distributional word embeddings. The distributional and orthographic models aggregated contribute orthogonal information to the aggregate, as shown by substantial improvements over state-of-the-art results, and qualitative analysis. Two ways of incorporating corpus knowledge – constrained decoding and rescoring – demonstrate further improvements to our main contribution. Acknowledgements We would like to thank Shyam Upadhyay, Jordan Kodner, and Ryan Cotterell for insightful discussions about derivational morphology. We would also like to thank our anonymous reviewers for helpful feedback on clarity and presentation. This work was supported by Contract HR001115-2-0025 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, G´eraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra K¨ubler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017a. Conll-sigmorphon 2017 shared task: Universal morphological reinflection in 52 languages. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection. Association for Computational Linguistics, Vancouver, pages 1–30. http://www.aclweb.org/anthology/K172001. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The sigmorphon 2016 shared taskmorphological reinflection. ACL 2016 page 10. 
Ryan Cotterell, Ekaterina Vylomova, Huda Khayrallah, Christo Kirov, and David Yarowsky. 2017b. Paradigm completion for derivational morphology. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 714–720. https://www.aclweb.org/anthology/D17-1074. Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Atlanta, Georgia, pages 1185–1195. http://www.aclweb.org/anthology/N13-1138. Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection generation using character sequence to sequence learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 634–643. http://www.aclweb.org/anthology/N16-1077. Anna Gladkova, Aleksandr Drozd, and Satoshi Matsuoka. 2016. Analogy-based detection of morphological and semantic relations with word embeddings: what works and what doesn’t. In Proceedings of the NAACL Student Research Workshop. Association for Computational Linguistics, San Diego, California, pages 8–15. http://www.aclweb.org/anthology/N16-2002. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks 18(5-6):602–610. Arihant Gupta, Syed Sarfaraz Akhtar, Avijit Vajpayee, Arjit Srivastava, Madan Gopal Jhanwar, and Manish Shrivastava. 2017. Exploiting morphological regularities in distributional word representations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 292–297. https://www.aclweb.org/anthology/D17-1028. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735– 1780. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proceedings of the 45th Annual Meeting of the Association of Computational 1947 Linguistics. Association for Computational Linguistics, Prague, Czech Republic, pages 144–151. http://www.aclweb.org/anthology/P07-1019. Mans Hulden, Markus Forsberg, and Malin Ahlberg. 2014. Semi-supervised learning of morphological paradigms and lexicons. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Gothenburg, Sweden, pages 569–578. http://www.aclweb.org/anthology/E14-1060. Katharina Kann and Hinrich Sch¨utze. 2016. Singlemodel encoder-decoder with explicit morphological representation for reinflection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, pages 555–560. http://anthology.aclweb.org/P16-2090. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1412– 1421. http://aclweb.org/anthology/D15-1166. 
Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, Sofia, Bulgaria, pages 104–113. http://www.aclweb.org/anthology/W133512. A. Meyers, R. Reeves, C. Macleod, R. Szekely, V. Zielinska, B. Young, and R. Grishman. 2004. The nombank project: An interim report. In A. Meyers, editor, HLT-NAACL 2004 Workshop: Frontiers in Corpus Annotation. Association for Computational Linguistics, Boston, Massachusetts, USA, pages 24– 31. Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, Joseph P. Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, Steven Pinker, Martin A. Nowak, and Erez Lieberman Aiden. 2011. Quantitative analysis of culture using millions of digitized books. Science 331 6014:176–82. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In ICLR Workshop Papers. Scottsdale, Arizona. https://arxiv.org/pdf/1301.3781.pdf. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980 . Garrett Nicolai, Colin Cherry, and Grzegorz Kondrak. 2015. Inflection generation as discriminative string transduction. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 922– 931. http://www.aclweb.org/anthology/N15-1093. Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neural context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 623–633. http://www.aclweb.org/anthology/N16-1076. D. Roth and W. Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Hwee Tou Ng and Ellen Riloff, editors, Proc. of the Conference on Computational Natural Language Learning (CoNLL). Association for Computational Linguistics, pages 1–8. http://cogcomp.org/papers/RothYi04.pdf. D. Roth and W. Yih. 2007. Global inference for entity and relation identification via a linear programming formulation http://cogcomp.org/papers/RothYi07.pdf. Wolfgang Seeker and ¨Ozlem C¸ etinoglu. 2015. A graphbased lattice dependency parser for joint morphological segmentation and syntactic analysis. Transactions of the Association for Computational Linguistics 3:359–373. Radu Soricut and Franz Och. 2015. Unsupervised morphology induction using word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 1627–1637. http://www.aclweb.org/anthology/N15-1186.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1948–1958 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1948 Deep-speare: A joint neural model of poetic language, meter and rhyme Jey Han Lau1,2 Trevor Cohn2 Timothy Baldwin2 Julian Brooke3 Adam Hammond4 1 IBM Research Australia 2 School of Computing and Information Systems, The University of Melbourne 3 Thomson Reuters 4 Department of English, University of Toronto [email protected], [email protected], [email protected], [email protected], [email protected] Abstract In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language. 1 Introduction With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes? Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; Choi et al., 2016), the design of sculptures (Lehman et al., 2016), and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016). In this paper, we focus on a creative textual task: automatic poetry composition. A distinguishing feature of poetry is its aesthetic forms, e.g. rhyme and rhythm/meter.1 In this work, we treat the task of poem generation as a constrained language modelling task, such that lines of a given poem rhyme, and each line follows a canonical meter and has a fixed number 1Noting that there are many notable divergences from this in the work of particular poets (e.g. Walt Whitman) and poetry types (such as free verse or haiku). Shall I compare thee to a summer’s day? Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer’s lease hath all too short a date: Figure 1: 1st quatrain of Shakespeare’s Sonnet 18. of stresses. Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g. see Figure 1), based on an unsupervised model of language, rhyme and meter trained on a novel corpus of sonnets. 
Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017); our results suggest that future research should look beyond meter and focus on improving readability. In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research.2 2https://github.com/jhlau/deepspeare 1949 2 Related Work Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and syllable counting (Gerv´as, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013). The earliest attempt at using statistical modelling for poetry generation was Greene et al. (2010), based on a language model paired with a stress model. Neural networks have dominated recent research. Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al. (2016) later simplified by incorporating an attention mechanism and training at the character level. For English poetry, Ghazvininejad et al. (2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation. Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry. A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically. 3 Sonnet Structure and Dataset The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines);3 an example quatrain is presented in Figure 1. It follows a number of aesthetic forms, of which two are particularly salient: stress and rhyme. A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.: S−S+ S−S+ S−S+ S−S+ S− S+ Shall I compare thee to a summer’s day? where S−and S+ denote unstressed and stressed syllables, respectively. A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG. There are a number of variants, however, mostly seen in the quatrains; e.g. AABB or ABBA are also common. We build our sonnet dataset from the latest image of Project Gutenberg.4 We first create a 3There are other forms of sonnets, but the Shakespearean sonnet is the dominant one. Hereinafter “sonnet” is used to specifically mean Shakespearean sonnets. 4https://www.gutenberg.org/. Partition #Sonnets #Words Train 2685 367K Dev 335 46K Test 335 46K Table 1: SONNET dataset statistics. 
(generic) poetry document collection using the GutenTag tool (Brooke et al., 2015), based on its inbuilt poetry classifier and rule-based structural tagging of individual poems. Given the poems, we use word and character statistics derived from Shakespeare’s 154 sonnets to filter out all non-sonnet poems (to form the “BACKGROUND” dataset), leaving the sonnet corpus (“SONNET”).5 Based on a small-scale manual analysis of SONNET, we find that the approach is sufficient for extracting sonnets with high precision. BACKGROUND serves as a large corpus (34M words) for pre-training word embeddings, and SONNET is further partitioned into training, development and testing sets. Statistics of SONNET are given in Table 1.6 4 Architecture We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words. Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns.7 The rhyme model, on the other hand, uses a margin-based loss to separate rhyming word pairs from non-rhyming word pairs in a quatrain. For generation we use the language model to generate one word at a time, while applying the pentame5The following constraints were used to select sonnets: 8.0 ⩽mean words per line ⩽11.5; 40 ⩽mean characters per line ⩽51.0; min/max number of words per line of 6/15; min/max number of characters per line of 32/60; and min letter ratio per line ⩾0.59. 6The sonnets in our collection are largely in Modern English, with possibly a small number of poetry in Early Modern English. The potentially mixed-language dialect data might add noise to our system, and given more data it would be worthwhile to include time period as a factor in the model. 7There are a number of variations in addition to the standard pattern (Greene et al., 2010), but our model uses only the standard pattern as it is the dominant one. 1950 (a) Language model (b) Pentameter model (c) Rhyme model Figure 2: Architecture of the language, pentameter and rhyme models. Colours denote shared weights. ter model to sample meter-conforming sentences and the rhyme model to enforce rhyme. The architecture of the joint model is illustrated in Figure 2. We train all the components together by treating each component as a sub-task in a multitask learning setting.8 4.1 Language Model The language model is a variant of an LSTM encoder–decoder model with attention (Bahdanau et al., 2015), where the encoder encodes the preceding context (i.e. all sonnet lines before the current line) and the decoder decodes one word at a time for the current line, while attending to the preceding context. In the encoder, we embed context words zi using embedding matrix Wwrd to yield wi, and feed them to a biLSTM9 to produce a sequence of encoder hidden states hi = [⃗hi; ⃗ hi]. Next we apply 8We stress that although the components appear to be disjointed, the shared parameters allow the components to mutually influence each other during joint training. To exemplify this, we found that the pentameter model performs very poorly when we train each component separately. 9We use a single layer for all LSTMs. a selective mechanism (Zhou et al., 2017) to each hi. 
By defining the representation of the whole context h = [⃗hC; ⃗ h1] (where C is the number of words in the context), the selective mechanism filters the hidden states hi using h as follows: h′ i = hi ⊙σ(Wahi + Uah + ba) where ⊙denotes element-wise product. Hereinafter W, U and b are used to refer to model parameters. The intuition behind this procedure is to selectively filter less useful elements from the context words. In the decoder, we embed words xt in the current line using the encoder-shared embedding matrix (Wwrd) to produce wt. In addition to the word embeddings, we also embed the characters of a word using embedding matrix Wchr to produce ct,i, and feed them to a bidirectional (character-level) LSTM: ⃗ut,i = LSTMf(ct,i, ⃗ut,i−1) ⃗ ut,i = LSTMb(ct,i, ⃗ ut,i+1) (1) We represent the character encoding of a word by concatenating the last forward and first back1951 ward hidden states ut = [⃗ut,L; ⃗ ut,1], where L is the length of the word. We incorporate character encodings because they provide orthographic information, improve representations of unknown words, and are shared with the pentameter model (Section 4.2).10 The rationale for sharing the parameters is that we see word stress and language model information as complementary. Given the word embedding wt and character encoding ut, we concatenate them together and feed them to a unidirectional (word-level) LSTM to produce the decoding states: st = LSTM([wt; ut], st−1) (2) We attend st to encoder hidden states h′ i and compute the weighted sum of h′ i as follows: et i = v⊺ b tanh(Wbh′ i + Ubst + bb) at = softmax(et) h∗ t = X i at ih′ i To combine st and h∗ t , we use a gating unit similar to a GRU (Cho et al., 2014; Chung et al., 2014): s′ t = GRU(st, h∗ t ). We then feed s′ t to a linear layer with softmax activation to produce the vocabulary distribution (i.e. softmax(Wouts′ t + bout), and optimise the model with standard categorical cross-entropy loss. We use dropout as regularisation (Srivastava et al., 2014), and apply it to the encoder/decoder LSTM outputs and word embedding lookup. The same regularisation method is used for the pentameter and rhyme models. As our sonnet data is relatively small for training a neural language model (367K words; see Table 1), we pre-train word embeddings and reduce parameters further by introducing weight-sharing between output matrix Wout and embedding matrix Wwrd via a projection matrix Wprj (Inan et al., 2016; Paulus et al., 2017; Press and Wolf, 2017): Wout = tanh(WwrdWprj) 4.2 Pentameter Model This component is designed to capture the alternating iambic stress pattern. Given a sonnet line, 10We initially shared the character encodings with the rhyme model as well, but found sub-par performance for the rhyme model. This is perhaps unsurprising, as rhyme and stress are qualitatively very different aspects of forms. the pentameter model learns to attend to the appropriate characters to predict the 10 binary stress symbols sequentially.11 As punctuation is not pronounced, we preprocess each sonnet line to remove all punctuation, leaving only spaces and letters. Like the language model, the pentameter model is fashioned as an encoder–decoder network. In the encoder, we embed the characters using the shared embedding matrix Wchr and feed them to the shared bidirectional character-level LSTM (Equation (1)) to produce the character encodings for the sentence: uj = [⃗uj; ⃗ uj]. 
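For concreteness, two ingredients of the language model in Section 4.1 — the selective encoding of the context and the weight-tied output projection — might be sketched as follows. This is an illustrative PyTorch rendering, not the authors' implementation; dimensions, initialisation and class names are placeholders.

import torch
import torch.nn as nn

class SelectiveEncoding(nn.Module):
    # Selective mechanism (Section 4.1): gate each encoder state h_i with a
    # sigmoid computed from h_i and the whole-context summary h.
    def __init__(self, dim):
        super().__init__()
        self.W_a = nn.Linear(dim, dim, bias=False)
        self.U_a = nn.Linear(dim, dim)                    # bias acts as b_a

    def forward(self, H, h_summary):
        # H: (context_len, dim) encoder states; h_summary: (dim,) context vector.
        gate = torch.sigmoid(self.W_a(H) + self.U_a(h_summary))
        return H * gate

class TiedOutput(nn.Module):
    # Weight-shared output layer: W_out = tanh(W_wrd W_prj), so the softmax
    # projection reuses the word embedding matrix (output bias omitted).
    def __init__(self, embedding, hidden_dim):
        super().__init__()
        self.embedding = embedding                        # an nn.Embedding(vocab, emb_dim)
        self.W_prj = nn.Parameter(
            0.01 * torch.randn(embedding.embedding_dim, hidden_dim))

    def forward(self, s):
        # s: (batch, hidden_dim) decoder outputs; returns (batch, vocab) log-probabilities.
        W_out = torch.tanh(self.embedding.weight @ self.W_prj)   # (vocab, hidden_dim)
        return torch.log_softmax(s @ W_out.t(), dim=-1)

The attention and GRU-style gate that combine s_t with h*_t are standard and are omitted here.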
In the decoder, the model attends to the characters to predict the stresses sequentially with an LSTM:

g_t = LSTM(u*_{t−1}, g_{t−1})

where u*_{t−1} is the weighted sum of character encodings from the previous time step, produced by an attention network which we describe next,12 and g_t is fed to a linear layer with softmax activation to compute the stress distribution. The attention network is designed to focus on stress-producing characters, whose positions are monotonically increasing (as stress is predicted sequentially). We first compute µ_t, the mean position of focus:

µ′_t = σ(v_c⊤ tanh(W_c g_t + U_c µ_{t−1} + b_c))
µ_t = M × min(µ′_t + µ_{t−1}, 1.0)

where M is the number of characters in the sonnet line. Given µ_t, we can compute the (unnormalised) probability for each character position:

p^t_j = exp( −(j − µ_t)^2 / (2T^2) )

where standard deviation T is a hyper-parameter. We incorporate this position information when computing u*_t:13

u′_j = p^t_j u_j
d^t_j = v_d⊤ tanh(W_d u′_j + U_d g_t + b_d)
f^t = softmax(d^t + log p^t)
u*_t = Σ_j f^t_j u_j

11That is, given the input line Shall I compare thee to a summer's day?, the model is required to output S− S+ S− S+ S− S+ S− S+ S− S+, based on the syllable boundaries from Section 3.
12The initial input (u*_0) and state (g_0) are a trainable vector and a zero vector, respectively.
13Spaces are masked out, so they always yield zero attention weights.
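The position-sensitive attention just described might be sketched as follows. This is an illustrative PyTorch rendering rather than the authors' code; in particular, carrying the normalised focus position µ between time steps is one plausible reading of the µ_t update, and all dimensions are placeholders.

import torch
import torch.nn as nn

class PentameterAttention(nn.Module):
    # Position-aware attention over characters (Section 4.2): the mean focus
    # position mu moves monotonically forward, and a Gaussian prior with
    # standard deviation T biases the weights towards nearby characters.
    def __init__(self, char_dim, dec_dim, T=1.0):
        super().__init__()
        self.W_c = nn.Linear(dec_dim, dec_dim)            # bias acts as b_c
        self.U_c = nn.Linear(1, dec_dim, bias=False)
        self.v_c = nn.Linear(dec_dim, 1, bias=False)
        self.W_d = nn.Linear(char_dim, dec_dim)           # bias acts as b_d
        self.U_d = nn.Linear(dec_dim, dec_dim, bias=False)
        self.v_d = nn.Linear(dec_dim, 1, bias=False)
        self.T = T

    def forward(self, U, g_t, mu_prev):
        # U: (M, char_dim) character encodings; g_t: (dec_dim,) decoder state;
        # mu_prev: 0-dim tensor holding the previous normalised position in [0, 1].
        M = U.size(0)
        step = torch.sigmoid(self.v_c(torch.tanh(
            self.W_c(g_t) + self.U_c(mu_prev.view(1))))).squeeze()
        mu_norm = torch.clamp(step + mu_prev, max=1.0)    # mu_t = M * min(mu'_t + mu_{t-1}, 1)
        mu_t = M * mu_norm
        pos = torch.arange(M, dtype=torch.float)
        log_p = -(pos - mu_t) ** 2 / (2 * self.T ** 2)    # log of the Gaussian position prior
        U_pos = log_p.exp().unsqueeze(1) * U              # u'_j = p_j * u_j
        d = self.v_d(torch.tanh(self.W_d(U_pos) + self.U_d(g_t))).squeeze(1)
        f = torch.softmax(d + log_p, dim=0)               # attention weights f_j
        u_star = (f.unsqueeze(1) * U).sum(dim=0)          # weighted sum of character encodings
        return u_star, f, mu_norm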
4.3 Rhyme Model Two reasons motivate us to learn rhyme in an unsupervised manner: (1) we intend to extend the current model to poetry in other languages (which may not have pronunciation dictionaries); and (2) the language in our SONNET data is not Modern English, and so contemporary dictionaries may not accurately reflect the rhyme of the data. Exploiting the fact that rhyme exists in a quatrain, we feed sentence-ending word pairs of a quatrain as input to the rhyme model and train it to learn how to separate rhyming word pairs from non-rhyming ones. Note that the model does not assume any particular rhyming scheme — it works as long as quatrains have rhyme. A training example consists of a number of word pairs, generated by pairing one target word with 3 other reference words in the quatrain, i.e. {(xt, xr), (xt, xr+1), (xt, xr+2)}, where xt is the target word and xr+i are the reference words.14 We assume that in these 3 pairs there should be one rhyming and 2 non-rhyming pairs. From preliminary experiments we found that we can improve the model by introducing additional non-rhyming or negative reference words. Negative reference words are sampled uniform randomly from the vocabulary, and the number of additional negative words is a hyper-parameter. For each word x in the word pairs we embed the characters using the shared embedding matrix Wchr and feed them to an LSTM to produce the character states uj.15 Unlike the language and pentameter models, we use a unidirectional forward LSTM here (as rhyme is largely determined by the final characters), and the LSTM parameters are not shared. We represent the encoding of the whole word by taking the last state u = uL, where L is the character length of the word. Given the character encodings, we use a 14E.g. for the quatrain in Figure 1, a training example is {(day, temperate), (day, may), (day, date)}. 15The character embeddings are the only shared parameters in this model. 1953 margin-based loss to optimise the model: Q = {cos(ut, ur), cos(ut, ur+1), ...} Lrm = max(0, δ −top(Q, 1) + top(Q, 2)) where top(Q, k) returns the k-th largest element in Q, and δ is a margin hyper-parameter. Intuitively, the model is trained to learn a sufficient margin (defined by δ) that separates the best pair with all others, with the second-best being used to quantify all others. This is the justification used in the multi-class SVM literature for a similar objective (Wang and Xue, 2014). With this network we can estimate whether two words rhyme by computing the cosine similarity score during generation, and resample words as necessary to enforce rhyme. 4.4 Generation Procedure We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry. During generation we feed the hidden state from the previous time step to the language model’s decoder to compute the vocabulary distribution for the current time step. Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before;16 (3) any generated words with a frequency ⩾2; (4) the preceding 3 words; and (5) a number of symbols including parentheses, single and double quotes.17 The first sonnet line is generated without using any preceding context. We next describe how to incorporate the pentameter model for generation. Given a sonnet line, the pentameter model computes a loss Lpm (Equation (3)) that indicates how well the line conforms to the iambic pentameter. 
We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (Lpm). We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1. To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary. Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the 16We use the NLTK stopword list (Bird et al., 2009). 17We add these constraints to prevent the model from being too repetitive, in generating the same words. two words rhyme. We resample the second word of a rhyming pair (e.g. when generating the second A in AABB) until it produces a cosine similarity ⩾ 0.9. We also resample the second word of a nonrhyming pair (e.g. when generating the first B in AABB) by requiring a cosine similarity ⩽0.7.18 When generating in the forward direction we can never be sure that any particular word is the last word of a line, which creates a problem for resampling to produce good rhymes. This problem is resolved in our model by reversing the direction of the language model, i.e. generating the last word of each line first. We apply this inversion trick at the word level (character order of a word is not modified) and only to the language model; the pentameter model receives the original word order as input. 5 Experiments We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert. A sample of machine-generated sonnets are included in the supplementary material. We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material). Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training. For optimisers, we use Adagrad (Duchi et al., 2011) for the language model, and Adam (Kingma and Ba, 2014) for the pentameter and rhyme models. We truncate backpropagation through time after 2 sonnet lines, and train using 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens. 5.1 Component Evaluation 5.1.1 Language Model We use standard perplexity for evaluating the language model. In terms of model variants, we have:19 • LM: Vanilla LSTM language model; • LM∗: LSTM language model that incorporates character encodings (Equation (2)); 18Maximum number of resampling steps is capped at 1000. If the threshold is exceeded the model is reset to generate from scratch again. 19All models use the same (applicable) hyper-parameter configurations. 1954 shall i com pa re thee to a summe r s day thou art mo re lovely and mo re tempe rate rough winds do shake the darling buds of may and summer s lease hath all too short a date Figure 3: Character attention weights for the first quatrain of Shakespeare’s Sonnet 18. Model Ppl Stress Acc Rhyme F1 LM 90.13 – – LM∗ 84.23 – – LM∗∗ 80.41 – – LM∗∗-C 83.68 – – LM∗∗+PM+RM 80.22 0.74 0.91 Stress-BL – 0.80 – Rhyme-BL – – 0.74 Rhyme-EM – – 0.71 Table 2: Component evaluation for the language model (“Ppl” = perplexity), pentameter model (“Stress Acc”), and rhyme model (“Rhyme F1”). Each number is an average across 10 runs. 
• LM∗∗: LSTM language model that incorporates both character encodings and preceding context; • LM∗∗-C: Similar to LM∗∗, but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014);20 • LM∗∗+PM+RM: the full model, with joint training of the language, pentameter and rhyme models. Perplexity on the test partition is detailed in Table 2. Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM∗∗. The inferior performance of LM∗∗-C compared to LM∗∗ demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks. The full model LM∗∗+PM+RM, which learns stress 20In Zhang and Lapata (2014), the authors use a series of convolutional networks with a width of 2 words to convert 5/7 poetry lines into a fixed size vector; here we use a standard convolutional network with max-pooling operation (Kim, 2014) to process the context. and rhyme patterns simultaneously, also appears to improve the language model slightly. 5.1.2 Pentameter Model To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary.21 Words that have no coverage or have nonalternating patterns given by the dictionary are discarded. We use accuracy as the metric, and a predicted stress pattern is judged to be correct if it matches any of the dictionary stress patterns. To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g. 1st time step = S−) to the word if any of its characters receives an attention ⩾0.20. For the baseline (Stress-BL) we use the pretrained weighted finite state transducer (WFST) provided by Hopkins and Kiela (2017).22 The WFST maps a sequence word to a sequence of stresses by assuming each word has 1–5 stresses and the full word sequence produces iambic pentameter. It is trained using the EM algorithm on a sonnet corpus developed by the authors. We present stress accuracy in Table 2. LM∗∗+PM+RM performs competitively, and informal inspection reveals that a number of mistakes are due to dictionary errors. To understand the predicted stresses qualitatively, we display attention heatmaps for the the first quatrain of Shakespeare’s Sonnet 18 in Figure 3. The y-axis represents the ten stresses of the iambic pentameter, and 21http://www.speech.cs.cmu.edu/cgi-bin/ cmudict. Note that the dictionary provides 3 levels of stresses: 0, 1 and 2; we collapse 1 and 2 to S+. 22https://github.com/JackHopkins/ ACLPoetry 1955 CMU Rhyming Pairs CMU Non-Rhyming Pairs Word Pair Cos Word Pair Cos (endeavour, never) 0.028 (blood, stood) 1.000 (nowhere, compare) 0.098 (mood, stood) 1.000 (supply, sigh) 0.164 (overgrown, frown) 1.000 (sky, high) 0.164 (understood, food) 1.000 (me, maybe) 0.165 (brood, wood) 1.000 (cursed, burst) 0.172 (rove, love) 0.999 (weigh, way) 0.200 (sire, ire) 0.999 (royally, we) 0.217 (moves, shoves) 0.998 (use, juice) 0.402 (afraid, said) 0.998 (dim, limb) 0.497 (queen, been) 0.996 Table 3: Rhyming errors produced by the model. Examples on the left (right) side are rhyming (non-rhyming) word pairs — determined using the CMU dictionary — that have low (high) cosine similarity. 
5.1.3 Rhyme Model

We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score using F1. Word pairs that are not included in the dictionary are discarded. Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing whether their phoneme patterns match. We predict rhyme for a word pair by feeding it to the rhyme model and computing the cosine similarity; if a word pair is assigned a score ⩾ 0.8,23 it is considered to rhyme.

23A threshold of 0.8 is empirically found to be best based on the development data.

As a baseline (Rhyme-BL), we first extract for each word the last vowel and all following consonants, and predict a word pair as rhyming if their extracted sequences match. The extracted sequence can be interpreted as a proxy for the last syllable of a word.

Reddy and Knight (2011) propose an unsupervised model for learning rhyme schemes in poems via EM. There are two latent variables: φ specifies the distribution of rhyme schemes, and θ defines the pairwise rhyme strength between two words. The model's objective is to maximise poem likelihood over all possible rhyme scheme assignments under the latent variables φ and θ. We train this model (Rhyme-EM) on our data24 and use the learnt θ to decide whether two words rhyme.25

24We use the original authors' implementation: https://github.com/jvamvas/rhymediscovery.
25A word pair is judged to rhyme if θw1,w2 ⩾ 0.02; the threshold (0.02) is selected based on development performance.

Table 2 details the rhyming results. The rhyme model performs very strongly at F1 > 0.90, well above both baselines. Rhyme-EM performs poorly because it operates at the word level (i.e. it ignores character/orthographic information) and hence does not generalise well to unseen words and word pairs.26

26Word pairs that did not co-occur in a poem in the training data have a rhyme strength of zero.

To better understand the errors qualitatively, we present a list of word pairs with their predicted cosine similarity in Table 3. Examples on the left side are rhyming word pairs as determined by the CMU dictionary; on the right are non-rhyming pairs. Looking at the rhyming word pairs (left), it appears that these words tend not to share any word-ending characters. For the non-rhyming pairs, we spot several CMU errors: (sire, ire) and (queen, been) clearly rhyme.

CMU Rhyming Pairs             CMU Non-Rhyming Pairs
Word Pair            Cos      Word Pair            Cos
(endeavour, never)   0.028    (blood, stood)       1.000
(nowhere, compare)   0.098    (mood, stood)        1.000
(supply, sigh)       0.164    (overgrown, frown)   1.000
(sky, high)          0.164    (understood, food)   1.000
(me, maybe)          0.165    (brood, wood)        1.000
(cursed, burst)      0.172    (rove, love)         0.999
(weigh, way)         0.200    (sire, ire)          0.999
(royally, we)        0.217    (moves, shoves)      0.998
(use, juice)         0.402    (afraid, said)       0.998
(dim, limb)          0.497    (queen, been)        0.996

Table 3: Rhyming errors produced by the model. Examples on the left (right) side are rhyming (non-rhyming) word pairs — determined using the CMU dictionary — that have low (high) cosine similarity. "Cos" denotes the system-predicted cosine similarity for the word pair.
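The two rhyme decisions above can be sketched in a few lines: the orthographic Rhyme-BL baseline (last vowel plus all following consonants) and the thresholded cosine decision used for the learned model. The vowel set and the rhyme_cosine callable are assumptions for illustration.

```python
# Sketch of Rhyme-BL (last vowel + trailing consonants) and the cosine-threshold
# decision of the learned model; rhyme_cosine is a hypothetical stand-in for it.
VOWELS = "aeiou"   # assumption: 'y' not treated as a vowel in this sketch

def last_syllable_proxy(word: str) -> str:
    """Return the substring from the last vowel to the end of the word."""
    word = word.lower()
    for i in range(len(word) - 1, -1, -1):
        if word[i] in VOWELS:
            return word[i:]
    return word            # no vowel found: fall back to the whole word

def rhyme_bl(w1: str, w2: str) -> bool:
    return last_syllable_proxy(w1) == last_syllable_proxy(w2)

def rhyme_model_predict(w1: str, w2: str, rhyme_cosine, threshold: float = 0.8) -> bool:
    return rhyme_cosine(w1, w2) >= threshold

# The baseline misses pairs like ("weigh", "way") that share no word-ending
# characters, which is exactly where the character-level model helps.
assert rhyme_bl("sky", "high") is False      # "sky" vs "igh"
assert rhyme_bl("blood", "stood") is True    # both reduce to "od" (a CMU non-rhyme)
```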
5.2 Generation Evaluation

5.2.1 Crowdworker Evaluation

Following Hopkins and Kiela (2017), we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem. Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (lower values indicate better results for the model). We generate 50 quatrains each for LM, LM∗∗ and LM∗∗+PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch. An equal number of human-written quatrains was sampled from the training partition. A HIT contained 5 pairs of poems (of which one was a control), and workers were paid $0.05 for each HIT. Workers who failed to reliably identify the human-written poem in the control pair (minimum accuracy = 70%) were removed by CrowdFlower automatically, and each worker was restricted to a maximum of 3 HITs. To dissuade workers from using search engines to identify real poems, we presented the quatrains as images.

Model          Accuracy
LM             0.742
LM∗∗           0.672
LM∗∗+PM+RM     0.532
LM∗∗+RM        0.532

Table 4: Crowdworker accuracy performance.

Accuracy is presented in Table 4. We see a steady decrease in accuracy (= improvement in model quality) from LM to LM∗∗ to LM∗∗+PM+RM, indicating that each model generates quatrains that are less distinguishable from human-written ones. Based on the suspicion that workers were using rhyme to judge the poems, we tested a second model, LM∗∗+RM, which is the full model without the pentameter component. We found identical accuracy (0.532), confirming our suspicion that crowd workers rely only on rhyme in their judgements. These observations demonstrate that meter is largely ignored by lay persons in poetry evaluation.

5.2.2 Expert Judgement

To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e. the amount of emotion the poem evokes). All are rated on an ordinal scale from 1 to 5 (1 = worst; 5 = best). In total, 120 quatrains were annotated, 30 each for LM, LM∗∗, LM∗∗+PM+RM, and human-written poems (Human). The expert was blind to the source of each poem.

Model          Meter        Rhyme        Read.        Emotion
LM             4.00±0.73    1.57±0.67    2.77±0.67    2.73±0.51
LM∗∗           4.07±1.03    1.53±0.88    3.10±1.04    2.93±0.93
LM∗∗+PM+RM     4.10±0.91    4.43±0.56    2.70±0.69    2.90±0.79
Human          3.87±1.12    4.10±1.35    4.80±0.48    4.37±0.71

Table 5: Expert mean and standard deviation ratings on several aspects of the generated quatrains.

The mean and standard deviation of the ratings are presented in Table 5. We found that our full model has the highest ratings for both rhyme and meter, even higher than the human poets. This might seem surprising, but it is in fact well established that real poets regularly break the rules of form to create other effects (Adams, 1997). Despite its excellent form, the output of our model can easily be distinguished from human-written poetry due to its lower emotional impact and readability. In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models. Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only a marginal benefit as we increase the sophistication of the model. Taken as a whole, this evaluation suggests that future research should look beyond form, towards the substance of good poetry.

6 Conclusion

We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets. We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowd workers and a literature expert. Our research reveals that a vanilla LSTM language model captures meter implicitly, and that our proposed rhyme model performs exceptionally well. Machine-generated poems, however, still underperform in terms of readability and emotion.

References

Stephen Adams. 1997.
Poetic designs: An introduction to meters, verse forms, and figures of speech. Broadview Press. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations, San Diego, USA. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python — Analyzing Text with the Natural Language Toolkit. O’Reilly Media, Sebastopol, USA. Julian Brooke, Adam Hammond, and Graeme Hirst. 2015. GutenTag: An NLP-driven tool for digital humanities research in the Project Gutenberg corpus. In Proceedings of the 4nd Workshop on Computational Literature for Literature (CLFL ’15). Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties 1957 of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111, Doha, Qatar. Keunwoo Choi, George Fazekas, and Mark Sandler. 2016. Text-based LSTM networks for automatic music composition. In Proceedings of the 1st Conference on Computer Simulation of Musical Creativity, Huddersfield, UK. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS Deep Learning and Representation Learning Workshop, pages 103–111, Montreal, Canada. Simon Colton, Jacob Goodwin, and Tony Veale. 2012. Full face poetry generation. In Proceedings of the Third International Conference on Computational Creativity, pages 95–102. Luka Crnkovic-Friis and Louise Crnkovic-Friis. 2016. Generative choreography using deep learning. In Proceedings of the 7th International Conference on Computational Creativity, Paris, France. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159. Pablo Gerv´as. 2000. Wasp: Evaluation of different strategies for the automatic generation of spanish verse. In Proceedings of the AISB-00 Symposium on Creative & Cultural Aspects of AI, pages 93–100. Marjan Ghazvininejad, Xing Shi, Yejin Choi, and Kevin Knight. 2016. Generating topical poetry. pages 1183–1191, Austin, Texas. Erica Greene, Tugba Bodrumlu, and Kevin Knight. 2010. Automatic analysis of rhythmic poetry with applications to generation and translation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP 2010), pages 524–533, Massachusetts, USA. Jack Hopkins and Douwe Kiela. 2017. Automatically generating rhythmic verse with neural networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), pages 168–178, Vancouver, Canada. Eric J. Humphrey, Juan P. Bello, and Yann LeCun. 2013. Feature learning and deep architectures: new directions for music informatics. Journal of Intelligent Information Systems, 41(3):461–481. Hakan Inan, Khashayar Khosravi, and Richard Socher. 2016. Tying word vectors and word classifiers: A loss framework for language modeling. CoRR, abs/1611.01462. Y. Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 1746– 1751, Doha, Qatar. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. 
Joel Lehman, Sebastian Risi, and Jeff Clune. 2016. Creative generation of 3D objects with deep learning and innovation engines. In Proceedings of the 7th International Conference on Computational Creativity, Paris, France. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proceedings of Workshop at the International Conference on Learning Representations, 2013, Scottsdale, USA. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Yael Netzer, David Gabay, Yoav Goldberg, and Michael Elhadad. 2009. Gaiku: Generating haiku with word associations norms. In Proceedings of the Workshop on Computational Approaches to Linguistic Creativity, pages 32–39. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. CoRR, abs/1705.04304. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of the 15th Conference of the EACL (EACL 2017), pages 157–163, Valencia, Spain. Sravana Reddy and Kevin Knight. 2011. Unsupervised discovery of rhyme schemes. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL HLT 2011), pages 77–82, Portland, Oregon, USA. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), pages 1073–1083, Vancouver, Canada. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. 1958 Bob L. Sturm, Jo ao Felipe Santos, Oded Ben-Tal, and Iryna Korshunova. 2016. Music transcription modelling and composition using deep learning. In Proceedings of the 1st Conference on Computer Simulation of Musical Creativity, Huddersfield, UK. Jukka M. Toivanen, Matti J¨arvisalo, and Hannu Toivonen. 2013. Harnessing constraint programming for poetry composition. In Proceedings of the Fourth International Conference on Computational Creativity, pages 160–160. Qixin Wang, Tianyi Luo, Dong Wang, and Chao Xing. 2016. Chinese song iambics generation with neural attention-based model. In Proceedings of the 25nd International Joint Conference on Artificial Intelligence (IJCAI-2016), pages 2943–2949, New York, USA. Zhe Wang and Xiangyang Xue. 2014. In Support Vector Machines Applications, pages 23–48. Springer. Xiaofeng Wu, Naoko Tosa, and Ryohei Nakatsu. 2009. Newhitch haiku: An interactive renku poem composition supporting tool applied for sightseeing navigation system. Entertainment Computing-ICEC 2009, pages 191–196. Xingxing Zhang and Mirella Lapata. 2014. Chinese poetry generation with recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 670–680, Doha, Qatar. Qingyu Zhou, Nan Yang, Furu Wei, and Ming Zhou. 2017. Selective encoding for abstractive sentence summarization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), pages 1095–1104, Vancouver, Canada.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1959–1969 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1959 NeuralREG: An end-to-end approach to referring expression generation Thiago Castro Ferreira1 Diego Moussallem2,3 ´Akos K´ad´ar1 Sander Wubben1 Emiel Krahmer1 1Tilburg center for Cognition and Communication (TiCC), Tilburg University, The Netherlands 2AKSW Research Group, University of Leipzig, Germany 3Data Science Group, University of Paderborn, Germany {tcastrof,a.kadar,s.wubben,e.j.krahmer}@tilburguniversity.edu [email protected] Abstract Traditionally, Referring Expression Generation (REG) models first decide on the form and then on the content of references to discourse entities in text, typically relying on features such as salience and grammatical function. In this paper, we present a new approach (NeuralREG), relying on deep neural networks, which makes decisions about form and content in one go without explicit feature extraction. Using a delexicalized version of the WebNLG corpus, we show that the neural model substantially improves over two strong baselines. Data and models are publicly available1. 1 Introduction Natural Language Generation (NLG) is the task of automatically converting non-linguistic data into coherent natural language text (Reiter and Dale, 2000; Gatt and Krahmer, 2018). Since the input data will often consist of entities and the relations between them, generating references for these entities is a core task in many NLG systems (Dale and Reiter, 1995; Krahmer and van Deemter, 2012). Referring Expression Generation (REG), the task responsible for generating these references, is typically presented as a twostep procedure. First, the referential form needs to be decided, asking whether a reference at a given point in the text should assume the form of, for example, a proper name (“Frida Kahlo”), a pronoun (“she”) or description (“the Mexican painter”). In addition, the REG model must account for the different ways in which a particular referential form can be realized. For example, both “Frida” and 1https://github.com/ThiagoCF05/ NeuralREG “Kahlo” are name-variants that may occur in a text, and she can alternatively also be described as, say, “the famous female painter”. Most of the earlier REG approaches focus either on selecting referential form (Orita et al., 2015; Castro Ferreira et al., 2016), or on selecting referential content, typically zooming in on one specific kind of reference such as a pronoun (e.g., Henschel et al., 2000; Callaway and Lester, 2002), definite description (e.g., Dale and Haddock, 1991; Dale and Reiter, 1995) or proper name generation (e.g., Siddharthan et al., 2011; van Deemter, 2016; Castro Ferreira et al., 2017b). Instead, in this paper, we propose NeuralREG: an end-to-end approach addressing the full REG task, which given a number of entities in a text, produces corresponding referring expressions, simultaneously selecting both form and content. Our approach is based on neural networks which generate referring expressions to discourse entities relying on the surrounding linguistic context, without the use of any feature extraction technique. Besides its use in traditional pipeline NLG systems (Reiter and Dale, 2000), REG has also become relevant in modern “end-to-end” NLG approaches, which perform the task in a more integrated manner (see e.g. Konstas et al., 2017; Gardent et al., 2017b). 
Some of these approaches have recently focused on inputs which references to entities are delexicalized to general tags (e.g., ENTITY-1, ENTITY-2) in order to decrease data sparsity. Based on the delexicalized input, the model generates outputs which may be likened to templates in which references to the discourse entities are not realized (as in “The ground of ENTITY-1 is located in ENTITY-2.”). While our approach, dubbed as NeuralREG, is compatible with different applications of REG models, in this paper, we concentrate on the last one, relying on a specifically constructed set of 1960 78,901 referring expressions to 1,501 entities in the context of the semantic web, derived from a (delexicalized) version of the WebNLG corpus (Gardent et al., 2017a,b). Both this data set and the model will be made publicly available. We compare NeuralREG against two baselines in an automatic and human evaluation, showing that the integrated neural model is a marked improvement. 2 Related work In recent years, we have seen a surge of interest in using (deep) neural networks for a wide range of NLG-related tasks, as the generation of (first sentences of) Wikipedia entries (Lebret et al., 2016), poetry (Zhang and Lapata, 2014), and texts from abstract meaning representations (e.g., Konstas et al., 2017; Castro Ferreira et al., 2017a). However, the usage of deep neural networks for REG has remained limited and we are not aware of any other integrated, end-to-end model for generating referring expressions in discourse. There is, however, a lot of earlier work on selecting the form and content of referring expressions, both in psycholinguistics and in computational linguistics. In psycholinguistic models of reference, various linguistic factors have been proposed as influencing the form of referential expressions, including cognitive status (Gundel et al., 1993), centering (Grosz et al., 1995) and information density (Jaeger, 2010). In models such as these, notions like salience play a central role, where it is assumed that entities which are salient in the discourse are more likely to be referred to using shorter referring expressions (like a pronoun) than less salient entities, which are typically referred to using longer expressions (like full proper names). Building on these ideas, many REG models for generating references in texts also strongly rely on the concept of salience and factors contributing to it. Reiter and Dale (2000) for instance, discussed a straightforward rule-based method based on this notion, stating that full proper names can be used for initial references, typically less salient than subsequent references, which, according to the study, can be realized by a pronoun in case there is no mention to any other entity of same person, gender and number between the reference and its antecedents. More recently, Castro Ferreira et al. (2016) proposed a data-driven, non-deterministic model for generating referential forms, taking into account salience features extracted from the discourse such as grammatical position, givenness and recency of the reference. Importantly, these models do not specify which contents a particular reference, be it a proper name or description, should have. To this end, separate models are typically used, including, for example, Dale and Reiter (1995) for generating descriptions, and Siddharthan et al. (2011); van Deemter (2016) for proper names. Of course, when texts are generated in practical settings, both form and content need to be chosen. 
This was the case, for instance, in the GREC shared task (Belz et al., 2010), which aimed to evaluate models for automatically generated referring expressions grounded in discourse. The input for the models were texts in which the referring expressions to the topic of the relevant Wikipedia entry were removed and appropriate references throughout the text needed to be generated (by selecting, for each gap, from a list of candidate referring expressions of different forms and with different contents). Some participating systems approached this with traditional pipelines for selecting referential form, followed by referential content, while others proposed more integrated methods. More details about the models can be seen on Belz et al. (2010). In sum, existing REG models for text generation strongly rely on abstract features such as the salience of a referent for deciding on the form or content of a referent. Typically, these features are extracted automatically from the context, and engineering relevant ones can be complex. Moreover, many of these models only address part of the problem, either concentrating on the choice of referential form or on deciding on the contents of, for example, proper names or definite descriptions. In contrast, we introduce NeuralREG, an end-to-end approach based on neural networks which generates referring expressions to discourse entities directly from a delexicalized/wikified text fragment, without the use of any feature extraction technique. Below we describe our model in more detail, as well as the data on which we develop and evaluate it. 3 Data and processing 3.1 WebNLG corpus Our data is based on the WebNLG corpus (Gardent et al., 2017a), which is a parallel resource ini1961 Subject Predicate Object 108 St Georges Terrace location Perth Perth country Australia 108 St Georges Terrace completionDate 1988@year 108 St Georges Terrace cost 120 million (Australian dollars)@USD 108 St Georges Terrace floorCount 50@Integer ↓ 108 St Georges Terrace was completed in 1988 in Perth, Australia. It has a total of 50 floors and cost 120m Australian dollars. Figure 1: Example of a set of triples (top) and corresponding text (bottom). tially released for the eponymous NLG challenge. In this challenge, participants had to automatically convert non-linguistic data from the Semantic Web into a textual format (Gardent et al., 2017b). The source side of the corpus are sets of Resource Description Framework (RDF) triples. Each RDF triple is formed by a Subject, Predicate and Object, where the Subject and Object are constants or Wikipedia entities, and predicates represent a relation between these two elements in the triple. The target side contains English texts, obtained by crowdsourcing, which describe the source triples. Figure 1 depicts an example of a set of 5 RDF triples and the corresponding text. The corpus consists of 25,298 texts describing 9,674 sets of up to 7 RDF triples (an average of 2.62 texts per set) in 15 domains (Gardent et al., 2017b). In order to be able to train and evaluate our models for referring expression generation (the topic of this study), we produced a delexicalized version of the original corpus. 3.2 Delexicalized WebNLG We delexicalized the training and development parts of the WebNLG corpus by first automatically mapping each entity in the source representation to a general tag. All entities that appear on the left and right side of the triples were mapped to AGENTs and PATIENTs, respectively. 
Entities which appear on both sides in the relations of a set were represented as BRIDGEs. To distinguish different AGENTs, PATIENTs and BRIDGEs in a set, an ID was given to each entity of each kind (PATIENT-1, PATIENT-2, etc.). Once all entities in the text were mapped to different roles, the first two authors of this study manually replaced the referring expressions in the original target texts by their respective tags. Figure 2 shows the entity mapping and the delexicalized template for the example in Figure 1 in its versions representing the references with general tags and Wikipedia IDs. We delexicalized 20,198 distinct texts describing 7,812 distinct sets of RDF triples, resulting in 16,628 distinct templates. While this dataset (which we make available) has various uses, we used it to extract a collection of referring expressions to Wikipedia entities in order to evaluate how well our REG model can produce references to entities throughout a (small) text. 3.3 Referring expression collection Using the delexicalized version of the WebNLG corpus, we automatically extracted all referring expressions by tokenizing the original and delexicalized versions of the texts and then finding the non overlapping items. For instance, by processing the text in Figure 1 and its delexicalized template in Figure 2, we would extract referring expressions like “108 St Georges Terrace” and “It” to ⟨AGENT-1, 108 St Georges Terrace ⟩, “Perth” to ⟨BRIDGE-1, Perth ⟩, “Australia” to ⟨PATIENT1, Australia ⟩and so on. Once all texts were processed and the referring expressions extracted, we filtered only the ones referring to Wikipedia entities, removing references to constants like dates and numbers, for which no references are generated by the model. In total, the final version of our dataset contains 78,901 referring expressions to 1,501 Wikipedia entities, in which 71.4% (56,321) are proper names, 5.6% (4,467) pronouns, 22.6% (17,795) descriptions and 0.4% (318) demonstrative referring expressions. We split this collection in training, developing and test sets, totaling 63,061, 7,097 and 8,743 referring expressions in each one of them. Each instance of the final dataset consists of a truecased tokenized referring expression, the target entity (distinguished by its Wikipedia ID), and the discourse context preceding and following the relevant reference (we refer to these as the pre- and pos-context). Pre- and pos-contexts are the lowercased, tokenized and delexicalized 1962 Tag Entity AGENT-1 108 St Georges Terrace BRIDGE-1 Perth PATIENT-1 Australia PATIENT-2 1988@year PATIENT-3 “120 million (Australian dollars)”@USD PATIENT-4 50@Integer AGENT-1 was completed in PATIENT-2 in BRIDGE-1 , PATIENT-1 . AGENT-1 has a total of PATIENT-4 floors and cost PATIENT-3 . ↓W iki 108 St Georges Terrace was completed in 1988 in Perth , Australia . 108 St Georges Terrace has a total of 50 floors and cost 20 million (Australian dollars) . Figure 2: Mapping between tags and entities for the related delexicalized/wikified templates. pieces of text before and after the target reference. References to other discourse entities in the pre- and pos-contexts are represented by their Wikipedia ID, whereas constants (numbers, dates) are represented by a one-word ID removing quotes and replacing white spaces with underscores (e.g., 120 million (Australian dollars) for “120 million (Australian dollars)” in Figure 2). 
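As a rough illustration of the extraction step described earlier in this section, the sketch below greedily aligns a tokenized original text with its delexicalized template and collects the original tokens covered by each entity tag. It is a simplification of the actual corpus-construction procedure and breaks down if a reference happens to contain the token that follows its tag in the template.

```python
# Simplified greedy alignment between an original text and its delexicalized
# template, collecting the tokens covered by each AGENT/PATIENT/BRIDGE tag.
from typing import List, Tuple

def extract_references(original: List[str], template: List[str]) -> List[Tuple[str, List[str]]]:
    refs = []
    i = 0                                    # position in the original text
    for j, tok in enumerate(template):
        if tok.startswith(("AGENT-", "PATIENT-", "BRIDGE-")):
            # the reference spans original tokens until the next template token matches
            nxt = template[j + 1] if j + 1 < len(template) else None
            span = []
            while i < len(original) and original[i] != nxt:
                span.append(original[i])
                i += 1
            refs.append((tok, span))
        else:
            i += 1                           # tokens shared by both versions
    return refs

original = "108 St Georges Terrace was completed in 1988 in Perth , Australia .".split()
template = "AGENT-1 was completed in PATIENT-2 in BRIDGE-1 , PATIENT-1 .".split()
print(extract_references(original, template))
# [('AGENT-1', ['108', 'St', 'Georges', 'Terrace']), ('PATIENT-2', ['1988']),
#  ('BRIDGE-1', ['Perth']), ('PATIENT-1', ['Australia'])]
```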
Although the references to discourse entities are represented by general tags in a delexicalized template produced in the generation process (AGENT-1, BRIDGE-1, etc.), for the purpose of disambiguation, NeuralREG's inputs have the references represented by the Wikipedia ID of their entities. In this context, it is important to observe that the conversion of the general tags to the Wikipedia IDs can be done in constant time during the generation process, since their mapping, like the first representation in Figure 2, is the first step of the process. In the next section, we show in detail how NeuralREG models the problem of generating a referring expression to a discourse entity.

4 NeuralREG

NeuralREG aims to generate a referring expression $y = \{y_1, y_2, ..., y_T\}$ with $T$ tokens to refer to a target entity token $x^{(wiki)}$ given a discourse pre-context $X^{(pre)} = \{x^{(pre)}_1, x^{(pre)}_2, ..., x^{(pre)}_m\}$ and pos-context $X^{(pos)} = \{x^{(pos)}_1, x^{(pos)}_2, ..., x^{(pos)}_l\}$ with $m$ and $l$ tokens, respectively. The model is implemented as a multi-encoder, attention-decoder network with bidirectional (Schuster and Paliwal, 1997) Long Short-Term Memory layers (LSTM) (Hochreiter and Schmidhuber, 1997) sharing the same input word-embedding matrix $V$, as explained further.

4.1 Context encoders

Our model starts by encoding the pre- and pos-contexts with two separate bidirectional LSTM encoders (Schuster and Paliwal, 1997; Hochreiter and Schmidhuber, 1997). These modules learn feature representations of the text surrounding the target entity $x^{(wiki)}$, which are used for the referring expression generation. The pre-context $X^{(pre)}$ is represented by forward and backward hidden-state vectors $(\overrightarrow{h}^{(pre)}_1, \cdots, \overrightarrow{h}^{(pre)}_m)$ and $(\overleftarrow{h}^{(pre)}_1, \cdots, \overleftarrow{h}^{(pre)}_m)$. The final annotation vector for each encoding timestep $t$ is obtained by the concatenation of the forward and backward representations, $h^{(pre)}_t = [\overrightarrow{h}^{(pre)}_t, \overleftarrow{h}^{(pre)}_t]$. The same process is repeated for the pos-context, resulting in representations $(\overrightarrow{h}^{(pos)}_1, \cdots, \overrightarrow{h}^{(pos)}_l)$ and $(\overleftarrow{h}^{(pos)}_1, \cdots, \overleftarrow{h}^{(pos)}_l)$ and annotation vectors $h^{(pos)}_t = [\overrightarrow{h}^{(pos)}_t, \overleftarrow{h}^{(pos)}_t]$. Finally, the encoding of the target entity $x^{(wiki)}$ is simply its entry in the shared input word-embedding matrix, $V_{wiki}$.

4.2 Decoder

The referring expression generation module is an LSTM decoder implemented in 3 different versions: Seq2Seq, CAtt and HierAtt. At each timestep $i$ of the generation process, all decoders take as input features their previous state $s_{i-1}$, the target entity-embedding $V_{wiki}$, the embedding of the previous word of the referring expression $V_{y_{i-1}}$ and finally the summary vector of the pre- and pos-contexts $c_i$. The difference between the decoder variations is the method to compute $c_i$.

Seq2Seq models the context vector $c_i$ at each timestep $i$ by concatenating the pre- and pos-context annotation vectors averaged over time:

$\hat{h}^{(pre)} = \frac{1}{N} \sum_{i}^{N} h^{(pre)}_i$   (1)

$\hat{h}^{(pos)} = \frac{1}{N} \sum_{i}^{N} h^{(pos)}_i$   (2)

$c_i = [\hat{h}^{(pre)}, \hat{h}^{(pos)}]$   (3)
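A minimal PyTorch sketch of the context encoders and the Seq2Seq context vector of Equations 1–3 is given below. This is an illustrative re-implementation rather than the authors' DyNet code; the module and variable names are ours, and the dimensions follow the settings reported later in the paper (300D embeddings, 512D hidden units per direction).

```python
# Illustrative PyTorch sketch of the shared-embedding bidirectional encoders and
# the Seq2Seq context vector (Equations 1-3); not the authors' implementation.
import torch
import torch.nn as nn

class ContextEncoders(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 300, hidden_dim: int = 512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)   # shared matrix V
        self.pre_enc = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.pos_enc = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)

    def forward(self, pre_ids, pos_ids, wiki_id):
        # each output step is already the [forward; backward] concatenation h_t
        h_pre, _ = self.pre_enc(self.embedding(pre_ids))      # (B, m, 2*hidden)
        h_pos, _ = self.pos_enc(self.embedding(pos_ids))      # (B, l, 2*hidden)
        v_wiki = self.embedding(wiki_id)                      # (B, emb_dim)
        return h_pre, h_pos, v_wiki

def seq2seq_context(h_pre, h_pos):
    """Equations 1-3: average each context over time, then concatenate."""
    return torch.cat([h_pre.mean(dim=1), h_pos.mean(dim=1)], dim=-1)

# Example usage with dummy token ids
enc = ContextEncoders(vocab_size=1000)
pre = torch.randint(0, 1000, (2, 7))       # batch of 2, pre-context of 7 tokens
pos = torch.randint(0, 1000, (2, 5))
wiki = torch.randint(0, 1000, (2,))
h_pre, h_pos, v_wiki = enc(pre, pos, wiki)
c = seq2seq_context(h_pre, h_pos)          # shape (2, 4*512)
```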
CAtt is an LSTM decoder augmented with an attention mechanism (Bahdanau et al., 2015) over the pre- and pos-context encodings, which is used to compute $c_i$ at each timestep. We compute energies $e^{(pre)}_{ij}$ and $e^{(pos)}_{ij}$ between the encoder states $h^{(pre)}_j$ and $h^{(pos)}_j$ and the decoder state $s_{i-1}$. These scores are normalized through the application of the softmax function to obtain the final attention probabilities $\alpha^{(pre)}_{ij}$ and $\alpha^{(pos)}_{ij}$. Equations 4 and 5 summarize the process, with $k$ ranging over the two encoders ($k \in [pre, pos]$), where the projection matrices $W^{(k)}_a$ and $U^{(k)}_a$ and the attention vectors $v^{(k)}_a$ are trained parameters.

$e^{(k)}_{ij} = v^{(k)\top}_a \tanh(W^{(k)}_a s_{i-1} + U^{(k)}_a h^{(k)}_j)$   (4)

$\alpha^{(k)}_{ij} = \frac{\exp(e^{(k)}_{ij})}{\sum_{n=1}^{N} \exp(e^{(k)}_{in})}$   (5)

In general, the attention probability $\alpha^{(k)}_{ij}$ determines the amount of contribution of the $j$th token of the $k$-context in the generation of the $i$th token of the referring expression. In each decoding step $i$, a final summary vector for each context $c^{(k)}_i$ is computed by summing the encoder states $h^{(k)}_j$ weighted by the attention probabilities $\alpha^{(k)}_{ij}$:

$c^{(k)}_i = \sum_{j=1}^{N} \alpha^{(k)}_{ij} h^{(k)}_j$   (6)

To combine $c^{(pre)}_i$ and $c^{(pos)}_i$ into a single representation, this model simply concatenates the pre- and pos-context summary vectors, $c_i = [c^{(pre)}_i, c^{(pos)}_i]$.

HierAtt implements a second attention mechanism inspired by Libovický and Helcl (2017) in order to generate attention weights for the pre- and pos-context summary vectors $c^{(pre)}_i$ and $c^{(pos)}_i$ instead of concatenating them. Equations 7, 8 and 9 depict the process, where the projection matrices $W^{(k)}_b$ and $U^{(k)}_b$ as well as the attention vectors $v^{(k)}_b$ are trained parameters ($k \in [pre, pos]$).

$e^{(k)}_i = v^{(k)\top}_b \tanh(W^{(k)}_b s_{i-1} + U^{(k)}_b c^{(k)}_i)$   (7)

$\beta^{(k)}_i = \frac{\exp(e^{(k)}_i)}{\sum_n \exp(e^{(n)}_i)}$   (8)

$c_i = \sum_k \beta^{(k)}_i U^{(k)}_b c^{(k)}_i$   (9)

Decoding Given the summary vector $c_i$, the embedding of the previous referring expression token $V_{y_{i-1}}$, the previous decoder state $s_{i-1}$ and the entity embedding $V_{wiki}$, the decoders predict their next state, which is then used to compute a probability distribution over the tokens in the output vocabulary for the next timestep, as Equations 10 and 11 show.

$s_i = \Phi_{dec}(s_{i-1}, [c_i, V_{y_{i-1}}, V_{wiki}])$   (10)

$p(y_i \mid y_{<i}, X^{(pre)}, x^{(wiki)}, X^{(pos)}) = \mathrm{softmax}(W_c s_i + b)$   (11)

In Equation 10, $s_0$ and $c_0$ are zero-initialized vectors. In order to find the referring expression $y$ that maximizes the likelihood in Equation 11, we apply beam search with length normalization with $\alpha = 0.6$ (Wu et al., 2016):

$lp(y) = \frac{(5 + |y|)^\alpha}{(5 + 1)^\alpha}$   (12)

The decoder is trained to minimize the negative log-likelihood of the next token in the target referring expression:

$J(\theta) = -\sum_i \log p(y_i \mid y_{<i}, X^{(pre)}, x^{(wiki)}, X^{(pos)})$   (13)

5 Models for Comparison

We compared the performance of NeuralREG against two baselines: OnlyNames and a model based on the choice of referential form method of Castro Ferreira et al. (2016), dubbed Ferreira.

OnlyNames is motivated by the similarity between the Wikipedia ID of an element and a proper name reference to it. This method refers to each entity by its Wikipedia ID, replacing each underscore in the ID with a whitespace (e.g., Appleton_International_Airport to "Appleton International Airport").

Ferreira works by first choosing whether a reference should be a proper name, pronoun, description or demonstrative. The choice is made by a Naive Bayes method, as Equation 14 depicts:

$P(f \mid X) \propto \frac{P(f) \prod_{x \in X} P(x \mid f)}{\sum_{f' \in F} P(f') \prod_{x \in X} P(x \mid f')}$   (14)

The method calculates the likelihood of each referential form $f$ given a set of features $X$, consisting of grammatical position and information status (new or given in the text and sentence). Once the choice of referential form is made, the most frequent variant is chosen in the training corpus given the referent, syntactic position and information status.
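The form-choice step of the Ferreira baseline (Equation 14) can be illustrated with a small add-one-smoothed Naive Bayes sketch; the feature names, counts and smoothing details below are hypothetical, and in the actual baseline the probabilities are estimated from the training corpus.

```python
# Toy Naive Bayes sketch of the referential-form choice in Equation 14.
# Counts, feature names and the smoothing scheme are illustrative assumptions.
from collections import Counter, defaultdict

FORMS = ["name", "pronoun", "description", "demonstrative"]

def train_nb(examples):
    """examples: list of (features_dict, form) pairs."""
    form_counts = Counter()
    feat_counts = defaultdict(Counter)        # (feature, value) -> per-form counts
    for feats, form in examples:
        form_counts[form] += 1
        for key, val in feats.items():
            feat_counts[(key, val)][form] += 1
    return form_counts, feat_counts

def choose_form(feats, form_counts, feat_counts, alpha=1.0):
    total = sum(form_counts.values())
    scores = {}
    for f in FORMS:
        score = (form_counts[f] + alpha) / (total + alpha * len(FORMS))   # P(f)
        for key, val in feats.items():
            num = feat_counts[(key, val)][f] + alpha      # P(x|f), add-one smoothed
            den = form_counts[f] + alpha * 2              # assuming binary feature values
            score *= num / den
        scores[f] = score
    # the denominator of Equation 14 is constant across forms, so argmax is unaffected
    return max(scores, key=scores.get)

examples = [({"syntax": "subject", "text_status": "new"}, "name"),
            ({"syntax": "subject", "text_status": "given"}, "pronoun"),
            ({"syntax": "object", "text_status": "given"}, "description")]
fc, xc = train_nb(examples)
print(choose_form({"syntax": "subject", "text_status": "given"}, fc, xc))  # -> "pronoun"
```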
In case a referring expression for a wiki target is not found in this way, a backoff method is applied by removing one factor at a time in the following order: sentence information status, text information status and grammatical position. Finally, if a referring expression is not found in the training set for a given entity, the same method as OnlyNames is used. Regarding the features, syntactic position distinguishes whether a reference is the subject, object or subject determiner (genitive) in a sentence. Text and sentence information statuses mark whether a reference is a initial or a subsequent mention to an entity in the text and the sentence, respectively. All features were extracted automatically from the texts using the sentence tokenizer and dependency parser of Stanford CoreNLP (Manning et al., 2014). 6 Automatic evaluation Data We evaluated our models on the training, development and test referring expression sets described in Section 3.3. Metrics We compared the referring expressions produced by the evaluated models with the goldstandards ones using accuracy and String Edit Distance (Levenshtein, 1966). Since pronouns are highlighted as the most likely referential form to be used when a referent is salient in the discourse, as argued in the introduction, we also computed pronoun accuracy, precision, recall and F1-score in order to evaluate the performance of the models for capturing discourse salience. Finally, we lexicalized the original templates with the referring expressions produced by the models and compared them with the original texts in the corpus using accuracy and BLEU score (Papineni et al., 2002) as a measure of fluency. Since our model does not handle referring expressions for constants (dates and numbers), we just copied their source version into the template. Post-hoc McNemar’s and Wilcoxon signed ranked tests adjusted by the Bonferroni method were used to test the statistical significance of the models in terms of accuracy and string edit distance, respectively. To test the statistical significance of the BLEU scores of the models, we used a bootstrap resampling together with an approximate randomization method (Clark et al., 2011)2. Settings NeuralREG was implemented using Dynet (Neubig et al., 2017). Source and target word embeddings were 300D each and trained jointly with the model, whereas hidden units were 512D for each direction, totaling 1024D in the bidirection layers. All non-recurrent matrices were initialized following the method of Glorot and Bengio (2010). Models were trained using stochastic gradient descent with Adadelta (Zeiler, 2012) and mini-batches of size 40. We ran each model for 60 epochs, applying early stopping for model selection based on accuracy on the development set with patience of 20 epochs. For each decoding version (Seq2Seq, CAtt and HierAtt), we searched for the best combination of drop-out probability of 0.2 or 0.3 in both the encoding and decoding layers, using beam search with a size of 1 or 5 with predictions up to 30 tokens or until 2 ending tokens were predicted (EOS). The results described in the next section were obtained on the test set by the NeuralREG version with the highest accuracy on the development set over the epochs. Results Table 1 summarizes the results for all models on all metrics on the test set and Table 2 depicts a text example lexicalized by each model. 
The first thing to note in the results of the first table is that the baselines in the top two rows performed quite strong on this task, generating more than half of the referring expressions exactly as in the goldstandard. The method based on Castro Ferreira et al. (2016) performed statistically better than OnlyNames on all metrics due to its capability, albeit to a limited extent, to predict pronominal references (which OnlyNames obviously cannot). We reported results on the test set for NeuralREG+Seq2Seq and NeuralREG+CAtt using 2https://github.com/jhclark/multeval 1965 All References Pronouns Text Acc. SED Acc. Prec. Rec. F-Score Acc. BLEU OnlyNames 0.53D 4.05D 0.15D 69.03D Ferreira 0.61C 3.18C 0.43B 0.57 0.54 0.55 0.19C 72.78C NeuralREG+Seq2Seq 0.74A,B 2.32A,B 0.75A 0.77 0.78 0.78 0.28B 79.27A,B NeuralREG+CAtt 0.74A 2.25A 0.75A 0.73 0.78 0.75 0.30A 79.39A NeuralREG+HierAtt 0.73B 2.36B 0.73A 0.74 0.77 0.75 0.28A,B 79.01B Table 1: (1) Accuracy (Acc.) and String Edit Distance (SED) results in the prediction of all referring expressions; (2) Accuracy (Acc.), Precision (Prec.), Recall (Rec.) and F-Score results in the prediction of pronominal forms; and (3) Accuracy (Acc.) and BLEU score results of the texts with the generated referring expressions. Rankings were determined by statistical significance. dropout probability 0.3 and beam size 5, and NeuralREG+HierAtt with dropout probability of 0.3 and beam size of 1 selected based on the highest accuracy on the development set. Importantly, the three NeuralREG variant models statistically outperformed the two baseline systems. They achieved BLEU scores, text and referential accuracies as well as string edit distances in the range of 79.01-79.39, 28%-30%, 73%-74% and 2.252.36, respectively. This means that NeuralREG predicted 3 out of 4 references completely correct, whereas the incorrect ones needed an average of 2 post-edition operations in character level to be equal to the gold-standard. When considering the texts lexicalized with the referring expressions produced by NeuralREG, at least 28% of them are similar to the original texts. Especially noteworthy was the score on pronoun accuracy, indicating that the model was well capable of predicting when to generate a pronominal reference in our dataset. The results for the different decoding methods for NeuralREG were similar, with the NeuralREG+CAtt performing slightly better in terms of the BLEU score, text accuracy and String Edit Distance. The more complex NeuralREG+HierAtt yielded the lowest results, even though the differences with the other two models were small and not even statistically significant in many of the cases. 7 Human Evaluation Complementary to the automatic evaluation, we performed an evaluation with human judges, comparing the quality judgments of the original texts to the versions generated by our various models. Material We quasi-randomly selected 24 instances from the delexicalized version of the WebNLG corpus related to the test part of the referring expression collection. For each of the selected instances, we took into account its source triple set and its 6 target texts: one original (randomly chosen) and its versions with the referring expressions generated by each of the 5 models introduced in this study (two baselines, three neural models). Instances were chosen following 2 criteria: the number of triples in the source set (ranging from 2 to 7) and the differences between the target texts. 
For each size group, we randomly selected 4 instances (of varying degrees of variation between the generated texts) giving rise to 144 trials (= 6 triple set sizes ∗4 instances ∗6 text versions), each consisting of a set of triples and a target text describing it with the lexicalized referring expressions highlighted in yellow. Method The experiment had a latin-square design, distributing the 144 trials over 6 different lists such that each participant rated 24 trials, one for each of the 24 corpus instances, making sure that participants saw equal numbers of triple set sizes and generated versions. Once introduced to a trial, the participants were asked to rate the fluency (“does the text flow in a natural, easy to read manner?”), grammaticality (“is the text grammatical (no spelling or grammatical errors)?”) and clarity (“does the text clearly express the data?”) of each target text on a 7-Likert scale, focussing on the highlighted referring expressions. The experiment is available on the website of the author3. Participants We recruited 60 participants, 10 per list, via Mechanical Turk. Their average age was 36 years and 27 of them were females. The majority declared themselves native speakers of 3https://ilk.uvt.nl/˜tcastrof/acl2018/ evaluation/ 1966 Model Text OnlyNames alan shepard was born in new hampshire on 1923-11-18 . before alan shepard death in california alan shepard had been awarded distinguished service medal (united states navy) an award higher than department of commerce gold medal . Ferreira alan shepard was born in new hampshire on 1923-11-18 . before alan shepard death in california him had been awarded distinguished service medal an award higher than department of commerce gold medal . Seq2Seq alan shepard was born in new hampshire on 1923-11-18 . before his death in california him had been awarded the distinguished service medal by the united states navy an award higher than the department of commerce gold medal . CAtt alan shepard was born in new hampshire on 1923-11-18 . before his death in california he had been awarded the distinguished service medal by the us navy an award higher than the department of commerce gold medal . HierAtt alan shephard was born in new hampshire on 1923-11-18 . before his death in california he had been awarded the distinguished service medal an award higher than the department of commerce gold medal . Original alan shepard was born in new hampshire on 18 november 1923 . before his death in california he had been awarded the distinguished service medal by the us navy an award higher than the department of commerce gold medal . Table 2: Example of text with references lexicalized by each model. Fluency Grammar Clarity OnlyNames 4.74C 4.68B 4.90B Ferreira 4.74C 4.58B 4.93B NeuralREG+Seq2Seq 4.95B,C 4.82A,B 4.97B NeuralREG+CAtt 5.23A,B 4.95A,B 5.26A,B NeuralREG+HierAtt 5.07B,C 4.90A,B 5.13A,B Original 5.41A 5.17A 5.42A Table 3: Fluency, Grammaticality and Clarity results obtained in the human evaluation. Rankings were determined by statistical significance. English (44), while 14 and 2 self-reported as fluent or having a basic proficiency, respectively. Results Table 3 summarizes the results. Inspection of the Table reveals a clear pattern: all three neural models scored higher than the baselines on all metrics, with especially NeuralREG+CAtt approaching the ratings for the original sentences, although – again – differences between the neural models were small. Concerning the size of the triple sets, we did not find any clear pattern. 
To test the statistical significance of the pairwise comparisons, we used the Wilcoxon signedrank test corrected for multiple comparisons using the Bonferroni method. Different from the automatic evaluation, the results of both baselines were not statistically significant for the three metrics. In comparison with the neural models, NeuralREG+CAtt significantly outperformed the baselines in terms of fluency, whereas the other comparisons between baselines and neural models were not statistically significant. The results for the 3 different decoding methods of NeuralREG also did not reveal a significant difference. Finally, the original texts were rated significantly higher than both baselines in terms of the three metrics, also than NeuralREG+Seq2Seq and NeuralREG+HierAtt in terms of fluency, and than NeuralREG+Seq2Seq in terms of clarity. 8 Discussion This study introduced NeuralREG, an end-to-end approach based on neural networks which tackles the full Referring Expression Generation process. It generates referring expressions for discourse entities by simultaneously selecting form and content without any need of feature extraction techniques. The model was implemented using an encoder-decoder approach where a target referent and its surrounding linguistic contexts were first encoded and combined into a single vector representation which subsequently was decoded into a referring expression to the target, suitable for the specific discourse context. In an automatic evaluation on a collection of 78,901 referring expressions to 1,501 Wikipedia entities, the different versions of the model all yielded better results than the two (competitive) baselines. Later in a complementary human evaluation, the texts with referring expressions generated by a variant of our novel model were considered statistically more fluent than the texts lexicalized by the two baselines. Data The collection of referring expressions used in our experiments was extracted from a novel, delexicalized and publicly available version 1967 of the WebNLG corpus (Gardent et al., 2017a,b), where the discourse entities were replaced with general tags for decreasing the data sparsity. Besides the REG task, these data can be useful for many other tasks related to, for instance, the NLG process (Reiter and Dale, 2000; Gatt and Krahmer, 2018) and Wikification (Moussallem et al., 2017). Baselines We introduced two strong baselines which generated roughly half of the referring expressions identical to the gold standard in an automatic evaluation. These baselines performed relatively well because they frequently generated full names, which occur often for our wikified references. However, they performed poorly when it came to pronominalization, which is an important ingredient for fluent, coherent text. OnlyNames, as the name already reveals, does not manage to generate any pronouns. However, the approach of Castro Ferreira et al. (2016) also did not perform well in the generation of pronouns, revealing a poor capacity to detect highly salient entities in a text. NeuralREG was implemented with 3 different decoding architectures: Seq2Seq, CAtt and HierAtt. Although all the versions performed relatively similar, the concatenativeattention (CAtt) version generated the closest referring expressions from the gold-standard ones and presented the highest textual accuracy in the automatic evaluation. 
The texts lexicalized by this variant were also considered statistically more fluent than the ones generated by the two proposed baselines in the human evaluation. Surprisingly, the most complex variant (HierAtt) with a hierarchical-attention mechanism gave lower results than CAtt, producing lexicalized texts which were rated as less fluent than the original ones and not significantly more fluent from the ones generated by the baselines. This result appears to be not consistent with the findings of Libovick´y and Helcl (2017), who reported better results on multi-modal machine translation with hierarchical-attention as opposed to the flat variants (Specia et al., 2016). Finally, our NeuralREG variant with the lowest results were our ‘vanilla’ sequence-to-sequence (Seq2Seq), whose the lexicalized texts were significantly less fluent and clear than the original ones. This shows the importance of the attention mechanism in the decoding step of NeuralREG in order to generate fine-grained referring expressions in discourse. Conclusion We introduced a deep learning model for the generation of referring expressions in discourse texts. NeuralREG decides both on referential form and on referential content in an integrated, end-to-end approach, without using explicit features. Using a new delexicalized version of the WebNLG corpus (made publicly available), we showed that the neural model substantially improves over two strong baselines in terms of accuracy of the referring expressions and fluency of the lexicalized texts. Acknowledgments This work has been supported by the National Council of Scientific and Technological Development from Brazil (CNPq) under the grants 203065/2014-0 and 206971/2014-1. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. Anja Belz, Eric Kow, Jette Viethen, and Albert Gatt. 2010. Generating referring expressions in context: The GREC task evaluation challenges. In Emiel Krahmer and Mari¨et Theune, editors, Empirical Methods in Natural Language Generation, pages 294–327. Springer-Verlag, Berlin, Heidelberg. Charles B. Callaway and James C. Lester. 2002. Pronominalization in generated discourse and dialogue. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL’02, pages 88–95, Philadelphia, Pennsylvania. Association for Computational Linguistics. Thiago Castro Ferreira, Iacer Calixto, Sander Wubben, and Emiel Krahmer. 2017a. Linguistic realisation as machine translation: Comparing different MT models for AMR-to-text generation. In Proceedings of the 10th International Conference on Natural Language Generation, INLG’17, pages 1–10, Santiago de Compostela, Spain. Association for Computational Linguistics. Thiago Castro Ferreira, Emiel Krahmer, and Sander Wubben. 2016. Towards more variation in text generation: Developing and evaluating variation models for choice of referential form. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL’16, pages 568— -577, Berlin, Germany. Association for Computational Linguistics. 1968 Thiago Castro Ferreira, Emiel Krahmer, and Sander Wubben. 2017b. Generating flexible proper name references in text: Data, models and evaluation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, EACL’17, pages 655–664, Valencia, Spain. Association for Computational Linguistics. Jonathan H. 
Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, ACL’11, pages 176–181, Portland, Oregon. Robert Dale and Nicholas Haddock. 1991. Generating referring expressions involving relations. In Proceedings of the fifth conference on European chapter of the Association for Computational Linguistics, EACL’91, pages 161–166, Berlin, Germany. Association for Computational Linguistics. Robert Dale and Ehud Reiter. 1995. Computational interpretations of the gricean maxims in the generation of referring expressions. Cognitive science, 19(2):233–263. Kees van Deemter. 2016. Designing algorithms for referring with proper names. In Proceedings of the 9th International Natural Language Generation conference, INLG’16, pages 31–35, Edinburgh, UK. Association for Computational Linguistics. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017a. Creating training corpora for NLG micro-planners. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL’17, pages 179–188, Vancouver, Canada. Association for Computational Linguistics. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017b. The WebNLG challenge: Generating text from RDF data. In Proceedings of the 10th International Conference on Natural Language Generation, INLG’17, pages 124–133, Santiago de Compostela, Spain. Association for Computational Linguistics. Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research, 61:65–170. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 249–256, Chia Laguna Resort, Sardinia, Italy. PMLR. Barbara J. Grosz, Scott Weinstein, and Aravind K. Joshi. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–225. Jeanette K Gundel, Nancy Hedberg, and Ron Zacharski. 1993. Cognitive status and the form of referring expressions in discourse. Language, pages 274–307. Renate Henschel, Hua Cheng, and Massimo Poesio. 2000. Pronominalization revisited. In Proceedings of the 18th Conference on Computational Linguistics - Volume 1, COLING’00, pages 306–312, Saarbr¨ucken, Germany. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. T Florian Jaeger. 2010. Redundancy and reduction: Speakers manage syntactic information density. Cognitive psychology, 61(1):23–62. Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: Sequence-to-sequence models for parsing and generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL’17, pages 146–157, Vancouver, Canada. Association for Computational Linguistics. Emiel Krahmer and Kees van Deemter. 2012. Computational generation of referring expressions: A survey. Computational Linguistics, 38(1):173–218. 
R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP’16, pages 1203–1213, Austin, Texas. Association for Computational Linguistics. V. I. Levenshtein. 1966. Binary Codes Capable of Correcting Deletions, Insertions and Reversals. Soviet Physics Doklady, 10:707. Jindˇrich Libovick´y and Jindˇrich Helcl. 2017. Attention strategies for multi-source sequence-to-sequence learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), ACL’17, pages 196–202, Vancouver, Canada. Association for Computational Linguistics. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. 1969 D. Moussallem, R. Usbeck, M. R¨oder, and A.-C. Ngonga Ngomo. 2017. MAG: A Multilingual, Knowledge-base Agnostic and Deterministic Entity Linking Approach. ArXiv e-prints. G. Neubig, C. Dyer, Y. Goldberg, A. Matthews, W. Ammar, A. Anastasopoulos, M. Ballesteros, D. Chiang, D. Clothiaux, T. Cohn, K. Duh, M. Faruqui, C. Gan, D. Garrette, Y. Ji, L. Kong, A. Kuncoro, G. Kumar, C. Malaviya, P. Michel, Y. Oda, M. Richardson, N. Saphra, S. Swayamdipta, and P. Yin. 2017. DyNet: The Dynamic Neural Network Toolkit. ArXiv e-prints. Naho Orita, Eliana Vornov, Naomi Feldman, and Hal Daum´e III. 2015. Why discourse affects speakers’ choice of referring expressions. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), ACL’15, pages 1639–1649, Beijing, China. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, ACL’02, pages 311– 318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge University Press, New York, NY, USA. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. Advaith Siddharthan, Ani Nenkova, and Kathleen McKeown. 2011. Information status distinctions and referring expressions: An empirical study of references to people in news summaries. Computational Linguistics, 37(4):811–842. Lucia Specia, Stella Frank, Khalil Sima’an, and Desmond Elliott. 2016. A shared task on multimodal machine translation and crosslingual image description. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 543–553, Berlin, Germany. Association for Computational Linguistics. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. 
Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144. Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. CoRR, abs/1212.5701. Xingxing Zhang and Mirella Lapata. 2014. Chinese poetry generation with recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP’14, pages 670–680, Doha, Qatar. Association for Computational Linguistics.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1970–1979 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1970 Stock Movement Prediction from Tweets and Historical Prices Yumo Xu and Shay B. Cohen School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB [email protected], [email protected] Abstract Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data. We treat these three complexities and present a novel deep generative model jointly exploiting text and price signals for this task. Unlike the case with discriminative or topic modeling, our model introduces recurrent, continuous latent variables for a better treatment of stochasticity, and uses neural variational inference to address the intractable posterior inference. We also provide a hybrid objective with temporal auxiliary to flexibly capture predictive dependencies. We demonstrate the stateof-the-art performance of our proposed model on a new stock movement prediction dataset which we collected.1 1 Introduction Stock movement prediction has long attracted both investors and researchers (Frankel, 1995; Edwards et al., 2007; Bollen et al., 2011; Hu et al., 2018). We present a model to predict stock price movement from tweets and historical stock prices. In natural language processing (NLP), public news and social media are two primary content resources for stock market prediction, and the models that use these sources are often discriminative. Among them, classic research relies heavily on feature engineering (Schumaker and Chen, 2009; Oliveira et al., 2013). With the prevalence of deep neural networks (Le and Mikolov, 2014), eventdriven approaches were studied with structured event representations (Ding et al., 2014, 2015). 1https://github.com/yumoxu/ stocknet-dataset More recently, Hu et al. (2018) propose to mine news sequence directly from text with hierarchical attention mechanisms for stock trend prediction. However, stock movement prediction is widely considered difficult due to the high stochasticity of the market: stock prices are largely driven by new information, resulting in a random-walk pattern (Malkiel, 1999). Instead of using only deterministic features, generative topic models were extended to jointly learn topics and sentiments for the task (Si et al., 2013; Nguyen and Shirai, 2015). Compared to discriminative models, generative models have the natural advantage in depicting the generative process from market information to stock signals and introducing randomness. However, these models underrepresent chaotic social texts with bag-of-words and employ simple discrete latent variables. In essence, stock movement prediction is a time series problem. The significance of the temporal dependency between movement predictions is not addressed in existing NLP research. For instance, when a company suffers from a major scandal on a trading day d1, generally, its stock price will have a downtrend in the coming trading days until day d2, i.e. [d1, d2].2 If a stock predictor can recognize this decline pattern, it is likely to benefit all the predictions of the movements during [d1, d2]. Otherwise, the accuracy in this interval might be harmed. This predictive dependency is a result of the fact that public information, e.g. 
a company scandal, needs time to be absorbed into movements over time (Luss and d’Aspremont, 2015), and thus is largely shared across temporally-close predictions. Aiming to tackle the above-mentioned outstanding research gaps in terms of modeling high market stochasticity, chaotic market information and temporally-dependent prediction, we propose 2We use the notation [a, b] to denote the interval of integer numbers between a and b. 1971 StockNet, a deep generative model for stock movement prediction. To better incorporate stochastic factors, we generate stock movements from latent driven factors modeled with recurrent, continuous latent variables. Motivated by Variational Auto-Encoders (VAEs; Kingma and Welling, 2013; Rezende et al., 2014), we propose a novel decoder with a variational architecture and derive a recurrent variational lower bound for end-to-end training (Section 5.2). To the best of our knowledge, StockNet is the first deep generative model for stock movement prediction. To fully exploit market information, StockNet directly learns from data without pre-extracting structured events. We build market sources by referring to both fundamental information, e.g. tweets, and technical features, e.g. historical stock prices (Section 5.1).3 To accurately depict predictive dependencies, we assume that the movement prediction for a stock can benefit from learning to predict its historical movements in a lag window. We propose trading-day alignment as the framework basis (Section 4), and further provide a novel multi-task learning objective (Section 5.3). We evaluate StockNet on a stock movement prediction task with a new dataset that we collected. Compared with strong baselines, our experiments show that StockNet achieves state-of-the-art performance by incorporating both data from Twitter and historical stock price listings. 2 Problem Formulation We aim at predicting the movement of a target stock s in a pre-selected stock collection S on a target trading day d. Formally, we use the market information comprising of relevant social media corpora M, i.e. tweets, and historical prices, in the lag [d −∆d, d −1] where ∆d is a fixed lag size. We estimate the binary movement where 1 denotes rise and 0 denotes fall, y = 1 pc d > pc d−1  (1) where pc d denotes the adjusted closing price adjusted for corporate actions affecting stock prices, e.g. dividends and splits.4 The adjusted closing 3To a fundamentalist, stocks have their intrinsic values that can be derived from the behavior and performance of their company. On the contrary, technical analysis considers only the trends and patterns of the stock price. 4 Technically, d −1 may not be an eligible trading day and thus has no available price information. In the rest of this price is widely used for predicting stock price movement (Xie et al., 2013) or financial volatility (Rekabsaz et al., 2017). 3 Data Collection In finance, stocks are categorized into 9 industries: Basic Materials, Consumer Goods, Healthcare, Services, Utilities, Conglomerates, Financial, Industrial Goods and Technology.5 Since high-tradevolume-stocks tend to be discussed more on Twitter, we select the two-year price movements from 01/01/2014 to 01/01/2016 of 88 stocks to target, coming from all the 8 stocks in Conglomerates and the top 10 stocks in capital size in each of the other 8 industries (see supplementary material). We observe that there are a number of targets with exceptionally minor movement ratios. 
In a three-way stock trend prediction task, a common practice is to categorize these movements to another “preserve” class by setting upper and lower thresholds on the stock price change (Hu et al., 2018). Since we aim at the binary classification of stock changes identifiable from social media, we set two particular thresholds, 0.5% and 0.55% and simply remove 38.72% of the selected targets with the movement percents between the two thresholds. Samples with the movement percents ≤-0.5% and >0.55% are labeled with 0 and 1, respectively. The two thresholds are selected to balance the two classes, resulting in 26,614 prediction targets in the whole dataset with 49.78% and 50.22% of them in the two classes. We split them temporally and 20,339 movements between 01/01/2014 and 01/08/2015 are for training, 2,555 movements from 01/08/2015 to 01/10/2015 are for development, and 3,720 movements from 01/10/2015 to 01/01/2016 are for test. There are two main components in our dataset:6 a Twitter dataset and a historical price dataset. We access Twitter data under the official license of Twitter, then retrieve stock-specific tweets by querying regexes made up of NASDAQ ticker symbols, e.g. “\$GOOG\b” for Google Inc.. We preprocess tweet texts using the NLTK package (Bird et al., 2009) with the particular Twitter paper, the problem is solved by keeping the notational consistency with our recurrent model and using its time step t to index trading days. Details will be provided in Section 4. We use d here to make the formulation easier to follow. 5https://finance.yahoo.com/industries 6Our dataset is available at https://github.com/ yumoxu/stocknet-dataset. 1972 mode, including for tokenization and treatment of hyperlinks, hashtags and the “@” identifier. To alleviate sparsity, we further filter samples by ensuring there is at least one tweet for each corpus in the lag. We extract historical prices for the 88 selected stocks to build the historical price dataset from Yahoo Finance.7 4 Model Overview X |D| Z φ ✓ y Figure 1: Illustration of the generative process from observed market information to stock movements. We use solid lines to denote the generation process and dashed lines to denote the variational approximation to the intractable posterior. We provide an overview of data alignment, model factorization and model components. As explained in Section 1, we assume that predicting the movement on trading day d can benefit from predicting the movements on its former trading days. However, due to the general principle of sample independence, building connections directly across samples with temporally-close target dates is problematic for model training. As an alternative, we notice that within a sample with a target trading day d there are likely to be other trading days than d in its lag that can simulate the prediction targets close to d. Motivated by this observation and multi-task learning (Caruana, 1998), we make movement predictions not only for d, but also other trading days existing in the lag. For instance, as shown in Figure 2, for a sample targeting 07/08/2012 and a 5-day lag, 03/08/2012 and 06/08/2012 are eligible trading days in the lag and we also make predictions for them using the market information in this sample. The relations between these predictions can thus be captured within the scope of a sample. As shown in the instance above, not every single date in a lag is an eligible trading day, e.g. weekends and holidays. 
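Before turning to how the inputs are organized over trading days, the labeling and filtering rule above (Eq. (1) together with the -0.5% and 0.55% thresholds) can be illustrated with a minimal sketch. The function and variable names below are illustrative assumptions rather than the released StockNet code.

# Minimal sketch of the movement labeling rule (Eq. 1) plus the
# threshold-based filtering described in Section 3, for one stock.
# adj_close maps a trading day to its adjusted closing price.
def movement_label(adj_close, day, prev_day, lower=-0.005, upper=0.0055):
    """Return 1 (rise), 0 (fall) or None (dropped) for a target trading day."""
    change = adj_close[day] / adj_close[prev_day] - 1.0   # movement percent
    if change <= lower:
        return 0            # fall: movement percent <= -0.5%
    if change > upper:
        return 1            # rise: movement percent > 0.55%
    return None             # minor movement: the target is removed

# Example: a stock that rises 1% from one trading day to the next.
prices = {"2015-10-01": 100.0, "2015-10-02": 101.0}
print(movement_label(prices, "2015-10-02", "2015-10-01"))  # -> 1

Targets whose movement percent falls between the two thresholds are simply discarded, which is how the roughly balanced binary dataset described above is obtained.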
To better organize and use the input, we regard the trading day, instead of the 7http://finance.yahoo.com calendar day used in existing research, as the basic unit for building samples. To this end, we first find all the T eligible trading days referred in a sample, in other words, existing in the time interval [d −∆d + 1, d]. For clarity, in the scope of one sample, we index these trading days with t ∈[1, T],8 and each of them maps to an actual (absolute) trading day dt. We then propose trading-day alignment: we reorganize our inputs, including the tweet corpora and historical prices, by aligning them to these T trading days. Specifically, on the tth trading day, we recognize market signals from the corpus Mt in [dt−1, dt) and the historical prices pt on dt−1, for predicting the movement yt on dt. We provide an aligned sample for illustration in Figure 2. As a result, every single unit in a sample is a trading day, and we can predict a sequence of movements y = [y1, . . . , yT ]. The main target is yT while the remainder y∗= [y1, . . . , yT−1] serves as the temporal auxiliary target. We use these in addition to the main target to improve prediction accuracy (Section 5.3). We model the generative process shown in Figure 1. We encode observed market information as a random variable X = [x1; . . . ; xT ], from which we generate the latent driven factor Z = [z1; . . . ; zT ] for our prediction task. For the aforementioned multi-task learning purpose, we aim at modeling the conditional probability distribution pθ (y|X) = R Z pθ (y, Z|X) instead of pθ(yT |X). We write the following factorization for generation, pθ (y, Z|X) = pθ (yT |X, Z) pθ(zT |z<T , X) (2) T−1 Y t=1 pθ (yt|x≤t, zt) pθ (zt|z<t, x≤t, yt) where for a given indexed matrix of T vectors [v1; . . . ; vT ], we denote by v<t and v≤t the submatrix [v1; . . . ; vt−1] and the submatrix [v1; . . . ; vt], respectively. Since y∗is known in generation, we use the posterior pθ (zt|z<t, x≤t, yt) , t < T to incorporate market signals more accurately and only use the prior pθ(zT |z<T , X) when generating zT . Besides, when t < T, yt is independent of z<t while our main prediction target, yT is made dependent on z<T through a temporal attention mechanism (Section 5.3). We show StockNet modeling the above generative process in Figure 2. In a nutshell, StockNet 8It holds that T ≥1 since d is undoubtedly a trading day. 1973 z1 z2 z3 h2 h3 02/08 Input Output hdec henc µ log δ2 z N(0, I) DKL ⇥ N(µ, δ2) k N(0, I) ⇤ " Variational encoder Variational decoder Bi-GRUs Message Embedding Layer (d) VAEs h1 03/08 06/08 07/08 02/08 06/08 06/08 Attention Attention Attention 03/08 - 05/08 03/08 (b) Market Information Encoder (MIE) (a) Variational Movement Decoder (VMD) Message Corpora Historical Prices Temporal Attention Training Objective y1 y2 y3 (c) Attentive Temporal Auxiliary (ATA) ↵ g1 g2 g3 Figure 2: The architecture of StockNet. We use the main target of 07/08/2012 and the lag size of 5 for illustration. Since 04/08/2012 and 05/08/2012 are not trading days (a weekend), trading-day alignment helps StockNet to organize message corpora and historical prices for the other three trading days in the lag. We use dashed lines to denote auxiliary components. Red points denoting temporal objectives are integrated with a temporal attention mechanism to acquire the final training objective. comprises three primary components following a bottom-up fashion, 1. Market Information Encoder (MIE) that encodes tweets and prices to X; 2. 
Variational Movement Decoder (VMD) that infers Z with X, y and decodes stock movements y from X, Z; 3. Attentive Temporal Auxiliary (ATA) that integrates temporal loss through an attention mechanism for model training. 5 Model Components We detail next the components of our model (MIE, VMD, ATA) and the way we estimate our model parameters. 5.1 Market Information Encoder MIE encodes information from social media and stock prices to enhance market information quality, and outputs the market information input X for VMD. Each temporal input is defined as xt = [ct, pt] (3) where ct and pt are the corpus embedding and the historical price vector, respectively. The basic strategy of acquiring ct is to first feed messages into the Message Embedding Layer for their low-dimensional representations, then selectively gather them according to their quality. To handle the circumstance that multiple stocks are discussed in one single message, in addition to text information, we incorporate the position information of stock symbols mentioned in messages as well. Specifically, the layer consists of a forward GRU and a backward GRU for the preceding and following contexts of a stock symbol, s, respectively. Formally, in the message corpus of the tth trading day, we denote the word sequence of the kth message, k ∈[1, K], as W where Wℓ⋆= s, ℓ⋆∈[1, L], and its word embedding matrix as E = [e1; e2; . . . ; eL]. We run the two GRUs as follows, −→h f = −−−→ GRU(ef, −→h f−1) (4) ←−h b = ←−−− GRU(eb, ←−h b+1) (5) m = (−→h ℓ⋆+ ←−h ℓ⋆)/2 (6) where f ∈[1, . . . , ℓ⋆], b ∈[ℓ⋆, . . . , L]. The stock symbol is regarded as the last unit in both the preceding and the following contexts where the hidden values, −→h l⋆, ←−h l⋆, are averaged to acquire the message embedding m. Gathering all message embeddings for the tth trading day, we have a mes1974 sage embedding matrix Mt ∈Rdm×K. In practice, the layer takes as inputs a five-rank tensor for a mini-batch, and yields all Mt in the batch with shared parameters. Tweet quality varies drastically. Inspired by the news-level attention (Hu et al., 2018), we weight messages with their respective salience in collective intelligence measurement. Specifically, we first project Mt non-linearly to ut, the normalized attention weight over the corpus, ut = ζ(w⊺ u tanh(Wm,uMt)) (7) where ζ(·) is the softmax function and Wm,u ∈ Rdm×dm, wu ∈Rdm×1 are model parameters. Then we compose messages accordingly to acquire the corpus embedding, ct = Mtu⊺ t . (8) Since it is the price change that determines the stock movement rather than the absolute price value, instead of directly feeding the raw price vector ˜pt =  ˜pc t, ˜ph t , ˜pl t  comprising of the adjusted closing, highest and lowest price on a trading day t, into the networks, we normalize it with its last adjusted closing price, pt = ˜pt/˜pc t−1 −1. We then concatenate ct with pt to form the final market information input xt for the decoder. 5.2 Variational Movement Decoder The purpose of VMD is to recurrently infer and decode the latent driven factor Z and the movement y from the encoded market information X. Inference While latent driven factors help to depict the market status leading to stock movements, the posterior inference in the generative model shown in Eq. (2) is intractable. Following the spirit of the VAE, we use deep neural networks to fit latent distributions, i.e. 
the prior pθ (zt|z<t, x≤t) and the posterior pθ (zt|z<t, x≤t, yt), and sidestep the intractability through neural approximation and reparameterization (Kingma and Welling, 2013; Rezende et al., 2014). We first employ a variational approximator qφ (zt|z<t, x≤t, yt) for the intractable posterior. We observe the following factorization, qφ (Z|X, y) = T Y t=1 qφ (zt|z<t, x≤t, yt) . (9) Neural approximation aims at minimizing the Kullback-Leibler divergence between the qφ (Z|X, y) and pθ (Z|X, y). Instead of optimizing it directly, we observe that the following equation naturally holds, log pθ (y|X) (10) =DKL [qφ (Z|X, y) ∥pθ (Z|X, y)] +Eqφ(Z|X,y) [log pθ (y|X, Z)] −DKL [qφ (Z|X, y) ∥pθ (Z|X)] where DKL [q ∥p] is the Kullback-Leibler divergence between the distributions q and p. Therefore, we equivalently maximize the following variational recurrent lower bound by plugging Eq. (2, 9) into Eq. (10), L (θ, φ; X, y) (11) = T X t=1 Eqφ(zt|z<t,x≤t,yt)  log pθ (yt|x≤t, z≤t) − DKL [qφ (zt|z<t, x≤t, yt) ∥pθ (zt|z<t, x≤t)] ≤log pθ (y|X) where the likelihood term pθ (yt|x≤t, z≤t) = ( pθ (yt|x≤t, zt) , if t < T pθ (yT |X, Z) , if t = T. (12) Li et al. (2017) also provide a lower bound for inferring directly-connected recurrent latent variables in text summarization. In their work, priors are modeled with pθ (zt) ∼N(0, I), which, in fact, turns the KL term into a static regularization term encouraging sparsity. In Eq. (11), we provide a more theoretically rigorous lower bound where the KL term with pθ (zt|z<t, x≤t) plays a dynamic role in inferring dependent latent variables for every different model input and latent history. Decoding As per time series, VMD adopts an RNN with a GRU cell to extract features and decode stock signals recurrently, hs t = GRU(xt, hs t−1). (13) We let the approximator qφ (zt|z<t, x≤t, yt) subject to a standard multivariate Gaussian distribution N(µ, δ2I). We calculate µ and δ as µt = W φ z,µhz t + bφ µ (14) log δ2 t = W φ z,δhz t + bφ δ (15) 1975 and the shared hidden representation hz t as hz t = tanh(W φ z [zt−1, xt, hs t, yt] + bφ z) (16) where W φ z,µ, W φ z,δ, W φ z are weight matrices and bφ µ, bφ δ , bφ z are biases. Since Gaussian distribution belongs to the “location-scale” distribution family, we can further reparameterize zt as zt = µt + δt ⊙ϵ (17) where ⊙denotes an element-wise product. The noise term ϵ ∼N(0, I) naturally involves stochastic signals in our model. Similarly, We let the prior pθ (zt|z<t, x≤t) ∼ N(µ′, δ′2I). Its calculation is the same as that of the posterior except the absence of yt and independent model parameters, µ′ t = W θ o,µhz t ′ + bθ µ (18) log δ′2 t = W θ o,δhz t ′ + bθ δ (19) where hz t ′ = tanh(W θ z [zt−1, xt, hs t] + bθ z). (20) Following Zhang et al. (2016), differently from the posterior, we set the prior zt = µ′ t during decoding. Finally, we integrate deterministic features and the final prediction hypothesis is given as gt = tanh(Wg[xt, hs t, zt] + bg) (21) ˜yt = ζ(Wygt + by), t < T (22) where Wg, Wy are weight matrices and bg, by are biases. The softmax function ζ(·) outputs the confidence distribution over up and down. As introduced in Section 4, the decoding of the main target yT depends on z<T and thus lies at the interface between VMD and ATA. We will elaborate on it in the next section. 5.3 Attentive Temporal Auxiliary With the acquisition of a sequence of auxiliary predictions ˜Y ∗= [˜y1; . . . 
; ˜yT−1], we incorporate two-folded auxiliary effects into the main prediction and the training objective flexibly by first introducing a shared temporal attention mechanism. Since each hypothesis of a temporal auxiliary contributes unequally to the main prediction and model training, as shown in Figure 3, temporal attention calculates their weights in these two contributions by employing two scoring components: an g2 g3 g1 Dependency Score Information Score Temporal Attention Training Objective 1 gT ˜yT Figure 3: The temporal attention in our model. Squares are the non-linear projections of gt and points are scores or normalized weights. information score and a dependency score. Specifically, v′ i = w⊺ i tanh(Wg,iG∗) (23) v′ d = g⊺ T tanh(Wg,dG∗) (24) v∗= ζ(v′ i ⊙v′ d) (25) where Wg,i, Wg,d ∈Rdg×dg, wi ∈Rdg×1 are model parameters. The integrated representations G∗= [g1; . . . ; gT−1] and gT are reused as the final representations of temporal market information. The information score v′ i evaluates historical trading days as per their own information quality, while the dependency score v′ d captures their dependencies with our main target. We integrate the two and acquire the final normalized attention weight v∗∈R1×(T−1) by feeding their elementwise product into the softmax function. As a result, the main prediction can benefit from temporally-close hypotheses have been made and we decode our main hypothesis ˜yT as ˜yT = ζ(WT [ ˜Y ∗v∗⊺, gT ] + bT ) (26) where WT is a weight matrix and bT is a bias. As to the model objective, we use the Monte Carlo method to approximate the expectation term in Eq. (11) and typically only one sample is used for gradient computation. To incorporate varied temporal importance at the objective level, we first break down the approximated L into a series of temporal objectives f ∈RT×1 where ft comprises a likelihood term and a KL term for a trading day t, ft = log pθ (yt|x≤t, z≤t) (27) −λDKL [qφ (zt|z<t, x≤t, yt) ∥pθ (zt|z<t, x≤t)] 1976 where we adopt the KL term annealing trick (Bowman et al., 2016; Semeniuta et al., 2017) and add a linearly-increasing KL term weight λ ∈(0, 1] to gradually release the KL regularization effect in the training procedure. Then we reuse v∗to build the final temporal weight vector v ∈R1×T , v = [αv∗, 1] (28) where 1 is for the main prediction and we adopt the auxiliary weight α ∈[0, 1] to control the overall auxiliary effects on the model training. α is tuned on the development set and its effects will be discussed at length in Section 6.5. Finally, we write the training objective F by recomposition, F (θ, φ; X, y) = 1 N N X n v(n)f(n) (29) where our model can learn to generalize with the selective attendance of temporal auxiliary. We take the derivative of F with respect to all the model parameters {θ, φ} through backpropagation for the update. 6 Experiments In this section, we detail our experimental setup and results. 6.1 Training Setup We use a 5-day lag window for sample construction and 32 shuffled samples in a batch.9 The maximal token number contained in a message and the maximal message number on a trading day are empirically set to 30 and 40, respectively, with the excess clipped. Since all tweets in the batched samples are simultaneously fed into the model, we set the word embedding size to 50 instead of larger sizes to control memory costs and make model training feasible on one single GPU (11GB memory). We set the hidden size of Message Embedding Layer to 100 and that of VMD to 150. 
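As a concrete illustration of the clipping scheme described above, the following minimal sketch pads and clips one trading day's message corpus to at most 40 messages of 30 tokens each, i.e. the per-day slice of the five-rank mini-batch tensor mentioned in Section 5.1. The function name, the padding id and the list-of-token-ids layout are illustrative assumptions, not the authors' preprocessing code.

# Minimal sketch: clip/pad one trading day's tweets (token-id lists) to a
# fixed shape so that a mini-batch can form a tensor of shape roughly
# [batch, lag trading days, messages, tokens] before embedding lookup.
from typing import List

def pad_day(corpus: List[List[int]], max_msgs: int = 40,
            max_tokens: int = 30, pad_id: int = 0) -> List[List[int]]:
    day = []
    for msg in corpus[:max_msgs]:                    # clip excess messages
        msg = msg[:max_tokens]                       # clip excess tokens
        day.append(msg + [pad_id] * (max_tokens - len(msg)))
    while len(day) < max_msgs:                       # pad missing messages
        day.append([pad_id] * max_tokens)
    return day

# Example: two short tweets (as token ids) on one trading day.
day = pad_day([[5, 8, 2], [7, 1]])
print(len(day), len(day[0]))                         # -> 40 30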
All weight matrices in the model are initialized with the fan-in trick and biases are initialized with zero. We train the model with an Adam optimizer (Kingma and Ba, 2014) with the initial learning rate of 0.001. Following Bowman et al. (2016), we use the input dropout rate of 0.3 to regularize latent variables. (Footnote 9: Typically the lag size is set between 3 and 10. As introduced in Section 4, trading days are treated as basic units in StockNet and 3 calendar days are thus too short to guarantee the existence of more than one trading day in a lag, e.g. the prediction for the movement of Monday. We also experiment with 7 and 10 but they do not yield better results than 5.) Tensorflow (Abadi et al., 2016) is used to construct the computational graph of StockNet and hyper-parameters are tweaked on the development set.

6.2 Evaluation Metrics

Following previous work for stock prediction (Xie et al., 2013; Ding et al., 2015), we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) as evaluation metrics. MCC avoids bias due to data skew. Given the confusion matrix [tp, fn; fp, tn] containing the number of samples classified as true positive, false positive, true negative and false negative, MCC is calculated as

MCC = \frac{tp \times tn - fp \times fn}{\sqrt{(tp + fp)(tp + fn)(tn + fp)(tn + fn)}}.  (30)

6.3 Baselines and Proposed Models

We construct the following five baselines in different genres (Footnote 10: We do not treat event-driven models as comparable methods since our model uses no event pre-extraction tool.):

• RAND: a naive predictor making a random guess in up or down.
• ARIMA: Autoregressive Integrated Moving Average, an advanced technical analysis method using only price signals (Brown, 2004).
• RANDFOREST: a discriminative Random Forest classifier using Word2vec text representations (Pagolu et al., 2016).
• TSLDA: a generative topic model jointly learning topics and sentiments (Nguyen and Shirai, 2015).
• HAN: a state-of-the-art discriminative deep neural network with hierarchical attention (Hu et al., 2018).

To make a detailed analysis of all the primary components in StockNet, in addition to HEDGEFUNDANALYST, the fully-equipped StockNet, we also construct the following four variations:

• TECHNICALANALYST: the generative StockNet using only historical prices.
• FUNDAMENTALANALYST: the generative StockNet using only tweet information.
• INDEPENDENTANALYST: the generative StockNet without temporal auxiliary targets.
• DISCRIMINATIVEANALYST: the discriminative StockNet directly optimizing the likelihood objective. Following Zhang et al. (2016), we set zt = µ′t to take out the effects of the KL term.

Table 1: Performance of baselines and StockNet variations in accuracy and MCC.

Baseline models                    Acc.    MCC
RAND                               50.89   -0.002266
ARIMA (Brown, 2004)                51.39   -0.020588
RANDFOREST (Pagolu et al., 2016)   53.08   0.012929
TSLDA (Nguyen and Shirai, 2015)    54.07   0.065382
HAN (Hu et al., 2018)              57.64   0.051800

StockNet variations                Acc.    MCC
TECHNICALANALYST                   54.96   0.016456
FUNDAMENTALANALYST                 58.23   0.071704
INDEPENDENTANALYST                 57.54   0.036610
DISCRIMINATIVEANALYST              56.15   0.056493
HEDGEFUNDANALYST                   58.23   0.080796

Figure 4: (a) Performance of HEDGEFUNDANALYST with varied α, see Eq. (28). (b) Performance of DISCRIMINATIVEANALYST with varied α. [Line plots of accuracy and MCC against α ∈ {0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0}; the trends are discussed in Section 6.5.]

6.4 Results

Since stock prediction is a challenging task and a minor improvement usually leads to large potential profits, an accuracy of 56% is generally reported as a satisfying result for binary stock movement prediction (Nguyen and Shirai, 2015). We show the performance of the baselines and our proposed models in Table 1. TSLDA is the best baseline in MCC while HAN is the best baseline in accuracy. Our model, HEDGEFUNDANALYST, achieves the best performance of 58.23 in accuracy and 0.080796 in MCC, outperforming TSLDA and HAN by 4.16 and 0.59 in accuracy, and by 0.015414 and 0.028996 in MCC, respectively. Though slightly better than a random guess, classic technical analysis, e.g. ARIMA, does not yield satisfying results. While it similarly uses only historical prices, TECHNICALANALYST shows an obvious advantage over ARIMA in this task. We believe there are two major reasons: (1) TECHNICALANALYST learns from training data and incorporates more flexible non-linearity; (2) our test set contains a large number of stocks while ARIMA is more sensitive to peculiar sequence stationarity. It is worth noting that FUNDAMENTALANALYST gains exceptionally competitive results with only 0.009092 less in MCC than HEDGEFUNDANALYST. The performance of FUNDAMENTALANALYST and TECHNICALANALYST confirms the positive effects of tweets and historical prices on stock movement prediction, respectively. As an effective ensemble of the two sources of market information, HEDGEFUNDANALYST gains even better performance. Compared with DISCRIMINATIVEANALYST, the performance improvements of HEDGEFUNDANALYST are not from enlarging the networks, demonstrating that modeling the underlying market status explicitly with latent driven factors indeed benefits stock movement prediction. The comparison with INDEPENDENTANALYST also shows the effectiveness of capturing temporal dependencies between predictions with the temporal auxiliary. However, the effects of the temporal auxiliary are more complex and will be analyzed further in the next section.

6.5 Effects of Temporal Auxiliary

We provide a detailed discussion of how the temporal auxiliary affects model performance. As introduced in Eq. (28), the temporal auxiliary weight α controls the overall effects of the objective-level temporal auxiliary on our model. Figure 4 presents how the performance of HEDGEFUNDANALYST and DISCRIMINATIVEANALYST fluctuates with α. As shown in Figure 4, enhanced by the temporal auxiliary, HEDGEFUNDANALYST approaches the best performance at 0.5, and DISCRIMINATIVEANALYST achieves its maximum at 0.7. In fact, the objective-level auxiliary can be regarded as a denoising regularizer: for a sample with a specific movement as the main target, the market source in the lag can be heterogeneous, e.g. affected by bad news, tweets on earlier days are negative but turn positive due to timely crisis management. Without temporal auxiliary tasks, the model tries to identify positive signals on earlier days only for the main target of a rise movement, which is likely to result in pure noise. In such cases, temporal auxiliary tasks help to filter market sources in the lag as per their respective aligned auxiliary movements.
Besides, from the perspective of training variational models, the temporal auxiliary helps HEDGEFUNDANALYST to encode more useful information into the latent driven factor Z, which is consistent with recent research in VAEs (Semeniuta et al., 2017). Compared with HEDGEFUNDANALYST that contains a KL term performing dynamic regularization, DISCRIMINATIVEANALYST requires stronger regularization effects coming with a bigger α to achieve its best performance. Since y∗also involves in generating yT through the temporal attention, tweaking α acts as a tradeoff between focusing on the main target and generalizing by denoising. Therefore, as shown in Figure 4, our models do not linearly benefit from incorporating temporal auxiliary. In fact, the two models follow a similar pattern in terms of performance change: the curves first drop down with the increase of α, except the MCC curve for DISCRIMINATIVEANALYST rising up temporarily at 0.3. After that, the curves ascend abruptly to their maximums, then keep descending till α = 1. Though the start phase of increasing α even leads to worse performance, when auxiliary effects are properly introduced, the two models finally gain better results than those with no involvement of auxiliary effects, e.g. INDEPENDENTANALYST. 7 Conclusion We demonstrated the effectiveness of deep generative approaches for stock movement prediction from social media data by introducing StockNet, a neural network architecture for this task. We tested our model on a new comprehensive dataset and showed it performs better than strong baselines, including implementation of previous work. Our comprehensive dataset is publicly available at https://github.com/ yumoxu/stocknet-dataset. Acknowledgments The authors would like to thank the three anonymous reviewers and Miles Osborne for their helpful comments. This research was supported by a grant from Bloomberg and by the H2020 project SUMMA, under grant agreement 688139. References Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. 2016. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467 . Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. O’Reilly Media, Inc. Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011. Twitter mood predicts the stock market. Journal of computational science 2(1):1–8. Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. Berlin, Germany, pages 10–21. Robert Goodell Brown. 2004. Smoothing, forecasting and prediction of discrete time series. Courier Corporation. Rich Caruana. 1998. Multitask learning. In Learning to learn, Springer, pages 95–133. Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2014. Using structured events to predict stock price movement: An empirical investigation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Doha, Qatar, pages 1415–1425. Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2015. Deep learning for event-driven stock prediction. In Proceedings of the 24th International Conference on Artificial Intelligence. Buenos Aires, Argentina, pages 2327–2333. Robert D Edwards, WHC Bassetti, and John Magee. 2007. 
Technical analysis of stock trends. CRC press. Jeffrey A Frankel. 1995. Financial markets and monetary policy. MIT Press. 1979 Ziniu Hu, Weiqing Liu, Jiang Bian, Xuanzhe Liu, and Tie-Yan Liu. 2018. Listening to chaotic whispers: A deep learning framework for news-oriented stock trend prediction. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. ACM, Los Angeles, California, USA, pages 261–269. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114 . Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on International Conference on Machine Learning-Volume 32. JMLR. org, Beijing, China, pages 1188–1196. Piji Li, Wai Lam, Lidong Bing, and Zihao Wang. 2017. Deep recurrent generative decoder for abstractive text summarization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Copenhagen, Denmark, pages 2081–2090. Ronny Luss and Alexandre d’Aspremont. 2015. Predicting abnormal returns from news using text classification. Quantitative Finance 15(6):999–1012. Burton Gordon Malkiel. 1999. A random walk down Wall Street: including a life-cycle guide to personal investing. WW Norton & Company. Thien Hai Nguyen and Kiyoaki Shirai. 2015. Topic modeling based sentiment analysis on social media for stock market prediction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Beijing, China, volume 1, pages 1354–1364. Nuno Oliveira, Paulo Cortez, and Nelson Areal. 2013. Some experiments on modeling stock market behavior using investor sentiment analysis and posting volume from twitter. In Proceedings of the 3rd International Conference on Web Intelligence, Mining and Semantics. ACM, Madrid, Spain, page 31. Venkata Sasank Pagolu, Kamal Nayan Reddy, Ganapati Panda, and Babita Majhi. 2016. Sentiment analysis of twitter data for predicting stock market movements. In Proceedings of 2016 International Conference on Signal Processing, Communication, Power and Embedded System. IEEE, Rajaseetapuram, India, pages 1345–1350. Navid Rekabsaz, Mihai Lupu, Artem Baklanov, Alexander D¨ur, Linda Andersson, and Allan Hanbury. 2017. Volatility prediction using financial disclosures sentiments with word embedding-based ir models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Vancouver, Canada, volume 1, pages 1712–1721. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31th International Conference on Machine Learning. Beijing, China, pages 1278– 1286. Robert P Schumaker and Hsinchun Chen. 2009. Textual analysis of stock market prediction using breaking financial news: The azfin text system. ACM Transactions on Information Systems 27(2):12. Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. A hybrid convolutional variational autoencoder for text generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Copenhagen, Denmark, pages 627–637. Jianfeng Si, Arjun Mukherjee, Bing Liu, Qing Li, Huayi Li, and Xiaotie Deng. 2013. Exploiting topic based twitter sentiment for stock prediction. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Sofia, Bulgaria, volume 2, pages 24–29. Boyi Xie, Rebecca J Passonneau, Leon Wu, and Germ´an G Creamer. 2013. Semantic frames to predict stock price movement. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Sofia, Bulgaria, volume 1, pages 873–883. Biao Zhang, Deyi Xiong, Hong Duan, Min Zhang, et al. 2016. Variational neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, Texas, USA, pages 521–530.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1980–1989 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1980 Rumor Detection on Twitter with Tree-structured Recursive Neural Networks Jing Ma1, Wei Gao2, Kam-Fai Wong1,3 1The Chinese University of Hong Kong, Hong Kong SAR 2Victoria University of Wellington, New Zealand 3MoE Key Laboratory of High Confidence Software Technologies, China 1{majing,kfwong}@se.cuhk.edu.hk, [email protected] Abstract Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweets content by following their non-sequential propagation structure and generate more powerful representations for identifying different type of rumors. We propose two recursive neural models based on a bottom-up and a top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity on detecting rumors at very early stage. 1 Introduction Rumors have always been a social disease. In recent years, it has become unprecedentedly convenient for the “evil-doers” to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc. The worst effect of false rumors could be devastating to individual and/or society. Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (DiFonzo and Bordia, 2007; Donovan, 2007), social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005), political studies (Allport and Postman, 1946; Berinsky, 2017), management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015). Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (DiFonzo and Bordia, 2007; Qazvinian et al., 2011). Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b). However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming. The proliferation of social media makes it worse due to the ever-increasing information load and dynamics. Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking. For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015), and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017). These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors. 
Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities. But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features. In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues. RvNN and its variants 1981 were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011, 2012). Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words. The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure. So, why can such neural model do better for the task? Analysis has generally found that Twitter could “self-correct” some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017). To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true1. Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example. However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies. This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor. Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a), suggesting obvious local characteristic of the interaction. The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches. To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions. The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1False (true) rumor means the veracity of the rumorous claim is false (true). (a) False rumor (b) True rumor Figure 1: Propagation trees of two rumorous source tweets. Nodes may express stances on their parent as commenting, supporting, questioning or denying. The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as ‘+’ (‘-’) for support (denial). The same node color indicates the same stance on the veracity of root node (i.e., source tweet). 
tree. As a result, it can be expected that the discriminative signals are better embedded into the learned representations. We evaluate our proposed approach based on two public Twitter datasets. The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking. Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts. • We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors. • Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks. We make the source codes in our experiments publicly accessible 2. 2 Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu 2https://github.com/majingCUHK/Rumor_ RvNN 1982 et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns. Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites. Kwon et al. (2013) introduced a time-series-fitting model based on the volume of tweets over time. Ma et al. (2015) extended their model with more chronological social context features. These approaches typically require heavy preprocessing and feature engineering. Zhao et al. (2015) alleviated the engineering effort by using a set of regular expressions (such as “really?”, “not true”, etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall. Ma et al. (2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series. Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018). However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence. Some kernel-based methods were exploited to model the propagation structure. Wu et al. (2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo. Ma et al. (2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter. Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content. 
RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011), phrase representation from word vectors (Socher et al., 2012), and sentiment classification in sentences (Socher et al., 2013). More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014). In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN (Zhu et al., 2015; Tai et al., 2015). Mou et al. (2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences. 3 Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C1, C2, · · · , C|C|}, where each claim Ci corresponds to a source tweet ri which consists of ideally all its relevant responsive tweets in chronological order, i.e., Ci = {ri, xi1, xi2, · · · , xim} where each xi∗is a responsive tweet of the root ri. Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with ri being the root node. We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : Ci →Yi, where Yi takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b). An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1, where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level. We represent a tree as Ti = ⟨Vi, Ei⟩, where Vi = Ci which consists of all relevant posts as nodes, and Ei denotes a set of all directed links, where for any u, v ∈Vi, u ←v exists if v responses to u. This structure is similar to a citation network where a response mimics a reference. • Top-down tree naturally conforms to the direction of information propagation, in which a link u →v means the information flows from u to v and v sees it and provides a response to u. This structure reverses bottomup tree and simulates how information cas1983 Figure 2: A binarized sentence parse tree (left) and its corresponding RvNN architecture (right). cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017). 4 RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree. For instance, the responsive nodes confirming or supporting a node (e.g., “I agree”, “be right”, etc) can further reinforce the stance of that node while denial or questioning responses (e.g., “disagree, “really?!) otherwise weaken its stance. 
Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017), our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure. In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3. 4.1 Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks. The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012), in which the representation associated with each node of a parse tree is computed from its direct children. The overall structure of the standard RvNN is illustrated as the right side of Figure 2, corresponding to the input parse tree at the left side. Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding. Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes. Let p be the feature vector of a parent node whose children are c1 and c2, the representation of the parent is computed by p = f(W ·[c1; c2]+b), where f(·) is the activation function with W and b as parameters. This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks. 4.2 Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top. In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space. And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree. For this purpose, we make a natural extension to the original RvNN. The overall structure of our proposed bottom-up model is illustrated in Figure 3(b), taking a bottom-up tree (see Figure 3(a)) as input. Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tfidf values. Here, every node has an input vector, and the number of children of nodes varies significantly3. In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016). In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters. Let S(j) denote the set of direct children of the node j. The transition equations of node j in the bottom-up model are formulated as follows: ˜xj = xjE hS = X s∈S(j) hs rj = σ (Wr˜xj + UrhS) zj = σ (Wz˜xj + UzhS) ˜hj = tanh (Wh˜xj + Uh(hS ⊙rj)) hj = (1 −zj) ⊙hS + zj ⊙˜hj (1) 3In standard RvNN, since an input instance is the parse tree of a sentence, only leaf nodes have input vector, each node representing a word of the input sentence, and the nonleaf nodes are constituents of the sentence, and thus the number of children of a node is limited. 
1984 (a) Bottom-up/Top-down tree (b) Bottom-up RvNN model (c) Top-down RvNN model Figure 3: A bottom-up/top-down propagation tree and the corresponding RvNN-based models. The black-color and red-color edges differentiate the bottom-up and top-down tree in Figure 3(a). where xj is the original input vector of node j, E denotes the parameter matrix for transforming this input post, ˜xj is the transformed representation of j, [W∗, U∗] are the weight connections inside GRU, and hj and hs refer to the hidden state of j and its s-th child. Thus hS denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j. As with the standard GRU, ⊙denotes element-wise multiplication; a reset gate rj determines how to combine the current input ˜xj with the memory of children, and an update gate zj defines how much memory from the children is cascaded into the current node; and ˜hj denotes the candidate activation of the hidden state of the current node. Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children. After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification. So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: ˆy = Softmax(Vh0 + b) (2) where h0 is the learned hidden vector of root node; V and b are the weights and bias in output layer. 4.3 Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3(c). It models how the information flows from source post to the current node. The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path. For example, if current post agree with its parent’s stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced. Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive. However, the nature of top-down tree lends this model different from the bottom-up one. The representation of each node is computed by combining its own input and its parent node instead of its children nodes. This process proceeds recursively from the root node to its children until all leaf nodes are reached. Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss. Then the hidden state hj of a node j can be computed by combining the hidden state hP(j) of its parent node P(j) and its own input vector xj. Therefore, the transition equations of node j can be formulated as a standard GRU: ˜xj = xjE rj = σ Wr˜xj + UrhP(j)  zj = σ Wz˜xj + UzhP(j)  ˜hj = tanh Wh˜xj + Uh(hP(j) ⊙rj)  hj = (1 −zj) ⊙hP(j) + zj ⊙˜hj (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes. Since the num1985 ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output. 
Therefore, we add a max-pooling layer that takes the maximum value of each dimension of the vectors over all the leaf nodes. This also helps capture the most salient indicative features from all the propagation paths. Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree:

\hat{y} = \mathrm{Softmax}(V h_{\infty} + b)    (4)

where h_∞ is the pooling vector over all leaf nodes, and V and b are the parameters of the output layer.

Although both RvNN models aim to capture structural properties by recursively visiting all nodes, we conjecture that the top-down model would be better. The hypothesis is that in the bottom-up case the final output relies on the representation of a single root node, so its information loss can be larger than in the top-down case, where the representations embedded into all leaf nodes along different propagation paths can be incorporated holistically via pooling.

4.4 Model Training

The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth:

L(y, \hat{y}) = \sum_{n=1}^{N} \sum_{c=1}^{C} (y_c - \hat{y}_c)^2 + \lambda \lVert \theta \rVert_2^2    (5)

where y_c is the ground truth and \hat{y}_c is the predicted probability of a class, N is the number of training claims, C is the number of classes, ||·||_2 is the L2 regularization term over all model parameters θ, and λ is the trade-off coefficient. During training, all model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013), and the optimization is gradient-based, following the AdaGrad update rule (Duchi et al., 2011) to speed up convergence. We empirically initialize the model parameters from a uniform distribution, and set the vocabulary size to 5,000 and the size of the embedding and hidden units to 100. We iterate over all training examples in each epoch and continue until the loss value converges or the maximum number of epochs is reached.

5 Experiments and Results

5.1 Datasets

For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al. (2017), namely Twitter15 and Twitter16, which contain 1,381 and 1,181 propagation trees, respectively (see (Ma et al., 2017) for detailed statistics). In each dataset, a group of widely spread source tweets along with their propagation threads, i.e., replies and retweets, is provided in the form of tree structures. Each tree is annotated with one of four class labels: non-rumor, false rumor, true rumor and unverified rumor. We remove the retweets from the trees since they do not provide any extra information or evidence content-wise. We build two versions of each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the direction of the edges.

5.2 Experimental Setup

We make comprehensive comparisons between our models and several state-of-the-art baselines on the rumor classification and early detection tasks.

- DTR: Zhao et al. (2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.

- DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineered statistical features of the tweets.

- RFC: The Random Forest Classifier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013).
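Continuing the sketch, and under the same illustrative assumptions, the max-pooling readout of Eq. (4) and the per-claim squared-error loss of Eq. (5) could look as follows; V_out, b_out, the one-hot label y, and the regularization weight are hypothetical names and values introduced only for this example.

import numpy as np

C = 4                                             # non-rumor, false rumor, true rumor, unverified rumor
rng_out = np.random.default_rng(1)
V_out = rng_out.normal(scale=0.1, size=(100, C))  # 100 hidden units, as set in Section 4.4
b_out = np.zeros(C)

def softmax(a):
    e = np.exp(a - np.max(a))
    return e / e.sum()

def predict_from_leaves(leaf_states):
    # leaf_states: list of leaf hidden vectors produced by the top-down recursion above
    h_inf = np.max(np.stack(leaf_states), axis=0)  # element-wise max-pooling over all leaves
    return softmax(h_inf @ V_out + b_out)          # Eq. (4)

def squared_error_loss(y, y_hat, params, lam=1e-4):
    # Eq. (5) for a single claim; the total loss sums this term over the N training claims
    l2 = sum(np.sum(p ** 2) for p in params)
    return np.sum((y - y_hat) ** 2) + lam * l2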
- SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015). - SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification. - SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015), respectively, both of which model propagation structures with kernels. - GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts. 4https://www.dropbox.com/s/ 7ewzdrbelpmrnxu/rumdetect2017.zip?dl=0 1986 (a) Twitter15 dataset Method NR FR TR UR Acc. F1 F1 F1 F1 DTR 0.409 0.501 0.311 0.364 0.473 DTC 0.454 0.733 0.355 0.317 0.415 RFC 0.565 0.810 0.422 0.401 0.543 SVM-TS 0.544 0.796 0.472 0.404 0.483 SVM-BOW 0.548 0.564 0.524 0.582 0.512 SVM-HK 0.493 0.650 0.439 0.342 0.336 SVM-TK 0.667 0.619 0.669 0.772 0.645 GRU-RNN 0.641 0.684 0.634 0.688 0.571 BU-RvNN 0.708 0.695 0.728 0.759 0.653 TD-RvNN 0.723 0.682 0.758 0.821 0.654 (b) Twitter16 dataset Method NR FR TR UR Acc. F1 F1 F1 F1 DTR 0.414 0.394 0.273 0.630 0.344 DTC 0.465 0.643 0.393 0.419 0.403 RFC 0.585 0.752 0.415 0.547 0.563 SVM-TS 0.574 0.755 0.420 0.571 0.526 SVM-BOW 0.585 0.553 0.556 0.655 0.578 SVM-HK 0.511 0.648 0.434 0.473 0.451 SVM-TK 0.662 0.643 0.623 0.783 0.655 GRU-RNN 0.633 0.617 0.715 0.577 0.527 BU-RvNN 0.718 0.723 0.712 0.779 0.659 TD-RvNN 0.737 0.662 0.743 0.835 0.708 Table 1: Results of rumor detection. (NR: nonrumor; FR: false rumor; TR: true rumor; UR: unverified rumor) - BU-RvNN and TD-RvNN: Our bottom-up and top-down RvNN models, respectively. We implement DTC and RFC using Weka5, SVM-based models using LibSVM6 and all neural-network-based models with Theano7. We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models. 5.3 Rumor Classification Performance As shown in Table 1, our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation. It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features. Among these baselines, SVMTS and RFC perform relatively better because they 5www.cs.waikato.ac.nz/ml/weka 6www.csie.ntu.edu.tw/˜cjlin/libsvm 7deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering. DTR uses a set of regular expressions indicative of stances. However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result. Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVMHK. There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015), which may not be generalize well on Twitter. 2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically. 
This underutilizes the propagation information due to its oversimplified treatment of the tree structure. In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.

It appears that bag-of-words alone already yields a decent model, as evidenced by the fairly good performance of SVM-BOW, which is even better than SVM-HK. This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring indicative words or units that benefit finer-grained classification and that can be captured more effectively by SVM-BOW.

The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models. This is because it is a special case of the recursive model in which each non-leaf node has only one child: it has to rely on a linear chain as input, which misses out on valuable structural information. However, it does learn high-level features from the post content via the hidden units of the neural model, whereas SVM-TK cannot; the kernel can only evaluate similarities based on the words overlapping among subtrees. Our recursive models are inherently tree-structured and take advantage of representation learning following the propagation structure, and thus beat SVM-TK.

Between the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottom-up model may suffer from larger information loss than the top-down one. This verifies the hypothesis we made in Section 4.3 that the pooling layer in the top-down model can effectively select important features embedded into the leaf nodes.

Figure 4: Early rumor detection accuracy at different checkpoints in terms of elapsed time (tweets count), with panels (a) Twitter15 (elapsed time), (b) Twitter16 (elapsed time), (c) Twitter15 (tweets count) and (d) Twitter16 (tweets count).
Figure 5: A correctly detected false rumor at an early stage by both of our models, where propagation paths are marked with relevant stances. Note that edge direction is not shown as it applies to either case.

For the non-rumor class alone, our method does not perform as well as some feature-engineering baselines. This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc.) which may contain clues for differentiating non-rumors from rumors. Also, the responses to non-rumors are usually much more diverse and carry little informative indication, making the identification of non-rumors more difficult based on content, even with the structure.

5.4 Early Rumor Detection Performance

Detecting rumors at an early stage of propagation is important so that interventions can be made in a timely manner. We compare different methods under different time delays, measured either by the number of tweets received or by the time elapsed since the source tweet was posted. Performance is evaluated by the accuracy obtained when we incrementally add test data up to the checkpoint given by the targeted time delay or tweet volume. Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to surpass the other models at an early stage.
Although all the methods are getting to their best performance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method. Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models. We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model. Similarly, some patterns of propagation from the root to leaf nodes like “support→deny→support” are also seized by our top-down model. In comparison, sequential models may be confused because the supportive key terms such as “be right”, “yeah”, “exactly!” dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words. 6 Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter. The inher1988 ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors. Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines. In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time. We also plan to use unsupervised models for the task by exploiting structural information. Acknowledgment This work is partly supported by Innovation and Technology Fund (ITF) Project No. 6904333, and General Research Fund (GRF) Project No. 14232816 (12183516). We would like to thank anonymous reviewers for the insightful comments. References Gordon W Allport and Leo Postman. 1946. An analysis of rumor. Public Opinion Quarterly 10(4):501– 517. G.W. Allport and L.J. Postman. 1965. The psychology of rumor. Russell & Russell. Adam J. Berinsky. 2017. Rumors and health care reform: Experiments in political misinformation. British Journal of Political Science 47(2):241262. Carlos Castillo, Marcelo Mendoza, and Barbara Poblete. 2011. Information credibility on twitter. In Proceedings of WWW. pages 675–684. Tong Chen, Lin Wu, Xue Li, Jun Zhang, Hongzhi Yin, and Yang Wang. 2017. Call attention to rumors: Deep attention based recurrent neural networks for early rumor detection. arXiv preprint arXiv:1704.05973 . Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259 . Nicholas DiFonzo and Prashant Bordia. 2007. Rumor, gossip and urban legends. Diogenes 54(1):19–35. Nicholas DiFonzo, Prashant Bordia, and Ralph L Rosnow. 1994. Reining in rumors. Organizational Dynamics 23(1):47–62. Pamela Donovan. 2007. How idle is idle talk? one hundred years of rumor research. Diogenes 54(1):59– 82. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. 
Journal of Machine Learning Research 12(Jul):2121–2159. Adrien Friggeri, Lada A Adamic, Dean Eckles, and Justin Cheng. 2014. Rumor cascades. In Proceedings of ICWSM. Christoph Goller and Andreas Kuchler. 1996. Learning task-dependent distributed representations by backpropagation through structure. In Neural Networks, 1996., IEEE International Conference on. IEEE, volume 1, pages 347–352. Aniko Hannak, Drew Margolin, Brian Keegan, and Ingmar Weber. 2014. Get back! you don’t know me like that: The social mediation of fact checking interventions in twitter conversations. In Proceedings of ICWSM. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Ozan Irsoy and Claire Cardie. 2014. Deep recursive neural networks for compositionality in language. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2. NIPS’14, pages 2096–2104. Marianne E Jaeger, Susan Anthony, and Ralph L Rosnow. 1980. Who hears what from whom and with what effect: A study of rumor. Personality and Social Psychology Bulletin 6(3):473–478. Allan J Kimmel. 2004. Rumors and rumor control: A manager’s guide to understanding and combatting rumors. Routledge. Sejeong Kwon, Meeyoung Cha, Kyomin Jung, Wei Chen, and Yajun Wang. 2013. Prominent features of rumor propagation in online social media. In Proceedings of ICDM. pages 1103–1108. Xiaomo Liu, Armineh Nourbakhsh, Quanzhi Li, Rui Fang, and Sameena Shah. 2015. Real-time rumor debunking on twitter. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. CIKM ’15, pages 1867–1870. Michal Lukasik, PK Srijith, Duy Vu, Kalina Bontcheva, Arkaitz Zubiaga, and Trevor Cohn. 2016. Hawkes processes for continuous time sequence classification: an application to rumour stance classification in twitter. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). volume 2, pages 393–398. 1989 Jing Ma, Wei Gao, Prasenjit Mitra, Sejeong Kwon, Bernard J Jansen, Kam-Fai Wong, and Meeyoung Cha. 2016. Detecting rumors from microblogs with recurrent neural networks. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence. IJCAI’16, pages 3818–3824. Jing Ma, Wei Gao, Zhongyu Wei, Yueming Lu, and Kam-Fai Wong. 2015. Detect rumors using time series of social context information on microblogging websites. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. CIKM ’15, pages 1751–1754. Jing Ma, Wei Gao, and Kam-Fai Wong. 2017. Detect rumors in microblog posts using propagation structure via kernel learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 708–717. Jing Ma, Wei Gao, and Kam-Fai Wong. 2018. Detect rumor and stance jointly by neural multi-task learning. In Companion Proceedings of the The Web Conference 2018. WWW ’18, pages 585–593. Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2015. Discriminative neural sentence modeling by tree-based convolution. arXiv preprint arXiv:1504.01106 . Vahed Qazvinian, Emily Rosengren, Dragomir R Radev, and Qiaozhu Mei. 2011. Rumor has it: Identifying misinformation in microblogs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. EMNLP ’11, pages 1589–1599. 
Jacob Ratkiewicz, Michael Conover, Mark Meiss, Bruno Gonc¸alves, Snehal Patil, Alessandro Flammini, and Filippo Menczer. 2011. Truthy: mapping the spread of astroturf in microblog streams. In Proceedings of the 20th International Conference Companion on World Wide Web. WWW ’11, pages 249– 252. Ralph L Rosnow and Eric K Foster. 2005. Rumor and gossip research. Psychological Science Agenda 19(4). Natali Ruchansky, Sungyong Seo, and Yan Liu. 2017. Csi: A hybrid deep model for fake news detection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. CIKM ’17, pages 797–806. Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. EMNLPCoNLL ’12, pages 1201–1211. Richard Socher, Cliff C Lin, Chris Manning, and Andrew Y Ng. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th international conference on machine learning (ICML-11). pages 129–136. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing. pages 1631–1642. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075 . Ke Wu, Song Yang, and Kenny Q Zhu. 2015. False rumors detection on sina weibo by propagation structures. In Data Engineering (ICDE), 2015 IEEE 31st International Conference on. IEEE, pages 651–662. Fan Yang, Yang Liu, Xiaohui Yu, and Min Yang. 2012. Automatic detection of rumor on sina weibo. In Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics. MDS ’12, pages 13:1–13:7. Zhe Zhao, Paul Resnick, and Qiaozhu Mei. 2015. Enquiring minds: Early detection of rumors in social media from enquiry posts. In Proceedings of the 24th International Conference on World Wide Web. WWW ’15, pages 1395–1405. Xiaodan Zhu, Parinaz Sobihani, and Hongyu Guo. 2015. Long short-term memory over recursive structures. In Proceedings of the 32nd International Conference on Machine Learning. pages 1604– 1612. Arkaitz Zubiaga, Ahmet Aker, Kalina Bontcheva, Maria Liakata, and Rob Procter. 2017. Detection and resolution of rumours in social media: A survey. arXiv preprint arXiv:1704.00656 . Arkaitz Zubiaga, Elena Kochkina, Maria Liakata, Rob Procter, and Michal Lukasik. 2016a. Stance classification in rumours as a sequential task exploiting the tree structure of social media conversations. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. pages 2438–2448. Arkaitz Zubiaga, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Peter Tolmie. 2016b. Analysing how people orient to and spread rumours in social media by looking at conversational threads. PloS one 11(3):e0150989.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1990–1999 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 1990 Visual Attention Model for Name Tagging in Multimodal Social Media Di Lu∗1, Leonardo Neves2, Vitor Carvalho3, Ning Zhang2, Heng Ji1 1Computer Science, Rensselaer Polytechnic Institute {lud2,jih}@rpi.edu 2Snap Research {lneves, ning.zhang}@snap.com 3Intuit vitor [email protected] Abstract Everyday billions of multimodal posts containing both images and text are shared in social media sites such as Snapchat, Twitter or Instagram. This combination of image and text in a single message allows for more creative and expressive forms of communication, and has become increasingly common in such sites. This new paradigm brings new challenges for natural language understanding, as the textual component tends to be shorter, more informal, and often is only understood if combined with the visual context. In this paper, we explore the task of name tagging in multimodal social media posts. We start by creating two new multimodal datasets: one based on Twitter posts1 and the other based on Snapchat captions (exclusively submitted to public and crowdsourced stories). We then propose a novel model based on Visual Attention that not only provides deeper visual understanding on the decisions of the model, but also significantly outperforms other state-of-theart baseline methods for this task. 2 1 Introduction Social platforms, like Snapchat, Twitter, Instagram and Pinterest, have become part of our lives and play an important role in making communication easier and accessible. Once textcentric, social media platforms are becoming in∗This work was mostly done during the first author’s internship at Snap Research. 1The Twitter data and associated images presented in this paper were downloaded from https://archive.org/ details/twitterstream 2We will make the annotations on Twitter data available for research purpose upon request. creasingly multimodal, with users combining images, videos, audios, and texts for better expressiveness. As social media posts become more multimodal, the natural language understanding of the textual components of these messages becomes increasingly challenging. In fact, it is often the case that the textual component can only be understood in combination with the visual context of the message. In this context, here we study the task of Name Tagging for social media containing both image and textual contents. Name tagging is a key task for language understanding, and provides input to several other tasks such as Question Answering, Summarization, Searching and Recommendation. Despite its importance, most of the research in name tagging has focused on news articles and longer text documents, and not as much in multimodal social media data (Baldwin et al., 2015). However, multimodality is not the only challenge to perform name tagging on such data. The textual components of these messages are often very short, which limits context around names. Moreover, there linguistic variations, slangs, typos and colloquial language are extremely common, such as using ‘looooove’ for ‘love’, ‘LosAngeles’ for ‘Los Angeles’, and ‘#Chicago #Bull’ for ‘Chicago Bulls’. These characteristics of social media data clearly illustrate the higher difficulty of this task, if compared to traditional newswire name tagging. 
In this work, we modify and extend the current state-of-the-art model (Lample et al., 2016; Ma and Hovy, 2016) in name tagging to incorporate the visual information of social media posts using an Attention mechanism. Although the usually short textual components of social media posts provide limited contextual information, the accompanying images often provide rich information that can be useful for name tagging. For ex1991 Figure 1: Examples of Modern Baseball associated with different images. ample, as shown in Figure 1, both captions include the phrase ‘Modern Baseball’. It is not easy to tell if each Modern Baseball refers to a name or not from the textual evidence only. However using the associated images as reference, we can easily infer that Modern Baseball in the first sentence should be the name of a band because of the implicit features from the objects like instruments and stage, and the Modern Baseball in the second sentence refers to the sport of baseball because of the pitcher in the image. In this paper, given an image-sentence pair as input, we explore a new approach to leverage visual context for name tagging in text. First, we propose an attention-based model to extract visual features from the regions in the image that are most related to the text. It can ignore irrelevant visual information. Secondly, we propose to use a gate to combine textual features extracted by a Bidirectional Long Short Term Memory (BLSTM) and extracted visual features, before feed them into a Conditional Random Fields(CRF) layer for tag predication. The proposed gate architecture plays the role to modulate word-level multimodal features. We evaluate our model on two labeled datasets collected from Snapchat and Twitter respectively. Our experimental results show that the proposed model outperforms state-for-the-art name tagger in multimodal social media. The main contributions of this work are as follows: • We create two new datasets for name tagging in multimedia data, one using Twitter and the other using crowd-sourced Snapchat posts. These new datasets effectively constitute new benchmarks for the task. • We propose a visual attention model specifically for name tagging in multimodal social media data. The proposed end-to-end model only uses image-sentence pairs as input without any human designed features, and a Visual Attention component that helps understand the decision making of the model. 2 Model Figure 2 shows the overall architecture of our model. We describe three main components of our model in this section: BLSTM-CRF sequence labeling model (Section 2.1), Visual Attention Model (Section 2.3) and Modulation Gate (Section 2.4). Given a pair of sentence and image as input, the Visual Attention Model extracts regional visual features from the image and computes the weighted sum of the regional visual features as the visual context vector, based on their relatedness with the sentence. The BLSTM-CRF sequence labeling model predicts the label for each word in the sentence based on both the visual context vector and the textual information of the words. The modulation gate controls the combination of the visual context vector and the word representations for each word before the CRF layer. 2.1 BLSTM-CRF Sequence Labeling We model name tagging as a sequence labeling problem. Given a sequence of words: S = {s1, s2, ..., sn}, we aim to predict a sequence of labels: L = {l1, l2, ..., ln}, where li ∈L and L is a pre-defined label set. Bidirectional LSTM. 
Long Short-term Memory Networks (LSTMs) (Hochreiter and Schmidhuber, 1997) are variants of Recurrent Neural Networks (RNNs) designed to capture long-range dependencies of input. The equations of a LSTM cell are as follows: it = σ(Wxixt + Whiht−1 + bi) ft = σ(Wxfxt + Whfht−1 + bf) ˜ct = tanh(Wxcxt + Whcht−1 + bc) ct = ft ⊙ct−1 + it ⊙˜ct ot = σ(Wxoxt + Whoht−1 + bo) ht = ot ⊙tanh(ct) where xt, ct and ht are the input, memory and hidden state at time t respectively. Wxi, Whi, Wxf, Whf, Wxc, Whc, Wxo, and Who are weight matrices. ⊙is the element-wise product function and σ is the element-wise sigmoid function. 1992 Florence and the Machine surprises ill teen with private concert  CNN LSTM LSTM LSTM CRF LSTM LSTM CRF LSTM LSTM CRF LSTM LSTM CRF Forward LSTM Backward LSTM CRF word embedding LSTM encoded text Florence and the Machine B-PER I-PER I-PER I-PER char representations Multimodal Input Visual Attention Model Modulation Gate Attention Model Visual Gate Visual Gate Visual Gate Visual Gate Figure 2: Overall Architecture of the Visual Attention Name Tagging Model. Name Tagging benefits from both of the past (left) and the future (right) contexts, thus we implement the Bidirectional LSTM (Graves et al., 2013; Dyer et al., 2015) by concatenating the left and right context representations, ht = [−→ ht, ←− ht], for each word. Character-level Representation. Following (Lample et al., 2016), we generate the character-level representation for each word using another BLSTM. It receives character embeddings as input and generates representations combining implicit prefix, suffix and spelling information. The final word representation xi is the concatenation of word embedding ei and character-level representation ci. ci = BLSTMchar(si) si ∈S xi = [ei, ci] Conditional random fields (CRFs). For name tagging, it is important to consider the constraints of the labels in neighborhood (e.g., I-LOC must follow B-LOC). CRFs (Lafferty et al., 2001) are effective to learn those constraints and jointly predict the best chain of labels. We follow the implementation of CRFs in (Ma and Hovy, 2016). 2.2 Visual Feature Representation We use Convolutional Neural Networks (CNNs) (LeCun et al., 1989) to obtain the representations of images. Particularly, we use Residual Net (ResNet) (He et al., 2016), which Figure 3: CNN for visual features extraction. achieves state-of-the-art on ImageNet (Russakovsky et al., 2015) detection, ImageNet localization, COCO (Lin et al., 2014) detection, and COCO segmentation tasks. Given an input pair (S, I), where S represents the word sequence and I represents the image rescaled to 224x224 pixels, we use ResNet to extract visual features for regional areas as well as for the whole image (Fig 3): Vg = ResNetg(I) Vr = ResNetr(I) where the global visual vector Vg, which represents the whole image, is the output before the last fully connected layer3. The dimension of Vg is 1,024. Vr are the visual representations for regional areas and they are extracted from the last convolutional layer of ResNet, and the dimension is 1,024x7x7 as shown in Figure 3. 7x7 is the number of regions in the image and 1,024 is the 3the last fully connect layer outputs the probabilities over 1,000 classes of objects. 1993 dimension of the feature vector. Thus each feature vector of Vr corresponds to a 32x32 pixel region of the rescaled input image. 2.3 Visual Attention Model Figure 4: Example of partially related image and sentence. 
(‘I have just bought Jeremy Pied.’) The global visual representation is a reasonable representation of the whole input image, but not the best. Sometimes only parts of the image are related to the associated sentence. For example, the visual features from the right part of the image in Figure 4 cannot contribute to inferring the information in the associated sentence ‘I have just bought Jeremy Pied.’ In this work we utilize visual attention mechanism to combat the problem, which has been proven effective for vision-language related tasks such as Image Captioning (Xu et al., 2015) and Visual Question Answering (Yang et al., 2016b; Lu et al., 2016), by enforcing the model to focus on the regions in images that are mostly related to context textual information while ignoring irrelevant regions. Also the visualization of attention can also help us to understand the decision making of the model. Attention mechanism is mapping a query and a set of key-value pairs to an output. The output is a weighted sum of the values and the assigned weight for each value is computed by a function of the query and corresponding key. We encode the sentence into a query vector using an LSTM, and use regional visual representations Vr as both keys and values. Text Query Vector. We use an LSTM to encode the sentence into a query vector, in which the inputs of the LSTM are the concatenations of word embeddings and character-level word representations. Different from the LSTM model used for sequence labeling in Section 2.1, the LSTM here aims to get the semantic information of the sentence and it is unidirectional: Q = LSTMquery(S) (1) Attention Implementation. There are many implementations of visual attention mechanism such as Multi-layer Perceptron (Bahdanau et al., 2014), Bilinear (Luong et al., 2015), dot product (Luong et al., 2015), Scaled Dot Product (Vaswani et al., 2017), and linear projection after summation (Yang et al., 2016b). Based on our experimental results, dot product implementations usually result in more concentrated attentions and linear projection after summation results in more dispersed attentions. In the context of name tagging, we choose the implementation of linear projection after summation because it is beneficial for the model to utilize as many related visual features as possible, and concentrated attentions may make the model bias. For implementation, we first project the text query vector Q and regional visual features Vr into the same dimensions: Pt = tanh(WtQ) Pv = tanh(WvVr) then we sum up the projected query vector with each projected regional visual vector respectively: A = Pt ⊕Pv the weights of the regional visual vectors: E = softmax(WaA + ba) where Wa is weights matrix. The weighted sum of the regional visual features is: vc = X αivi αi ∈E, vi ∈Vr We use vc as the visual context vector to initialize the BLSTM sequence labeling model in Section 2.1. We compare the performances of the models using global visual vector Vg and attention based visual context vector Vc for initialization in Section 4. 2.4 Visual Modulation Gate The BLSTM-CRF sequence labeling model benefits from using the visual context vector to initialize the LSTM cell. However, the better way to utilize visual features for sequence labeling is to incorporate the features at word level individually. However visual features contribute quite 1994 differently when they are used to infer the tags of different words. 
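Before turning to the gate, the attention computation of Section 2.3 can be summarized with a small illustrative sketch (not the authors' code). Q, V_r, W_t, W_v, W_a and b_a correspond to the symbols in the equations above; the 7x7 grid of 1,024-dimensional regional features comes from the ResNet in Section 2.2, while the query and projection sizes here are arbitrary choices made only for the example.

import numpy as np

d_v, d_q, d_p, n_reg = 1024, 300, 100, 49        # regional feature size, query size, projection size, 7x7 regions
rng = np.random.default_rng(0)
W_t = rng.normal(scale=0.1, size=(d_q, d_p))     # projects the text query vector Q
W_v = rng.normal(scale=0.1, size=(d_v, d_p))     # projects each regional visual vector
W_a = rng.normal(scale=0.1, size=(d_p, 1))
b_a = np.zeros(1)

def visual_context(Q, V_r):
    # Q: (d_q,) LSTM-encoded caption; V_r: (n_reg, d_v) regional features from the CNN
    P_t = np.tanh(Q @ W_t)                       # projected query, shape (d_p,)
    P_v = np.tanh(V_r @ W_v)                     # projected regions, shape (n_reg, d_p)
    A = P_t + P_v                                # summation broadcasts the query to every region
    scores = A @ W_a + b_a                       # one score per region, shape (n_reg, 1)
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                  # softmax attention weights over the 49 regions
    return (alpha * V_r).sum(axis=0)             # weighted sum of regional features: the visual context vector v_c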
For example, we can easily find matched visual patterns from associated images for verbs such as ‘sing’, ‘run’, and ‘play’. Words/Phrases such as names of basketball players, artists, and buildings are often well-aligned with objects in images. However it is difficult to align function words such as ‘the’, ‘of’ and ‘well’ with visual features. Fortunately, most of the challenging cases in name tagging involve nouns and verbs, the disambiguation of which can benefit more from visual features. We propose to use a visual modulation gate, similar to (Miyamoto and Cho, 2016; Yang et al., 2016a), to dynamically control the combination of visual features and word representation generated by BLSTM at word-level, before feed them into the CRF layer for tag prediction. The equations for the implementation of modulation gate are as follows: βv = σ(Wvhi + Uvvc + bv) βw = σ(Wwhi + Uwvc + bw) m = tanh(Wmhi + Umvc + bm) wm = βw · hi + βv · m where hi is the word representation generated by BLSTM, vc is the computed visual context vector, Wv, Ww, Wm, Uv, Uw and Um are weight matrices, σ is the element-wise sigmoid function, and wm is the modulated word representations fed into the CRF layer in Section 2.1. We conduct experiments to evaluate the impact of modulation gate in Section 4. 3 Datasets We evaluate our model on two multimodal datasets, which are collected from Twitter and Snapchat respectively. Table 1 summarizes the data statistics. Both datasets contain four types of named entities: Location, Person, Organization and Miscellaneous. Each data instance contains a pair of sentence and image, and the names in sentences are manually tagged by three expert labelers. Twitter name tagging. The Twitter name tagging dataset contains pairs of tweets and their associated images extracted from May 2016, January 2017 and June 2017. We use sports and social event related key words, such as concert, festival, soccer, basketball, as queries. We don’t take into consideration messages without images for this experiment. If a tweet has more than one image associated to it, we randomly select one of the images. Snap name tagging. The Snap name tagging dataset consists of caption and image pairs exclusively extracted from snaps submitted to public and live stories. They were collected between May and July of 2017. The data contains captions submitted to multiple community curated stories like the Electric Daisy Carnival (EDC) music festival and the Golden State Warrior’s NBA parade. Both Twitter and Snapchat are social media with plenty of multimodal posts, but they have obvious differences with sentence length and image styles. In Twitter, text plays a more important role, and the sentences in the Twitter dataset are much longer than those in the Snap dataset (16.0 tokens vs 8.1 tokens). The image is often more related to the content of the text and added with the purpose of illustrating or giving more context. On the other hand, as users of Snapchat use cameras to communicate, the roles of text and image are switched. Captions are often added to complement what is being portrayed by the snap. On our experiment section we will show that our proposed model outperforms baseline on both datasets. We believe the Twitter dataset can be an important step towards more research in multimodal name tagging and we plan to provide it as a benchmark upon request. 4 Experiment 4.1 Training Tokenization. 
To tokenize the sentences, we use the same rules as (Owoputi et al., 2013), except that we separate the hashtag ‘#’ from the words that follow it.
Labeling Schema. We use the standard BIO schema (Sang and Veenstra, 1999), because we see little difference when we switch to the BIOES schema (Ratinov and Roth, 2009).
Word embeddings. We use the 100-dimensional GloVe embeddings (Pennington et al., 2014) trained on 2 billion tweets (https://nlp.stanford.edu/projects/glove/) to initialize the lookup table, and fine-tune them during training.
Character embeddings. As in (Lample et al., 2016), we randomly initialize the character embeddings with uniform samples. Based on experimental results, the size of the character embeddings has little effect, and we set it to 50.

                        Training   Development   Testing
Snapchat   Sentences       4,817         1,032     1,033
           Tokens         39,035         8,334     8,110
Twitter    Sentences       4,290         1,432     1,459
           Tokens         68,655        22,872    23,051
Table 1: Sizes of the datasets in numbers of sentences and tokens.

Pretrained CNNs. We use the pretrained ResNet-152 (He et al., 2016) from PyTorch.
Early Stopping. We use early stopping (Caruana et al., 2001; Graves et al., 2013) with a patience of 15 to prevent the model from over-fitting.
Fine Tuning. The models are optimized with fine-tuning of both the word embeddings and the pretrained ResNet.
Optimization. The models achieve the best performance using mini-batch stochastic gradient descent (SGD) with batch size 20 and momentum 0.9 on both datasets. We set an initial learning rate of η0 = 0.03 with a decay rate of ρ = 0.01, and use gradient clipping of 5.0 to reduce the effect of exploding gradients.
Hyper-parameters. We summarize the hyper-parameters in Table 2.

Hyper-parameter                 Value
LSTM hidden state size            300
Char LSTM hidden state size        50
Visual vector size                100
Dropout rate                      0.5
Table 2: Hyper-parameters of the networks.

4.2 Results

Table 3 shows the performance on both datasets of the baseline, which is BLSTM-CRF with sentences as the only input, and of our proposed models:
BLSTM-CRF + Global Image Vector: uses the global image vector to initialize the BLSTM-CRF.
BLSTM-CRF + Visual attention: uses the attention-based visual context vector to initialize the BLSTM-CRF.
BLSTM-CRF + Visual attention + Gate: additionally modulates the word representations with the visual vector.
Our final model, BLSTM-CRF + Visual attention + Gate, which has both the visual attention component and the modulation gate, obtains the best F1 scores on both datasets. Visual features successfully play a role in validating entity types. For example, when there is a person in the image, the associated sentence is more likely to include a person name, but when there is a soccer field in the image, it is more likely to include a sports team name.
All the models get better scores on the Twitter dataset than on the Snap dataset, because the average length of the sentences in the Snap dataset (8.1 tokens) is much smaller than that of the Twitter dataset (16.0 tokens), which means there is much less contextual information in the Snap dataset. Comparing the gains from visual features on the two datasets, we also find that the model benefits more from visual features on the Twitter dataset, considering the much higher baseline scores on that dataset. Based on our observation, users of Snapchat often post selfies with captions, which means some of the images are not strongly related to their associated captions.
In contrast, users of Twitter prefer to post images to illustrate texts 4.3 Attention Visualization Figure 5 shows some good examples of the attention visualization and their corresponding name tagging results. The model can successfully focus on appropriate regions when the images are well aligned with the associated sentences. Based on our observation, the multimodal contexts in posts related to sports, concerts or festival are usually better aligned with each other, therefore the visual features easily contribute to these cases. For example, the ball and shoot action in example (a) in Figure 5 indicates that the context should be related to basketball, thus the ‘Warriors’ should be the name of a sports team. A singing person with a microphone in example (b) indicates that the name of an artist or a band (‘Radiohead’) may appear in the sentence. The second and the third rows in Figure 5 show some more challenging cases whose tagging results benefit from visual features. In example (d), the model pays attention to the big Apple logo, thus tags the ‘Apple’ in the sentence as an Organization name. In example (e) and (i), a small 1996 Model Snap Captions Twitter Precision Recall F1 Precision Recall F1 BLSTM-CRF 57.71 58.65 58.18 78.88 77.47 78.17 BLSTM-CRF + Global Image Vector 61.49 57.84 59.61 79.75 77.32 78.51 BLSTM-CRF + Visual attention 65.53 57.03 60.98 80.81 77.36 79.05 BLSTM-CRF + Visual attention + Gate 66.67 57.84 61.94 81.62 79.90 80.75 Table 3: Results of our models on noisy social media data. group of people indicates that it is likely to include names of bands (‘Florence and the Machine’ and ‘BTS’). And a crowd can indicate an organization (‘Warriorette’ in example (i)). A jersey shirt on the table indicates a sports team. (‘Leicester’ in example (h) can refer to both a city and a soccer club based in it.) 4.4 Error Analysis Figure 6 shows some failed examples that are categorized into three types: (1) bad alignments between visual and textual information; (2) blur images; (3) wrong attention made by the model. Name tagging greatly benefits from visual features when the sentences are well aligned with the associated image as we show in Section 4.3. But it is not always the case in social media. The example (a) in Figure 6 shows a failed example resulted from poor alignment between sentences and images. In this image, there are two bins standing in front of a wall, but the sentence talks about basketball players. The unrelated visual information makes the model tag ‘Cleveland’ as a Location, however it refers to the basketball team ‘Cleveland Cavaliers’. The image in example (b) is blur, so the extracted visual information extracted actually introduces noise instead of additional information. The Figure 5: Examples of visual attentions and NER outputs. 1997 (a). Nice image of [PER Kevin Love] and [PER Kyle Korver] during 1st half #NBAFinals #Cavsin9 #[LOC Cleveland] (b). Very drunk in a #magnum concert (c). Looking forward to editing some SBU baseball shots from Saturday. Figure 6: Examples of Failed Visual Attention. image in example (c) is about a baseball pitcher, but our model pays attention to the top right corner of the image. The visual context feature computed by our model is not related to the sentence, and results in missed tagging of ‘SBU’, which is an organization name. 5 Related Work In this section, we summarize relevant background on previous work on name tagging and visual attention. Name Tagging. 
In recent years, (Chiu and Nichols, 2015; Lample et al., 2016; Ma and Hovy, 2016) proposed several neural network architectures for named tagging that outperform traditional explicit features based methods (Chieu and Ng, 2002; Florian et al., 2003; Ando and Zhang, 2005; Ratinov and Roth, 2009; Lin and Wu, 2009; Passos et al., 2014; Luo et al., 2015). They all use Bidirectional LSTM (BLSTM) to extract features from a sequence of words. For characterlevel representations, (Lample et al., 2016) proposed to use another BLSTM to capture prefix and suffix information of words, and (Chiu and Nichols, 2015; Ma and Hovy, 2016) used CNN to extract position-independent character features. On top of BLSTM, (Chiu and Nichols, 2015) used a softmax layer to predict the label for each word, and (Lample et al., 2016; Ma and Hovy, 2016) used a CRF layer for joint prediction. Compared with traditional approaches, neural networks based approaches do not require hand-crafted features and achieved state-of-the-art performance on name tagging (Ma and Hovy, 2016). However, these methods were mainly developed for newswire and paid little attention to social media. For name tagging in social media, (Ritter et al., 2011) leveraged a large amount of unlabeled data and many dictionaries into a pipeline model. (Limsopatham and Collier, 2016) adapted the BLSTM-CRF model with additional word shape information, and (Aguilar et al., 2017) utilized an effective multi-task approach. Among these methods, our model is most similar to (Lample et al., 2016), but we designed a new visual attention component and a modulation control gate. Visual Attention. Since the attention mechanism was proposed by (Bahdanau et al., 2014), it has been widely adopted to language and vision related tasks, such as Image Captioning and Visual Question Answering (VQA), by retrieving the visual features most related to text context (Zhu et al., 2016; Anderson et al., 2017; Xu and Saenko, 2016; Chen et al., 2015). (Xu et al., 2015) proposed to predict a word based on the visual patch that is most related to the last predicted word for image captioning. (Yang et al., 2016b; Lu et al., 2016) applied attention mechanism for VQA, to find the regions in images that are most related to the questions. (Yu et al., 2016) applied the visual attention mechanism on video captioning. Our attention implementation approach in this work is similar to those used for VQA. The model finds the regions in images that are most related to the accompanying sentences, and then feed the visual features into an BLSTM-CRF sequence labeling model. The differences are: (1) we add visual context feature at each step of sequence labeling; and (2) we propose to use a gate to control the combination of the visual information and textual information based on their relatedness. 2 6 Conclusions and Future Work We propose a gated Visual Attention for name tagging in multimodal social media. We construct two multimodal datasets from Twitter and Snapchat. Experiments show an absolute 3%-4% F-score gain. We hope this work will encourage more research on multimodal social media in the future and we plan on making our benchmark available upon request. Name Tagging for more fine-grained types (e.g. 1998 soccer team, basketball team, politician, artist) can benefit more from visual features. For example, an image including a pitcher indicates that the ‘Giants’ in context should refer to the baseball team ‘San Francisco Giants’. 
We plan to expand our model to tasks such as fine-grained Name Tagging or Entity Liking in the future. Acknowledgments This work was partially supported by the U.S. DARPA AIDA Program No. FA8750-18-2-0014 and U.S. ARL NS-CTA No. W911NF-09-2-0053. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. References Gustavo Aguilar, Suraj Maharjan, Adrian Pastor L´opez Monroy, and Thamar Solorio. 2017. A multi-task approach for named entity recognition in social media data. In Proceedings of the 3rd Workshop on Noisy User-generated Text. Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2017. Bottom-up and top-down attention for image captioning and vqa. arXiv preprint arXiv:1707.07998. Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In Proceedings of the 2015 International Conference on Learning Representations. Timothy Baldwin, Marie-Catherine de Marneffe, Bo Han, Young-Bum Kim, Alan Ritter, and Wei Xu. 2015. Shared tasks of the 2015 workshop on noisy user-generated text: Twitter lexical normalization and named entity recognition. In Proceedings of the Workshop on Noisy User-generated Text. Rich Caruana, Steve Lawrence, and C Lee Giles. 2001. Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping. In Proceedings of the 2001 Advances in Neural Information Processing Systems. Kan Chen, Jiang Wang, Liang-Chieh Chen, Haoyuan Gao, Wei Xu, and Ram Nevatia. 2015. Abccnn: An attention based convolutional neural network for visual question answering. arXiv preprint arXiv:1511.05960. Hai Leong Chieu and Hwee Tou Ng. 2002. Named entity recognition: a maximum entropy approach using global information. In Proceedings of the 19th international conference on Computational Linguistics. Jason PC Chiu and Eric Nichols. 2015. Named entity recognition with bidirectional lstm-cnns. Transactions of the Association of Computational Linguistics. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Radu Florian, Abe Ittycheriah, Hongyan Jing, and Tong Zhang. 2003. Named entity recognition through classifier combination. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL. Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In Proceedings of the 2013 IEEE international conference on acoustics, speech and signal processing. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. 
Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. 1989. Backpropagation applied to handwritten zip code recognition. Neural computation. Nut Limsopatham and Nigel Henry Collier. 2016. Bidirectional lstm for named entity recognition in twitter messages. In Proceedings of the 2nd Workshop on Noisy User-generated Text. 1999 Dekang Lin and Xiaoyun Wu. 2009. Phrase clustering for discriminative learning. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Proceedings of the 2014 European Conference on Computer Vision. Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image coattention for visual question answering. In Proceedings of the 2016 Advances In Neural Information Processing Systems. Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Yasumasa Miyamoto and Kyunghyun Cho. 2016. Gated word-character recurrent language model. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Olutobi Owoputi, Brendan O’Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning. Alan Ritter, Sam Clark, Oren Etzioni, et al. 2011. Named entity recognition in tweets: an experimental study. In Proceedings of the conference on Empirical Methods in Natural Language Processing. 
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. Imagenet large scale visual recognition challenge. International Journal of Computer Vision. Erik F Sang and Jorn Veenstra. 1999. Representing text chunks. In Proceedings of the ninth conference on European chapter of the Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 2017 Advances in Neural Information Processing Systems. Huijuan Xu and Kate Saenko. 2016. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In Proceedings of the 2016 European Conference on Computer Vision. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 2015 International Conference on Machine Learning. Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W Cohen, and Ruslan Salakhutdinov. 2016a. Words or characters? fine-grained gating for reading comprehension. In Proceedings of the 2016 International Conference on Learning Representations. Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016b. Stacked attention networks for image question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Haonan Yu, Jiang Wang, Zhiheng Huang, Yi Yang, and Wei Xu. 2016. Video paragraph captioning using hierarchical recurrent neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition. Yuke Zhu, Oliver Groth, Michael Bernstein, and Li FeiFei. 2016. Visual7w: Grounded question answering in images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2000–2008 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2000 Zeroshot Multimodal Named Entity Disambiguation for Noisy Social Media Posts Seungwhan Moon1,2, Leonardo Neves2, Vitor Carvalho3 1 Language Technologies Institute, Carnegie Mellon University 2 Snap Research 3 Intuit [email protected], [email protected], vitor [email protected] Abstract We introduce the new Multimodal Named Entity Disambiguation (MNED) task for multimodal social media posts such as Snapchat or Instagram captions, which are composed of short captions with accompanying images. Social media posts bring significant challenges for disambiguation tasks because 1) ambiguity not only comes from polysemous entities, but also from inconsistent or incomplete notations, 2) very limited context is provided with surrounding words, and 3) there are many emerging entities often unseen during training. To this end, we build a new dataset called SnapCaptionsKB, a collection of Snapchat image captions submitted to public and crowd-sourced stories, with named entity mentions fully annotated and linked to entities in an external knowledge base. We then build a deep zeroshot multimodal network for MNED that 1) extracts contexts from both text and image, and 2) predicts correct entity in the knowledge graph embeddings space, allowing for zeroshot disambiguation of entities unseen in training set as well. The proposed model significantly outperforms the stateof-the-art text-only NED models, showing efficacy and potentials of the MNED task. 1 Introduction Online communications are increasingly becoming fast-paced and frequent, and hidden in these abundant user-generated social media posts are insights for understanding users and their preferences. However, these social media posts often come in unstructured text or images, making massive-scale opinion mining extremely challeng(a) Traditional NED (b) Multimodal NED Figure 1: Examples of (a) a traditional NED task, focused on disambiguating polysemous entities based on surrounding textual contexts, and (b) the proposed Multimodal NED task for short media posts, which leverages both visual and textual contexts to disambiguate an entity. Note that mentions are often lexically inconsistent or incomplete, and thus a fixed candidates generation method (based on exact mention-entity statistics) is not viable. ing. Named entity disambiguation (NED), the task of linking ambiguous entities from free-form text mention to specific entities in a pre-defined knowledge base (KB), is thus a critical step for extracting structured information which leads to its application for recommendations, advertisement, personalized assistance, etc. While many previous approaches on NED been successful for well-formed text in disambiguating polysemous entities via context resolution, several additional challenges remain for disambiguating entities from extremely short and coarse text found in social media posts (e.g. “juuustin ” as opposed to “I love Justin Bieber/Justin Trudeau/etc.”). In many of these cases it is simply impossible to disambiguate entities from text alone, due to enormous number of surface forms arising from incomplete and 2001 inconsistent notations. In addition, social media posts often include mentions of newly emerging entities unseen in training sets, making traditional context-based entity linking often not viable. 
However, as popular social media platforms are increasingly incorporating a mix of text and images (e.g. Snapchat, Instargram, Pinterest, etc.), we can advance the disambiguation task to incorporate additional visual context for understanding posts. For example, the mention of ‘juuustin’ is completely ambiguous in its textual form, but an accompanying snap image of a concert scene may help disambiguate or re-rank among several lexical candidates (e.g. Justin Bieber (a pop singer) versus Justin Trudeau (a politician) in Figure 1). To this end, we introduce a new task called Multimodal Named Entity Disambiguation (MNED) that handles unique challenges for social media posts composed of extremely short text and images, aimed at disambiguationg entities by leveraging both textual and visual contexts. We then propose a novel zeroshot MNED model, which obtains visual context vectors from images with a CNN (LeCun et al., 1989), and combines with textual context extracted from a bidirectional LSTM (Dyer et al., 2015) (Section 2.2). In addition, we obtain embeddings representation of 1M entities from a knowledge graph, and train the MNED network to predict label embeddings of entities in the same space as corresponding knowledge graph embeddings (Section 2.4). This approach effectively allows for zeroshot prediction of unseen entities, which is critical for scarce-label scenario due to extensive human annotation efforts required. Lastly, we develop a lexical embeddings model that determines lexical similarity between a mention and potential entities, to aid in prediction of a correct entity (Section 2.3). Section 2.5 details the model combining the components above. Note that our method takes different perspectives from the previous work on NED (He et al., 2013; Yamada et al., 2016; Eshel et al., 2017) in the following important ways. First, while most of the previous methods generate fixed “candidates” for disambiguation given a mention from mentionentity pair statistics (thus disambiguation is limited for entities with exact surface form matches), we do not fixate candidate generation, due to intractable variety of surface forms for each named entity and unforeseen mentions of emerging entities. Instead, we have a lexical model incorporated into the discriminative score function that serves as soft normalization of various surface forms. Second, we extract auxiliary visual contexts for detected entities from user-generated images accompanied with textual posts, which is crucial because captions in our dataset are substantially shorter than text documents in most other NED datasets. To the best of our knowledge, our work is the first in using visual contexts for the named entity disambiguation task. See Section 4 for the detailed literature review. Our contributions are as follows: for the new MNED task we introduce, we propose a deep zeroshot multimodal network with (1) a CNNLSTM hybrid module that extracts contexts from both image and text, (2) a zeroshot learning layer which via embeddings projection allows for entity linking with 1M knowledge graph entities even for entities unseen from captions in training set, and (3) a lexical language model called Deep Levenshtein to compute lexical similarities between mentions and entities, relaxing the need for fixed candidates generation. 
We show that the proposed approaches successfully disambiguate incomplete mentions as well as polysemous entities, outperforming the state-of-the-art models on our newly crawled SnapCaptionsKB dataset, composed of 12K image-caption pairs with named entities annotated and linked with an external KB.

2 Proposed Methods

Figure 2 illustrates the proposed model, which maps each multimodal social media post to one of the corresponding entities in the KB. Given a multimodal input that contains a mention of an ambiguous entity, we first extract textual and visual contexts with Bi-LSTMs and CNNs, respectively (Section 2.2). We also obtain a lexical character-level representation of a mention to compare with the lexical representation of KB entities, using a proposed model called Deep Levenshtein (Section 2.3). We then get high-dimensional label embeddings of KB entities constructed from a knowledge graph, where similar entities are mapped as neighbors in the same space (Section 2.4). Finally, we aggregate all the contextual information extracted from surrounding text, image, and lexical notation of a mention, and predict the best matching KB entity based on the knowledge graph label representation and lexical notation of KB entity candidates (Section 2.5).

Figure 2: The main architecture of our Multimodal NED network. We extract contextual information from an image, surrounding words, and lexical embeddings of a mention. The modality attention module determines weights for modalities, the weighted projections of which produce label embeddings in the same space as knowledge-base (KB) entity embeddings. We predict a final candidate by ranking based on similarities with KB entity knowledge graph embeddings as well as with lexical embeddings.

2.1 Notations

Let $X = \{x^{(i)}\}_{i=1}^{N}$ be a set of $N$ input social media post samples for disambiguation, with corresponding ground truth named entities $Y = \{y^{(i)}\}_{i=1}^{N}$ for $y \in Y_{KB}$, where $Y_{KB}$ is the set of entities in the KB. Each input sample is composed of three modalities: $x = \{x_w; x_v; x_c\}$, where $x_w = \{x_{w,t}\}_{t=1}^{L_w}$ is a sequence of words with length $L_w$ surrounding a mention in a post, $x_v$ is an image associated with a post (Section 2.2), and $x_c = \{x_{c,t}\}_{t=1}^{L_c}$ is a sequence of characters comprising a mention (Section 2.3), respectively. We denote the high-dimensional feature extractor functions for each modality as $w(x_w)$, $c(x_c)$, and $v(x_v)$. We represent each output label in two modalities: $y = \{y_{KB}; y_c\}$, where $y_{KB}$ is a knowledge base label embeddings representation (Section 2.4), and $y_c$ is a character embeddings representation of KB entities (Section 2.3: Deep Levenshtein). We formulate our zeroshot multimodal NED task as follows:

$$y = \operatorname*{argmax}_{y' \in Y_{KB}} \mathrm{sim}\big(f_{x \rightarrow y}(x), y'\big)$$

where $f_{x \rightarrow y}$ is a function with learnable parameters that projects multimodal input samples ($x$) into the same space as label representations ($y$), and $\mathrm{sim}(\cdot)$ produces a similarity score between prediction and ground truth KB entities.

2.2 Textual and Visual Context Features

Textual features: we represent the textual context of words surrounding a mention with a Bi-LSTM language model (Dyer et al., 2015) with distributed word semantics embeddings. We use the following implementation for the LSTM:
$$
\begin{aligned}
i_t &= \sigma(W_{xi} x_{w,t} + W_{hi} h_{t-1} + W_{ci} c_{t-1}) \\
c_t &= (1 - i_t) \odot c_{t-1} + i_t \odot \tanh(W_{xc} x_{w,t} + W_{hc} h_{t-1}) \\
o_t &= \sigma(W_{xo} x_{w,t} + W_{ho} h_{t-1} + W_{co} c_t) \\
h_t &= o_t \odot \tanh(c_t) \\
w(x_w) &= [\overrightarrow{h}_{L_w}; \overleftarrow{h}_{L_w}]
\end{aligned} \quad (1)
$$

where $h_t$ is the LSTM hidden layer output at decoding step $t$, and $w(x_w)$ is the output textual representation of the bi-directional LSTM, concatenating the left and right context at the last decoding step $t = L_w$. Bias terms for the gates are omitted for simplicity of formulation.

For the Bi-LSTM sentence encoder, we use pre-trained word embeddings obtained from an unsupervised language model aimed at learning co-occurrence statistics of words from a large external corpus. Word embeddings are thus represented as the distributional semantics of words. In our experiments, we use pre-trained embeddings from the Stanford GloVe model (Pennington et al., 2014).

Visual features: we take the final activation of a modified version of the convolutional network model called Inception (GoogLeNet) (Szegedy et al., 2015), trained on the ImageNet dataset (Russakovsky et al., 2015) to classify multiple objects in the scene. The final layer representation ($v(x_v)$) thus encodes discriminative information describing what objects are shown in an image, providing cues for disambiguation.

2.3 Lexical Embeddings: Deep Levenshtein

While traditional NED tasks assume a perfect lexical match between mentions and their corresponding entities, in our task it is important to account for the various surface forms of mentions (nicknames, mis-spellings, inconsistent notations, etc.) corresponding to each entity. Towards this goal, we train a separate deep neural network to compute approximate Levenshtein distance, which we call Deep Levenshtein (Figure 3), composed of a shared bi-directional character LSTM, a shared character embedding matrix, fully connected layers, and a dot product merge operation layer. The optimization is as follows:

$$\min_{c} \left\| \frac{1}{2}\left(\frac{c(x_c) \cdot c(x'_c)^\top}{\|c(x_c)\|\,\|c(x'_c)\|} + 1\right) - \mathrm{sim}(x_c, x'_c) \right\|^2 \quad (2)$$
$$\text{where } c(x_c) = [\overrightarrow{h}_{c,L_c}; \overleftarrow{h}_{c,L_c}]$$

where $c(\cdot)$ is a bi-directional LSTM output vector for a character sequence, defined similarly as in Eq. 1, $\mathrm{sim}(\cdot)$ is the output of the Deep Levenshtein network, producing a normalized similarity score in the range [0,1] based on Levenshtein edit distance, and $(x_c, x'_c)$ is any pair of two strings. We generate millions of these pairs as training data by artificially corrupting seed strings by varying degrees (addition, deletion, replacement). Once trained, the model can produce a purely lexical embedding of a string without semantic allusion (via $c(\cdot)$), and predict the lexical similarity between two strings based on their distance in the embedding space. On an intuitive level, this component effectively bypasses normalization steps, and instead incorporates lexical similarities between input mentions and output KB entities into the overall optimization of the disambiguation network. We use the by-product $c(\cdot)$ network to extract lexical embeddings of mentions and KB entities, and freeze $c$ during training of the disambiguation network. We observe that this approach significantly outperforms alternative ways of obtaining character embeddings (e.g. having a character Bi-LSTM as part of the disambiguation network training, which unnecessarily learns semantic allusions that are prone to errors when notations are inconsistent).

Figure 3: Deep Levenshtein, which predicts approximate Levenshtein scores between two strings. As a byproduct of this model, the shared Bi-LSTM can produce lexical embeddings purely based on the lexical properties of character sequences.
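To make the training recipe above concrete, the following is a minimal sketch (in PyTorch, which the paper does not specify) of a shared character Bi-LSTM trained so that a rescaled cosine similarity between two string embeddings regresses onto their normalised Levenshtein similarity, as in Eq. 2. All names (CharBiLSTM, corrupt, levenshtein_sim) and hyperparameters are illustrative assumptions, not the authors' code.

```python
# A minimal sketch of the Deep Levenshtein component (Eq. 2); assumes PyTorch.
import random
import string
import torch
import torch.nn as nn

def levenshtein_sim(a, b):
    """Normalised Levenshtein similarity in [0, 1]."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return 1.0 - d[m][n] / max(m, n, 1)

class CharBiLSTM(nn.Module):
    """Shared character encoder c(.): embeds a string, returns [h_fwd; h_bwd]."""
    def __init__(self, vocab, dim=64, hidden=64):
        super().__init__()
        self.stoi = {ch: i + 1 for i, ch in enumerate(vocab)}  # 0 = padding
        self.emb = nn.Embedding(len(vocab) + 1, dim, padding_idx=0)
        self.lstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, strings):
        max_len = max(len(s) for s in strings)
        ids = torch.zeros(len(strings), max_len, dtype=torch.long)
        for i, s in enumerate(strings):
            for t, ch in enumerate(s):
                ids[i, t] = self.stoi.get(ch, 0)
        _, (h, _) = self.lstm(self.emb(ids))
        return torch.cat([h[0], h[1]], dim=-1)  # concat final fwd/bwd states

def corrupt(s):
    """Randomly add/delete/replace characters to create a noisy surface form."""
    s = list(s)
    for _ in range(random.randint(1, 3)):
        op, pos = random.choice("adr"), random.randrange(max(len(s), 1))
        if op == "a":
            s.insert(pos, random.choice(string.ascii_lowercase))
        elif op == "d" and s:
            del s[pos]
        elif s:
            s[pos] = random.choice(string.ascii_lowercase)
    return "".join(s) or "a"

enc = CharBiLSTM(string.ascii_lowercase + " ")
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
seeds = ["justin bieber", "justin trudeau", "air jordan", "stephen curry"]
for step in range(200):
    a = random.choice(seeds)
    b = corrupt(a) if random.random() < 0.5 else random.choice(seeds)
    ea, eb = enc([a]), enc([b])
    cos = nn.functional.cosine_similarity(ea, eb)  # in [-1, 1]
    pred = 0.5 * (cos + 1.0)                        # rescale to [0, 1]
    target = torch.tensor([levenshtein_sim(a, b)])
    loss = ((pred - target) ** 2).mean()            # squared error, as in Eq. 2
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, only the encoder would be kept (and frozen), mirroring the paper's use of $c(\cdot)$ as a fixed lexical embedding function inside the disambiguation network.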
2.4 Label Embeddings from Knowledge Graph

Due to the overwhelming variety of (newly trending) entities mentioned over social media posts, at test phases we frequently encounter new named entities that are unseen in the training data. In order to address this issue, we propose a zeroshot learning approach (Frome et al., 2013) by inducing embeddings obtained from knowledge graphs on KB entities. Knowledge graph label embeddings are learned from known relations among entities within a graph (e.g. ‘IS-A’, ‘LOCATED-AT’, etc.), the resulting embeddings of which can group similar entities closer in the same space (e.g. ‘pop stars’ are in a small cluster, ‘people’ and ‘organizations’ clusters are far apart, etc.) (Bordes et al., 2013; Wang et al., 2014; Nickel et al., 2016). Once the high-level mapping from contextual information to label embeddings is learned, the knowledge-graph based zeroshot approach can improve entity linking performance for ambiguous entities unseen in training data. In brief formulation, the model for obtaining embeddings from a knowledge graph (composed of subject-relation-object $(s, r, o)$ triplets) is as follows:

$$P(I_r(s, o) = 1 \mid e, e_r, \theta) = \mathrm{score}_\theta\big(e(s), e_r(r), e(o)\big) \quad (3)$$

where $I_r$ is an indicator function of a known relation $r$ for two entities $(s, o)$ (1: valid relation, 0: unknown relation), $e$ is a function that extracts embeddings for entities, $e_r$ extracts embeddings for relations, and $\mathrm{score}_\theta(\cdot)$ is a deep neural network that produces the likelihood of a valid triplet. In our experiments, we use the 1M subset of the Freebase knowledge graph (Bast et al., 2014) to obtain label embeddings with the Holographic KB implementation by Nickel et al. (2016).

2.5 Deep Zeroshot MNED Network (DZMNED)

Using the contextual information extracted from surrounding text and an accompanying image (Section 2.2) and the lexical embeddings of a mention (Section 2.3), we build a Deep Zeroshot MNED network (DZMNED) which predicts a corresponding KB entity based on its knowledge graph embeddings (Section 2.4) and lexical similarity (Section 2.3) with the following objective:

$$\min_{W} \; L_{KB}(x, y_{KB}; W_w, W_v, W_f) + L_c(x_c, y_c; W_c) + R(W)$$
$$\text{where } L_{KB}(\cdot) = \frac{1}{N}\sum_{i=1}^{N} \sum_{\tilde{y} \neq y^{(i)}_{KB}} \max\big[0,\; \tilde{y} \cdot y^{(i)}_{KB} - f(x^{(i)}) \cdot (y^{(i)}_{KB} - \tilde{y})^\top\big]$$
$$L_c(\cdot) = \frac{1}{N}\sum_{i=1}^{N} \sum_{\tilde{y} \neq y^{(i)}_{c}} \max\big[0,\; \tilde{y} \cdot y^{(i)}_{c} - c(x^{(i)}_{c}) \cdot (y^{(i)}_{c} - \tilde{y})^\top\big]$$

where $L_{KB}(\cdot)$ is the supervised hinge rank loss for knowledge graph embeddings prediction, $L_c(\cdot)$ is the loss for the lexical mapping between mentions and KB entities, $x$ is a weighted average of the three modalities $x = \{x_w; x_v; x_c\}$ via the modality attention module, $f(\cdot)$ is a transformation function with stacked layers that projects the weighted input to the KB embeddings space, $\tilde{y}$ refers to the embeddings of negative samples randomly sampled from KB entities excluding the ground truth label of the instance, $W = \{W_f, W_c, W_w, W_v\}$ are the learnable parameters for $f$, $c$, $w$, and $v$ respectively, and $R(W)$ is a weight decay regularization term.

Similarly to Moon et al. (2018), we formulate the modality attention module for our MNED network as follows, which selectively attenuates or amplifies modalities:

$$[a_w; a_c; a_v] = \sigma\big(W_m \cdot [x_w; x_c; x_v] + b_m\big) \quad (4)$$
$$\alpha_m = \frac{\exp(a_m)}{\sum_{m' \in \{w,c,v\}} \exp(a_{m'})} \quad \forall m \in \{w, c, v\}$$
$$x = \sum_{m \in \{w,c,v\}} \alpha_m x_m \quad (5)$$

where $\alpha = [\alpha_w; \alpha_c; \alpha_v] \in \mathbb{R}^3$ is an attention vector, and $x$ is a final context vector that maximizes information gain.
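As a concrete illustration of Eqs. (4)-(5), the following is a small sketch of the modality attention module, assuming PyTorch and equal-sized context vectors for the three modalities; the class and variable names are our own, not the authors'.

```python
# Sketch of the modality attention of Eqs. (4)-(5); assumes PyTorch.
import torch
import torch.nn as nn

class ModalityAttention(nn.Module):
    """Re-weights word, character and visual context vectors (all of size dim)."""
    def __init__(self, dim):
        super().__init__()
        self.W_m = nn.Linear(3 * dim, 3)  # one raw gate per modality, plus bias b_m

    def forward(self, x_w, x_c, x_v):
        a = torch.sigmoid(self.W_m(torch.cat([x_w, x_c, x_v], dim=-1)))  # Eq. (4)
        alpha = torch.softmax(a, dim=-1)                                  # Eq. (5)
        x = (alpha[..., 0:1] * x_w + alpha[..., 1:2] * x_c
             + alpha[..., 2:3] * x_v)
        return x, alpha

att = ModalityAttention(dim=128)
x_w, x_c, x_v = (torch.randn(4, 128) for _ in range(3))  # a batch of 4 posts
x, alpha = att(x_w, x_c, x_v)  # x: (4, 128) fused context, alpha: (4, 3) weights
```

A single linear layer produces one raw gate per modality, which is then squashed and renormalised into the attention weights, so an uninformative modality (e.g. a generic image) can be attenuated before projection into the KB embedding space.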
Intuitively, the model is trained to produce a higher dot product similarity between the projected embeddings with its correct label than with an incorrect negative label in both the knowledge graph label embeddings and the lexical embeddings spaces, where the margin is defined as the similarity between a ground truth sample and a negative sample. At test time, the following label-producing nearest neighbor (1-NN) classifier is used for the target task (we cache all the label embeddings to avoid repetitive projections): 1-NN(x) = argmax (yKB,yc)∈YKB f(x)·yKB⊤+g(xc)·yc⊤ (6) In summary, the model produces (1) projection of input modalities (mention, surrounding text, image) into the knowledge graph embeddings space, and (2) lexical embeddings representation of mention, which then calculates a combined score of contextual (knowledge graph) and string similarities with each entity in YKB. 3 Empirical Evaluation Task: Given a caption and an accompanying image (if available), the goal is to disambiguate and link a target mention in a caption to a corresponding entity from the knowledge base (1M subset of the Freebase knowledge graph (Bast et al., 2014)). 3.1 Datasets Our SnapCaptionsKB dataset is composed of 12K user-generated image and textual caption pairs where named entities in captions and their links to KB entities are manually labeled by expert human annotators. These captions are collected exclusively from snaps submitted to public and crowd-sourced stories (aka Live Stories or Our Stories). Examples of such stories are “New York Story” or “Thanksgiving Story”, which are aggregated collections of snaps for various public venues, events, etc. Our data do not contain raw images, and we only provide textual captions and obfuscated visual descriptor features extracted from the pre-trained InceptionNet. We split the dataset randomly into train (70%), validation (15%), and test sets (15%). The captions data have average length of 29.5 characters (5.57 words) with vocabulary size 16,553, where 6,803 are considered unknown tokens from Stanford GloVE embeddings (Pennington et al., 2014). 2005 Named entities annotated in the dataset include many of new and emerging entities found in various surface forms. To the best of our knowledge, our SnapCaptionsKB is the only dataset that contains image-caption pairs with human-annotated named entities and their links to KB entities. 3.2 Baselines We report performance of the following state-ofthe-art NED models as baselines, with several candidate generation methods and variations of our proposed approach to examine contributions of each component (W: word, C: char, V: visual). Candidates generation: Note that our zeroshot approach allows for entity disambiguation without a fixed candidates generation process. In fact, we observe that the conventional method for fixed candidates generation harms the performance for noisy social media posts with many emerging entities. This is because the difficulty of entity linking at test time rises not only from multiple entities (e) linking to a single mention (m), but also from each entity found in multiple surface forms of mentions (often unseen at train time). To show the efficacy of our approach that does not require candidates generation, we compare with the following candidates generation methods: • m→e hash list: This method retrieves KB entity (e) candidates per mention (m) based on exact (m, e) pair occurrence statistics from a training corpora. 
This is the most predominantly used candidates generation method (He et al., 2013; Yamada et al., 2016; Eshel et al., 2017). Note that this approach is especially vulnerable at test time to noisy mentions or emerging entities with no or a few matching candidate entities from training set. • k-NN: We also consider using lexical neighbors of mentions from KB entities as candidates. This approach can be seen as soft normalization to relax the issue of having to match a variety of surface forms of a mention to KB entities. We use our Deep Levenshtein (Section 2.3) to compute lexical embeddings of KB entities and mentions, and retrieves Euclidean neighbors (and their polysemous entities) as candidates. NED models: We choose as baselines the following state-of-the-art NED models for noisy text, as well as several configurations of our proposed approach to examine contributions of each component (W: word, C: char, V: visual). • sDA-NED (W only) (He et al., 2013): uses a deep neural network with stacked denoising autoencoders (sDA) to encode bag-of-words representation of textual contexts and to directly compare mentions and entities. • ARNN (W only) (Eshel et al., 2017): uses an Attention RNN model that computes similarity between word and entity embeddings to disambiguate among fixed candidates. • Deep Zeroshot (W only): uses the deep zeroshot architecture similar to Figure 2, but uses word contexts (caption) only. • (proposed) DZMNED + Deep Levenshtein + InceptionNet with modality attention (W+C+V): is the proposed approach as described in Figure 2. • (proposed) DZMNED + Deep Levenshtein + InceptionNet w/o modality attention (W+C+V): concatenates all the modality vectors instead. • (proposed) DZMNED + Deep Levenshtein (W+C): only uses textual context. • (proposed) DZMNED + Deep Levenshtein w/o modality attention (W+C): does not use the modality attention module, and instead concatenates word and lexical embeddings. 3.3 Results Parameters: We tune the parameters of each model with the following search space (bold indicate the choice for our final model): character embeddings dimension: {25, 50, 100, 150, 200, 300}, word embeddings size: {25, 50, 100, 150, 200, 300}, knowledge graph embeddings size: {100, 200, 300}, LSTM hidden states: {50, 100, 150, 200, 300}, and x dimension: {25, 50, 100, 150, 200, 300}. We optimize the parameters with Adagrad (Duchi et al., 2011) with batch size 10, learning rate 0.01, epsilon 10−8, and decay 0.1. Main Results: Table 1 shows the Top-1, 3, 5, 10, and 50 candidates retrieval accuracy results on the Snap Captions dataset. We see that the proposed approach significantly outperforms the baselines which use fixed candidates generation 2006 Modalities Model Candidates Generation Accuracy (%) Top-1 Top-3 Top-5 Top-10 Top-50 W ARNN (Eshel et al., 2017) m→e list 51.2 60.4 66.5 66.9 66.9 W ARNN 5-NN (lexical) 35.2 43.3 45.0 W ARNN 10-NN (lexical) 31.9 40.1 44.5 50.7 W sDA-NED (He et al., 2013) m→e list 48.7 57.3 66.3 66.9 66.9 W Zeroshot N/A 43.6 63.8 67.1 70.5 77.2 W + C DZMNED N/A 67.0 72.7 74.8 76.8 85.0 W + C DZMNED + Modality Attention N/A 67.8 73.5 74.8 76.2 84.6 W + C + V DZMNED N/A 67.2 74.6 77.7 80.5 88.1 W + C + V DZMNED + Modality Attention N/A 68.1 75.5 78.2 80.9 87.9 Table 1: NED performance on the SnapCaptionsKB dataset at Top-1, 3, 5, 10, 50 accuracies. The classification is over 1M entities. Candidates generation methods: N/A, or over a fixed number of candidates generated with methods: m→e hash list and kNN (lexical neighbors). 
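For reference, the Top-k retrieval accuracies reported in Table 1 can be computed from a score matrix over all KB entities (as produced by the ranking in Eq. 6); the snippet below is a hedged sketch of such an evaluation on toy random scores, not the authors' evaluation script.

```python
# Sketch of Top-k retrieval accuracy over a score matrix; not the paper's code.
import numpy as np

def topk_accuracy(scores, gold, ks=(1, 3, 5, 10, 50)):
    """scores: (num_examples, num_entities) similarities; gold: (num_examples,) entity ids."""
    order = np.argsort(-scores, axis=1)                 # entities by decreasing score
    ranks = np.argmax(order == gold[:, None], axis=1)   # position of the gold entity
    return {k: float(np.mean(ranks < k)) for k in ks}

scores = np.random.randn(100, 1000)           # e.g. 100 mentions vs. a toy 1K-entity KB
gold = np.random.randint(0, 1000, size=100)
print(topk_accuracy(scores, gold))
```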
KB Embeddings Top-1 Top-5 Top-10 Trained with 1M entities 68.1 78.2 80.9 Trained with 10K entities 60.3 72.5 75.9 Random embeddings 41.4 45.8 48.0 Table 2: MNED performance (Top-1, 5, 10 accuracies) on SnapCaptionsKB with varying qualities of KB embeddings. Model: DZMNED (W+C+V) method. Note that m →e hash list-based methods, which retrieve as candidates the KB entities that appear in the training set of captions only, has upper performance limit at 66.9%, showing the limitance of fixed candidates generation method for unseen entities in social media posts. k-NN methods which retrieve lexical neighbors of mention (in an attempt to perform soft normalization on mentions) also do not perform well. Our proposed zeroshot approaches, however, do not fixate candidate generation, and instead compares combined contextual and lexical similarities among all 1M KB entities, achieving higher upper performance limit (Top-50 retrieval accuracy reaches 88.1%). This result indicates that the proposed zeroshot model is capable of predicting for unseen entities as well. The lexical sub-model can also be interpreted as functioning as soft neural mapping of mention to potential candidates, rather than heuristic matching to fixed candidates. In addition, when visual context is available (W+C+V), the performance generally improves over the textual models (W+C), showing that visual information can provide additional contexts for disambiguation. The modality attention module also adds performance gain by re-weighting the modalities based on their informativeness. Error Analysis: Table 3 shows example cases where incorporation of visual contexts affects disambiguation of mentions in textual captions. For example, polysemous entities such as ‘Jordan’ in the caption “Taking the new Jordan for a walk” or ‘CID’ as in “LETS GO CID” are hard to disambiguate due to the limited textual contexts provided, while visual information (e.g. visual tags ‘footwear’ for Jordan, ‘DJ’ for CID) provides similarities to each mention’s distributional semantics from other training examples. Mentions unseen at train time (‘STEPHHHH’, ‘murica’) often resort to lexical neighbors by (W+C), whereas visual contexts can help disambiguate better. A few cases where visual contexts are not helpful include visual tags that are not related to mentions, or do not complement already ambiguous contexts. Sensitivity to KB Embeddings Quality: The proposed approach relies its prediction on entity matching in the KB embeddings space, and hence the quality of KB embeddings is crucial for successful disambiguation. To characterize this aspect, we provide Table 2 which shows MNED performance with varying quality of embeddings as follows: KB embeddings learned from 1M knowledge graph entities (same as in the main experiments), from 10K subset of entities (less triplets to train with in Eq.3, hence lower quality), and random embeddings (poorest) - while all the other parameters are kept the same. It can be seen that the performance notably drops with lower quality of KB embeddings. When KB embeddings are replaced by random embeddings, the network effectively prevents the contextual zeroshot matching to KB entities and relies only on lexical similarities, achieving the poorest performance. 2007 Caption (target) Visual Tags GT Top-1 Prediction (W+C+V) (W+C) + “YA BOI STEPHHHH” sports equip, ball, parade, ... Stephen Curry (=GT) Stephenville “Taking the new Jordan for a walk” footwear, shoe, sock, ... 
Air Jordan (=GT) Michael Jordan “out for murica’s bday ” parade, flag, people, ... U.S.A. (=GT) Murcia (Spain) “Come on now, Dre” club, DJ, night, ... Dr. Dre (=GT) Dre Kirkpatrick “LETS GO CID” drum, DJ, drummer, ... CID (DJ) (=GT) CID (ORG) “kick back hmu for addy.” weather, fog, tile, ... Adderall GoDaddy (=GT) “@Sox to see get retired! ” sunglasses, stadium, ... Red Sox White Sox White Sox Table 3: Error analysis: when do images help NED? Ground-truth (GT) and predictions of our model with vision input (W+C+V) and the one without (W+C) for the underlined mention are shown. For interpretability, visual tags (label output of InceptionNet) are presented instead of actual feature vectors. 4 Related Work NED task: Most of the previous NED models leverage local textual information (He et al., 2013; Eshel et al., 2017) and/or document-wise global contexts (Hoffart et al., 2011; Chisholm and Hachey, 2015; Pershina et al., 2015; Globerson et al., 2016), in addition to other auxiliary contexts or priors for disambiguating a mention. Note that most of the NED datasets (e.g. TAC KBP (Ji et al., 2010), ACE (Bentivogli et al., 2010), CoNLL-YAGO (Hoffart et al., 2011), etc.) are extracted from standardized documents with web links such as Wikipedia (with relatively ample textual contexts), and that named entitiy disambiguation specifically for short and noisy social media posts are rarely discussed. Note also that most of the previous literature assume the availability of “candidates” or web links for disambiguation via mention-entity pair counts from training set, which is vulnerable to inconsistent surface forms of entities predominant in social media posts. Our model improves upon the state-of-the-art NED models in three very critical ways: (1) incorporation of visual contexts, (2) addition of the zeroshot learning layer, which allows for disambiguation of unseen entities during training, and (3) addition of the lexical model that computes lexical similarity entities to correctly recognize inconsistent surface forms of entities. Multimodal learning studies learning of a joint model that leverages contextual information from multiple modalities in parallel. Some of the relevant multimodal learning task to our MNED system include the multimodal named entity recognition task (Moon et al., 2018), which leverages both text and image to classify each token in a sentence to named entity or not. In their work, they employ an entity LSTM that takes as input each modality, and a softmax layer that outputs an entity label at each decoding step. Contrast to their work, our MNED addresses unique challenges characterized by zeroshot ranking of 1M knowledge-base entities (vs. categorical entity types prediction), incorporation of an external knowledge graph, lexical embeddings, etc. Another is the multimodal machine translation task (Elliott et al., 2015; Specia et al., 2016), which takes as input text in source language as well as an accompanying image to output a translated text in target language. These models usually employ a sequence-to-sequence architecture (e.g. target language decoder takes as input both encoded source language and images) often with traditional attention modules widely used in other image captioning systems (Xu et al., 2015; Sukhbaatar et al., 2015). To the best of our knowledge, our approach is the first multimodal learning work at incorporating visual contexts for the NED task. 
5 Conclusions We introduce a new task called Multimodal Named Entity Disambiguation (MNED), which is applied on short user-generated social media posts that are composed of text and accompanying images. Our proposed MNED model improves upon the state-of-the-art models by 1) extracting visual contexts complementary to textual contexts, 2) by leveraging lexical embeddings into entity matching which accounts for various surface forms of entities, removing the need for fixed candidates generation process, and 3) by performing entity matching in the distributed knowledge graph embeddings space, allowing for matching of unseen mentions and entities by context resolutions. 2008 References Hannah Bast, Florian Baurle, Bjorn Buchhold, and Elmar Haussmann. 2014. Easy access to the freebase dataset. In WWW. Luisa Bentivogli, Pamela Forner, Claudio Giuliano, Alessandro Marchetti, Emanuele Pianta, and Kateryna Tymoshenko. 2010. Extending english ace 2005 corpus annotation with ground-truth links to wikipedia. In Proceedings of the 2nd Workshop on The Peoples Web Meets NLP: Collaboratively Constructed Semantic Resources, pages 19–27. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In NIPS, pages 2787–2795. Andrew Chisholm and Ben Hachey. 2015. Entity disambiguation with web links. Transactions of the Association of Computational Linguistics, 3(1):145– 156. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. JMLR. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. ACL. Desmond Elliott, Stella Frank, and Eva Hasler. 2015. Multi-language image description with neural sequence models. CoRR, abs/1510.04709. Yotam Eshel, Noam Cohen, Kira Radinsky, Shaul Markovitch, Ikuda Yamada, and Omer Levy. 2017. Named entity disambiguation for noisy text. CoNLL. Andrea Frome, Greg Corrado, Jon Shlens, Samy Bengio, Jeffrey Dean, Marc’Aurelio Ranzato, and Tomas Mikolov. 2013. Devise: A deep visualsemantic embedding model. In NIPS. Amir Globerson, Nevena Lazic, Soumen Chakrabarti, Amarnag Subramanya, Michael Ringaard, and Fernando Pereira. 2016. Collective entity resolution with multi-focal attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 621–631. Zhengyan He, Shujie Liu, Mu Li, Ming Zhou, Longkai Zhang, and Houfeng Wang. 2013. Learning entity representation for entity disambiguation. Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen F¨urstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 782–792. Association for Computational Linguistics. Heng Ji, Ralph Grishman, Hoa Trang Dang, Kira Griffitt, and Joe Ellis. 2010. Overview of the tac 2010 knowledge base population track. In Third Text Analysis Conference (TAC 2010), volume 3, pages 3–13. Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. 1989. Backpropagation applied to handwritten zip code recognition. Neural computation. Seungwhan Moon, Leonard Neves, and Vitor Carvalho. 2018. Multimodal named entity recognition for short social media posts. NAACL. 
Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowledge graphs. AAAI. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Maria Pershina, Yifan He, and Ralph Grishman. 2015. Personalized page rank for named entity disambiguation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 238–243. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. ImageNet Large Scale Visual Recognition Challenge. IJCV. Lucia Specia, Stella Frank, Khalil Sima’an, and Desmond Elliott. 2016. A shared task on multimodal machine translation and crosslingual image description. In WMT, pages 543–553. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In NIPS, pages 2440–2448. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. 2015. Going deeper with convolutions. CVPR. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In AAAI, pages 1112–1119. Citeseer. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2(3):5. Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. CoNLL.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2009–2019 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2009 Semi-supervised User Geolocation via Graph Convolutional Networks Afshin Rahimi Trevor Cohn Timothy Baldwin School of Computing and Information Systems The University of Melbourne [email protected] {t.cohn,tbaldwin}@unimelb.edu.au Abstract Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, that uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the stateof-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN. 1 Introduction User geolocation, the task of identifying the “home” location of a user, is an integral component of many applications ranging from public health monitoring (Paul and Dredze, 2011; Chon et al., 2015; Yepes et al., 2015) and regional studies of sentiment, to real-time emergency awareness systems (De Longueville et al., 2009; Sakaki et al., 2010), which use social media as an implicit information resource about people. Social media services such as Twitter rely on IP addresses, WiFi footprints, and GPS data to geolocate users. Third-party service providers don’t have easy access to such information, and have to rely on public sources of geolocation information such as the profile location field, which is noisy and difficult to map to a location (Hecht et al., 2011), or geotagged tweets, which are publicly available for only 1% of tweets (Cheng et al., 2010; Morstatter et al., 2013). The scarcity of publicly available location information motivates predictive user geolocation from information such as tweet text and social interaction data. Most previous work on user geolocation takes the form of either supervised text-based approaches (Wing and Baldridge, 2011; Han et al., 2012) relying on the geographical variation of language use, or graph-based semi-supervised label propagation relying on location homophily in user–user interactions (Davis Jr et al., 2011; Jurgens, 2013). Both text and network views are critical in geolocating users. Some users post a lot of local content, but their social network is lacking or is not representative of their location; for them, text is the dominant view for geolocation. Other users have many local social interactions, and mostly use social media to read other people’s comments, and for interacting with friends. Single-view learning would fail to accurately geolocate these users if the more information-rich view is not present. There has been some work that uses both the text and network views, but it either completely ignores unlabelled data (Li et al., 2012a; Miura et al., 2017), or just uses unlabelled data in the network view (Rahimi et al., 2015b; Do et al., 2017). Given that the 1% of geotagged tweets is often used for supervision, it is crucial for geolocation models to be able to leverage unlabelled data, and to perform well under a minimal supervision scenario. 
In this paper, we propose GCN, an end-to-end user geolocation model based on Graph Convolutional Networks (Kipf and Welling, 2017) that jointly learns from text and network information to classify a user timeline into a location. Our contributions are: (1) we evaluate our model under a minimal supervision scenario which is close to real-world applications and show that GCN outperforms two strong baselines; (2) given sufficient supervision, we show that GCN is competitive, although the much simpler MLP-TXT+NET outperforms state-of-the-art models; and (3) we show that highway gates play a significant role in controlling the amount of useful neighbourhood smoothing in GCN.1

1 Code and data available at https://github.com/afshinrahimi/geographconv

2 Model

We propose a transductive multiview geolocation model, GCN, using Graph Convolutional Networks (“GCN”: Kipf and Welling (2017)). We also introduce two multiview baselines: MLP-TXT+NET based on concatenation of text and network, and DCCA based on Deep Canonical Correlation Analysis (Andrew et al., 2013).

2.1 Multiview Geolocation

Let $X \in \mathbb{R}^{|U| \times |V|}$ be the text view, consisting of the bag of words for each user in $U$ using vocabulary $V$, and $A \in \mathbb{1}^{|U| \times |U|}$ be the network view, encoding user–user interactions. We partition $U = U_S \cup U_H$ into a supervised and heldout (unlabelled) set, $U_S$ and $U_H$, respectively. The goal is to infer the location of unlabelled samples $Y_U$, given the location of labelled samples $Y_S$, where each location is encoded as a one-hot classification label, $y_i \in \mathbb{1}^c$ with $c$ being the number of target regions.

2.2 GCN

GCN defines a neural network model $f(X, A)$ with each layer:

$$\hat{A} = \tilde{D}^{-\frac{1}{2}} (A + \lambda I) \tilde{D}^{-\frac{1}{2}}$$
$$H^{(l+1)} = \sigma\big(\hat{A} H^{(l)} W^{(l)} + b\big), \quad (1)$$

where $\tilde{D}$ is the degree matrix of $A + \lambda I$; the hyperparameter $\lambda$ controls the weight of a node against its neighbourhood, and is set to 1 in the original model (Kipf and Welling, 2017); $H^{(0)} = X$; the $d_{in} \times d_{out}$ matrix $W^{(l)}$ and $d_{out} \times 1$ matrix $b$ are trainable layer parameters; and $\sigma$ is an arbitrary nonlinearity. The first layer takes an average of each sample and its immediate neighbours (labelled and unlabelled) using weights in $\hat{A}$, and performs a linear transformation using $W$ and $b$ followed by a nonlinear activation function ($\sigma$). In other words, for user $u_i$, the output of layer $l$ is computed by:

$$\vec{h}^{l+1}_i = \sigma\Big(\sum_{j \in \mathrm{nhood}(i)} \hat{A}_{ij} \vec{h}^{l}_j W^{l} + b^{l}\Big), \quad (2)$$

where $W^l$ and $b^l$ are learnable layer parameters, and $\mathrm{nhood}(i)$ indicates the neighbours of user $u_i$. Each extra layer in GCN extends the neighbourhood over which a sample is smoothed. For example, a GCN with 3 layers smooths each sample with its neighbours up to 3 hops away, which is beneficial if location homophily extends to a neighbourhood of this size.

Figure 1: The architecture of the GCN geolocation model with layer-wise highway gates ($W^i_h$, $b^i_h$). GCN is applied to a BoW model of user content over the @-mention graph to predict user location.
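The following is a minimal numpy sketch of Equations (1)-(2): the symmetric normalisation of the @-mention graph and a single graph-convolutional layer, with an optional highway gate of the kind introduced in the next subsection. The function names, toy graph, and dimensions are illustrative assumptions rather than the released implementation.

```python
# Sketch of Eq. (1)-(2): normalised adjacency and one GCN layer (optionally gated).
import numpy as np

def normalise_adjacency(A, lam=1.0):
    """Compute A_hat = D^(-1/2) (A + lam*I) D^(-1/2)."""
    A_tilde = A + lam * np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_hat, H, W, b, gate=None):
    """One layer: H' = tanh(A_hat @ H @ W + b), optionally gated with its input."""
    H_new = np.tanh(A_hat @ H @ W + b)
    if gate is not None:                      # highway gate (W_t, b_t), see Eq. (3)
        W_t, b_t = gate
        T = 1.0 / (1.0 + np.exp(-(H @ W_t + b_t)))
        H_new = H_new * T + H * (1.0 - T)     # requires matching layer dimensions
    return H_new

rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.4).astype(float)  # toy symmetric @-mention graph
A = np.maximum(A, A.T); np.fill_diagonal(A, 0)
X = rng.random((5, 10))                       # BoW text features for 5 users
A_hat = normalise_adjacency(A)
H1 = gcn_layer(A_hat, X, rng.normal(size=(10, 8)), np.zeros(8))
H2 = gcn_layer(A_hat, H1, rng.normal(size=(8, 8)), np.zeros(8),
               gate=(rng.normal(size=(8, 8)), np.zeros(8)))
print(H2.shape)  # (5, 8): each user smoothed over its 2-hop neighbourhood
```

Each additional call to gcn_layer widens the neighbourhood by one hop, which is exactly the behaviour the gating mechanism below is designed to keep in check.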
2.2.1 Highway GCN

Expanding the neighbourhood for label propagation by adding multiple GCN layers can improve geolocation by accessing information from friends that are multiple hops away, but it might also lead to the propagation of noisy information to users from an exponentially increasing number of expanded neighbourhood members. To control the required balance of how much neighbourhood information should be passed to a node, we use layer-wise gates similar to highway networks. In highway networks (Srivastava et al., 2015), the output of a layer is summed with its input with gating weights $T(\vec{h}^l)$:

$$T(\vec{h}^l) = \sigma\big(W^l_t \vec{h}^l + b^l_t\big)$$
$$\vec{h}^{l+1} = \vec{h}^{l+1} \circ T(\vec{h}^l) + \vec{h}^l \circ (1 - T(\vec{h}^l)), \quad (3)$$

where $\vec{h}^l$ is the incoming input to layer $l + 1$, $(W^l_t, b^l_t)$ are the gating weight and bias variables, $\circ$ is elementwise multiplication, and $\sigma$ is the sigmoid function.

2.3 DCCA

Given two views $X$ and $\hat{A}$ (from Equation 1) of data samples, CCA (Hotelling, 1936), and its deep version (DCCA) (Andrew et al., 2013), learn functions $f_1(X)$ and $f_2(\hat{A})$ such that the correlation between the outputs of the two functions is maximised:

$$\rho = \mathrm{corr}\big(f_1(X), f_2(\hat{A})\big). \quad (4)$$

The resulting representations of $f_1(X)$ and $f_2(\hat{A})$ are compressed representations of the two views in which the uncorrelated noise between them is reduced. The new representations ideally represent user communities for the network view, and the language model of that community for the text view, and their concatenation is a multiview representation of the data, which can be used as input for other tasks. In DCCA, the two views are first projected to a lower dimensionality using a separate multilayer perceptron for each view (the $f_1$ and $f_2$ functions of Equation 4), the output of which is used to estimate the CCA cost:

$$\text{maximise: } \mathrm{tr}(W_1^\top \Sigma_{12} W_2)$$
$$\text{subject to: } W_1^\top \Sigma_{11} W_1 = W_2^\top \Sigma_{22} W_2 = I \quad (5)$$

where $\Sigma_{11}$ and $\Sigma_{22}$ are the covariances of the two outputs, and $\Sigma_{12}$ is the cross-covariance. The weights $W_1$ and $W_2$ are the linear projections of the MLP outputs, which are used in estimating the CCA cost. The optimisation problem is solved by SVD, and the error is backpropagated to train the parameters of the two MLPs and the final linear projections. After training, the two networks are used to predict new projections for unseen data. The two projections of unseen data (the outputs of the two networks) are then concatenated to form a multiview sample representation, as shown in Figure 2.

Figure 2: The DCCA model architecture: first, the two text and network views X and Â are fed into two neural networks (left), which are trained without supervision to maximise the correlation of their outputs; next, the outputs of the networks are concatenated and fed as input to another neural network (right), which is trained with supervision to predict locations.

3 Experiments

3.1 Data

We use three existing Twitter user geolocation datasets: (1) GEOTEXT (Eisenstein et al., 2010), (2) TWITTER-US (Roller et al., 2012), and (3) TWITTER-WORLD (Han et al., 2012). These datasets have been used widely for training and evaluation of geolocation models. They are all pre-partitioned into training, development and test sets. Each user is represented by the concatenation of their tweets, and labelled with the latitude/longitude of the first collected geotagged tweet in the case of GEOTEXT and TWITTER-US, and the centre of the closest city in the case of TWITTER-WORLD. GEOTEXT and TWITTER-US cover the continental US, and TWITTER-WORLD covers the whole world, with 9k, 449k and 1.3m users, respectively. The labels are the discretised geographical coordinates of the training points using a k-d tree following Roller et al.
(2012), with the number of labels equal to 129, 256, and 930 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively. 3.2 Constructing the Views We build matrix ˆA as in Equation 1 using the collapsed @-mention graph between users, where two users are connected (Aij = 1) if one mentions the other, or they co-mention another user. The text view is a BoW model of user content with binary term frequency, inverse document frequency, and l2 normalisation of samples. 3.3 Model Selection For GCN, we use highway layers to control the amount of neighbourhood information passed to a node. We use 3 layers in GCN with size 300, 600, 900 for GEOTEXT, TWITTER-US and TWITTERWORLD respectively. Note that the final softmax layer is also graph convolutional, which sets the radius of the averaging neighbourhood to 4. The 2012 k-d tree bucket size hyperparameter which controls the maximum number of users in each cluster is set to 50, 2400, and 2400 for the respective datasets, based on tuning over the validation set. The architecture of GCN-LP is similar, with the difference that the text view is set to zero. In DCCA, for the unsupervised networks we use a single sigmoid hidden layer with size 1000 and a linear output layer with size 500 for the three datasets. The loss function is CCA loss, which maximises the output correlations. The supervised multilayer perceptron has one hidden layer with size 300, 600, 1000 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively, which we set by tuning over the development sets. We evaluate the models using Median error, Mean error, and Acc@161, accuracy of predicting a user within 161km or 100 miles from the known location. 3.4 Baselines We also compare DCCA and GCN with two baselines: GCN-LP is based on GCN, but for input, instead of text-based features , we use one-hot encoding of a user’s neighbours, which are then convolved with their k-hop neighbours using the GCN. This approach is similar to label propagation in smoothing the label distribution of a user with that of its neighbours, but uses graph convolutional networks which have extra layer parameters, and also a gating mechanism to control the smoothing neighbourhood radius. Note that for unlabelled samples, the predicted labels are used for input after training accuracy reaches 0.2. MLP-TXT+NET is a simple transductive supervised model based on a single layer multilayer perceptron where the input to the network is the concatenation of the text view X, the user content’s bag-of-words and ˆA (Equation 1), which represents the network view as a vector input. For the hidden layer we use a ReLU nonlinearity, and sizes 300, 600, and 600 for GEOTEXT, TWITTER-US, and TWITTER-WORLD, respectively. 4 Results and Analysis 4.1 Representation Deep CCA and GCN are able to provide an unsupervised data representation in different ways. Deep CCA takes the two text-based and networkbased views, and finds deep non-linear transformations that result in maximum correlation between the two views (Andrew et al., 2013). The representations can be visualised using t-SNE, where we hope that samples with the same label are clustered together. GCN, on the other hand, uses graph convolution. The representations of 50 samples from each of 4 randomly chosen labels of GEOTEXT are shown in Figure 3. As shown, Deep CCA seems to slightly improve the representations from pure concatenation of the two views. GCN, on the other hand, substantially improves the representations. 
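As a rough illustration of how such a visualisation can be produced, the sketch below projects a (placeholder) representation matrix with scikit-learn's t-SNE and colours points by region; the array shapes, label layout, and file name are assumptions, not taken from the paper.

```python
# Hedged sketch of a t-SNE projection of sample representations (cf. Figure 3).
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

reps = np.random.randn(200, 300)         # placeholder: 50 users from each of 4 regions
labels = np.repeat(np.arange(4), 50)     # one colour per region
coords = TSNE(n_components=2, random_state=0).fit_transform(reps)
plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=10)
plt.savefig("tsne_representations.png")
```

In practice, reps would be replaced by the concatenated views, their DCCA projections, or the graph-convolved features (e.g. Â·X), which is what the four panels of Figure 3 compare.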
Further application of GCN results in more samples clumping together, which might be desirable when there is strong homophily.

Figure 3: Comparing t-SNE visualisations of 50 training samples from each of 4 randomly chosen regions of GEOTEXT using various data representations: (a) concatenation of Â (Equation 1); (b) concatenation of the DCCA transformations of the text-based and network-based views X and Â; (c) applying graph convolution Â·X; and (d) applying graph convolution twice Â·Â·X.

4.2 Labelled Data Size

To achieve good performance in supervised tasks, large amounts of labelled data are often required, which is a big challenge for Twitter geolocation, where only a small fraction of the data is geotagged (about 1%). The scarcity of supervision indicates the importance of semi-supervised learning, where unlabelled (e.g. non-geotagged) tweets are used for training. The three models we propose (MLP-TXT+NET, DCCA, and GCN) are all transductive semi-supervised models that use unlabelled data; however, they differ in terms of how much labelled data they require to achieve acceptable performance. Given that in a real-world scenario only a small fraction of data is geotagged, we conduct an experiment to analyse the effect of the number of labelled samples on the performance of the three geolocation models. We provided the three models with different fractions of labelled samples (as a % of dataset samples) while using the remainder as unlabelled data, and analysed their Median error performance over the development sets of GEOTEXT, TWITTER-US, and TWITTER-WORLD. Note that the text and network views, and the development set, remain fixed for all the experiments.

Figure 4: The effect of the amount of labelled data available as a fraction of all samples for GEOTEXT, TWITTER-US, and TWITTER-WORLD on the development performance of GCN, DCCA, and MLP-TXT+NET models in terms of Median error. The dataset sizes are 9k, 440k, and 1.4m for the three datasets, respectively.

As shown in Figure 4, when the fraction of labelled samples is less than 10% of all the samples, GCN and DCCA outperform MLP-TXT+NET, as a result of having fewer parameters, and therefore a lower supervision requirement to optimise them. When enough training data is available (e.g. more than 20% of all the samples), GCN and MLP-TXT+NET clearly outperform DCCA, possibly as a result of directly modelling the
We see that by adding highway network gates, the performance of GCN slightly improves until three layers are added, but then by adding more layers the performance doesn’t change that much as gates are allowing the layer inputs to pass through the network without much change. The performance peaks at 4 layers which is compatible with the distribution of shortest path lengths shown in Figure 6. 4.4 Performance The performance of the three proposed models (MLP-TXT+NET, DCCA and GCN) is shown in Table 1. The models are also compared with supervised text-based methods (Wing and Baldridge, 2014; Cha et al., 2015; Rahimi et al., 2017b), a network-based method (Rahimi et al., 2015a) and GCN-LP, and also joint text and network models (Rahimi et al., 2017b; Do et al., 2017; Miura et al., 2017). MLP-TXT+NET and GCN outperform all the text- or network-only models, and also the hybrid model of Rahimi et al. (2017b), indicating that joint modelling of text and network features is important. MLP-TXT+NET is competitive with Do et al. (2017), outperforming it on larger datasets, and underperforming on GEO2014 GEOTEXT TWITTER-US TWITTER-WORLD Acc@161↑ Mean↓ Median↓ Acc@161↑ Mean↓ Median↓ Acc@161↑ Mean↓ Median↓ Text (inductive) Rahimi et al. (2017b) 38 844 389 54 554 120 34 1456 415 Wing and Baldridge (2014) — — — 48 686 191 31 1669 509 Cha et al. (2015) — 581 425 — — — — — — Network (transductive) Rahimi et al. (2015a) 58 586 60 54 705 116 45 2525 279 GCN-LP 58 576 56 53 653 126 45 2357 279 Text+Network (transductive) Do et al. (2017) 62 532 32 66 433 45 53 1044 118 Miura et al. (2017) — — — 61 481 65 — — — Rahimi et al. (2017b) 59 578 61 61 515 77 53 1280 104 MLP-TXT+NET 58 554 58 66 420 56 58 1030 53 DCCA 56 627 79 58 516 90 21 2095 913 GCN 60 546 45 62 485 71 54 1130 108 Text+Network (transductive) MLP-TXT+NET 1% 8 1521 1295 14 1436 1411 8 3865 2041 DCCA 1% 7 1425 979 38 869 348 14 3014 1367 GCN 1% 6 1103 609 41 788 311 21 2071 853 Table 1: Geolocation results over the three Twitter datasets for the proposed models: joint text+network MLP-TXT+NET, DCCA, and GCN and network-based GCN-LP. The models are compared with text-only and network-only methods. The performance of the three joint models is also reported for minimal supervision scenario where only 1% of the total samples are labelled. “—” signifies that no results were reported for the given metric or dataset. Note that Do et al. (2017) use timezone, and Miura et al. (2017) use the description and location fields in addition to text and network. 1 2 3 4 5 6 7 8 9 10 10 60 160 360 760 Number of layers Median (km) −highway +highway Figure 5: The effect of adding more GCN layers (neighbourhood expansion) to GCN in terms of median error over the development set of GEOTEXT with and without the highway gates, and averaged over 5 runs. TEXT. However, it’s difficult to make a fair comparison as they use timezone data in their feature set. MLP-TXT+NET outperforms GCN over TWITTERUS and TWITTER-WORLD, which are very large, and have large amounts of labelled data. In a scenario with little supervision (1% of the total samples are labelled) DCCA and GCN clearly outperform MLP-TXT+NET, as they have fewer pa2 3 4 5 6 3% 32% 47% 13% 2% Shortest path length Proportion Figure 6: The distribution of shortest path lengths between all the nodes of the largest connected component of GEOTEXT’s graph that constitute more than 1% of total. rameters. 
Except for Acc@161 over GEOTEXT, where the number of labelled samples in the minimal supervision scenario is very low, GCN outperforms DCCA by a large margin, indicating that for a medium-sized dataset where only 1% of samples are labelled (as happens in random samples of Twitter) GCN is superior to MLP-TXT+NET and DCCA, consistent with Section 4.2. Both MLP-TXT+NET and GCN achieve state-of-the-art results compared to network-only, text-only, and hybrid models. The network-based GCN-LP model, which performs label propagation using Graph Convolutional Networks, outperforms Rahimi et al. (2015a), which is based on location propagation using Modified Adsorption (Talukdar and Crammer, 2009), possibly because the label propagation in GCN is parametrised.

4.5 Error Analysis

Although the performance of MLP-TXT+NET is better than that of GCN and DCCA when a large amount of labelled data is available (Table 1), under a scenario where little labelled data is available (1% of the data), DCCA and GCN outperform MLP-TXT+NET, mainly because the number of parameters in MLP-TXT+NET grows with the number of samples and is much larger than in GCN and DCCA. GCN outperforms DCCA and MLP-TXT+NET using 1% of the data; however, the distribution of errors over the development set of TWITTER-US indicates higher error for smaller states such as Rhode Island (RI), Iowa (IA), North Dakota (ND), and Idaho (ID), simply because the number of labelled samples in those states is insufficient. Although we evaluate geolocation models with Median, Mean, and Acc@161, this does not mean that the distribution of errors is uniform over all locations. Big cities often attract more local online discussion, making the geolocation of users in those areas simpler. For example, users in LA are more likely to talk about LA-related issues such as their sports teams, Hollywood, or local events than users in the state of Rhode Island (RI), which lacks major sports teams or events. It is also possible that people in less densely populated areas are further apart from each other and, as a result of discretisation, fall into different clusters. The non-uniformity of local discussion results in lower geolocation performance in less densely populated areas such as the U.S. Midwest, and higher performance in densely populated areas such as NYC and LA, as shown in Figure 7. The geographical distribution of error for GCN, DCCA, and MLP-TXT+NET under the minimal supervision scenario is shown in the supplementary material. To get a better picture of misclassification between states, we built a confusion matrix of known versus predicted states for the development users of TWITTER-US, using GCN with only 1% of labelled data. There is a tendency for users to be wrongly predicted to be in CA, NY, TX, and, surprisingly, OH. In particular, users from states such as TX, AZ, CO, and NV, which are located close to CA, are wrongly predicted to be in CA, and users from NJ, PA, and MA are misclassified as being in NY. The same holds for OH and TX, where users from neighbouring smaller states are misclassified into those states. Users from CA and NY are also misclassified between the two states, which might be the result of business and entertainment connections between NYC and LA/SF. Interestingly, there are a number of misclassifications to FL for users from CA, NY, and TX, which might be the effect of users vacationing or retiring to FL. The full confusion matrix between the U.S. states is provided in the supplementary material.
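A minimal sketch of the confusion-matrix analysis described above, using scikit-learn; the true and predicted state lists are hypothetical placeholders standing in for the TWITTER-US development users and the predictions of GCN trained with 1% of the labels.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical placeholders: the known home state of each development user
# and the state predicted by the geolocation model.
y_true = ["TX", "AZ", "NJ", "PA", "CA", "NY", "TX", "FL"]
y_pred = ["CA", "CA", "NY", "NY", "CA", "NY", "TX", "FL"]

labels = sorted(set(y_true) | set(y_pred))
cm = confusion_matrix(y_true, y_pred, labels=labels)

# Row-normalise so each row shows where users from a given state end up.
row_sums = np.maximum(cm.sum(axis=1, keepdims=True), 1)
for state, row in zip(labels, cm / row_sums):
    print(state, np.round(row, 2))
```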
[Figure 7: The geographical distribution of Median error of GCN using 1% of labelled data in each state, over the development set of TWITTER-US. The colour indicates error (roughly 100–800 km) and the size indicates the number of development users within the state.]

4.6 Local Terms

Table 2 shows local terms of a few regions detected by GCN under minimal supervision. Terms that were present in the labelled data are excluded, to show how graph convolutions over the social graph have extended the vocabulary. For example, in the case of Seattle, #goseahawks is an important term that is not present in the 1% of labelled data but is present in the unlabelled data. The convolution over the social graph is able to utilise such terms, which do not exist in the labelled data.

Table 2: Top terms for selected regions detected by GCN using only 1% of TWITTER-US for supervision. We present the terms that were present only in unlabelled data. The terms include city names, hashtags, food names, and internet abbreviations.

  Seattle, WA:       #goseahawks, smock, traffuck, ferran, promissory, chowdown, ckrib, #uwhuskies
  Austin, TX:        stubb, gsd, #meatsweats, lanterna, pupper, effaced, #austin, lmfbo
  Jacksonville, FL:  unf, ribault, wahoowa, wjct, fscj, floridian, #jacksonville, #mer
  Columbus, OH:      laffayette, #weareohio, #arcgis, #slammin, #ouhc, #cow, mommyhood, beering
  Charlotte, NC:     #asheville, #depinga, batesburg, stewey, #bojangles, #occupyraleigh, gville, sweezy
  Phoenix, AZ:       clutterbuck, waffels, bahumbug, iedereen, rockharbor, redtail, gewoon, jms
  New Orleans, LA:   mcneese, keela, pentecostals, lutcher, grogan, suela, cajuns, bmu
  Baltimore, MD:     bhop, #dsu, chestertown, aduh, umbc, lmt, assistly, slurpies

5 Related Work

Previous work on user geolocation can be broadly divided into text-based, network-based, and multiview approaches. Text-based geolocation uses the geographical bias in language use to infer the location of users. There are three main text-based approaches to geolocation: (1) gazetteer-based models, which map geographical references in text to locations but ignore non-geographical references and vernacular uses of language (Rauch et al., 2003; Amitay et al., 2004; Lieberman et al., 2010); (2) geographical topic models, which learn region-specific topics but do not scale to the magnitude of social media (Eisenstein et al., 2010; Hong et al., 2012; Ahmed et al., 2013); and (3) supervised models, which are often framed as text classification (Serdyukov et al., 2009; Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2014) or text regression (Iso et al., 2017; Rahimi et al., 2017a). Supervised models scale well and can achieve good performance with sufficient supervision, which is not available in a real-world scenario.

Network-based methods leverage the location homophily assumption: nearby users are more likely to befriend and interact with each other. There are four main network-based geolocation approaches: distance-based, supervised classification, graph-based label propagation, and node embedding methods.
Distance-based methods model the probability of friendship given distance (Backstrom et al., 2010; McGee et al., 2013; Gu et al., 2012; Kong et al., 2014), supervised models use neighbourhood features to classify a user into a location (Rout et al., 2013; Malmi et al., 2015), and graph-based label-propagation models propagate location information through the user–user graph to estimate unknown labels (Davis Jr et al., 2011; Jurgens, 2013; Compton et al., 2014). Node embedding methods build heterogeneous graphs between user–user, user–location and location–location nodes, and learn an embedding space that minimises the distance of connected nodes and maximises the distance of disconnected nodes. The embeddings are then used in supervised models for geolocation (Wang et al., 2017). Network-based models fail to geolocate disconnected users: Jurgens et al. (2015) could not geolocate 37% of users as a result of disconnectedness.

Previous work on hybrid text and network methods can be broadly categorised into three main approaches: (1) incorporating text-based information, such as toponyms or locations predicted from a text-based model, as auxiliary nodes into the user–user graph, which is then used in network-based models (Li et al., 2012a,b; Rahimi et al., 2015b,a); (2) ensembling separately trained text-based and network-based models (Gu et al., 2012; Ren et al., 2012; Jayasinghe et al., 2016; Ribeiro and Pappa, 2017); and (3) jointly learning geolocation from several information sources such as text and network information (Miura et al., 2017; Do et al., 2017), which can capture the complementary information in the text and network views and also model the interactions between the two. None of the previous multiview approaches (with the exception of Li et al. (2012a) and Li et al. (2012b), which only use toponyms) effectively uses unlabelled data in the text view, and they use only the unlabelled information of the network view via the user–user graph.

There are three main shortcomings in previous work on user geolocation that we address in this paper: (1) with the exception of a few recent works (Miura et al., 2017; Do et al., 2017), previous models do not jointly exploit both text and network information, and therefore the interaction between the text and network views is not modelled; (2) the unlabelled data in both the text and network views is not effectively exploited, which is crucial given the small amounts of available supervision; and (3) previous models are rarely evaluated under a minimal supervision scenario, a scenario which reflects real-world conditions.

6 Conclusion

We proposed GCN, DCCA and MLP-TXT+NET, three multiview, transductive, semi-supervised geolocation models, which use text and network information to infer user location in a joint setting. We showed that joint modelling of text and network information outperforms network-only, text-only, and hybrid geolocation models as a result of modelling the interaction between text and network information. We also showed that GCN and DCCA are able to perform well under a minimal supervision scenario, similar to real-world applications, by effectively using unlabelled data. We ignored the context in which users interact with each other, and assumed all connections to imply location homophily. In future work, we are interested in modelling the extent to which a social interaction is caused by geographical proximity (e.g. using user–user gates).

References

Amr Ahmed, Liangjie Hong, and Alexander J. Smola. 2013.
Hierarchical geographical modeling of user locations from social media posts. In Proceedings of the 22nd International Conference on World Wide Web (WWW 2013), pages 25–36, Rio de Janeiro, Brazil. Einat Amitay, Nadav Har’El, Ron Sivan, and Aya Soffer. 2004. Web-a-where: geotagging web content. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2004), pages 273–280, Sheffield, UK. Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. 2013. Deep canonical correlation analysis. In International Conference on Machine Learning, pages 1247–1255, Atlanta, USA. Lars Backstrom, Eric Sun, and Cameron Marlow. 2010. Find me if you can: improving geographical prediction with social and spatial proximity. In Proceedings of the 19th International Conference on World Wide Web (WWW 2010), pages 61–70, Raleigh, USA. Miriam Cha, Youngjune Gwon, and H.T. Kung. 2015. Twitter geolocation and regional classification via sparse coding. In Proceedings of the 9th International Conference on Weblogs and Social Media (ICWSM 2015), pages 582–585, Oxford, UK. Zhiyuan Cheng, James Caverlee, and Kyumin Lee. 2010. You are where you tweet: a content-based approach to geo-locating Twitter users. In Proceedings of the 19th ACM International Conference Information and Knowledge Management (CIKM 2010), pages 759–768, Toronto, Canada. Jaime Chon, Ross Raymond, Haiyan Wang, and Feng Wang. 2015. Modeling flu trends with real-time geo-tagged twitter data streams. In Proceedings of the 10th International Conference on Wireless Algorithms, Systems, and Applications (WASA 2015), pages 60–69, Qufu, China. Ryan Compton, David Jurgens, and David Allen. 2014. Geotagging one hundred million twitter accounts with total variation minimization. In Proceedings of the IEEE International Conference on Big Data (IEEE BigData 2014), pages 393–401, Washington DC, USA. Clodoveu A Davis Jr, Gisele L Pappa, Diogo Renn´o Rocha de Oliveira, and Filipe de L Arcanjo. 2011. Inferring the location of twitter messages based on user relationships. Transactions in GIS, 15(6):735–751. Bertrand De Longueville, Robin S. Smith, and Gianluca Luraschi. 2009. ”omg, from here, i can see the flames!”: A use case of mining location based social networks to acquire spatio-temporal data on forest fires. In Proceedings of the 2009 International Workshop on Location Based Social Networks, pages 73– 80, New York, USA. Tien Huu Do, Duc Minh Nguyen, Evaggelia Tsiligianni, Bruno Cornelis, and Nikos Deligiannis. 2017. Multiview deep learning for predicting twitter users’ location. arXiv preprint arXiv:1712.08091. Jacob Eisenstein, Brendan O’Connor, Noah A. Smith, and Eric P. Xing. 2010. A latent variable model for geographic lexical variation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP 2010), pages 1277– 1287, Boston, USA. 2018 Hansu Gu, Haojie Hang, Qin Lv, and Dirk Grunwald. 2012. Fusing text and frienships for location inference in online social networks. In Proceedings of the The 2012 IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technology - Volume 01, volume 1, pages 158–165, Macau, China. Bo Han, Paul Cook, and Timothy Baldwin. 2012. Geolocation prediction in social media data by finding location indicative words. In Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012), pages 1045– 1062, Mumbai, India. Bo Han, Paul Cook, and Timothy Baldwin. 2014. 
Textbased Twitter user geolocation prediction. Journal of Artificial Intelligence Research, 49:451–500. Brent Hecht, Lichan Hong, Bongwon Suh, and Ed H. Chi. 2011. Tweets from Justin Bieber’s heart: the dynamics of the location field in user profiles. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 237–246, Vancouver, Canada. Liangjie Hong, Amr Ahmed, Siva Gurumurthy, Alexander J. Smola, and Kostas Tsioutsiouliklis. 2012. Discovering geographical topics in the twitter stream. In Proceedings of the 21st international conference on World Wide Web, pages 769–778, Lyon, France. Harold Hotelling. 1936. Relations between two sets of variates. Biometrika, 28(3/4):321–377. Hayate Iso, Shoko Wakamiya, and Eiji Aramaki. 2017. Density estimation for geolocation via convolutional mixture density network. arXiv preprint arXiv:1705.02750. Gaya Jayasinghe, Brian Jin, James Mchugh, Bella Robinson, and Stephen Wan. 2016. CSIRO Data61 at the WNUT geo shared task. In Proceedings of the COLING 2016 Workshop on Noisy User-generated Text (W-NUT 2016), pages 218–226, Osaka, Japan. David Jurgens. 2013. That’s what friends are for: Inferring location in online social media platforms based on social relationships. In Proceedings of the 7th International Conference on Weblogs and Social Media (ICWSM 2013), pages 273–282, Boston, USA. David Jurgens, Tyler Finethy, James McCorriston, Yi Tian Xu, and Derek Ruths. 2015. Geolocation prediction in twitter using social networks: A critical analysis and review of current practice. In Proceedings of the 9th International Conference on Weblogs and Social Media (ICWSM 2015), pages 188–197, Oxford, UK. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR). Longbo Kong, Zhi Liu, and Yan Huang. 2014. Spot: Locating social media users based on social network context. Proceedings of the VLDB Endowment, 7(13):1681–1684. Rui Li, Shengjie Wang, and Kevin Chen-Chuan Chang. 2012a. Multiple location profiling for users and relationships from social network and content. Proceedings of the VLDB Endowment, 5(11):1603–1614. Rui Li, Shengjie Wang, Hongbo Deng, Rui Wang, and Kevin Chen-Chuan Chang. 2012b. Towards social user profiling: unified and discriminative influence model for inferring home locations. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD 2012), pages 1023–1031, Beijing, China. Michael D Lieberman, Hanan Samet, and Jagan Sankaranarayanan. 2010. Geotagging with local lexicons to build indexes for textually-specified spatial data. In Proceedings of the 26th International Conference on Data Engineering (ICDE 2010), pages 201–212, Long Beach, USA. Eric Malmi, Arno Solin, and Aristides Gionis. 2015. The blind leading the blind: Network-based location estimation under uncertainty. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2015 (ECML PKDD 2015), pages 406–421, Porto, Portugal. Jeffrey McGee, James Caverlee, and Zhiyuan Cheng. 2013. Location prediction in social media based on tie strength. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, pages 459–468, San Fransisco, USA. ACM. Yasuhide Miura, Motoki Taniguchi, Tomoki Taniguchi, and Tomoko Ohkuma. 2017. 
Unifying text, metadata, and user network representations with a neural network for geolocation prediction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1260–1272, Vancouver, Canada. Fred Morstatter, J¨urgen Pfeffer, Huan Liu, and Kathleen M Carley. 2013. Is the sample good enough? Comparing data from Twitter’s streaming API with Twitter’s firehose. In Proceedings of the 7th International Conference on Weblogs and Social Media (ICWSM 2013), pages 400–408, Boston, USA. Michael J. Paul and Mark Dredze. 2011. You are what you tweet: Analyzing twitter for public health. In Proceedings of the Fifth International Conference on Weblogs and Social Media (ICSWM 2011), pages 265–272, Barcelona, Spain. Afshin Rahimi, Timothy Baldwin, and Trevor Cohn. 2017a. Continuous representation of location for geolocation and lexical dialectology using mixture density networks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language 2019 Processing (EMNLP 2017), pages 167–176, Copenhagen, Denmark. Afshin Rahimi, Trevor Cohn, and Timothy Baldwin. 2015a. Twitter user geolocation using a unified text and network prediction model. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics — 7th International Joint Conference on Natural Language Processing (ACLIJCNLP 2015), pages 630–636, Beijing, China. Afshin Rahimi, Trevor Cohn, and Timothy Baldwin. 2017b. A neural model for user geolocation and lexical dialectology. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), pages 207–216, Vancouver, Canada. Afshin Rahimi, Duy Vu, Trevor Cohn, and Timothy Baldwin. 2015b. Exploiting text and network context for geolocation of social media users. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics — Human Language Technologies (NAACL HLT 2015), pages 1362–1367, Denver, USA. Erik Rauch, Michael Bukatin, and Kenneth Baker. 2003. A confidence-based framework for disambiguating geographic terms. In Proceedings of the HLT-NAACL 2003 workshop on Analysis of geographic references-Volume 1, pages 50–54, Edmonton, Canada. Kejiang Ren, Shaowu Zhang, and Hongfei Lin. 2012. Where are you settling down: Geo-locating Twitter users based on tweets and social networks. In Proceedings of the 8th Asia Information Retrieval Societies Conference (AIRS 2012), pages 150–161, Tianjin, China. Silvio Ribeiro and Gisele L. Pappa. 2017. Strategies for combining Twitter users geo-location methods. GeoInformatica, pages 1–25. Stephen Roller, Michael Speriosu, Sarat Rallapalli, Benjamin Wing, and Jason Baldridge. 2012. Supervised text-based geolocation using language models on an adaptive grid. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CONLL 2012), pages 1500–1510, Jeju, South Korea. Dominic Rout, Kalina Bontcheva, Daniel Preot¸iucPietro, and Trevor Cohn. 2013. Where’s @wally?: A classification approach to geolocating users based on their social ties. In Proceedings of the 24th ACM Conference on Hypertext and Social Media (Hypertext 2013), pages 11–20, Paris, France. Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes twitter users: Real-time event detection by social sensors. 
In Proceedings of the 19th International Conference on World Wide Web, pages 851–860, New York, USA. Pavel Serdyukov, Vanessa Murdock, and Roelof Van Zwol. 2009. Placing Flickr photos on a map. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 484–491, Boston, USA. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. arXiv preprint arXiv:1505.00387. Partha Pratim Talukdar and Koby Crammer. 2009. New regularized algorithms for transductive learning. In Proceedings of the European Conference on Machine Learning (ECML-PKDD 2009), pages 442–457, Bled, Slovenia. Fengjiao Wang, Chun-Ta Lu, Yongzhi Qu, and S Yu Philip. 2017. Collective geographical embedding for geolocating social network users. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2017), pages 599–611, Jeju, South Korea. Benjamin P Wing and Jason Baldridge. 2011. Simple supervised document geolocation with geodesic grids. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1 (ACL-HLT 2011), pages 955–964, Portland, USA. Benjamin P Wing and Jason Baldridge. 2014. Hierarchical discriminative classification for text-based geolocation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 336–348, Doha, Qatar. Antonio Jimeno Yepes, Andrew MacKinlay, and Bo Han. 2015. Investigating public health surveillance using twitter. In Proceedings of the 2015 Workshop on Biomedical Natural Language Processing (BioNLP 2015), pages 164–170, Beijing, China.
2018
187
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2020–2030 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2020 Document Modeling with External Attention for Sentence Extraction Shashi Narayan∗ University of Edinburgh [email protected] Ronald Cardenas∗ Charles University in Prague [email protected] Nikos Papasarantopoulos∗ University of Edinburgh [email protected] Shay B. Cohen Mirella Lapata University of Edinburgh {scohen,mlap}@inf.ed.ac.uk Jiangsheng Yu Yi Chang Huawei Technologies {jiangsheng.yu,yi.chang}@huawei.com Abstract Document modeling is essential to a variety of natural language understanding tasks. We propose to use external information to improve document modeling for problems that can be framed as sentence extraction. We develop a framework composed of a hierarchical document encoder and an attention-based extractor with attention over external information. We evaluate our model on extractive document summarization (where the external information is image captions and the title of the document) and answer selection (where the external information is a question). We show that our model consistently outperforms strong baselines, in terms of both informativeness and fluency (for CNN document summarization) and achieves state-of-the-art results for answer selection on WikiQA and NewsQA.1 1 Introduction Recurrent neural networks have become one of the most widely used models in natural language processing (NLP). A number of variants of RNNs such as Long Short-Term Memory networks (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit networks (GRU; Cho et al., 2014) have been designed to model text capturing long-term dependencies in problems such as language modeling. However, document modeling, a key to many natural language ∗The first three authors made equal contributions to this paper. The work was done when the second author was visiting Edinburgh. 1Our TensorFlow code and datasets are publicly available at https://github.com/shashiongithub/ Document-Models-with-Ext-Information. understanding tasks, is still an open challenge. Recently, some neural network architectures were proposed to capture large context for modeling text (Mikolov and Zweig, 2012; Ghosh et al., 2016; Ji et al., 2015; Wang and Cho, 2016). Lin et al. (2015) and Yang et al. (2016) proposed a hierarchical RNN network for document-level modeling as well as sentence-level modeling, at the cost of increased computational complexity. Tran et al. (2016) further proposed a contextual language model that considers information at interdocument level. It is challenging to rely only on the document for its understanding, and as such it is not surprising that these models struggle on problems such as document summarization (Cheng and Lapata, 2016; Chen et al., 2016; Nallapati et al., 2017; See et al., 2017; Tan and Wan, 2017) and machine reading comprehension (Trischler et al., 2016; Miller et al., 2016; Weissenborn et al., 2017; Hu et al., 2017; Wang et al., 2017). In this paper, we formalize the use of external information to further guide document modeling for end goals. We present a simple yet effective document modeling framework for sentence extraction that allows machine reading with “external attention.” Our model includes a neural hierarchical document encoder (or a machine reader) and a hierarchical attention-based sentence extractor. 
Our hierarchical document encoder resembles the architectures proposed by Cheng and Lapata (2016) and Narayan et al. (2018) in that it derives the document meaning representation from its sentences and their constituent words. Our novel sentence extractor combines this document meaning representation with an attention mechanism (Bahdanau et al., 2015) over the external information to label sentences from the input document. Our model explicitly biases the extractor with external cues and 2021 implicitly biases the encoder through training. We demonstrate the effectiveness of our model on two problems that can be naturally framed as sentence extraction with external information. These two problems, extractive document summarization and answer selection for machine reading comprehension, both require local and global contextual reasoning about a given document. Extractive document summarization systems aim at creating a summary by identifying (and subsequently concatenating) the most important sentences in a document, whereas answer selection systems select the candidate sentence in a document most likely to contain the answer to a query. For document summarization, we exploit the title and image captions which often appear with documents (specifically newswire articles) as external information. For answer selection, we use word overlap features, such as the inverse sentence frequency (ISF, Trischler et al., 2016) and the inverse document frequency (IDF) together with the query, all formulated as external cues. Our main contributions are three-fold: First, our model ensures that sentence extraction is done in a larger (rich) context, i.e., the full document is read first before we start labeling its sentences for extraction, and each sentence labeling is done by implicitly estimating its local and global relevance to the document and by directly attending to some external information for importance cues. Second, while external information has been shown to be useful for summarization systems using traditional hand-crafted features (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001), our model is the first to exploit such information in deep learning-based summarization. We evaluate our models automatically (in terms of ROUGE scores) on the CNN news highlights dataset (Hermann et al., 2015). Experimental results show that our summarizer, informed with title and image captions, consistently outperforms summarizers that do not use this information. We also conduct a human evaluation to judge which type of summary participants prefer. Our results overwhelmingly show that human subjects find our summaries more informative and complete. Lastly, with the machine reading capabilities of our model, we confirm that a full document needs to be “read” to produce high quality extracts allowing a rich contextual reasoning, in contrast to previous answer selection approaches that often measure a score between each sentence in the document and the question and return the sentence with highest score in an isolated manner (Yin et al., 2016; dos Santos et al., 2016; Wang et al., 2016). Our model with ISF and IDF scores as external features achieves competitive results for answer selection. 
Our ensemble model, which combines scores from our model and word overlap scores using a logistic regression layer, achieves state-of-the-art results on the popular question answering datasets WikiQA (Yang et al., 2015) and NewsQA (Trischler et al., 2016), and it obtains comparable results to the state of the art for SQuAD (Rajpurkar et al., 2016). We also evaluate our approach on the MSMarco dataset (Nguyen et al., 2016) and elaborate on the behavior of our machine reader in a scenario where the candidate answer sentences are contextually independent of each other.

2 Document Modeling For Sentence Extraction

Given a document D consisting of a sequence of n sentences (s1, s2, ..., sn), we aim at labeling each sentence si in D with a label yi ∈ {0, 1}, where yi = 1 indicates that si is extraction-worthy and 0 otherwise. Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 2017). The main components include a sentence encoder, a document encoder, and a novel sentence extractor (see Figure 1) that we describe in more detail below. The novel characteristics of our model are that each sentence is labeled by implicitly estimating its (local and global) relevance to the document and by directly attending to some external information for importance cues.

Sentence Encoder
A core component of our model is a convolutional sentence encoder (Kim, 2014; Kim et al., 2016) which encodes sentences into continuous representations. We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature. This filter is applied to each possible window of words in s to produce a feature map f ∈ R^(k−h+1), where k is the sentence length. We then apply max-pooling over time to the feature map f and take the maximum value as the feature corresponding to this particular filter K. We use multiple kernels of various sizes, and each kernel multiple times, to construct the representation of a sentence. In Figure 1, kernels of size 2 (red) and 4 (blue) are applied three times each. The max-pooling over time operation yields two feature lists fK2 and fK4 ∈ R^3. The final sentence embeddings have six dimensions.

Document Encoder
The document encoder composes a sequence of sentences to obtain a document representation. We use a recurrent neural network with LSTM cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997). Given a document D consisting of a sequence of sentences (s1, s2, ..., sn), we follow common practice and feed the sentences in reverse order (Sutskever et al., 2014; Li et al., 2015; Filippova et al., 2015).

Sentence Extractor
Our sentence extractor sequentially labels each sentence in a document with 1 or 0 by implicitly estimating its relevance in the document and by directly attending to the external information for importance cues. It is implemented with another RNN with LSTM cells, an attention mechanism (Bahdanau et al., 2015), and a softmax layer. Our attention mechanism differs from the standard practice of attending to intermediate states of the input (encoder). Instead, our extractor attends to a sequence of p pieces of external information E: (e1, e2, ..., ep) relevant for the task (e.g., ei is a title or an image caption for summarization) for cues.
At time ti, it reads sentence si and makes a binary prediction, conditioned on the document representation (obtained from the document encoder), the previously labeled sentences, and the external information. In this way, our labeler is able to identify locally and globally important sentences within the document which correlate well with the external information. Given sentence st at time step t, it returns a probability distribution over labels as:

p(yt | st, D, E) = softmax(g(ht, h′t))                                   (1)
g(ht, h′t) = Uo (Vh ht + W′h h′t)                                        (2)
ht = LSTM(st, ht−1)
h′t = Σ_{i=1..p} α(t,i) ei,   where   α(t,i) = exp(ht ei) / Σ_j exp(ht ej)

where g(·) is a single-layer neural network with parameters Uo, Vh and W′h, and ht is an intermediate RNN state at time step t. The dynamic context vector h′t is essentially the weighted sum of the external information (e1, e2, ..., ep). Figure 1 summarizes our model.

[Figure 1: Hierarchical encoder-decoder model for sentence extraction with external attention. s1, ..., s5 are sentences in the document and e1, e2 and e3 represent external information. For the extractive summarization task, the ei are external information such as the title and image captions. For the answer selection task, the ei are the query and word overlap features.]

3 Sentence Extraction Applications

We validate our model on two sentence extraction problems: extractive document summarization and answer selection for machine reading comprehension. Both tasks require local and global contextual reasoning about a given document. As such, they test the ability of our model to facilitate document modeling using external information.

Extractive Summarization
An extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n). In this setting, our sentence extractor sequentially predicts label yi ∈ {0, 1} (where 1 means that si should be included in the summary) by assigning a score p(yi | si, D, E, θ) quantifying the relevance of si to the summary. We assemble a summary S by selecting the m sentences with the top p(yi = 1 | si, D, E, θ) scores. We formulate the external information E as the sequence of the title and the image captions associated with the document. We use the convolutional sentence encoder to get their sentence-level representations.

Answer Selection
Given a question q and a document D, the goal of the task is to select one candidate sentence si ∈ D in which the answer exists. In this setting, our sentence extractor sequentially predicts label yi ∈ {0, 1} (where 1 means that si contains the answer) and assigns a score p(yi | si, D, E, θ) quantifying si's relevance to the query. We return as answer the sentence si with the highest p(yi = 1 | si, D, E, θ) score. We treat the question q as external information and use the convolutional sentence encoder to get its sentence-level representation. This simplifies Eqs. (1) and (2) as follows:

p(yt | st, D, q) = softmax(g(ht, q))                                     (3)
g(ht, q) = Uo (Vh ht + Wq q),

where Vh and Wq are network parameters. We exploit the simplicity of our model to further assimilate external features relevant for answer selection: the inverse sentence frequency (ISF; Trischler et al., 2016), the inverse document frequency (IDF), and a modified version of the ISF score which we call local ISF. Trischler et al.
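A minimal NumPy sketch of one labelling step of the sentence extractor, following Equations (1) and (2): attention over the external information yields the dynamic context vector h′t, which is combined with the extractor state ht to score the two labels. The LSTM update over sentences is omitted, and all dimensions and parameter values are simplified placeholders.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def extractor_step(h_t, E, U_o, V_h, W_h_prime):
    """One labelling step of the sentence extractor (Eq. 1-2).
    h_t: extractor RNN state for sentence s_t, shape (d,)
    E:   external information vectors e_1..e_p, shape (p, d)
    Returns [P(y_t = 0), P(y_t = 1)]."""
    alpha = softmax(E @ h_t)                  # attention weights, alpha_(t,i) ∝ exp(h_t · e_i)
    h_ext = alpha @ E                         # dynamic context vector h'_t
    logits = U_o @ (V_h @ h_t + W_h_prime @ h_ext)
    return softmax(logits)

# toy dimensions: hidden size 6, three pieces of external information
rng = np.random.default_rng(1)
d = 6
h_t = rng.normal(size=d)                      # would come from an LSTM over sentences
E = rng.normal(size=(3, d))                   # e.g. a title and two image captions
U_o = rng.normal(size=(2, d))                 # two output labels
V_h, W_h_prime = rng.normal(size=(d, d)), rng.normal(size=(d, d))

print(extractor_step(h_t, E, U_o, V_h, W_h_prime))   # probabilities for y_t ∈ {0, 1}
```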
(2016) have shown that a simple ISF baseline (i.e., taking the sentence with the highest ISF score as the answer) correlates well with the answers. The ISF score αsi for the sentence si is computed as

αsi = Σ_{w ∈ si ∩ q} IDF(w),

where IDF is the inverse document frequency score of word w, defined as IDF(w) = log(N / Nw), where N is the total number of sentences in the training set and Nw is the number of sentences in which w appears. Note that si ∩ q refers to the set of words that appear both in si and in q. Local ISF is calculated in the same manner as the ISF score, except that the total number of sentences (N) is set to the number of sentences in the article being analyzed. More formally, this modifies Eq. (3) as follows:

p(yt | st, D, q) = softmax(g(ht, q, αt, βt, γt)),                        (4)

where αt, βt and γt are the ISF, IDF and local ISF scores (real values) of sentence st, respectively. The function g is calculated as follows:

g(ht, q, αt, βt, γt) = Uo (Vh ht + Wq q + Wisf(αt · 1) + Widf(βt · 1) + Wlisf(γt · 1)),

where Wisf, Widf and Wlisf are new parameters added to the network and 1 is a vector of ones of size equal to the sentence embedding size. In Figure 1, these external feature vectors are represented as 6-dimensional gray vectors accompanied by dashed arrows.

4 Experiments and Results

This section presents our experimental setup and results assessing our model in both the extractive summarization and answer selection setups. In the rest of the paper, we refer to our model as XNET for its ability to exploit eXternal information to improve document representation.

4.1 Extractive Document Summarization

Summarization Dataset
We evaluated our models on the CNN news highlights dataset (Hermann et al., 2015) [2]. We used the standard splits of Hermann et al. (2015) for training, validation, and testing (90,266/1,220/1,093 documents). We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016, 2017; See et al., 2017; Tan and Wan, 2017) in assuming that the "story highlights" associated with each article are gold-standard abstractive summaries. We trained our network on a named-entity-anonymized version of the news articles. However, we generated deanonymized summaries and evaluated them against gold summaries to facilitate human evaluation and to make human evaluation comparable to automatic evaluation.

[2] Hermann et al. (2015) have also released the DailyMail dataset, but we do not report our results on this dataset. We found that the script written by Hermann et al. to crawl DailyMail articles mistakenly extracts image captions as part of the main body of the document. As image captions often do not have sentence boundaries, they blend with the sentences of the document unnoticeably. This leads to the production of erroneous summaries.

To train our model, we need documents annotated with sentence extraction information, i.e., each sentence in a document is labeled with 1 (summary-worthy) or 0 (not summary-worthy). We followed Nallapati et al. (2017) and automatically extracted ground truth labels such that all positively labeled sentences from an article collectively give the highest ROUGE (Lin and Hovy, 2003) score with respect to the gold summary. We used a modified script of Hermann et al. (2015) to extract titles and image captions, and we associated them with the corresponding articles. All articles get associated with their titles. The availability of image captions varies from 0 to 414 per article, with an average of 3 image captions.
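A minimal sketch of the word-overlap features defined above (IDF, ISF, and, by rebuilding the IDF table from the current article only, local ISF); whitespace tokenisation and the toy corpus are illustrative assumptions, not the paper's preprocessing.

```python
import math

def build_idf(sentences):
    """IDF(w) = log(N / N_w), with N the number of sentences
    and N_w the number of sentences containing w."""
    n = len(sentences)
    df = {}
    for sent in sentences:
        for w in set(sent.lower().split()):
            df[w] = df.get(w, 0) + 1
    return {w: math.log(n / n_w) for w, n_w in df.items()}

def isf_score(sentence, question, idf):
    """ISF(s, q) = sum of IDF(w) over words w appearing in both s and q."""
    s_words = set(sentence.lower().split())
    q_words = set(question.lower().split())
    return sum(idf.get(w, 0.0) for w in s_words & q_words)

# toy data (made up)
train_sentences = ["the cat sat on the mat", "dogs chase cats", "the weather is cold"]
idf = build_idf(train_sentences)                 # global IDF over the training set

article = ["the cat sat on the mat", "it then chased the dogs"]
local_idf = build_idf(article)                   # local ISF uses only the article's sentences
question = "where did the cat sit"

print([isf_score(s, question, idf) for s in article])        # ISF per candidate
print([isf_score(s, question, local_idf) for s in article])  # local ISF per candidate
```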
There are 40% CNN articles with at least one image caption. All sentences, including titles and image captions, were padded with zeros to a sentence length of 100. All input documents were padded with zeros to a maximum document length of 126. For each document, we consider a maximum of 10 image captions. We experimented with various numbers (1, 3, 5, 10 and 20) of image captions on the validation set and found that our model performed best with 10 image captions. We refer the reader to the supplementary material for more implementation details to replicate our results. Comparison Systems We compared the output of our model against the standard baseline of simply selecting the first three sentences from each document as the summary. We refer to this baseline as LEAD in the rest of the paper. We also compared our system against the sentence extraction system of Cheng and Lapata (2016). We refer to this system as POINTERNET as the neural attention architecture in Cheng and Lapata (2016) resembles the one of Pointer Networks (Vinyals et al., 2015).3 It does not exploit any external information.4 Cheng and Lap3The architecture of POINTERNET is closely related to our model without external information. 4Adding external information to POINTERNET is an inMODELS R1 R2 R3 R4 RL Avg. LEAD 49.2 18.9 9.8 6.0 43.8 25.5 POINTERNET 53.3 19.7 10.4 6.4 47.2 27.4 XNET+TITLE 55.0 21.6 11.7 7.5 48.9 28.9 XNET+CAPTION 55.3 21.3 11.4 7.2 49.0 28.8 XNET+FS 54.8 21.1 11.3 7.2 48.6 28.6 Combination Models (XNET+) TITLE+CAPTION 55.4 21.8 11.8 7.5 49.2 29.2 TITLE+FS 55.1 21.6 11.6 7.4 48.9 28.9 CAPTION+FS 55.3 21.5 11.5 7.3 49.0 28.9 TITLE+CAPTION+FS 55.4 21.5 11.6 7.4 49.1 29.0 Table 1: Ablation results on the validation set. We report R1, R2, R3, R4, RL and their average (Avg.). The first block of the table presents LEAD and POINTERNET which do not use any external information. LEAD is the baseline system selecting first three sentences. POINTERNET is the sentence extraction system of Cheng and Lapata. XNET is our model. The second and third blocks of the table present different variants of XNET. We experimented with three types of external information: title (TITLE), image captions (CAPTION) and the first sentence (FS) of the document. The bottom block of the table presents models with more than one type of external information. The best performing model (highlighted in boldface) is used on the test set. ata (2016) report only on the DailyMail dataset. We used their code (https://github.com/ cheng6076/NeuralSum) to produce results on the CNN dataset.5 Automatic Evaluation To automatically assess the quality of our summaries, we used ROUGE (Lin and Hovy, 2003), a recall-oriented metric, to compare our model-generated summaries to manually-written highlights.6 Previous work has reported ROUGE-1 (R1) and ROUGE-2 (R2) scores to access informativeness, and ROUGE-L (RL) to access fluency. In addition to R1, R2 and RL, we also report ROUGE-3 (R3) and ROUGE-4 (R4) capturing higher order n-grams overlap to assess informativeness and fluency simultaneously. teresting direction of research but we do not pursue it here. It requires decoding with multiple types of attentions and this is not the focus of this paper. 5We are unable to compare our results to the extractive system of Nallapati et al. (2017) because they report their results on the DailyMail dataset and their code is not available. The abstractive systems of Chen et al. 
(2016) and Tan and Wan (2017) report their results on the CNN dataset, however, their results are not comparable to ours as they report on the full-length F1 variants of ROUGE to evaluate their abstractive summaries. We report ROUGE recall scores which is more appropriate to evaluate our extractive summaries. 6We used pyrouge, a Python package, to compute all our ROUGE scores with parameters “-a -c 95 -m -n 4 -w 1.2.” 2025 We report our results on both full length (three sentences with the top scores as the summary) and fixed length (first 75 bytes and 275 bytes as the summary) summaries. For full length summaries, our decision of selecting three sentences is guided by the fact that there are 3.11 sentences on average in the gold highlights of the training set. We conduct our ablation study on the validation set with full length ROUGE scores, but we report both fixed and full length ROUGE scores for the test set. We experimented with two types of external information: title (TITLE) and image captions (CAPTION). In addition, we experimented with the first sentence (FS) of the document as external information. Note that the latter is not external information, it is a sentence in the document. However, we wanted to explore the idea that the first sentence of the document plays a crucial part in generating summaries (Rush et al., 2015; Nallapati et al., 2016). XNET with FS acts as a baseline for XNET with title and image captions. We report the performance of several variants of XNET on the validation set in Table 1. We also compare them against the LEAD baseline and POINTERNET. These two systems do not use any additional information. Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET. When the title (TITLE), image captions (CAPTION) and the first sentence (FS) are used separately as additional information, XNET performs best with TITLE as its external information. Our result demonstrates the importance of the title of the document in extractive summarization (Edmundson, 1969; Kupiec et al., 1995; Mani, 2001). The performance with TITLE and CAPTION is better than that with FS. We also tried possible combinations of TITLE, CAPTION and FS. All XNET models are superior to the ones without any external information. XNET performs best when TITLE and CAPTION are jointly used as external information (55.4%, 21.8%, 11.8%, 7.5%, and 49.2% for R1, R2, R3, R4, and RL respectively). It is better than the the LEAD baseline by 3.7 points on average and than POINTERNET by 1.8 points on average, indicating that external information is useful to identify the gist of the document. We use this model for testing purposes. Our final results on the test set are shown in Table 2. It turns out that for smaller summaries (75 bytes) LEAD and POINTERNET are superior MODELS R1 R2 R3 R4 RL Fixed length: 75b LEAD 20.1 7.1 3.5 2.1 14.6 POINTERNET 20.3 7.2 3.5 2.2 14.8 XNET 20.2 7.1 3.4 2.0 14.6 Fixed length: 275b LEAD 39.1 14.5 7.6 4.7 34.6 POINTERNET 38.6 13.9 7.3 4.4 34.3 XNET 39.7 14.7 7.9 5.0 35.2 Full length summaries LEAD 49.3 19.5 10.7 6.9 43.8 POINTERNET 51.7 19.7 10.6 6.6 45.7 XNET 54.2 21.6 12.0 7.9 48.1 Table 2: Final results on the test set. POINTERNET is the sentence extraction system of Cheng and Lapata. XNET is our best model from Table 1. Best ROUGE score in each block and each column is highlighted in boldface. 
Models 1st 2nd 3rd 4th LEAD 0.15 0.17 0.47 0.21 POINTERNET 0.16 0.05 0.31 0.48 XNET 0.28 0.53 0.15 0.04 HUMAN 0.41 0.25 0.07 0.27 Table 3: Human evaluations: Ranking of various systems. Rank 1st is best and rank 4th, worst. Numbers show the percentage of times a system gets ranked at a certain position. to XNET. This result could be because LEAD (always) and POINTERNET (often) include the first sentence in their summaries, whereas, XNET is better capable at selecting sentences from various document positions. This is not captured by smaller summaries of 75 bytes, but it becomes more evident with longer summaries (275 bytes and full length) where XNET performs best across all ROUGE scores. We note that POINTERNET outperforms LEAD for 75-byte summaries, then its performance drops behind LEAD for 275-byte summaries, but then it outperforms LEAD for full length summaries on the metrics R1, R2 and RL. It shows that POINTERNET with its attention over sentences in the document is capable of exploring more than first few sentences in the document, but it is still behind XNET which is better at identifying salient sentences in the document. XNET performs significantly better than POINTERNET by 0.8 points for 275-byte summaries and by 1.9 points for full length summaries, on average for all ROUGE scores. Human Evaluation We complement our automatic evaluation results with human evaluation. We randomly selected 20 articles from the test set. 2026 Annotators were presented with a news article and summaries from four different systems. These include the LEAD baseline, POINTERNET, XNET and the human authored highlights. We followed the guidelines in Cheng and Lapata (2016), and asked our participants to rank the summaries from best (1st) to worst (4th) in order of informativeness (does the summary capture important information in the article?) and fluency (is the summary written in well-formed English?). We did not allow any ties and we only sampled articles with nonidentical summaries. We assigned this task to five annotators who were proficient English speakers. Each annotator was presented with all 20 articles. The order of summaries to rank was randomized per article. An example of summaries our subjects ranked is provided in the supplementary material. The results of our human evaluation study are shown in Table 3. As one might imagine, HUMAN gets ranked 1st most of the time (41%). However, it is closely followed by XNET which ranked 1st 28% of the time. In comparison, POINTERNET and LEAD were mostly ranked at 3rd and 4th places. We also carried out pairwise comparisons between all models in Table 3 for their statistical significance using a one-way ANOVA with post-hoc Tukey HSD tests with (p < 0.01). It showed that XNET is significantly better than LEAD and POINTERNET, and it does not differ significantly from HUMAN. On the other hand, POINTERNET does not differ significantly from LEAD and it differs significantly from both XNET and HUMAN. The human evaluation results corroborates our empirical results in Table 1 and Table 2: XNET is better than LEAD and POINTERNET in producing informative and fluent summaries. 4.2 Answer Selection Question Answering Datasets We run experiments on four datasets collected for open domain question-answering tasks: WikiQA (Yang et al., 2015), SQuAD (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2016), and MSMarco (Nguyen et al., 2016). NewsQA was especially designed to present lexical and syntactic divergence between questions and answers. 
It contains 119,633 questions posed by crowdworkers on 12,744 CNN articles previously collected by Hermann et al. (2015). In a similar manner, SQuAD associates 100,000+ question with a Wikipedia article’s first paragraph, for 500+ previously chosen articles. WikiQA was collected by mining web-searching query logs and then associating them with the summary section of the Wikipedia article presumed to be related to the topic of the query. A similar collection procedure was followed to create MSMarco with the difference that each candidate answer is a whole paragraph from a different browsed website associated with the query. We follow the widely used setup of leaving out unanswered questions (Trischler et al., 2016; Yang et al., 2015) and adapt the format of each dataset to our task of answer sentence selection by labeling a candidate sentence with 1 if any answer span is contained in that sentence. In the case of MSMarco, each candidate paragraph comes associated with a label, hence we treat each one as a single long sentence. Since SQuAD keeps the official test dataset hidden and MSMarco does not provide labels for its released test set, we report results on their official validation sets. For validation, we set apart 10% of each official training set. Our dataset splits consist of 92,525, 5,165 and 5,124 samples for NewsQA; 79,032, 8,567, and 10,570 for SQuAD; 873, 122, and 237 for WikiQA; and 79,704, 9,706, and 9,650 for MSMarco, for training, validation, and testing respectively. Comparison Systems We compared the output of our model against the ISF (Trischler et al., 2016) and LOCALISF baselines. Given an article, the sentence with the highest ISF score is selected as an answer for the ISF baseline and the sentence with the highest local ISF score for the LOCALISF baseline. We also compare our model against a neural network (PAIRCNN) that encodes (question, candidate) in an isolated manner as in previous work (Yin et al., 2016; dos Santos et al., 2016; Wang et al., 2016). The architecture uses the sentence encoder explained in earlier sections to learn the question and candidate representations. The distribution over labels is given by p(yt|q) = p(yt|st, q) = softmax(g(st, q)) where g(st, q) = ReLU(Wsq · [st; q] + bsq). In addition, we also compare our model against APCNN (dos Santos et al., 2016), ABCNN (Yin et al., 2016), L.D.C (Wang and Jiang, 2017), KVMemNN (Miller et al., 2016), and COMPAGGR, a state-of-the-art system by Wang et al. (2017). We experiment with several variants of our model. 
XNET is the vanilla version of our sen2027 SQuAD WikiQA NewsQA MSMarco ACC MAP MRR ACC MAP MRR ACC MAP MRR ACC MAP MRR WRD CNT 77.84 27.50 27.77 51.05 48.91 49.24 44.67 46.48 46.91 20.16 19.37 19.51 WGT WRD CNT 78.43 28.10 28.38 49.79 50.99 51.32 45.24 48.20 48.64 20.50 20.06 20.23 AP-CNN 68.86 69.57 ABCNN 69.21 71.08 L.D.C 70.58 72.26 KV-MemNN 70.69 72.65 LOCALISF 79.50 27.78 28.05 49.79 49.57 50.11 44.69 48.40 46.48 20.21 20.22 20.39 ISF 78.85 28.09 28.36 48.52 46.53 46.72 45.61 48.57 48.99 20.52 20.07 20.23 PAIRCNN 32.53 46.34 46.35 32.49 39.87 38.71 25.67 40.16 39.89 14.92 34.62 35.14 COMPAGGR 85.52 91.05 91.05 60.76 73.12 74.06 54.54 67.63 68.21 32.05 52.82 53.43 XNET 35.50 58.46 58.84 54.43 69.12 70.22 26.18 42.28 42.43 15.45 35.42 35.97 XNETTOPK 36.09 59.70 59.32 55.00 68.66 70.24 29.41 46.69 46.97 17.04 37.60 38.16 LRXNET 85.63 91.10 91.85 63.29 76.57 75.10 55.17 68.92 68.43 32.92 31.15 30.41 XNET+ 79.39 87.32 88.00 57.08 70.25 71.28 47.23 61.81 61.42 23.07 42.88 43.42 Table 4: Results (in percentage) for answer selection comparing our approaches (bottom part) to baselines (top): AP-CNN (dos Santos et al., 2016), ABCNN (Yin et al., 2016), L.D.C (Wang and Jiang, 2017), KV-MemNN (Miller et al., 2016), and COMPAGGR, a state-of-the-art system by Wang et al. (2017). (WGT) WRD CNT stands for the (weighted) word count baseline. See text for more details. tence extractor conditioned only on the query q as external information (Eq. (3)). XNET+ is an extension of XNET which uses ISF, IDF and local ISF scores in addition to the query q as external information (Eqn. (4)). We also experimented with a baseline XNETTOPK where we choose the top k sentences with highest ISF score, and then among them choose the one with the highest probability according to XNET. In our experiments, we set k = 5. In the end, we experimented with an ensemble network LRXNET which combines the XNET score, the COMPAGGR score and other word-overlap-based scores (tweaked and optimized for each dataset separately) for each sentence using a logistic regression classifier. It uses ISF and LocalISF scores for NewsQA, IDF and ISF scores for SQuAD, sentence length, IDF and ISF scores for WikiQA, and word overlap and ISF score for MSMarco. We refer the reader to the supplementary material for more implementation and optimization details to replicate our results. Evaluation Metrics We consider metrics that evaluate systems that return a ranked list of candidate answers: mean average precision (MAP), mean reciprocal rank (MRR), and accuracy (ACC). Results Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco. Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in isolation. Secondly, we can observe that ISF is indeed a strong baseline that outperforms XNET. This means that just “reading” the document using a vanilla version of XNET is not sufficient, and help is required through a coarse filtering. Indeed, we observe that XNET+ outperforms all baselines except for COMPAGGR. Our ensemble model LRXNET can ultimately surpass COMPAGGR on majority of the datasets. This consistent behavior validates the machine reading capabilities and the improved document representation with external features of our model for answer selection. 
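A minimal sketch of an LRXNET-style ensemble: a logistic regression over per-candidate scores (here a hypothetical XNET probability, a COMPAGGR score, and two word-overlap features) trained to predict whether a candidate sentence contains the answer. The feature values and labels are random placeholders, and the exact feature set varies per dataset as described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# placeholder per-candidate features: [xnet_prob, compaggr_score, isf, idf_overlap]
X_train = rng.random(size=(500, 4))
# placeholder gold labels: 1 if the candidate sentence contains the answer
y_train = (X_train[:, 0] + X_train[:, 1] + 0.3 * rng.normal(size=500) > 1.0).astype(int)

ensemble = LogisticRegression(max_iter=1000)
ensemble.fit(X_train, y_train)

# at test time, score all candidate sentences of one document and return the best one
X_doc = rng.random(size=(12, 4))                      # 12 candidate sentences
answer_idx = ensemble.predict_proba(X_doc)[:, 1].argmax()
print("selected sentence index:", answer_idx)
```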
Specifically, the combination of document reading and word overlap features is required to be done in a soft manner, using a classification technique. Using it as a hard constraint, with XNETTOPK, does not achieve the best result. We believe that often the ISF score is a better indicator of answer presence in the vicinity of certain candidate instead of in the candidate itself. As such, XNET+ is capable of using this feature in datasets with richer context. It is worth noting that the improvement gained by LRXNET over the state-of-the-art follows a pattern. For the SQuAD dataset, the results are comparable (less than 1%). However, the improvement for WikiQA reaches ∼3% and then the gap shrinks again for NewsQA, with an improvement of ∼1%. This could be explained by the fact that each sample of the SQuAD is a paragraph, compared to an article summary for WikiQA, and 2028 to an entire article for NewsQA. Hence, we further strengthen our hypothesis that a richer context is needed to achieve better results, in this case expressed as document length, but as the length of the context increases the limitation of sequential models to learn from long rich sequences arises.7 Interestingly, our model lags behind COMPAGGR on the MSMarco dataset. It turns out this is due to contextual independence between candidates in the MSMarco dataset, i.e., each candidate is a stand-alone paragraph in this dataset, in contrast to contextually dependent candidate sentences from a document in the NewsQA, SQuAD and WikiQA datasets. As a result, our models (XNET+ and LRXNET) with document reading abilities perform poorly. This can be observed by the fact that XNET and PAIRCNN obtain comparable results. COMPAGGR performs better because comparing each candidate independently is a better strategy. 5 Conclusion We describe an approach to model documents while incorporating external information that informs the representations learned for the sentences in the document. We implement our approach through an attention mechanism of a neural network architecture for modeling documents. Our experiments with extractive document summarization and answer selection tasks validates our model in two ways: first, we demonstrate that external information is important to guide document modeling for natural language understanding tasks. Our model uses image captions and the title of the document for document summarization, and the query with word overlap features for answer selection and outperforms its counterparts that do not use this information. Second, our external attention mechanism successfully guides the learning of the document representation for the relevant end goal. For answer selection, we show that inserting the query with word overlap features using our external attention mechanism outperforms state-of-the-art systems that naturally also have access to this information. Acknowledgments We thank Jianpeng Cheng for providing us with the CNN dataset and the implementation of Point7See the supplementary material for an example supporting our hypothesis. erNet. We also thank the members of the Edinburgh NLP group for participating in our human evaluation experiments. This work greatly benefitted from discussions with Jianpeng Cheng, Annie Louis, Pedro Balage, Alfonso Mendes, Sebasti˜ao Miranda, and members of the Edinburgh NLP group. 
We gratefully acknowledge the support of the European Research Council (Lapata; award number 681760), the European Union under the Horizon 2020 SUMMA project (Narayan, Cohen; grant agreement 688139), and Huawei Technologies (Cohen). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations. San Diego, California, USA. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for modeling documents. In Proceedings of the 25th International Joint Conference on Artificial Intelligence. New York, USA, pages 2754–2760. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Berlin, Germany, pages 484–494. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods on Natural Language Processing. Doha, Qatar, pages 1724–1734. Cıcero Nogueira dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. Attentive pooling networks. CoRR abs/1602.03609. Harold P. Edmundson. 1969. New methods in automatic extracting. Journal of the Association for Computing Machinery 16(2):264–285. Katja Filippova, Enrique Alfonseca, Carlos A. Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with LSTMs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal, pages 360–368. Shalini Ghosh, Oriol Vinyals, Brian Strope, Scott Roy, Tom Dean, and Larry Heck. 2016. Contextual LSTM (CLSTM) models for large scale NLP tasks. CoRR abs/1602.06291. 2029 Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28. pages 1693– 1701. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9(8):1735–1780. Minghao Hu, Yuxing Peng, and Xipeng Qiu. 2017. Reinforced mnemonic reader for machine comprehension. CoRR abs/1705.02798. Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. 2015. Document context language models. CoRR abs/1511.03962. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Doha, Qatar, pages 1746–1751. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. Character-aware neural language models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence. Phoenix, Arizona USA, pages 2741–2749. Julian Kupiec, Jan Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. Seattle, Washington, USA, pages 406–407. Jiwei Li, Thang Luong, and Dan Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. 
Beijing, China, pages 1106–1115. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using N-gram cooccurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. Edmonton, Canada, pages 71–78. Rui Lin, Shujie Liu, Muyun Yang, Mu Li, Ming Zhou, and Sheng Li. 2015. Hierarchical recurrent neural network for document modeling. In Proceedings of the 2015 Conference on Empirical Methods on Natural Language Processing. Lisbon, Portugal, pages 899–907. Inderjeet Mani. 2001. Automatic Summarization. Natural language processing. John Benjamins Publishing Company. Tomas Mikolov and Geoffrey Zweig. 2012. Context dependent recurrent neural network language model. In Proceedings of the Spoken Language Technology Workshop. IEEE, pages 234–239. Alexander Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In Proceedings of the 2016 Conference on Empirical Methods on Natural Language Processing. Austin, Texas, pages 1400– 1409. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the 31st AAAI Conference on Artificial Intelligence. San Francisco, California USA, pages 3075–3081. Ramesh Nallapati, Bowen Zhou, C´ıcero Nogueira dos Santos, C¸ aglar G¨ulc¸ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-tosequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning. Berlin, Germany, pages 280– 290. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. New Orleans, US. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS Marco: A human generated machine reading comprehension dataset. In Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches, co-located with the 30th Annual Conference on Neural Information Processing Systems. Barcelona, Spain. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 2383–2392. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal, pages 379–389. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Vancouver, Canada, pages 1073–1083. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27. pages 3104–3112. 2030 Jiwei Tan and Xiaojun Wan. 2017. Abstractive document summarization with a graph-based attentional neural model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. 
Vancouver, Canada, pages 1171–1181. Quan Hung Tran, Ingrid Zukerman, and Gholamreza Haffari. 2016. Inter-document contextual language model. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego, California, pages 762–766. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. Newsqa: A machine comprehension dataset. CoRR abs/1611.09830. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems 28. pages 2692–2700. Shuohang Wang and Jing Jiang. 2017. A compareaggregate model for matching text sequences. In Proceedings of the 5th International Conference on Learning Representations. Toulon, France. Tian Wang and Kyunghyun Cho. 2016. Larger-context language modelling with recurrent neural network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Berlin, Germany, pages 1319–1329. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Vancouver, Canada, pages 189–198. Zhiguo Wang, Haitao Mi, and Abraham Ittycheriah. 2016. Sentence similarity learning by lexical decomposition and composition. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. pages 1340–1349. Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making neural QA as simple as possible but not simpler. In Proceedings of the 21st Conference on Computational Natural Language Learning. Vancouver, Canada, pages 271–280. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal, pages 2013–2018. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego, California, pages 1480–1489. Wenpeng Yin, Hinrich Schtze, Bing Xiang, and Bowen Zhou. 2016. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. Transactions of the Association for Computational Linguistics 4:259–272.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2031–2040 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2031 Neural Models for Documents with Metadata Dallas Card1 Chenhao Tan2 Noah A. Smith3 1Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, 15213, USA 2Department of Computer Science, University of Colorado, Boulder, CO, 80309, USA 3Paul G. Allen School of CSE, University of Washington, Seattle, WA, 98195, USA [email protected] [email protected] [email protected] Abstract Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customization typically requires derivation of a custom inference algorithm. In this paper, we build on recent advances in variational inference methods and propose a general neural framework, based on topic models, to enable flexible incorporation of metadata and allow for rapid exploration of alternative models. Our approach achieves strong performance, with a manageable tradeoff between perplexity, coherence, and sparsity. Finally, we demonstrate the potential of our framework through an exploration of a corpus of articles about US immigration. 1 Introduction Topic models comprise a family of methods for uncovering latent structure in text corpora, and are widely used tools in the digital humanities, political science, and other related fields (Boyd-Graber et al., 2017). Latent Dirichlet allocation (LDA; Blei et al., 2003) is often used when there is no prior knowledge about a corpus. In the real world, however, most documents have non-textual attributes such as author (Rosen-Zvi et al., 2004), timestamp (Blei and Lafferty, 2006), rating (McAuliffe and Blei, 2008), or ideology (Eisenstein et al., 2011; Nguyen et al., 2015b), which we refer to as metadata. Many customizations of LDA have been developed to incorporate document metadata. Two models of note are supervised LDA (SLDA; McAuliffe and Blei, 2008), which jointly models words and labels (e.g., ratings) as being generated from a latent representation, and sparse additive generative models (SAGE; Eisenstein et al., 2011), which assumes that observed covariates (e.g., author ideology) have a sparse effect on the relative probabilities of words given topics. The structural topic model (STM; Roberts et al., 2014), which adds correlations between topics to SAGE, is also widely used, but like SAGE it is limited in the types of metadata it can efficiently make use of, and how that metadata is used. Note that in this work we will distinguish labels (metadata that are generated jointly with words from latent topic representations) from covariates (observed metadata that influence the distribution of labels and words). The ability to create variations of LDA such as those listed above has been limited by the expertise needed to develop custom inference algorithms for each model. As a result, it is rare to see such variations being widely used in practice. 
In this work, we take advantage of recent advances in variational methods (Kingma and Welling, 2014; Rezende et al., 2014; Miao et al., 2016; Srivastava and Sutton, 2017) to facilitate approximate Bayesian inference without requiring model-specific derivations, and propose a general neural framework for topic models with metadata, SCHOLAR.1 SCHOLAR combines the abilities of SAGE and SLDA, and allows for easy exploration of the following options for customization: 1. Covariates: as in SAGE and STM, we incorporate explicit deviations for observed covariates, as well as effects for interactions with topics. 2. Supervision: as in SLDA, we can use metadata as labels to help infer topics that are relevant in predicting those labels. 1Sparse Contextual Hidden and Observed Language AutoencodeR. 2032 3. Rich encoder network: we use the encoding network of a variational autoencoder (VAE) to incorporate additional prior knowledge in the form of word embeddings, and/or to provide interpretable embeddings of covariates. 4. Sparsity: as in SAGE, a sparsity-inducing prior can be used to encourage more interpretable topics, represented as sparse deviations from a background log-frequency. We begin with the necessary background and motivation (§2), and then describe our basic framework and its extensions (§3), followed by a series of experiments (§4). In an unsupervised setting, we can customize the model to trade off between perplexity, coherence, and sparsity, with improved coherence through the introduction of word vectors. Alternatively, by incorporating metadata we can either learn topics that are more predictive of labels than SLDA, or learn explicit deviations for particular parts of the metadata. Finally, by combining all parts of our model we can meaningfully incorporate metadata in multiple ways, which we demonstrate through an exploration of a corpus of news articles about US immigration. In presenting this particular model, we emphasize not only its ability to adapt to the characteristics of the data, but the extent to which the VAE approach to inference provides a powerful framework for latent variable modeling that suggests the possibility of many further extensions. Our implementation is available at https://github. com/dallascard/scholar. 2 Background and Motivation LDA can be understood as a non-negative Bayesian matrix factorization model: the observed document-word frequency matrix, X ∈ZD×V (D is the number of documents, V is the vocabulary size) is factored into two low-rank matrices, ΘD×K and BK×V , where each row of Θ, θi ∈∆K is a latent variable representing a distribution over topics in document i, and each row of B, βk ∈∆V , represents a single topic, i.e., a distribution over words in the vocabulary.2 While it is possible to factor the count data into unconstrained 2Z denotes nonnegative integers, and ∆K denotes the set of K-length nonnegative vectors that sum to one. For a proper probabilistic interpretation, the matrix to be factored is actually the matrix of latent mean parameters of the assumed data generating process, Xij ∼Poisson(Λij). See Cemgil (2009) or Paisley et al. (2014) for details. matrices, the particular priors assumed by LDA are important for interpretability (Wallach et al., 2009). For example, the neural variational document model (NVDM; Miao et al., 2016) allows θi ∈RK and achieves normalization by taking the softmax of θ⊤ i B. 
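To make the two parameterizations concrete, the following illustrative numpy sketch contrasts the LDA-style factorization, in which both the document vector and the topics are probability-normalized, with the NVDM-style version, which defers normalization to a softmax over the product; the weights here are random placeholders rather than learned parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

K, V = 5, 1000                          # number of topics, vocabulary size
rng = np.random.default_rng(0)

# LDA-style: theta lies on the simplex and each topic beta_k is itself a
# distribution over the vocabulary, so theta @ B is already normalized.
theta_lda = rng.dirichlet(np.ones(K))
B_lda = rng.dirichlet(np.ones(V), size=K)   # K x V, rows sum to 1
p_lda = theta_lda @ B_lda                   # p(word | document)

# NVDM-style: theta is unconstrained and normalization happens only via
# the softmax of theta^T B.
theta_nvdm = rng.normal(size=K)
B_nvdm = rng.normal(size=(K, V))
p_nvdm = softmax(theta_nvdm @ B_nvdm)       # p(word | document)
```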
However, the experiments in Srivastava and Sutton (2017) found the performance of the NVDM to be slightly worse than LDA in terms of perplexity, and dramatically worse in terms of topic coherence. The topics discovered by LDA tend to be parsimonious and coherent groupings of words which are readily identifiable to humans as being related to each other (Chang et al., 2009), and the resulting mode of the matrix Θ provides a representation of each document which can be treated as a measurement for downstream tasks, such as classification or answering social scientific questions (Wallach, 2016). LDA does not require — and cannot make use of — additional prior knowledge. As such, the topics that are discovered may bear little connection to metadata of a corpus that is of interest to a researcher, such as sentiment, ideology, or time. In this paper, we take inspiration from two models which have sought to alleviate this problem. The first, supervised LDA (SLDA; McAuliffe and Blei, 2008), assumes that documents have labels y which are generated conditional on the corresponding latent representation, i.e., yi ∼p(y | θi).3 By incorporating labels into the model, it is forced to learn topics which allow documents to be represented in a way that is useful for the classification task. Such models can be used inductively as text classifiers (Balasubramanyan et al., 2012). SAGE (Eisenstein et al., 2011), by contrast, is an exponential-family model, where the key innovation was to replace topics with sparse deviations from the background log-frequency of words (d), i.e., p(word | softmax(d+θ⊤ i B)). SAGE can also incorporate deviations for observed covariates, as well as interactions between topics and covariates, by including additional terms inside the softmax. In principle, this allows for inferring, for example, the effect on an author’s ideology on their choice of words, as well as ideological variations on each underlying topic. Unlike the NVDM, SAGE still constrains θi to lie on the simplex, as in LDA. SLDA and SAGE provide two different ways that users might wish to incorporate prior knowl3Technically, the model conditions on the mean of the perword latent variables, but we elide this detail in the interest of concision. 2033 edge as a way of guiding the discovery of topics in a corpus: SLDA incorporates labels through a distribution conditional on topics; SAGE includes explicit sparse deviations for each unique value of a covariate, in addition to topics.4 Because of the Dirichlet-multinomial conjugacy in the original model, efficient inference algorithms exist for LDA. Each variation of LDA, however, has required the derivation of a custom inference algorithm, which is a time-consuming and errorprone process. In SLDA, for example, each type of distribution we might assume for p(y | θ) would require a modification of the inference algorithm. SAGE breaks conjugacy, and as such, the authors adopted L-BFGS for optimizing the variational bound. Moreover, in order to maintain computational efficiency, it assumed that covariates were limited to a single categorical label. More recently, the variational autoencoder (VAE) was introduced as a way to perform approximate posterior inference on models with otherwise intractable posteriors (Kingma and Welling, 2014; Rezende et al., 2014). This approach has previously been applied to models of text by Miao et al. (2016) and Srivastava and Sutton (2017). 
We build on their work and show how this framework can be adapted to seamlessly incorporate the ideas of both SAGE and SLDA, while allowing for greater flexibility in the use of metadata. Moreover, by exploiting automatic differentiation, we allow for modification of the model without requiring any change to the inference procedure. The result is not only a highly adaptable family of models with scalable inference and efficient prediction; it also points the way to incorporation of many ideas found in the literature, such as a gradual evolution of topics (Blei and Lafferty, 2006), and hierarchical models (Blei et al., 2010; Nguyen et al., 2013, 2015b). 3 SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsity We begin by presenting the generative story for our model, and explain how it generalizes both SLDA and SAGE (§3.1). We then provide a general explanation of inference using VAEs and how it applies to our model (§3.2), as well as how to infer docu4A third way of incorporating metadata is the approach used by various “upstream” models, such as Dirichletmultinomial regression (Mimno and McCallum, 2008), which uses observed metadata to inform the document prior. We hypothesize that this approach could be productively combined with our framework, but we leave this as future work. ment representations and predict labels at test time (§3.3). Finally, we discuss how we can incorporate additional prior knowledge (§3.4). 3.1 Generative Story Consider a corpus of D documents, where document i is a list of Ni words, wi, with V words in the vocabulary. For each document, we may have observed covariates ci (e.g., year of publication), and/or one or more labels, yi (e.g., sentiment). Our model builds on the generative story of LDA, but optionally incorporates labels and covariates, and replaces the matrix product of Θ and B with a more flexible generative network, fg, followed by a softmax transform. Instead of using a Dirichlet prior as in LDA, we employ a logistic normal prior on θ as in Srivastava and Sutton (2017) to facilitate inference (§3.2): we draw a latent variable, r,5 from a multivariate normal, and transform it to lie on the simplex using a softmax transform.6 The generative story is shown in Figure 1a and described in equations below: For each document i of length Ni: # Draw a latent representation on the simplex from a logistic normal prior: ri ∼N(r | µ0(α), diag(σ2 0(α))) θi = softmax(ri) # Generate words, incorporating covariates: ηi = fg(θi, ci) For each word j in document i: wij ∼p(w | softmax(ηi)) # Similarly generate labels: yi ∼p(y | fy(θi, ci)), where p(w | softmax(ηi)) is a multinomial distribution and p(y | fy(θi, ci)) is a distribution appropriate to the data (e.g., multinomial for categorical labels). fg is a model-specific combination of latent variables and covariates, fy is a multi-layer neural network, and µ0(α) and σ2 0(α) are the mean and diagonal covariance terms of a multivariate normal prior. To approximate a symmetric Dirichlet 5r is equivalent to z in the original VAE. To avoid confusion with topic assignment of words in the topic modeling literature, we use r instead of z. 6Unlike the correlated topic model (CTM; Lafferty and Blei, 2006), which also uses a logistic-normal prior, we fix the parameters of the prior and use a diagonal covariance matrix, rather than trying to infer correlations among topics. 
However, it would be a straightforward extension of our framework to place a richer prior on the latent document representations, and learn correlations by updating the parameters of this prior after each epoch, analogously to the variational EM approach used for the CTM. 2034 prior with hyperparameter α, these are given by the Laplace approximation (Hennig et al., 2012) to be µ0,k(α) = 0 and σ2 0,k = (K −1)/(αK). If we were to ignore covariates, place a Dirichlet prior on B, and let η = θ⊤ i B, this model is equivalent to SLDA with a logistic normal prior. Similarly, we can recover a model that is like SAGE, but lacks sparsity, if we ignore labels, and let ηi = d + θ⊤ i B + c⊤ i Bcov + (θi ⊗ci)⊤Bint, (1) where d is the V -dimensional background term (representing the log of the overall word frequency), θi ⊗ci is a vector of interactions between topics and covariates, and Bcov and Bint are additional weight (deviation) matrices. The background is included to account for common words with approximately the same frequency across documents, meaning that the B∗weights now represent both positive and negative deviations from this background. This is the form of fg which we will use in our experiments. To recover the full SAGE model, we can place a sparsity-inducing prior on each B∗. As in Eisenstein et al. (2011), we make use of the compound normal-exponential prior for each element of the weight matrices, B∗ m,n, with hyperparameter γ,7 τm,n ∼Exponential(γ), (2) B∗ m,n ∼N(0, τm,n). (3) We can choose to ignore various parts of this model, if, for example, we don’t have any labels or observed covariates, or we don’t wish to use interactions or sparsity.8 Other generator networks could also be considered, with additional layers to represent more complex interactions, although this might involve some loss of interpretability. In the absence of metadata, and without sparsity, our model is equivalent to the ProdLDA model of Srivastava and Sutton (2017) with an explicit background term, and ProdLDA is, in turn, a 7To avoid having to tune γ, we employ an improper Jeffery’s prior, p(τm,n) ∝1/τm,n, as in SAGE. Although this causes difficulties in posterior inference for the variance terms, τ, in practice, we resort to a variational EM approach, with MAP-estimation for the weights, B, and thus alternate between computing expectations of the τ parameters, and updating all other parameters using some variant of stochastic gradient descent. For this, we only require the expectation of each τmn for each E-step, which is given by 1/B2 m,n. We refer the reader to Eisenstein et al. (2011) for additional details. 8We could also ignore latent topics, in which case we would get a naïve Bayes-like model of text with deviations for each covariate p(wij | ci) ∝exp(d + c⊤ i Bcov). w y η θ r α c B Ni D (a) Generative model r µ σ π c y w ϵ linear linear D (b) Inference model Figure 1: Figure 1a presents the generative story of our model. Figure 1b illustrates the inference network using the reparametrization trick to perform variational inference on our model. Shaded nodes are observed; double circles indicate deterministic transformations of parent nodes. special case of SAGE, without background logfrequencies, sparsity, covariates, or labels. In the next section we generalize the inference method used for ProdLDA; in our experiments we validate its performance and explore the effects of regularization and word-vector initialization (§3.4). 
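For concreteness, the form of fg in Equation 1 can be written out directly as a handful of matrix products. The sketch below uses random placeholder weights in place of learned parameters and a one-hot covariate.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

K, C, V = 10, 2, 1000            # topics, covariate dimensions, vocabulary size
rng = np.random.default_rng(0)

d     = rng.normal(size=V)            # background log-frequencies
B     = rng.normal(size=(K, V))       # topic deviations
B_cov = rng.normal(size=(C, V))       # covariate deviations
B_int = rng.normal(size=(K * C, V))   # topic-covariate interaction deviations

theta = rng.dirichlet(np.ones(K))     # document-topic proportions
c     = np.array([1.0, 0.0])          # e.g., a one-hot covariate

# Eq. (1): eta = d + theta^T B + c^T B_cov + (theta (x) c)^T B_int
interactions = np.outer(theta, c).ravel()
eta = d + theta @ B + c @ B_cov + interactions @ B_int
p_words = softmax(eta)                # multinomial parameters for the document
```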
The NVDM (Miao et al., 2016) uses the same approach to inference, but does not not restrict document representations to the simplex. 3.2 Learning and Inference As in past work, each document i is assumed to have a latent representation ri, which can be interpreted as its relative membership in each topic (after exponentiating and normalizing). In order to infer an approximate posterior distribution over ri, we adopt the sampling-based VAE framework developed in previous work (Kingma and Welling, 2014; Rezende et al., 2014). As in conventional variational inference, we assume a variational approximation to the posterior, qΦ(ri | wi, ci, yi), and seek to minimize the KL divergence between it and the true posterior, p(ri | wi, ci, yi), where Φ is the set of variational parameters to be defined below. After some manipulations (given in supplementary materials), we obtain the evidence lower bound (ELBO) for a sin2035 gle document, L(wi) = EqΦ(ri|wi,ci,yi)   Ni X j=1 log p(wij | ri, ci)   + EqΦ(ri|wi,ci,yi) [log p(yi | ri, ci)] −DKL [qΦ(ri | wi, ci, yi) || p(ri | α)] . (4) As in the original VAE, we will encode the parameters of our variational distributions using a shared multi-layer neural network. Because we have assumed a diagonal normal prior on r, this will take the form of a network which outputs a mean vector, µi = fµ(wi, ci, yi) and diagonal of a covariance matrix, σ2 i = fσ(wi, ci, yi), such that qΦ(ri | wi, ci, yi) = N(µi, σ2 i ). Incorporating labels and covariates to the inference network used by Miao et al. (2016) and Srivastava and Sutton (2017), we use: πi = fe([Wxxi; Wcci; Wyyi]), (5) µi = Wµπi + bµ, (6) log σ2 i = Wσπi + bσ, (7) where xi is a V -dimensional vector representing the counts of words in wi, and fe is a multilayer perceptron. The full set of encoder parameters, Φ, thus includes the parameters of fe and all weight matrices and bias vectors in Equations 5–7 (see Figure 1b). This approach means that the expectations in Equation 4 are intractable, but we can approximate them using sampling. In order to maintain differentiability with respect to Φ, even after sampling, we make use of the reparameterization trick (Kingma and Welling, 2014),9 which allows us to reparameterize samples from qΦ(r | wi, ci, yi) in terms of samples from an independent source of noise, i.e., ϵ(s) ∼N(0, I), r(s) i = gΦ(wi, ci, yi, ϵ(s)) = µi + σi · ϵ(s). We thus replace the bound in Equation 4 with a Monte Carlo approximation using a single sam9 The Dirichlet distribution cannot be directly reparameterized in this way, which is why we use the logistic normal prior on θ to approximate the Dirichlet prior used in LDA. ple10 of ϵ (and thereby of r): L(wi) ≈ Ni X j=1 log p(wij | r(s) i , ci) + log p(yi | r(s) i , ci) −DKL [qΦ(ri | wi, ci, yi) || p(ri | α)] . (8) We can now optimize this sampling-based approximation of the variational bound with respect to Φ, B∗, and all parameters of fg and fy using stochastic gradient descent. Moreover, because of this stochastic approach to inference, we are not restricted to covariates with a small number of unique values, which was a limitation of SAGE. Finally, the KL divergence term in Equation 8 can be computed in closed form (see supplementary materials). 3.3 Prediction on Held-out Data In addition to inferring latent topics, our model can both infer latent representations for new documents and predict their labels, the latter of which was the motivation for SLDA. 
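Before turning to prediction, note that the encoder and sampling step just described (Equations 5-7 plus the reparameterization trick) reduce to a few lines. In this sketch the weights are random placeholders and fe is assumed to be a single ReLU layer, whereas the paper's fe is a multilayer perceptron.

```python
import numpy as np

rng = np.random.default_rng(0)
V, C, Y, H, K = 1000, 2, 2, 100, 10   # vocab, covariates, labels, hidden, topics
init = lambda *shape: rng.normal(scale=0.05, size=shape)

# Input embeddings and encoder weights (Equations 5-7); learned in practice.
W_x, W_c, W_y = init(V, H), init(C, H), init(Y, H)
W_e = init(3 * H, H)                   # f_e approximated here by one ReLU layer
W_mu, b_mu = init(H, K), np.zeros(K)
W_sig, b_sig = init(H, K), np.zeros(K)

x = rng.poisson(0.01, size=V).astype(float)   # word counts of one document
c = np.array([1.0, 0.0])                      # observed covariate
y = np.array([0.0, 1.0])                      # one-hot label

pi = np.maximum(0.0, np.concatenate([x @ W_x, c @ W_c, y @ W_y]) @ W_e)
mu = pi @ W_mu + b_mu
log_sigma_sq = pi @ W_sig + b_sig

# Reparameterization: a sample of r is a deterministic function of eps,
# so gradients can flow through mu and sigma.
eps = rng.standard_normal(K)
r = mu + np.exp(0.5 * log_sigma_sq) * eps
theta = np.exp(r - r.max()) / np.exp(r - r.max()).sum()   # softmax(r)
```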
In traditional variational inference, inference at test time requires fixing global parameters (topics), and optimizing the per-document variational parameters for the test set. With the VAE framework, by contrast, the encoder network (Equations 5–7) can be used to directly estimate the posterior distribution for each test document, using only a forward pass (no iterative optimization or sampling). If not using labels, we can use this approach directly, passing the word counts of new documents through the encoder to get a posterior qΦ(ri | wi, ci). When we also include labels to be predicted, we can first train a fully-observed model, as above, then fix the decoder, and retrain the encoder without labels. In practice, however, if we train the encoder network using one-hot encodings of document labels, it is sufficient to provide a vector of all zeros for the labels of test documents; this is what we adopt for our experiments (§4.2), and we still obtain good predictive performance. The label network, fy, is a flexible component which can be used to predict a wide range of outcomes, from categorical labels (such as star ratings; McAuliffe and Blei, 2008) to real-valued outputs (such as number of citations or box-office returns; 10Alternatively, one can average over multiple samples. 2036 Yogatama et al., 2011). For categorical labels, predictions are given by ˆyi = argmax y ∈Y p(y | ri, ci). (9) Alternatively, when dealing with a small set of categorical labels, it is also possible to treat them as observed categorical covariates during training. At test time, we can then consider all possible one-hot vectors, e, in place of ci, and predict the label that maximizes the probability of the words, i.e., ˆyi = argmax y ∈Y Ni X j=1 log p(wij | ri, ey). (10) This approach works well in practice (as we show in §4.2), but does not scale to large numbers of labels, or other types of prediction problems, such as multi-class classification or regression. The choice to include metadata as covariates, labels, or both, depends on the data. The key point is that we can incorporate metadata in two very different ways, depending on what we want from the model. Labels guide the model to infer topics that are relevant to those labels, whereas covariates induce explicit deviations, leaving the latent variables to account for the rest of the content. 3.4 Additional Prior Information A final advantage of the VAE framework is that the encoder network provides a way to incorporate additional prior information in the form of word vectors. Although we can learn all parameters starting from a random initialization, it is also possible to initialize and fix the initial embeddings of words in the model, Wx, in Equation 5. This leverages word similarities derived from large amounts of unlabeled data, and may promote greater coherence in inferred topics. The same could also be done for some covariates; for example, we could embed the source of a news article based on its place on the ideological spectrum. Conversely, if we choose to learn these parameters, the learned values (Wy and Wc) may provide meaningful embeddings of these metadata (see section §4.3). Other variants on topic models have also proposed incorporating word vectors, both as a parallel part of the generative process (Nguyen et al., 2015a), and as an alternative parameterization of topic distributions (Das et al., 2015), but inference is not scalable in either of these models. 
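Initializing and fixing Wx, as described above, only requires building the embedding matrix from a pretrained lookup table and excluding it from the optimizer's parameter list; a toy sketch (with a hard-coded stand-in for word2vec vectors) follows.

```python
import numpy as np

# Stand-in for pretrained word vectors (e.g., word2vec); in practice these
# would be loaded from disk rather than hard-coded.
pretrained = {
    "film":  np.array([0.10, 0.30, -0.20]),
    "movie": np.array([0.12, 0.28, -0.18]),
    "actor": np.array([0.05, 0.40,  0.10]),
}
vocab = ["film", "movie", "actor", "plot"]   # model vocabulary
dim = 3

rng = np.random.default_rng(0)
# Rows of W_x: pretrained vector if available, small random vector otherwise.
W_x = np.stack([pretrained.get(w, rng.normal(scale=0.1, size=dim)) for w in vocab])

# "Not updating" the embeddings simply means leaving W_x out of the set of
# parameters handed to the optimizer (freeze=True in most frameworks).
print(W_x.shape)   # (4, 3)
```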
Because of the generality of the VAE framework, we could also modify the generative story so that word embeddings are emitted (rather than tokens); we leave this for future work. 4 Experiments and Results To evaluate and demonstrate the potential of this model, we present a series of experiments below. We first test SCHOLAR without observed metadata, and explore the effects of using regularization and/or word vector initialization, compared to LDA, SAGE, and NVDM (§4.1). We then evaluate our model in terms of predictive performance, in comparison to SLDA and an l2-regularized logistic regression baseline (§4.2). Finally, we demonstrate the ability to incorporate covariates and/or labels in an exploratory data analysis (§4.3). The scores we report are generalization to heldout data, measured in terms of perplexity; coherence, measured in terms of non-negative point-wise mutual information (NPMI; Chang et al., 2009; Newman et al., 2010), and classification accuracy on test data. For coherence we evaluate NPMI using the top 10 words of each topic, both internally (using test data), and externally, using a decade of articles from the English Gigaword dataset (Graff and Cieri, 2003). Since our model employs variational methods, the reported perplexity is an upper bound based on the ELBO. As datasets we use the familiar 20 newsgroups, the IMDB corpus of 50,000 movie reviews (Maas et al., 2011), and the UIUC Yahoo answers dataset with 150,000 documents in 15 categories (Chang et al., 2008). For further exploration, we also make use of a corpus of approximately 4,000 timestamped news articles about US immigration, each annotated with pro- or anti-immigration tone (Card et al., 2015). We use the original author-provided implementations of SAGE11 and SLDA,12 while for LDA we use Mallet.13. Our implementation of SCHOLAR is in TensorFlow, but we have also provided a preliminary PyTorch implementation of the core of our model.14 For additional details about datasets and implementation, please refer to the supplementary material. It is challenging to fairly evaluate the relative computational efficiency of our approach compared to past work (due to the stochastic nature of our ap11github.com/jacobeisenstein/SAGE 12github.com/blei-lab/class-slda 13mallet.cs.umass.edu 14github.com/dallascard/scholar 2037 proach to inference, choices about hyperparameters such as tolerance, and because of differences in implementation). Nevertheless, in practice, the performance of our approach is highly appealing. For all experiments in this paper, our implementation was much faster than SLDA or SAGE (implemented in C and Matlab, respectively), and competitive with Mallet. 4.1 Unsupervised Evaluation Although the emphasis of this work is on incorporating observed labels and/or covariates, we briefly report on experiments in the unsupervised setting. Recall that, without metadata, SCHOLAR equates to ProdLDA, but with an explicit background term.15 We therefore use the same experimental setup as Srivastava and Sutton (2017) (learning rate, momentum, batch size, and number of epochs) and find the same general patterns as they reported (see Table 1 and supplementary material): our model returns more coherent topics than LDA, but at the cost of worse perplexity. SAGE, by contrast, attains very high levels of sparsity, but at the cost of worse perplexity and coherence than LDA. As expected, the NVDM produces relatively low perplexity, but very poor coherence, due to its lack of constraints on θ. 
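For reference, the NPMI coherence used in these comparisons can be computed with a few lines of code. The sketch below scores a topic's top words using document-level co-occurrence in a reference corpus; published implementations differ in details such as windowing, smoothing, and the treatment of pairs that never co-occur.

```python
import itertools
import numpy as np

def npmi_coherence(top_words, ref_docs, eps=1e-12):
    """Average NPMI over all pairs of a topic's top words, with probabilities
    estimated from document co-occurrence in ref_docs (each a set of tokens)."""
    n = len(ref_docs)
    def p(*words):
        return sum(all(w in d for w in words) for d in ref_docs) / n
    scores = []
    for wi, wj in itertools.combinations(top_words, 2):
        p_ij = p(wi, wj)
        if p_ij <= 0:
            scores.append(-1.0)   # one common convention for non-co-occurring pairs
            continue
        pmi = np.log(p_ij / (p(wi) * p(wj) + eps))
        scores.append(pmi / -np.log(p_ij))
    return float(np.mean(scores))

docs = [{"film", "movie", "actor"}, {"movie", "plot", "actor"}, {"goal", "match"}]
print(npmi_coherence(["movie", "actor", "plot"], docs))
```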
Further experimentation revealed that the VAE framework involves a tradeoff among the scores; running for more epochs tends to result in better perplexity on held-out data, but at the cost of worse coherence. Adding regularization to encourage sparse topics has a similar effect as in SAGE, leading to worse perplexity and coherence, but it does create sparse topics. Interestingly, initializing the encoder with pretrained word2vec embeddings, and not updating them returned a model with the best internal coherence of any model we considered for IMDB and Yahoo answers, and the second-best for 20 newsgroups. The background term in our model does not have much effect on perplexity, but plays an important role in producing coherent topics; as in SAGE, the background can account for common words, so they are mostly absent among the most heavily weighted words in the topics. For instance, words like film and movie in the IMDB corpus are relatively unimportant in the topics learned by our 15Note, however, that a batchnorm layer in ProdLDA may play a similar role to a background term, and there are small differences in implementation; please see supplementary material for more discussion of this. Ppl. NPMI NPMI Sparsity Model ↓ (int.) ↑ (ext.) ↑ ↑ LDA 1508 0.13 0.14 0 SAGE 1767 0.12 0.12 0.79 NVDM 1748 0.06 0.04 0 SCHOLAR −B.G. 1889 0.09 0.13 0 SCHOLAR 1905 0.14 0.13 0 SCHOLAR + W.V. 1991 0.18 0.17 0 SCHOLAR + REG. 2185 0.10 0.12 0.58 Table 1: Performance of our various models in an unsupervised setting (i.e., without labels or covariates) on the IMDB dataset using a 5,000-word vocabulary and 50 topics. The supplementary materials contain additional results for 20 newsgroups and Yahoo answers. model, but would be much more heavily weighted without the background term, as they are in topics learned by LDA. 4.2 Text Classification We next consider the utility of our model in the context of categorical labels, and consider them alternately as observed covariates and as labels generated conditional on the latent representation. We use the same setup as above, but tune number of training epochs for our model using a random 20% of training data as a development set, and similarly tune regularization for logistic regression. Table 2 summarizes the accuracy of various models on three datasets, revealing that our model offers competitive performance, both as a joint model of words and labels (Eq. 9), and a model which conditions on covariates (Eq. 10). Although SCHOLAR is comparable to the logistic regression baseline, our purpose here is not to attain state-of-the-art performance on text classification. Rather, the high accuracies we obtain demonstrate that we are learning low-dimensional representations of documents that are relevant to the label of interest, outperforming SLDA, and have the same attractive properties as topic models. Further, any neural network that is successful for text classification could be incorporated into fy and trained end-to-end along with topic discovery. 4.3 Exploratory Study We demonstrate how our model might be used to explore an annotated corpus of articles about immigration, and adapt to different assumptions about the data. We only use a small number of topics in this part (K = 8) for compact presentation. 
2038 20news IMDB Yahoo Vocabulary size 2000 5000 5000 Number of topics 50 50 250 SLDA 0.60 0.64 0.65 SCHOLAR (labels) 0.67 0.86 0.73 SCHOLAR (covariates) 0.71 0.87 0.72 Logistic regression 0.70 0.87 0.76 Table 2: Accuracy of various models on three datasets with categorical labels. Tone as a label. We first consider using the annotations as a label, and train a joint model to infer topics relevant to the tone of the article (pro- or anti-immigration). Figure 2 shows a set of topics learned in this way, along with the predicted probability of an article being pro-immigration conditioned on the given topic. All topics are coherent, and the predicted probabilities have strong face validity, e.g., “arrested charged charges agents operation” is least associated with pro-immigration. Tone as a covariate. Next we consider using tone as a covariate, and build a model using both tone and tone-topic interactions. Table 3 shows a set of topics learned from the immigration data, along with the most highly-weighted words in the corresponding tone-topic interaction terms. As can be seen, these interaction terms tend to capture different frames (e.g., “criminal” vs. “detainees”, and “illegals” vs. “newcomers”, etc). Combined model with temporal metadata. Finally, we incorporate both the tone annotations and the year of publication of each article, treating the former as a label and the latter as a covariate. In this model, we also include an embedding matrix, Wc, to project the one-hot year vectors down to a two-dimensional continuous space, with a learned deviation for each dimension. We omit the topics in the interest of space, but Figure 3 shows the learned embedding for each year, along with the top terms of the corresponding deviations. As can be seen, the model learns that adjacent years tend to produce similar deviations, even though we have not explicitly encoded this information. The leftright dimension roughly tracks a temporal trend with positive deviations shifting from the years of Clinton and INS on the left, to Obama and ICE on the right.16 Meanwhile, the events of 9/11 dominate the vertical direction, with the words sept, 16The Immigration and Naturalization Service (INS) was transformed into Immigration and Customs Enforcement (ICE) and other agencies in 2003. 0 1 p(pro-immigration | topic) arrested charged charges agents operation state gov benefits arizona law bill bills bush border president bill republicans labor jobs workers percent study wages asylum judge appeals deportation court visas visa applications students citizenship boat desert died men miles coast haitian english language city spanish community Figure 2: Topics inferred by a joint model of words and tone, and the corresponding probability of proimmigration tone for each topic. A topic is represented by the top words sorted by word probability throughout the paper. 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 obama clinton deportations sept hijackers elian attacks INS refugees asylum clinton obama arizona ICE path Figure 3: Learned embeddings of year-ofpublication (treated as a covariate) from combined model of news articles about immigration. hijackers, and attacks increasing in probability as we move up in the space. If we wanted to look at each year individually, we could drop the embedding of years, and learn a sparse set of topic-year interactions, similar to tone in Table 3. 
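A plot like Figure 3 can be produced directly from the learned covariate embedding matrix. The sketch below assumes access to a trained Wc with one two-dimensional row per year; the values here are random placeholders rather than the model's actual parameters.

```python
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1990, 2013)
rng = np.random.default_rng(0)
# Placeholder for the learned (n_years x 2) covariate embedding matrix W_c.
W_c = np.cumsum(rng.normal(scale=0.1, size=(len(years), 2)), axis=0)

fig, ax = plt.subplots()
ax.scatter(W_c[:, 0], W_c[:, 1])
for year, (x, y) in zip(years, W_c):
    ax.annotate(str(year), (x, y))   # label each point with its year
plt.show()
```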
5 Additional Related Work The literature on topic models is vast; in addition to papers cited throughout, other efforts to incorporate metadata into topic models include Dirichletmultinomial regression (DMR; Mimno and McCallum, 2008), Labeled LDA (Ramage et al., 2009), and MedLDA (Zhu et al., 2009). A recent paper also extended DMR by using deep neural networks to embed metadata into a richer document prior (Benton and Dredze, 2018). A separate line of work has pursued parameterizing unsupervised models of documents using neural networks (Hinton and Salakhutdinov, 2039 Base topics (each row is a topic) Anti-immigration interactions Pro-immigration interactions ice customs agency enforcement homeland criminal customs arrested detainees detention center agency population born percent americans english jobs million illegals taxpayers english newcomers hispanic city judge case court guilty appeals attorney guilty charges man charged asylum court judge case appeals patrol border miles coast desert boat guard patrol border agents boat died authorities desert border bodies licenses drivers card visa cards applicants foreign sept visas system green citizenship card citizen apply island story chinese ellis international smuggling federal charges island school ellis english story guest worker workers bush labor bill bill border house senate workers tech skilled farm labor benefits bill welfare republican state senate republican california gov state law welfare students tuition Table 3: Top words for topics (left) and the corresponding anti-immigration (middle) and pro-immigration (right) variations when treating tone as a covariate, with interactions. 2009; Larochelle and Lauly, 2012), including nonBayesian approaches (Cao et al., 2015). More recently, Lau et al. (2017) proposed a neural language model that incorporated topics, and He et al. (2017) developed a scalable alternative to the correlated topic model by simultaneously learning topic embeddings. Others have attempted to extend the reparameterization trick to the Dirichlet and Gamma distributions, either through transformations (Kucukelbir et al., 2016) or a generalization of reparameterization (Ruiz et al., 2016). Black-box and VAE-style inference have been implemented in at least two general purpose tools designed to allow rapid exploration and evaluation of models (Kucukelbir et al., 2015; Tran et al., 2016). 6 Conclusion We have presented a neural framework for generalized topic models to enable flexible incorporation of metadata with a variety of options. We take advantage of stochastic variational inference to develop a general algorithm for our framework such that variations do not require any model-specific algorithm derivations. Our model demonstrates the tradeoff between perplexity, coherence, and sparsity, and outperforms SLDA in predicting document labels. Furthermore, the flexibility of our model enables intriguing exploration of a text corpus on US immigration. We believe that our model and code will facilitate rapid exploration of document collections with metadata. Acknowledgments We would like to thank Charles Sutton, anonymous reviewers, and all members of Noah’s ARK for helpful discussions and feedback. This work was made possible by a University of Washington Innovation award and computing resources provided by XSEDE. References Ramnath Balasubramanyan, William W. Cohen, Doug Pierce, and David P. Redlawsk. 2012. Modeling polarizing topics: When do different political communities respond differently to the same news? 
In Proceedings of ICWSM. Adrian Benton and Mark Dredze. 2018. Deep Dirichlet multinomial regression. In Proceedings of NAACL. David M. Blei, Thomas L. Griffiths, and Michael I. Jordan. 2010. The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. JACM, 57(2). David M. Blei and John D. Lafferty. 2006. Dynamic topic models. In Proceedings of ICML. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. JMLR, 3:993– 1022. Jordan Boyd-Graber, Yuening Hu, and David Mimno. 2017. Applications of topic models. Foundations and Trends in Information Retrieval, 11(2-3):143– 296. Ziqiang Cao, Sujian Li, Yang Liu, Wenjie Li, and Heng Ji. 2015. A novel neural topic model and its supervised extension. In Proceedings of AAAI. Dallas Card, Amber E. Boydstun, Justin H. Gross, Philip Resnik, and Noah A. Smith. 2015. The media frames corpus: Annotations of frames across issues. In Proceedings of ACL. Ali Taylan Cemgil. 2009. Bayesian inference for nonnegative matrix factorisation models. Computational Intelligence and Neuroscience, pages 4:1– 4:17. Jonathan Chang, Sean Gerrish, Chong Wang, Jordan L Boyd-graber, and David M Blei. 2009. Reading tea leaves: How humans interpret topic models. In Proceedings of NIPS. Ming-Wei Chang, Lev-Arie Ratinov, Dan Roth, and Vivek Srikumar. 2008. Importance of semantic representation: Dataless classification. In Proceedings of AAAI. Rajarshi Das, Manzil Zaheer, and Chris Dyer. 2015. Gaussian LDA for topic models with word embeddings. In Proceedings of ACL. 2040 Jacob Eisenstein, Amr Ahmed, and Eric P. Xing. 2011. Sparse additive generative models of text. In Proceedings of ICML. David Graff and C Cieri. 2003. English gigaword corpus. Linguistic Data Consortium. Junxian He, Zhiting Hu, Taylor Berg-Kirkpatrick, Ying Huang, and Eric P. Xing. 2017. Efficient correlated topic modeling with topic embedding. In Proceedings of KDD. Philipp Hennig, David Stern, Ralf Herbrich, and Thore Graepel. 2012. Kernel topic models. In Proceedings of AISTATS. Geoffrey E Hinton and Ruslan R Salakhutdinov. 2009. Replicated softmax: An undirected topic model. In Proceedings of NIPS. Diederik P Kingma and Max Welling. 2014. Autoencoding variational bayes. In Proceedings of ICLR. Alp Kucukelbir, Rajesh Ranganath, Andrew Gelman, and David Blei. 2015. Automatic variational inference in stan. In Proceedings of NIPS. Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M. Blei. 2016. Automatic differentiation variational inference. ArXiv:1603.00788. John D. Lafferty and David M. Blei. 2006. Correlated topic models. In Proceedings of NIPS. Hugo Larochelle and Stanislas Lauly. 2012. A neural autoregressive topic model. In Proceedings of NIPS. Jey Han Lau, Timothy Baldwin, and Trevor Cohn. 2017. Topically driven neural language model. In Proceedings of ACL. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of ACL. Jon D. McAuliffe and David M. Blei. 2008. Supervised topic models. In Proceedings of NIPS. Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In Proceedings of ICML. David Mimno and Andrew McCallum. 2008. Topic models conditioned on arbitrary features with Dirichlet-multinomial regression. In Proceedings of UAI. David Newman, Jey Han Lau, Karl Grieser, and Timothy Baldwin. 2010. Automatic evaluation of topic coherence. In Proceedings of ACL. 
Dat Quoc Nguyen, Richard Billingsley, Lan Du, and Mark Johnson. 2015a. Improving topic models with latent feature word representations. In Proceedings of ACL. Viet-An Nguyen, Jordan Boyd-Graber, Philip Resnik, and Kristina Miler. 2015b. Tea party in the house: A hierarchical ideal point topic model and its application to Republican legislators in the 112th congress. In Proceedings of ACL. Viet-An Nguyen, Jordan L. Boyd-Graber, and Philip Resnik. 2013. Lexical and hierarchical topic regression. In Proceedings of NIPS. John William Paisley, David M. Blei, and Michael I. Jordan. 2014. Bayesian nonnegative matrix factorization with stochastic variational inference. In Handbook of Mixed Membership Models and Their Applications, pages 205–224. Daniel Ramage, David Hall, Ramesh Nallapati, and Christopher D. Manning. 2009. Labeled LDA: A supervised topic model for credit attribution in multilabeled corpora. In Proceedings of EMNLP. Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of ICML. Molly Roberts, Brandon Stewart, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis, Shana Gadarian, Bethany Albertson, and David Rand. 2014. Structural topic models for open ended survey responses. American Journal of Political Science, 58:1064–1082. Michal Rosen-Zvi, Thomas Griffiths, Mark Steyvers, and Padhraic Smyth. 2004. The author-topic model for authors and documents. In Proceedings of UAI. Francisco J R Ruiz, Michalis K Titsias, and David M Blei. 2016. The generalized reparameterization gradient. In Proceedings of NIPS. Akash Srivastava and Charles Sutton. 2017. Autoencoding variational inference for topic models. In Proceedings of ICLR. Dustin Tran, Alp Kucukelbir, Adji B. Dieng, Maja Rudolph, Dawen Liang, and David M. Blei. 2016. Edward: A library for probabilistic modeling, inference, and criticism. ArXiv:1610.09787. Hanna Wallach. 2016. Interpretability and measurement. EMNLP Workshop on Natural Language Processing and Computational Social Science. Hanna Wallach, David M. Mimno, and Andrew McCallum. 2009. Rethinking LDA: Why priors matter. In Proceedings of NIPS. Dani Yogatama, Michael Heilman, Brendan O’Connor, Chris Dyer, Bryan R Routledge, and Noah A Smith. 2011. Predicting a scientific community’s response to an article. In Proceedings of EMNLP. Jun Zhu, Amr Ahmed, and Eric P. Xing. 2009. MedLDA: Maximum margin supervised topic models for regression and classification. In Proceedings of ICML.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 197–207 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 197 A Corpus with Multi-Level Annotations of Patients, Interventions and Outcomes to Support Language Processing for Medical Literature Benjamin Nye Northeastern University [email protected] Junyi Jessy Li UT Austin [email protected] Roma Patel Rutgers University [email protected] Yinfei Yang∗ No affiliation [email protected] Iain J. Marshall King’s College London [email protected] Ani Nenkova UPenn [email protected] Byron C. Wallace Northeastern University [email protected] Abstract We present a corpus of 5,000 richly annotated abstracts of medical articles describing clinical randomized controlled trials. Annotations include demarcations of text spans that describe the Patient population enrolled, the Interventions studied and to what they were Compared, and the Outcomes measured (the ‘PICO’ elements). These spans are further annotated at a more granular level, e.g., individual interventions within them are marked and mapped onto a structured medical vocabulary. We acquired annotations from a diverse set of workers with varying levels of expertise and cost. We describe our data collection process and the corpus itself in detail. We then outline a set of challenging NLP tasks that would aid searching of the medical literature and the practice of evidence-based medicine. 1 Introduction In 2015 alone, about 100 manuscripts describing randomized controlled trials (RCTs) for medical interventions were published every day. It is thus practically impossible for physicians to know which is the best medical intervention for a given patient group and condition (Borah et al., 2017; Fraser and Dunstan, 2010; Bastian et al., 2010). This inability to easily search and organize the published literature impedes the aims of evidence based medicine (EBM), which aspires to inform patient care using the totality of relevant evidence. ∗* now at Google Inc. Computational methods could expedite biomedical evidence synthesis (Tsafnat et al., 2013; Wallace et al., 2013) and natural language processing (NLP) in particular can play a key role in the task. Prior work has explored the use of NLP methods to automate biomedical evidence extraction and synthesis (Boudin et al., 2010; Marshall et al., 2017; Ferracane et al., 2016; Verbeke et al., 2012).1 But the area has attracted less attention than it might from the NLP community, due primarily to a dearth of publicly available, annotated corpora with which to train and evaluate models. Here we address this gap by introducing EBMNLP, a new corpus to power NLP models in support of EBM. The corpus, accompanying documentation, baseline model implementations for the proposed tasks, and all code are publicly available.2 EBM-NLP comprises ∼5,000 medical abstracts describing clinical trials, multiply annotated in detail with respect to characteristics of the underlying trial Populations (e.g., diabetics), Interventions (insulin), Comparators (placebo) and Outcomes (blood glucose levels). Collectively, these key informational pieces are referred to as PICO elements; they form the basis for wellformed clinical questions (Huang et al., 2006). We adopt a hybrid crowdsourced labeling strategy using heterogeneous annotators with varying expertise and cost, from laypersons to MDs. Annotators were first tasked with marking text spans that described the respective PICO elements. 
Identified spans were subsequently anno1There is even, perhaps inevitably, a systematic review of such approaches (Jonnalagadda et al., 2015). 2http://www.ccs.neu.edu/home/bennye/ EBM-NLP 198 tated in greater detail: this entailed finer-grained labeling of PICO elements and mapping these onto a normalized vocabulary, and indicating redundancy in the mentions of PICO elements. In addition, we outline several NLP tasks that would directly support the practice of EBM and that may be explored using the introduced resource. We present baseline models and associated results for these tasks. 2 Related Work We briefly review two lines of research relevant to the current effort: work on NLP to facilitate EBM, and research in crowdsourcing for NLP. 2.1 NLP for EBM Prior work on NLP for EBM has been limited by the availability of only small corpora, which have typically provided on the order of a couple hundred annotated abstracts or articles for very complex information extraction tasks. For example, the ExaCT system (Kiritchenko et al., 2010) applies rules to extract 21 aspects of the reported trial. It was developed and validated on a dataset of 182 marked full-text articles. The ACRES system (Summerscales et al., 2011) produces summaries of several trial characteristic, and was trained on 263 annotated abstracts. Hinting at more challenging tasks that can build upon foundational information extraction, Alamri and Stevenson (2015) developed methods for detecting contradictory claims in biomedical papers. Their corpus of annotated claims contains 259 sentences (Alamri and Stevenson, 2016). Larger corpora for EBM tasks have been derived using (noisy) automated annotation approaches. This approach has been used to build, e.g., datasets to facilitate work on Information Retrieval (IR) models for biomedical texts (Scells et al., 2017; Chung, 2009; Boudin et al., 2010). Similar approaches have been used to ‘distantly supervise’ annotation of full-text articles describing clinical trials (Wallace et al., 2016). In contrast to the corpora discussed above, these automatically derived datasets tend to be relatively large, but they include only shallow annotations. Other work attempts to bypass basic extraction tasks and address more complex biomedical QA and (multi-document) summarization problems to support EBM (Demner-Fushman and Lin, 2007; Moll´a and Santiago-Martinez, 2011; Abacha and Zweigenbaum, 2015). Such systems would directly benefit from more accurate extraction of the types codified in the corpus we present here. 2.2 Crowdsourcing Crowdsourcing, which we here define operationally as the use of distributed lay annotators, has shown encouraging results in NLP (Novotney and Callison-Burch, 2010; Sabou et al., 2012). Such annotations are typically imperfect, but methods that aggregate redundant annotations can mitigate this problem (Dalvi et al., 2013; Hovy et al., 2014; Nguyen et al., 2017). Medical articles contain relatively technical content, which intuitively may be difficult for persons without domain expertise to annotate. However, recent promising preliminary work has found that crowdsourced approaches can yield surprisingly high-quality annotations in the domain of EBM specifically (Mortensen et al., 2017; Thomas et al., 2017; Wallace et al., 2017). 3 Data Collection PubMed provides access to the MEDLINE database3 which indexes titles, abstracts and metadata for articles from selected medical journals dating back to the 1970s. 
MEDLINE indexes over 24 million abstracts; the majority of these have been manually assigned metadata which we used to retrieved a set of 5,000 articles describing RCTs with an emphasis on cardiovascular diseases, cancer, and autism. These particular topics were selected to cover a range of common conditions. We decomposed the annotation process into two steps, performed in sequence. First, we acquired labels demarcating spans in the text describing the clinically salient abstract elements mentioned above: the trial Population, the Interventions and Comparators studied, and the Outcomes measured. We collapse Interventions and Comparators into a single category (I). In the second annotation step, we tasked workers with providing more granular (sub-span) annotations on these spans. For each PIO element, all abstracts were annotated with the following four types of information. 1. Spans exhaustive marking of text spans containing information relevant to the respective PIO categories (Stage 1 annotation). 3https://www.nlm.nih.gov/bsd/ pmresources.html 199 Figure 1: Annotation interface for assigning MeSH terms to snippets. 2. Hierarchical labels assignment of more specific labels to subsequences comprising the marked relevant spans (Stage 2 annotation). 3. Repetition grouping of labeled tokens to indicate repeated occurrences of the same information (Stage 2 annotation). 4. MeSH terms assignment of the metadata MeSH terms associated with the abstract to labeled subsequences (Stage 2 annotation).4 We collected annotations for each P, I and O element individually to avoid the cognitive load imposed by switching between label sets, and to reduce the amount of instruction required to begin the task. All annotation was performed using a modified version of the Brat Rapid Annotation Tool (BRAT) (Stenetorp et al., 2012). We include all annotation instructions provided to workers for all tasks in the Appendix. 3.1 Non-Expert (Layperson) Workers For large scale crowdsourcing via recruitment of layperson annotators, we used Amazon Mechanical Turk (AMT). All workers were required to have an overall job approval rate of at least 90%. Each job presented to the workers required the annotation of three randomly selected abstracts from our pool of documents. As we received initial results, we blocked workers who were clearly not following instructions, and we actively recruited the best workers to continue working on our task at a higher pay rate. 4MeSH is a controlled, structured medical vocabulary maintained by the National Library of Medicine. We began by collecting the least technical annotations, moving on to more difficult tasks only after restricting our pool of workers to those with a demonstrated aptitude for the jobs. We obtained annotations from ≥3 different workers for each of the 5,000 abstracts to enable robust inference of reliable labels from noisy data. After performing filtering passes to remove non-RCT documents or those missing relevant data for the second annotation task, we are left with between 4,000 and 5,000 sets of annotations for each PIO element after the second phase of annotation. 3.2 Expert Workers To supplement our larger-scale data collection via AMT, we collected annotations for 200 abstracts for each PIO element from workers with advanced medical training. The idea is for these to serve as reference annotations, i.e., a test set with which to evaluate developed NLP systems. 
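For concreteness, the layered annotations just described (top-level PIO spans, finer-grained sub-span labels, repetition groupings, and MeSH assignments) can be pictured as a nested record per abstract. The sketch below is one hypothetical way to represent such a record in Python; the field names and example values are illustrative assumptions, not the file format of the released corpus.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema for one annotated abstract; names are illustrative
# and do not reflect the official EBM-NLP release format.

@dataclass
class SubSpan:
    start: int                  # token offset where the sub-span begins
    end: int                    # token offset where the sub-span ends (exclusive)
    label: str                  # fine-grained label, e.g. "CONDITION (DISEASE)"
    mesh_terms: List[str] = field(default_factory=list)   # MeSH terms instantiated here
    repetition_group: int = -1  # sub-spans sharing an id repeat the same information

@dataclass
class PIOSpan:
    element: str                # "P", "I" or "O"
    start: int
    end: int
    subspans: List[SubSpan] = field(default_factory=list)

@dataclass
class AnnotatedAbstract:
    pmid: str                   # PubMed identifier of the abstract
    tokens: List[str]           # tokenized abstract text
    spans: List[PIOSpan] = field(default_factory=list)

# Example: a Participants span covering tokens 0-13, with two sub-spans.
doc = AnnotatedAbstract(
    pmid="12345678",
    tokens="Fourteen children ( 12 infantile autism ... )".split(),
    spans=[PIOSpan("P", 0, 14, subspans=[
        SubSpan(0, 1, "SAMPLE SIZE (FULL)"),
        SubSpan(4, 6, "CONDITION (DISEASE)", ["Autistic Disorder"]),
    ])],
)
```

A record of this shape makes it straightforward to iterate over spans and sub-spans when computing the agreement and frequency statistics reported in the following sections.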
We plan to enlarge this test set in the near future, at which point we will update the website accordingly. For the initial span labeling task, two medical students from the University of Pennsylvania and Drexel University provided the reference labels. In addition, for both stages of annotation and for the detailed subspan annotation in Stage 2, we hired three medical professionals via Upwork,5 an online platform for hiring skilled freelancers. After reviewing several dozen suggested profiles, we selected three workers that had the following characteristics: Advanced medical training (the majority of hired workers were Medical Doc5http://www.upwork.com 200 tors, the one exception being a fourth-year medical student); Strong technical reading and writing skills; And an interest in medical research. In addition to providing high-quality annotations, individuals hired via Upwork also provided feedback regarding the instructions to help make the task as clear as possible for the AMT workers. 4 The Corpus We now present corpus details, paying special attention to worker performance and agreement. We discuss and present statistics for acquired annotations on spans, tokens, repetition and MeSH terms in Sections 4.1, 4.2, 4.3, and 4.4, respectively. 4.1 Spans For each P, I and O element, workers were asked to read the abstract and highlight all spans of text including any pertinent information. Annotations for 5,000 articles were collected from a total of 579 AMT workers across the three annotation types, and expert annotations were collected for 200 articles from two medical students. We first evaluate the quality of the annotations by calculating token-wise label agreement between the expert annotators; this is reported in Table 2. Due to the difficulty and technicality of the material, agreement between even well-trained domain experts is imperfect. The effect is magnified by the unreliability of AMT workers, motivating our strategy of collecting several noisy annotations and aggregating over them to produce a single cleaner annotation. We tested three different aggregation strategies: a simple majority vote, the Dawid-Skene model (Dawid and Skene, 1979) which estimates worker reliability, and HMMCrowd, a recent extension to Dawid-Skene that includes a HMM component, thus explicitly leveraging the sequential structure of contiguous spans of words (Nguyen et al., 2017). For each aggregation strategy, we compute the token-wise precision and recall of the output labels against the unioned expert labels. As shown in Table 3, the HMMCrowd model afforded modest improvement in F-1 scores over the standard Dawid-Skene model, and was thus used to generate the inputs for the second annotation phase. The limited overlap in the document subsets annotated by any given pair of workers, and wide variation in the number of annotations per worker make interpretation of standard agreement statisOutcomes Physical Health Pain Adverse Effects Mortality Mental/Behavioral Impact Mental Health Participant Behavior Satisfaction With Care Non-health Outcome Quality of Intervention Resource Use Withdrawals from Study Figure 2: Outcome task label hierarchy tics tricky. We quantify the centrality of the AMT span annotations by calculating token-wise precision and recall for each annotation against the aggregated version of the labels (Table 4). 
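For reference, the sketch below illustrates the kind of computation behind Tables 3 and 4: a simple majority-vote aggregator over per-token inside/outside labels, the union of expert labels, and token-wise precision/recall/F-1. It is a minimal illustration under the assumption of binary token labels, not the evaluation code used for the reported numbers.

```python
from typing import List

def majority_vote(worker_labels: List[List[int]]) -> List[int]:
    """Aggregate per-token 0/1 labels from several workers by strict majority vote."""
    n_workers = len(worker_labels)
    return [1 if sum(col) * 2 > n_workers else 0 for col in zip(*worker_labels)]

def union_labels(expert_labels: List[List[int]]) -> List[int]:
    """Union of expert annotations: a token counts as relevant if any expert marked it."""
    return [1 if any(col) else 0 for col in zip(*expert_labels)]

def token_prf(pred: List[int], gold: List[int]):
    """Token-wise precision, recall and F-1 of predicted labels against gold labels."""
    tp = sum(p and g for p, g in zip(pred, gold))
    fp = sum(p and not g for p, g in zip(pred, gold))
    fn = sum(g and not p for p, g in zip(pred, gold))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: three crowd workers, two experts, six tokens.
crowd = [[1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0], [0, 1, 0, 0, 1, 1]]
experts = [[1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 0, 0]]
print(token_prf(majority_vote(crowd), union_labels(experts)))
```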
When comparing the average precision and recall for individual crowdworkers against the aggregated labels in Table 4, scores are poor showing very low agreement between the workers. Despite this, the aggregated labels compare favorably against the expert labels. This further supports the intuition that it is feasible to collect multiple lowquality annotations for a document and synthesize them to extract the signal from the noise. On the dataset website, we provide a variant of the corpus that includes all individual worker span annotations (e.g., for researchers interested in crowd annotation aggregated methods), and also a version with pre-aggregated annotations for convenience. 4.2 Hierarchical Labels For each P, I, and O category we developed a hierarchy of labels intended to capture important sub categories within these. Our labels are aligned to (and thus compatible with) the concepts codified by the Medical Subject Headings (MeSH) vocabulary of medical terms maintained by the National Library of Medicine (NLM).6 In consulta6https://www.nlm.nih.gov/mesh/ 201 P Fourteen children (12 infantile autism full syndrome present, 2 atypical pervasive developmental disorder) between 5 and 13 years of age Text Label MeSH terms – Fourteen SAMPLE SIZE (FULL) – children AGE (YOUNG) – 12 SAMPLE SIZE (PARTIAL) – autism CONDITION (DISEASE) Autistic Disorder, Child Development Disorders Pervasive – 2 SAMPLE SIZE (PARTIAL) – 5 and 13 AGE (YOUNG) I 20 mg Org 2766 (synthetic analog of ACTH 4-9)/day during 4 weeks, or placebo in a randomly assigned sequence. Text Label MeSH terms – 20 mg Org 2766 PHARMACOLOGICAL Adrenocorticotropic Hormone, Double-Blind Method, Child Development Disorders Pervasive – placebo CONTROL Double-Blind Method O Drug effects and Aberrant Behavior Checklist ratings Text Label MeSH terms – Drug effects QUALITY OF INTERVENTION – Aberrant Behavior Checklist ratings MENTAL (BEHAVIOR) Attention, Stereotyped Behavior Table 1: Partial example annotation for Participants, Interventions, and Outcomes. The full annotation includes multiple top-level spans for each PIO element as well as labels for repetition. Agreement Participants 0.71 Interventions 0.69 Outcomes 0.62 Table 2: Cohen’s κ between medical students for the 200 reference documents. Participants Precision Recall F-1 Majority Vote 0.903 0.507 0.604 Dawid Skene 0.840 0.641 0.686 HMMCrowd 0.719 0.761 0.698 Interventions Precision Recall F-1 Majority Vote 0.843 0.432 0.519 Dawid Skene 0.755 0.623 0.650 HMMCrowd 0.644 0.800 0.683 Outcomes Precision Recall F-1 Majority Vote 0.711 0.577 0.623 Dawid Skene 0.652 0.648 0.629 HMMCrowd 0.498 0.807 0.593 Table 3: Precision, recall and F-1 for aggregated AMT spans evaluated against the union of expert span labels, for all three P, I, and O elements. tion with domain experts, we selected subsets of MeSH terms for each PIO category that captured relatively precise information without being overwhelming. For illustration, we show the outcomes label hierarchy we used in Figure 2. We reproduce the label hierarchies used for all PIO categories in the Appendix. At this stage, workers were presented with abstracts in which relevant spans were highlighted, based on the annotations collected in the first annotation phase (and aggregated via the HMMPrecision Recall F-1 Participants 0.34 0.29 0.30 Interventions 0.20 0.16 0.18 Outcomes 0.11 0.10 0.10 Table 4: Token-wise statistics for individual AMT annotations evaluated against the aggregated versions. 
Span frequency AMT Experts Participants 34.5 21.4 Interventions 26.5 14.3 Outcomes 33.0 26.9 Table 5: Average per-document frequency of different token labels. Crowd model). This two-step approach served dual purposes: (i) increasing the rate at which workers could complete tasks, and (ii) improving recall by directing workers to all areas in abstracts where they might find the structured information of interest. Our choice of a high recall aggregation strategy for the starting spans ensured that the large majority of relevant sections of the article were available as inputs to this task. The three trained medical personnel hired via Upwork each annotated 200 documents and reported that spans sufficiently captured the target information. These domain experts received feedback and additional training after labeling an initial round of documents, and all annotations were reviewed for compliance. The average inter202 annotator agreement is reported in Table 6. Agreement Participants 0.50 Interventions 0.59 Outcomes 0.51 Table 6: Average pair-wise Cohen’s κ between three medical experts for the 200 reference documents. With respect to crowdsourcing on AMT, the task for Participants was published first, allowing us to target higher quality workers for the more technical Interventions and Outcomes annotations. We retained labels from 118 workers for Participants, the top 67 of whom were invited to continue on to the following tasks. Of these, 37 continued to contribute to the project. Several workers provided ≥1,000 annotations and continued to work on the task over a period of several months. To produce final per-token labels, we again turned to aggregation. The subspans annotated in this second pass were by construction shorter than the starting spans, and (perhaps as a result) informal experiments revealed little benefit from HMMCrowd’s sequential modeling aspect. The introduction of many label types significantly increased the complexity of the task, resulting in both lower expert inter-annotator agreement (Table 6 and decreased performance when comparing the crowdsourced labels against those of the experts (Table 7. Participants Precision Recall F-1 Majority Vote 0.46 0.58 0.51 Dawid Skene 0.66 0.60 0.63 Interventions Precision Recall F-1 Majority Vote 0.56 0.49 0.52 Dawid Skene 0.56 0.52 0.54 Outcomes Precision Recall F-1 Majority Vote 0.73 0.69 0.71 Dawid Skene 0.73 0.80 0.76 Table 7: Precision, recall, and F-1 for AMT labels against expert labels using different aggregation strategies. Most observed token-level disagreements (and errors, with respect to reference annotations) involve differences in the span lengths demarcated by individuals. For example, many abstracts contain an information-dense description of the patient population, focusing on their medical condition but also including information about their sex and/or age. Workers would also sometimes fail Figure 3: Confusion matrix for token-level labels provided by experts. to capture repeated mentions of the same information, producing Type 2 errors more frequently than Type 1. This tendency can be seen in the overall token-level confusion matrix for AMT workers on the Participants task, shown in Figure 3. In a similar though more benign category of error, workers differed in the amount of context they included surrounding each subspan. Although the instructions asked workers to highlight minimal subspans, there was variance in what workers considered relevant. 
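The agreement values in Tables 2 and 6 are pairwise Cohen's κ over token-level labels. A minimal sketch of that statistic (not the authors' script; any categorical label set works) is shown below.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators' token-level label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independence, from each annotator's label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

# Toy example with inside (I) / outside (O) span labels for eight tokens.
a = ["I", "I", "O", "O", "I", "O", "O", "I"]
b = ["I", "O", "O", "O", "I", "O", "I", "I"]
print(round(cohens_kappa(a, b), 3))   # 0.5 for this toy pair
```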
Precision Recall F-1 Participants 0.39 0.71 0.50 Interventions 0.59 0.60 0.60 Outcomes 0.70 0.68 0.69 Table 8: Statistics for individual AMT annotations evaluated against the aggregated versions, macroaveraged over different labels. For the same reasons mentioned above (little pairwise overlap in annotations, high variance with respect to annotations per worker), quantifying agreement between AMT workers is again difficult using traditional measures. We thus again take as a measure of agreement the precision, recall, and F-1 of the individual annotations against the aggregated labels and present the results in Table 8. 4.3 Repetition Medical abstracts often mention the same information in multiple places. In particular, interventions and outcomes are typically described at the beginning of an abstract when introducing the purpose of the underlying study, and then again when discussing methods and results. It is important to 203 Span frequency Participants AMT Experts TOTAL 3.45 6.25 Age 0.49 0.66 Condition 1.77 3.69 Gender 0.36 0.34 Sample Size 0.83 1.55 Interventions AMT Experts TOTAL 6.11 9.31 Behavioral 0.22 0.37 Control 0.83 0.94 Educational 0.04 0.07 No Label 0.00 0.00 Other 0.23 1.12 Pharmacological 3.37 5.19 Physical 0.87 0.88 Psychological 0.29 0.19 Surgical 0.24 0.62 Outcomes AMT Experts TOTAL 6.36 10.00 Adverse effects 0.45 0.66 Mental 0.69 0.79 Mortality 0.23 0.33 Other 1.77 3.70 Pain 0.18 0.27 Physical 3.03 4.25 Table 9: Average per-document frequency of different label types. be able to differentiate between novel and reiterated information, especially in cases such as complex interventions, distinct measured outcomes, or multi-armed trials. Merely identifying all occurrences of, for example, a pharmacological intervention leaves ambiguity as to how many distinct interventions were applied. Workers identified repeated information as follows. After completing detailed labeling of abstract spans, they were asked to group together subspans that were instances of the same information (for example, redundant mentions of a particular drug evaluated as one of the interventions in the trial). This process produces labels for repetition between short spans of tokens. Due to the differences in the lengths of annotated subspans discussed in the preceding section, the labels are not naturally comparable between workers without directly modeling the entities contained in each subspan. The labels assigned by workers produce repetition labels between sets of tokens but a more sophisticated notion of co-reference is required to identify which tokens correctly represent the entity contained in the span, and which tokens are superfluous noise. As a proxy for formally enumerating these entities, we observe that a large majority of startPrecision Recall F-1 Participants 0.40 0.77 0.53 Interventions 0.63 0.90 0.74 Outcomes 0.47 0.73 0.57 Table 10: Comparison against the majority vote for span-level repetition labels. ing spans only contain a single target relevant to the subspan labeling task, and so identifying repetition between the starting spans is sufficient. 
For example, consider the starting intervention span ”underwent conventional total knee arthroplasty”; there is only one intervention in the span but some annotators assigned the SURGICAL label to all five tokens while others opted for only ”total knee arthroplasty.” By analyzing repetition at the level of the starting spans, we can compute agreement without concern for the confounds of slight misalignments or differences in length of the subspans. Overall agreement between AMT workers for span-level repetition, measured by computing precision and recall against the majority vote for each pair of spans, is reported in Table 10. 4.4 MeSH Terms The National Library of Medicine maintains an extensive hierarchical ontology of medical concepts called Medical Subject Headings (MeSH terms); this is part of the overarching Metathesaurus of the Unified Medical Language System (UMLS). Personnel at the NLM manually assign citations (article titles, abstracts and meta-data) indexed in MEDLINE relevant MeSH terms. These terms have been used extensively to evaluate the content of articles, and are frequently used to facilitate document retrieval (Lu et al., 2009; Lowe and Barnett, 1994). In the case of randomized controlled trials, MeSH terms provide structured information regarding key aspects of the underlying studies, ranging from participant demographics to methodologies to co-morbidities. A drawback to these annotations, however, is that they are applied at the document (rather than snippet or token) level. To capture where MeSH terms are instantiated within a given abstract text, we provided a list of all terms associated with said article and instructed workers to select the subset of these that applied to each set of token labels that they annotated. MeSH terms are domain specific and many re204 Figure 4: Histogram of the number of documents containing each MeSH term. Inst. Freq 10% 25% 50% Participants 65 24 7 Interventions 106 68 32 Outcomes 118 108 75 Table 11: The number of common MeSH terms (out of 135) that were assigned to a span of text in at least 10%, 25%, and 50% of the possible documents. quire a medical background to understand, thus rendering this facet of the annotation process particularly difficult for untrained (lay) workers. Perhaps surprisingly, several AMT workers voluntarily mentioned relevant background training; our pool of workers included (self-identified) nurses and other trained medical professionals. A few workers with such training stated this background as a reason for their interest in our tasks. The technical specificity of the more obscure MeSH terms is also exacerbated by their sparsity. Of the 6,963 unique MeSH terms occurring in our set of abstracts, 87% of them are only found in 10 documents or fewer and only 2.0% occur in at least 1% of the total documents. The full distribution of document frequency for MeSH terms is show in Figure 4. To evaluate how often salient MeSH terms were instantiated in the text by annotators we consider only the 135 MeSH terms that occur in at least 1% of abstracts (we list these in the supplementary material). For each term, we calculate its ”instantiation frequency” as the percentage of abstracts containing the term in which at least one annotator assigned it to a span of text. The total numbers of MeSH terms with an instantiation rate above different thresholds for the respective PIO elements are shown in Table 11. 
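A minimal sketch of the instantiation-frequency statistic behind Table 11 follows: for each MeSH term, the fraction of abstracts tagged with that term in which at least one annotator attached it to a labeled span. The dictionary-based data layout is an assumption made for illustration.

```python
from collections import defaultdict

def instantiation_frequency(doc_mesh_terms, doc_instantiated_terms):
    """For each MeSH term, the fraction of documents tagged with the term in which
    at least one annotator assigned it to a span of text.

    doc_mesh_terms: dict doc_id -> set of MeSH terms attached to the citation.
    doc_instantiated_terms: dict doc_id -> set of MeSH terms that some annotator
        assigned to at least one labeled span in that document.
    """
    tagged, instantiated = defaultdict(int), defaultdict(int)
    for doc_id, terms in doc_mesh_terms.items():
        for term in terms:
            tagged[term] += 1
            if term in doc_instantiated_terms.get(doc_id, set()):
                instantiated[term] += 1
    return {t: instantiated[t] / tagged[t] for t in tagged}

# Toy example with two documents and two MeSH terms.
mesh = {"d1": {"Autistic Disorder", "Double-Blind Method"},
        "d2": {"Autistic Disorder"}}
inst = {"d1": {"Autistic Disorder"}, "d2": set()}
print(sorted(instantiation_frequency(mesh, inst).items()))
# [('Autistic Disorder', 0.5), ('Double-Blind Method', 0.0)]
```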
5 Tasks & Baselines We outline a few NLP tasks that are central to the aim of processing medical literature generally and to aiding practitioners of EBM specifically. First, we consider the task of identifying spans in abstracts that describe the respective PICO elements (Section 5.1). This would, e.g., improve medical literature search and retrieval systems. Next, we outline the problem of extracting structured information from abstracts (Section 5.2). Such models would further aid search, and might eventually facilitate automated knowledge-base construction for the clinical trials literature. Furthermore, automatic extraction of structured data would enable automation of the manual evidence synthesis process (Marshall et al., 2017). Finally, we consider the challenging task of identifying redundant mentions of the same PICO element (Section 5.3). This happens, e.g., when an intervention is mentioned by the authors repeatedly in an abstract, potentially with different terms. Achieving such disambiguation is important for systems aiming to induce structured representations of trials and their results, as this would require recognizing and normalizing the unique interventions and outcomes studied in a trial. For each of these tasks we present baseline models and corresponding results. Note that we have pre-defined train, development and test sets across PIO elements for this corpus, comprising 4300, 500 and 200 abstracts, respectively. The latter set is annotated by domain experts (i.e., persons with medical training). These splits will, of course, be distributed along with the dataset to facilitate model comparisons. 5.1 Identifying P, I and O Spans We consider two baseline models: a linear Conditional Random Field (CRF) (Lafferty et al., 2001) and a Long Short-Term Memory (LSTM) neural tagging model, an LSTM-CRF (Lample et al., 2016; Ma and Hovy, 2016). In both models, we treat tokens as being either Inside (I) or Outside (O) of spans. For the CRF, features include: indicators for the current, previous and next words; part of speech tags inferred using the Stanford CoreNLP tagger (Manning et al., 2014); and character information, e.g., whether a token contains digits, uppercase letters, symbols and so on. For the neural model, the model induces features via a bi-directional LSTM that consumes distributed vector representations of input tokens sequentially. The bi-LSTM yields a hidden vector at 205 CRF Precision Recall F-1 Participants 0.55 0.51 0.53 Interventions 0.65 0.21 0.32 Outcomes 0.83 0.17 0.29 LSTM-CRF Precision Recall F-1 Participants 0.78 0.66 0.71 Interventions 0.61 0.70 0.65 Outcomes 0.69 0.58 0.63 Table 12: Baseline models (on the test set) for the PIO span tagging task. LogReg Precision Recall F-1 Participants 0.41 0.20 0.26 Interventions 0.79 0.44 0.57 Outcomes 0.24 0.21 0.22 CRF Precision Recall F-1 Participants 0.41 0.25 0.31 Interventions 0.59 0.15 0.21 Outcomes 0.60 0.51 0.55 Table 13: Baseline models for the token-level, detailed labeling task. each token index, which is then passed to a CRF layer for prediction. We also exploit characterlevel information by passing a bi-LSTM over the characters comprising each word (Lample et al., 2016); these are appended to the word embedding representations before being passed through the bi-LSTM. 
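To ground the feature description for the linear CRF baseline, the sketch below constructs per-token feature dictionaries of the kind listed above (current/previous/next word indicators and simple character cues); the part-of-speech features from the external tagger are omitted. This is an illustrative re-implementation, not the released baseline code.

```python
def token_features(tokens, i):
    """Hand-crafted features for token i, in the spirit of the linear CRF baseline:
    indicators for the current, previous and next words, plus character shape cues."""
    word = tokens[i]
    return {
        "word": word.lower(),
        "prev_word": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next_word": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
        "has_digit": any(c.isdigit() for c in word),
        "has_upper": any(c.isupper() for c in word),
        "has_symbol": any(not c.isalnum() for c in word),
    }

def sentence_features(tokens):
    return [token_features(tokens, i) for i in range(len(tokens))]

# Example: feature dicts for a short abstract fragment; these would be paired with
# inside/outside (I/O) tags per token and fed to a CRF toolkit for training.
tokens = "Fourteen children received 20 mg Org 2766 daily".split()
for tok, feats in zip(tokens, sentence_features(tokens)):
    print(tok, feats)
```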
5.2 Extracting Structured Information Beyond identifying the spans of text containing information pertinent to each of the PIO elements, we consider the task of predicting which of the detailed labels occur in each span, and where they are located. Specifically, we begin with the starting spans and predict a single label from the corresponding PIO hierarchy for each token, evaluating against the test set of 200 documents. Initial experiments with neural models proved unfruitful but bear further investigation. For the CRF model we include the same features as in the previous model, supplemented with additional features encoding if the adjacent tokens include any parenthesis or mathematical operators (specifically: %, +, −). For the logistic regression model, we use a one-vs-rest approach. Features include token n-grams, part of speech indicators, and the same character-level information as in the CRF model. 5.3 Detecting Repetition To formalize repetition, we consider every pair of starting PIO spans from each abstract, and assign Precision Recall F-1 Participants 0.39 0.52 0.44 Interventions 0.41 0.50 0.45 Outcomes 0.10 0.16 0.12 Table 14: Baseline model for predicting whether pairs of spans contain redundant information. binary labels that indicate whether they share at least one instance of the same information. Although this makes prediction easier for long and information-dense spans, a large enough majority of the spans contain only a single instance of relevant information that the task serves as a reasonable baseline. Again, the model is trained on the aggregated labels collected from AMT and evaluated against the high-quality test set. We train a logistic regression model that operates over standard features, including bag-ofwords representations and sentence-level features such as length and position in the document. All baseline model implementations are available on the corpus website. 6 Conclusions We have presented EBM-NLP: a new, publicly available corpus comprising 5,000 richly annotated abstracts of articles describing clinical randomized controlled trials. This dataset fills a need for larger scale corpora to facilitate research on NLP methods for processing the biomedical literature, which have the potential to aid the conduct of EBM. The need for such technologies will only become more pressing as the literature continues its torrential growth. The EBM-NLP corpus, accompanying documentation, code for working with the data, and baseline models presented in this work are all publicly available at: http://www.ccs.neu. edu/home/bennye/EBM-NLP. 7 Acknowledgements This work was supported in part by the National Cancer Institute (NCI) of the National Institutes of Health (NIH), award number UH2CA203711. References Asma Ben Abacha and Pierre Zweigenbaum. 2015. Means: A medical question-answering system combining nlp techniques and semantic web technologies. Information processing & management, 51(5):570–594. 206 Abdulaziz Alamri and Mark Stevenson. 2015. Automatic detection of answers to research questions from medline. Proceedings of the workshop on Biomedical Natural Language Processing (BioNLP), pages 141–146. Abdulaziz Alamri and Mark Stevenson. 2016. A corpus of potentially contradictory research claims from cardiovascular research abstracts. Journal of biomedical semantics, 7(1):36. Hilda Bastian, Paul Glasziou, and Iain Chalmers. 2010. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS medicine, 7(9):e1000326. 
Rohit Borah, Andrew W Brown, Patrice L Capers, and Kathryn A Kaiser. 2017. Analysis of the time and workers needed to conduct systematic reviews of medical interventions using data from the prospero registry. BMJ open, 7(2):e012545. Florian Boudin, Jian-Yun Nie, and Martin Dawes. 2010. Positional language models for clinical information retrieval. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 108–115. Association for Computational Linguistics. Grace Y Chung. 2009. Sentence retrieval for abstracts of randomized controlled trials. BMC medical informatics and decision making, 9(1):10. Nilesh Dalvi, Anirban Dasgupta, Ravi Kumar, and Vibhor Rastogi. 2013. Aggregating crowdsourced binary ratings. In Proceedings of the International Conference on World Wide Web (WWW), pages 285– 294. ACM. Alexander Philip Dawid and Allan M Skene. 1979. Maximum likelihood estimation of observer errorrates using the em algorithm. Applied statistics, pages 20–28. Dina Demner-Fushman and Jimmy Lin. 2007. Answering clinical questions with knowledge-based and statistical techniques. Computational Linguistics, 33(1):63–103. Elisa Ferracane, Iain Marshall, Byron C Wallace, and Katrin Erk. 2016. Leveraging coreference to identify arms in medical abstracts: An experimental study. In Proceedings of the Seventh International Workshop on Health Text Mining and Information Analysis, pages 86–95. Alan G Fraser and Frank D Dunstan. 2010. On the impossibility of being expert. British Medical Journal, 341:c6815. Dirk Hovy, Barbara Plank, and Anders Søgaard. 2014. Experiments with crowdsourced re-annotation of a pos tagging data set. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (volume 2: Short Papers), volume 2, pages 377–382. Xiaoli Huang, Jimmy Lin, and Dina Demner-Fushman. 2006. Evaluation of PICO as a knowledge representation for clinical questions. In AMIA annual symposium proceedings, volume 2006, page 359. American Medical Informatics Association. Siddhartha R Jonnalagadda, Pawan Goyal, and Mark D Huffman. 2015. Automating data extraction in systematic reviews: a systematic review. Systematic reviews, 4(1):78. Svetlana Kiritchenko, Berry de Bruijn, Simona Carini, Joel Martin, and Ida Sim. 2010. Exact: automatic extraction of clinical trial characteristics from journal publications. BMC medical informatics and decision making, 10(1):56. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT, pages 260–270. Henry J Lowe and G Octo Barnett. 1994. Understanding and using the medical subject headings (mesh) vocabulary to perform literature searches. Jama, 271(14):1103–1108. Zhiyong Lu, Won Kim, and W John Wilbur. 2009. Evaluation of query expansion using mesh in pubmed. Information retrieval, 12(1):69–80. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074, Berlin, Germany. Association for Computational Linguistics. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, System Demonstrations, pages 55–60. Iain Marshall, Jo¨el Kuiper, Edward Banner, and Byron C. Wallace. 2017. Automating Biomedical Evidence Synthesis: RobotReviewer. In Proceedings of the Association for Computational Linguistics (ACL), System Demonstrations, pages 7–12. Association for Computational Linguistics (ACL). Diego Moll´a and Maria Elena Santiago-Martinez. 2011. Development of a corpus for evidence based medicine summarisation. Michael L Mortensen, Gaelen P Adam, Thomas A Trikalinos, Tim Kraska, and Byron C Wallace. 2017. An exploration of crowdsourcing citation screening for systematic reviews. Research synthesis methods, 8(3):366–386. 207 An T Nguyen, Byron C Wallace, Junyi Jessy Li, Ani Nenkova, and Matthew Lease. 2017. Aggregating and predicting sequence labels from crowd annotations. In Proceedings of the conference. Association for Computational Linguistics. Meeting, volume 2017, page 299. NIH Public Access. Scott Novotney and Chris Callison-Burch. 2010. Cheap, fast and good enough: Automatic speech recognition with non-expert transcription. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 207–215. Association for Computational Linguistics. Marta Sabou, Kalina Bontcheva, and Arno Scharl. 2012. Crowdsourcing research opportunities: lessons from natural language processing. In Proceedings of the 12th International Conference on Knowledge Management and Knowledge Technologies, page 17. ACM. Harrisen Scells, Guido Zuccon, Bevan Koopman, Anthony Deacon, Leif Azzopardi, and Shlomo Geva. 2017. A test collection for evaluating retrieval of studies for inclusion in systematic reviews. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1237–1240. ACM. Pontus Stenetorp, Sampo Pyysalo, Goran Topi´c, Tomoko Ohta, Sophia Ananiadou, and Jun’ichi Tsujii. 2012. Brat: a web-based tool for nlp-assisted text annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 102–107. Association for Computational Linguistics. Rodney L Summerscales, Shlomo Argamon, Shangda Bai, Jordan Hupert, and Alan Schwartz. 2011. Automatic summarization of results from clinical trials. In Bioinformatics and Biomedicine (BIBM), 2011 IEEE International Conference on, pages 372–377. IEEE. James Thomas, Anna Noel-Storr, Iain Marshall, Byron Wallace, Steven McDonald, Chris Mavergames, Paul Glasziou, Ian Shemilt, Anneliese Synnot, Tari Turner, et al. 2017. Living systematic reviews: 2. combining human and machine effort. Journal of clinical epidemiology, 91:31–37. Guy Tsafnat, Adam Dunn, Paul Glasziou, Enrico Coiera, et al. 2013. The automation of systematic reviews. BMJ, 346(f139):1–2. Mathias Verbeke, Vincent Van Asch, Roser Morante, Paolo Frasconi, Walter Daelemans, and Luc De Raedt. 2012. A statistical relational learning approach to identifying evidence based medicine categories. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 579–589. Association for Computational Linguistics. Byron C Wallace, Issa J Dahabreh, Christopher H Schmid, Joseph Lau, and Thomas A Trikalinos. 2013. 
Modernizing the systematic review process to inform comparative effectiveness: tools and methods. Journal of comparative effectiveness research, 2(3):273–282. Byron C Wallace, Joël Kuiper, Aakash Sharma, Mingxi Brian Zhu, and Iain J Marshall. 2016. Extracting PICO sentences from clinical trial reports using supervised distant supervision. Journal of Machine Learning Research, 17(132):1–25. Byron C Wallace, Anna Noel-Storr, Iain J Marshall, Aaron M Cohen, Neil R Smalheiser, and James Thomas. 2017. Identifying reports of randomized controlled trials (RCTs) via a hybrid machine learning and crowdsourcing approach. Journal of the American Medical Informatics Association, 24(6):1165–1168.
2018
19
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2041–2050 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2041 NASH: Toward End-to-End Neural Architecture for Generative Semantic Hashing Dinghan Shen1∗, Qinliang Su2∗, Paidamoyo Chapfuwa1, Wenlin Wang1, Guoyin Wang1, Lawrence Carin1, Ricardo Henao1 1 Duke University 2 Sun Yat-sen University [email protected] Abstract Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems. While fairly successful, previous techniques generally require two-stage training, and the binary constraints are handled ad-hoc. In this paper, we present an end-to-end Neural Architecture for Semantic Hashing (NASH), where the binary hashing codes are treated as Bernoulli latent variables. A neural variational inference framework is proposed for training, where gradients are directly backpropagated through the discrete latent variable to optimize the hash function. We also draw connections between proposed method and rate-distortion theory, which provides a theoretical foundation for the effectiveness of the proposed framework. Experimental results on three public datasets demonstrate that our method significantly outperforms several state-of-the-art models on both unsupervised and supervised scenarios. 1 Introduction The problem of similarity search, also called nearest-neighbor search, consists of finding documents from a large collection of documents, or corpus, which are most similar to a query document of interest. Fast and accurate similarity search is at the core of many information retrieval applications, such as plagiarism analysis (Stein et al., 2007), collaborative filtering (Koren, 2008), content-based multimedia retrieval (Lew et al., 2006) and caching (Pandey et al., 2009). Semantic hashing is an effective approach for fast similarity search (Salakhutdinov and Hinton, 2009; Zhang ∗Equal contribution. et al., 2010; Wang et al., 2014). By representing every document in the corpus as a similaritypreserving discrete (binary) hashing code, the similarity between two documents can be evaluated by simply calculating pairwise Hamming distances between hashing codes, i.e., the number of bits that are different between two codes. Given that today, an ordinary PC is able to execute millions of Hamming distance computations in just a few milliseconds (Zhang et al., 2010), this semantic hashing strategy is very computationally attractive. While considerable research has been devoted to text (semantic) hashing, existing approaches typically require two-stage training procedures. These methods can be generally divided into two categories: (i) binary codes for documents are first learned in an unsupervised manner, then l binary classifiers are trained via supervised learning to predict the l-bit hashing code (Zhang et al., 2010; Xu et al., 2015); (ii) continuous text representations are first inferred, which are binarized as a second (separate) step during testing (Wang et al., 2013; Chaidaroon and Fang, 2017). Because the model parameters are not learned in an end-to-end manner, these two-stage training strategies may result in suboptimal local optima. This happens because different modules within the model are optimized separately, preventing the sharing of information between them. 
Further, in existing methods, binary constraints are typically handled adhoc by truncation, i.e., the hashing codes are obtained via direct binarization from continuous representations after training. As a result, the information contained in the continuous representations is lost during the (separate) binarization process. Moreover, training different modules (mapping and classifier/binarization) separately often requires additional hyperparameter tuning for each training stage, which can be laborious and timeconsuming. 2042 In this paper, we propose a simple and generic neural architecture for text hashing that learns binary latent codes for documents in an end-toend manner. Inspired by recent advances in neural variational inference (NVI) for text processing (Miao et al., 2016; Yang et al., 2017; Shen et al., 2017b), we approach semantic hashing from a generative model perspective, where binary (hashing) codes are represented as either deterministic or stochastic Bernoulli latent variables. The inference (encoder) and generative (decoder) networks are optimized jointly by maximizing a variational lower bound to the marginal distribution of input documents (corpus). By leveraging a simple and effective method to estimate the gradients with respect to discrete (binary) variables, the loss term from the generative (decoder) network can be directly backpropagated into the inference (encoder) network to optimize the hash function. Motivated by the rate-distortion theory (Berger, 1971; Theis et al., 2017), we propose to inject data-dependent noise into the latent codes during the decoding stage, which adaptively accounts for the tradeoff between minimizing rate (number of bits used, or effective code length) and distortion (reconstruction error) during training. The connection between the proposed method and ratedistortion theory is further elucidated, providing a theoretical foundation for the effectiveness of our framework. Summarizing, the contributions of this paper are: (i) to the best of our knowledge, we present the first semantic hashing architecture that can be trained in an end-to-end manner; (ii) we propose a neural variational inference framework to learn compact (regularized) binary codes for documents, achieving promising results on both unsupervised and supervised text hashing; (iii) the connection between our method and rate-distortion theory is established, from which we demonstrate the advantage of injecting data-dependent noise into the latent variable during training. 2 Related Work Models with discrete random variables have attracted much attention in the deep learning community (Jang et al., 2016; Maddison et al., 2016; van den Oord et al., 2017; Li et al., 2017; Shu and Nakayama, 2017). Some of these structures are more natural choices for language or speech data, which are inherently discrete. 
Figure 1: NASH for end-to-end semantic hashing. The inference network maps x → z using an MLP and the generative network recovers x as z → x̂.

More specifically, van den Oord et al. (2017) combined VAEs with vector quantization to learn discrete latent representations, and demonstrated the utility of these learned representations on images, videos, and speech data. Li et al. (2017) leveraged both pairwise label and classification information to learn discrete hash codes, which exhibit state-of-the-art performance on image retrieval tasks. For natural language processing (NLP), although significant research has been devoted to learning continuous deep representations for words and documents (Mikolov et al., 2013; Kiros et al., 2015; Shen et al., 2018), discrete neural representations have mainly been explored for learning word embeddings (Shu and Nakayama, 2017; Chen et al., 2017). In these recent works, words are represented as vectors of discrete numbers, which are very efficient storage-wise while showing performance comparable to continuous word embeddings on several NLP tasks. However, discrete representations learned in an end-to-end manner at the sentence or document level have rarely been explored, and there has been little rigorous evaluation of their effectiveness. Our work focuses on learning discrete (binary) representations for text documents.
Further, we employ semantic hashing (fast similarity search) as a mechanism to evaluate the quality of learned binary latent codes. 3 The Proposed Method 3.1 Hashing under the NVI Framework Inspired by the recent success of variational autoencoders for various NLP problems (Miao et al., 2016; Bowman et al., 2015; Yang et al., 2017; Miao et al., 2017; Shen et al., 2017b; Wang et al., 2018), we approach the training of discrete (binary) latent variables from a generative perspec2043 tive. Let x and z denote the input document and its corresponding binary hash code, respectively. Most of the previous text hashing methods focus on modeling the encoding distribution p(z|x), or hash function, so the local/global pairwise similarity structure of documents in the original space is preserved in latent space (Zhang et al., 2010; Wang et al., 2013; Xu et al., 2015; Wang et al., 2014). However, the generative (decoding) process of reconstructing x from binary latent code z, i.e., modeling distribution p(x|z), has been rarely considered. Intuitively, latent codes learned from a model that accounts for the generative term should naturally encapsulate key semantic information from x because the generation/reconstruction objective is a function of p(x|z). In this regard, the generative term provides a natural training objective for semantic hashing. We define a generative model that simultaneously accounts for both the encoding distribution, p(z|x), and decoding distribution, p(x|z), by defining approximations qφ(z|x) and qθ(x|z), via inference and generative networks, gφ(x) and gθ(z), parameterized by φ and θ, respectively. Specifically, x ∈Z|V | + is the bag-of-words (count) representation for the input document, where |V | is the vocabulary size. Notably, we can also employ other count weighting schemes as input features x, e.g., the term frequency-inverse document frequency (TFIDF) (Manning et al., 2008). For the encoding distribution, a latent variable z is first inferred from the input text x, by constructing an inference network gφ(x) to approximate the true posterior distribution p(z|x) as qφ(z|x). Subsequently, the decoder network gθ(z) maps z back into input space to reconstruct the original sequence x as ˆx, approximating p(x|z) as qθ(x|z) (as shown in Figure 1). This cyclic strategy, x → z →ˆx ≈x, provides the latent variable z with a better ability to generalize (Miao et al., 2016). To tailor the NVI framework for semantic hashing, we cast z as a binary latent variable and assume a multivariate Bernoulli prior on z: p(z) ∼ Bernoulli(γ) = Ql i=1 γzi i (1 −γi)1−zi, where γi ∈[0, 1] is component i of vector γ. Thus, the encoding (approximate posterior) distribution qφ(z|x) is restricted to take the form qφ(z|x) = Bernoulli(h), where h = σ(gφ(x)), σ(·) is the sigmoid function, and gφ(·) is the (nonlinear) inference network specified as a multilayer perceptron (MLP). As illustrated in Figure 1, we can obtain samples from the Bernoulli posterior either deterministically or stochastically. Suppose z is a l-bit hash code, for the deterministic binarization, we have, for i = 1, 2, ......, l: zi = 1σ(gi φ(x))>0.5 = sign(σ(gi φ(x) −0.5) + 1 2 , (1) where z is the binarized variable, and zi and gi φ(x) denote the i-th dimension of z and gφ(x), respectively. The standard Bernoulli sampling in (1) can be understood as setting a hard threshold at 0.5 for each representation dimension, therefore, the binary latent code is generated deterministically. 
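As a concrete illustration of the deterministic binarization in Eq. (1), i.e., a hard 0.5 threshold on the sigmoid output of the inference network, consider the small NumPy sketch below; the encoder outputs are random stand-ins rather than a trained gφ(x).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def deterministic_binarize(logits):
    """Eq. (1): z_i = 1 if sigmoid(g_phi^i(x)) > 0.5 else 0,
    equivalently (sign(sigmoid(g) - 0.5) + 1) / 2 with a hard 0.5 threshold."""
    h = sigmoid(logits)
    return (np.sign(h - 0.5) + 1.0) / 2.0

rng = np.random.default_rng(0)
logits = rng.normal(size=8)            # stand-in for g_phi(x) with l = 8 bits
print(deterministic_binarize(logits))  # an 8-bit binary hash code
```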
Another strategy to obtain the discrete variable is to binarize h in a stochastic manner:

z_i = \mathbb{1}_{\sigma(g_\phi^i(x)) > \mu_i} = \frac{\mathrm{sign}\big(\sigma(g_\phi^i(x)) - \mu_i\big) + 1}{2},    (2)

where μ_i ∼ Uniform(0, 1). Because of this sampling process, we do not have to assume a predefined threshold value as in (1).

3.2 Training with Binary Latent Variables

To estimate the parameters of the encoder and decoder networks, we would ideally maximize the marginal distribution p(x) = \int p(z)\, p(x|z)\, dz. However, computing this marginal is intractable in most cases of interest. Instead, we maximize a variational lower bound, the approach typically employed in the VAE framework (Kingma and Welling, 2013):

\mathcal{L}_{\mathrm{vae}} = \mathbb{E}_{q_\phi(z|x)}\!\left[\log \frac{q_\theta(x|z)\, p(z)}{q_\phi(z|x)}\right] = \mathbb{E}_{q_\phi(z|x)}[\log q_\theta(x|z)] - D_{\mathrm{KL}}\big(q_\phi(z|x)\,\|\,p(z)\big),    (3)

where the Kullback-Leibler (KL) divergence D_KL(qφ(z|x)||p(z)) encourages the approximate posterior distribution qφ(z|x) to be close to the multivariate Bernoulli prior p(z). In this case, D_KL(qφ(z|x)||p(z)) can be written in closed form as a function of gφ(x):

D_{\mathrm{KL}} = g_\phi(x) \log \frac{g_\phi(x)}{\gamma} + \big(1 - g_\phi(x)\big) \log \frac{1 - g_\phi(x)}{1 - \gamma}.    (4)

Note that the gradient of the KL divergence term above can be evaluated easily.

For the first term in (3), we should in principle estimate the influence of μ_i in (2) on qθ(x|z) by averaging over the entire (uniform) noise distribution. However, a closed-form distribution does not exist, since it is not possible to enumerate all possible configurations of z, especially when the latent dimension is large. Moreover, discrete latent variables are inherently incompatible with backpropagation, since the derivative of the sign function is zero for almost all input values. As a result, the exact gradients of Lvae w.r.t. the inputs before binarization would be essentially zero everywhere. To estimate the gradients for binary latent variables, we utilize the straight-through (ST) estimator, first introduced by Hinton (2012). So motivated, the strategy here is to simply backpropagate through the hard threshold by approximating the gradient of the thresholding operation as 1. Thus, we have:

\frac{\partial\, \mathbb{E}_{q_\phi(z|x)}[\log q_\theta(x|z)]}{\partial \phi} = \frac{d\, \mathbb{E}_{q_\phi(z|x)}[\log q_\theta(x|z)]}{dz}\, \frac{dz}{d\sigma(g_\phi^i(x))}\, \frac{d\sigma(g_\phi^i(x))}{d\phi} \approx \frac{d\, \mathbb{E}_{q_\phi(z|x)}[\log q_\theta(x|z)]}{dz}\, \frac{d\sigma(g_\phi^i(x))}{d\phi}    (5)

Although this is clearly a biased estimator, it has been shown to be fast and efficient relative to other gradient estimators for discrete variables, especially for the Bernoulli case (Bengio et al., 2013; Hubara et al., 2016; Theis et al., 2017). With the ST gradient estimator, the first loss term in (3) can be backpropagated into the encoder network to fine-tune the hash function gφ(x).

For the approximate generator qθ(x|z) in (3), let x_i denote the one-hot representation of the i-th word within a document. Note that x = Σ_i x_i is thus the bag-of-words representation of document x. To reconstruct the input x from z, we utilize a softmax decoding function written as:

q(x_i = w \mid z) = \frac{\exp(z^\top E x_w + b_w)}{\sum_{j=1}^{|V|} \exp(z^\top E x_j + b_j)},    (6)

where q(x_i = w|z) is the probability that x_i is word w ∈ V, qθ(x|z) = ∏_i q(x_i|z), and θ = {E, b_1, ..., b_{|V|}}. Note that E ∈ R^{d×|V|} can be interpreted as a word embedding matrix to be learned, and {b_i}_{i=1}^{|V|} denote bias terms. Intuitively, the objective in (6) encourages the discrete vector z to be close to the embeddings of every word that appears in the input document x. As shown in Section 5.3.1, meaningful semantic structures can be learned and manifested in the word embedding matrix E.
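To make (2), (5), and (6) concrete, the snippet below sketches stochastic binarization with a straight-through gradient (the detach trick makes the backward pass treat the hard threshold as the identity) together with the linear softmax decoder over the vocabulary. Tensor shapes and initialization are assumptions; this is not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def binarize_stochastic_st(h):
    # Eq. (2): compare each probability to a Uniform(0,1) threshold
    z_hard = (h > torch.rand_like(h)).float()
    # Straight-through estimator (Eq. (5)): forward pass uses z_hard,
    # backward pass treats the thresholding as the identity function
    return h + (z_hard - h).detach()

class SoftmaxDecoder(nn.Module):
    """Eq. (6): p(word w | z) proportional to exp(z^T E x_w + b_w)."""
    def __init__(self, vocab_size, num_bits):
        super().__init__()
        self.E = nn.Parameter(0.01 * torch.randn(num_bits, vocab_size))
        self.b = nn.Parameter(torch.zeros(vocab_size))

    def forward(self, z, x_counts):
        logits = z @ self.E + self.b                # (batch, |V|)
        log_probs = F.log_softmax(logits, dim=-1)
        # bag-of-words reconstruction: log-probs weighted by word counts
        return (x_counts * log_probs).sum(dim=-1)   # log q_theta(x | z)
```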
3.3 Injecting Data-dependent Noise to z

To reconstruct text data x from the sampled binary representation z, a deterministic decoder is typically utilized (Miao et al., 2016; Chaidaroon and Fang, 2017). Inspired by the success of stochastic decoders in image hashing applications (Dai et al., 2017; Theis et al., 2017), in our experiments we found that injecting random Gaussian noise into z makes the decoder a more favorable regularizer for the binary codes, which in practice leads to stronger retrieval performance. Below, we invoke rate-distortion theory to analyze this further, which leads to interesting findings.

Learning binary latent codes z to represent a continuous distribution p(x) is a classical information-theoretic problem known as lossy source coding. From this perspective, semantic hashing, which compresses an input document into compact binary codes, can be cast as a conventional rate-distortion tradeoff problem (Theis et al., 2017; Ballé et al., 2016):

\min \;\; \underbrace{-\log_2 R(z)}_{\text{Rate}} \; + \; \beta \cdot \underbrace{D(x, \hat{x})}_{\text{Distortion}},    (7)

where rate and distortion denote the effective code length, i.e., the number of bits used, and the distortion introduced by the encoding/decoding sequence, respectively. Further, x̂ is the reconstructed input and β is a hyperparameter that controls the tradeoff between the two terms.

Consider the case where we have a Bernoulli prior on z, p(z) ∼ Bernoulli(γ), and x is conditionally drawn from a Gaussian distribution p(x|z) ∼ N(Ez, σ²I). Here, E = {e_i}_{i=1}^{|V|}, where e_i ∈ R^d can be interpreted as a codebook with |V| codewords; in our case, E corresponds to the word embedding matrix in (6). For a stochastic latent variable z, the objective function in (3) can be written in a form similar to the rate-distortion tradeoff:

\min \;\; \mathbb{E}_{q_\phi(z|x)}\!\Big[ \underbrace{-\log q_\phi(z|x)}_{\text{Rate}} \; + \; \underbrace{\tfrac{1}{2\sigma^2}}_{\beta}\, \underbrace{\|x - Ez\|_2^2}_{\text{Distortion}} \; + \; C \Big],    (8)

where C is a constant that encapsulates the prior distribution p(z) and the Gaussian normalization term. Notably, the tradeoff hyperparameter β = 1/(2σ²) is directly tied to the variance of the distribution p(x|z). In other words, by controlling the variance σ, the model can adaptively explore different tradeoffs between the rate and distortion objectives. However, the optimal tradeoff may differ across samples.

Inspired by the observations above, we propose to inject data-dependent noise into the latent variable z, rather than setting the variance term σ² to a fixed value (Dai et al., 2017; Theis et al., 2017). Specifically, log σ² is obtained via a one-layer MLP transformation of gφ(x). Afterwards, we sample z′ from N(z, σ²I), which then replaces z in (6) to infer the probability of generating individual words (as shown in Figure 1). As a result, the variances differ for every input document x, and the model is thus provided with additional flexibility to explore various rate-distortion tradeoffs for different training observations. Although our decoder in (6) is not strictly a Gaussian distribution, we found empirically that injecting data-dependent noise into z yields strong retrieval results; see Section 5.1.
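A rough sketch of this data-dependent noise is shown below: an assumed single linear layer predicts log σ² from gφ(x), and the reparameterized sample z′ = z + σ ⊙ ε then replaces z in the decoder of (6). Whether σ is a scalar or a per-bit vector for each document is an assumption here, not something the text pins down.

```python
import torch
import torch.nn as nn

class DataDependentNoise(nn.Module):
    """Predicts log(sigma^2) per document and perturbs the binary code z."""
    def __init__(self, num_bits):
        super().__init__()
        # one-layer transformation from g_phi(x) to a log-variance
        # (per-bit here; a single scalar per document is an equally plausible reading)
        self.log_var = nn.Linear(num_bits, num_bits)

    def forward(self, g_phi_x, z):
        sigma = torch.exp(0.5 * self.log_var(g_phi_x))  # standard deviation
        eps = torch.randn_like(z)
        return z + sigma * eps   # z' ~ N(z, sigma^2 I), fed to the decoder
```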
3.4 Supervised Hashing

The proposed Neural Architecture for Semantic Hashing (NASH) can be extended to supervised hashing, where a mapping from the latent variable z to labels y is learned, here parameterized by a two-layer MLP followed by a fully-connected softmax layer. To allow the model to balance maximizing the variational lower bound in (3) against minimizing the discriminative loss, the following joint training objective is employed:

\mathcal{L} = -\mathcal{L}_{\mathrm{vae}}(\theta, \phi; x) + \alpha\, \mathcal{L}_{\mathrm{dis}}(\eta; z, y),    (9)

where η refers to the parameters of the MLP classifier and α controls the relative weight between the variational lower bound (Lvae) and the discriminative loss (Ldis), defined as the cross-entropy loss. The parameters {θ, φ, η} are learned end-to-end via Monte Carlo estimation.

4 Experimental Setup

4.1 Datasets

We use the following three standard, publicly available datasets for training and evaluation: (i) Reuters21578, containing 10,788 news documents, which have been classified into 90 different categories. (ii) 20Newsgroups, a collection of 18,828 newsgroup documents, which are categorized into 20 different topics. (iii) TMC (short for the SIAM text mining competition dataset), containing air traffic reports provided by NASA; TMC consists of 21,519 training documents divided into 22 different categories. To make direct comparisons with prior work, we employ the TFIDF features on these datasets supplied by Chaidaroon and Fang (2017), where the vocabulary sizes for the three datasets are set to 10,000, 7,164 and 20,000, respectively.

4.2 Training Details

For the inference networks, we employ a feedforward neural network with 2 hidden layers (both with 500 units) using the ReLU activation function, which transforms the input documents (i.e., TFIDF features in our experiments) into a continuous representation. Empirically, we found that stochastic binarization as in (2) shows stronger performance than deterministic binarization, and we thus use the former in our experiments; we further conduct a systematic ablation study in Section 5.2 to compare the two binarization strategies. Our model is trained using Adam (Kingma and Ba, 2014) with a learning rate of 1 × 10−3 for all parameters, decayed by a factor of 0.96 every 10,000 iterations. Dropout (Srivastava et al., 2014) is employed on the output of the encoder networks, with the rate selected from {0.7, 0.8, 0.9} on the validation set. To facilitate comparisons with previous methods, we set the dimension of z (i.e., the number of bits in the hash code) to 8, 16, 32, 64, or 128.

4.3 Baselines

We evaluate the effectiveness of our framework on both unsupervised and supervised semantic hashing tasks. We consider the following unsupervised baselines for comparison: Locality Sensitive Hashing (LSH) (Datar et al., 2004), Stack Restricted Boltzmann Machines (S-RBM) (Salakhutdinov and Hinton, 2009), Spectral Hashing (SpH) (Weiss et al., 2009), Self-taught Hashing (STH) (Zhang et al., 2010) and Variational Deep Semantic Hashing (VDSH) (Chaidaroon and Fang, 2017).

Method     8 bits   16 bits  32 bits  64 bits  128 bits
LSH        0.2802   0.3215   0.3862   0.4667   0.5194
S-RBM      0.5113   0.5740   0.6154   0.6177   0.6452
SpH        0.6080   0.6340   0.6513   0.6290   0.6045
STH        0.6616   0.7351   0.7554   0.7350   0.6986
VDSH       0.6859   0.7165   0.7753   0.7456   0.7318
NASH       0.7113   0.7624   0.7993   0.7812   0.7559
NASH-N     0.7352   0.7904   0.8297   0.8086   0.7867
NASH-DN    0.7470   0.8013   0.8418   0.8297   0.7924

Table 1: Precision of the top 100 retrieved documents on the Reuters dataset (unsupervised hashing).

For supervised semantic hashing, we also compare NASH against a number of baselines: Supervised Hashing with Kernels (KSH) (Liu et al., 2012), Semantic Hashing using Tags and Topic Modeling (SHTTM) (Wang et al., 2013) and Supervised VDSH (Chaidaroon and Fang, 2017).
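For concreteness, the joint supervised objective in (9) can be sketched as follows; the classifier mirrors the two-layer MLP plus softmax described in Section 3.4, while the layer widths and the loss bookkeeping are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelClassifier(nn.Module):
    """Two-layer MLP followed by a softmax layer mapping the hash code z to label logits."""
    def __init__(self, num_bits, hidden_dim, num_labels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_bits, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_labels),
        )

    def forward(self, z):
        return self.net(z)

def joint_loss(log_likelihood, kl_term, label_logits, labels, alpha):
    # Eq. (3): L_vae = E[log q_theta(x|z)] - KL(q_phi(z|x) || p(z))
    l_vae = log_likelihood - kl_term
    # Eq. (9): minimize -L_vae + alpha * cross-entropy classification loss
    l_dis = F.cross_entropy(label_logits, labels)
    return -l_vae.mean() + alpha * l_dis
```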
It is worth noting that unlike all these baselines, our NASH model is trained end-to-end in one-step. 4.4 Evaluation Metrics To evaluate the hashing codes for similarity search, we consider each document in the testing set as a query document. Similar documents to the query in the corresponding training set need to be retrieved based on the Hamming distance of their hashing codes, i.e. number of different bits. To facilitate comparison with prior work (Wang et al., 2013; Chaidaroon and Fang, 2017), the performance is measured with precision. Specifically, during testing, for a query document, we first retrieve the 100 nearest/closest documents according to the Hamming distances of the corresponding hash codes (i.e., the number of different bits). We then examine the percentage of documents among these 100 retrieved ones that belong to the same label (topic) with the query document (we consider documents having the same label as relevant pairs). The ratio of the number of relevant documents to the number of retrieved documents (fixed value of 100) is calculated as the precision score. The precision scores are further averaged over all test (query) documents. 5 Experimental Results We experimented with four variants for our NASH model: (i) NASH: with deterministic decoder; (ii) NASH-N: with fixed random noise injected to decoder; (iii) NASH-DN: with data-dependent noise injected to decoder; (iv) NASH-DN-S: NASH-DN with supervised information during training. 816 32 64 128 Number of Bits 0.8 0.9 1.0 Precison (%) KSH SHTTM VDSH-S VDSH-SP NASH-DN-S Figure 2: Precision of the top 100 retrieved documents on Reuters dataset (Supervised hashing), compared with other supervised baselines. 5.1 Semantic Hashing Evaluation Table 1 presents the results of all models on Reuters dataset. Regarding unsupervised semantic hashing, all the NASH variants consistently outperform the baseline methods by a substantial margin, indicating that our model makes the most effective use of unlabeled data and manage to assign similar hashing codes, i.e., with small Hamming distance to each other, to documents that belong to the same label. It can be also observed that the injection of noise into the decoder networks has improved the robustness of learned binary representations, resulting in better retrieval performance. More importantly, by making the variances of noise adaptive to the specific input, our NASH-DN achieves even better results, compared with NASH-N, highlighting the importance of exploring/learning the trade-off between rate and distortion objectives by the data itself. We observe the same trend and superiority of our NASH-DN models on the other two benchmarks, as shown in Tables 3 and 4. Another observation is that the retrieval results tend to drop a bit when we set the length of hashing codes to be 64 or larger, which also happens for some baseline models. This phenomenon has been reported previously in Wang et al. (2012); Liu et al. (2012); Wang et al. (2013); Chaidaroon and Fang (2017), and the reasons could be twofold: (i) for longer codes, the number of data points that are assigned to a certain binary code decreases exponentially. As a result, many queries may fail to return any neighbor documents (Wang et al., 2012); (ii) considering the size of training data, it is likely that the model may overfit with long hash codes (Chaidaroon and Fang, 2017). 
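The retrieval protocol of Section 4.4 above reduces to a Hamming-distance nearest-neighbor search followed by a label check; a plausible NumPy sketch is given below (the function and variable names, and the stable argsort tie-breaking, are assumptions, not the authors' evaluation script).

```python
import numpy as np

def precision_at_100(train_codes, train_labels, test_codes, test_labels, k=100):
    """train_codes/test_codes: {0,1} arrays of shape (n, num_bits)."""
    precisions = []
    for code, label in zip(test_codes, test_labels):
        # Hamming distance = number of differing bits
        dists = np.count_nonzero(train_codes != code, axis=1)
        nearest = np.argsort(dists, kind="stable")[:k]
        # fraction of the k retrieved documents sharing the query's label
        precisions.append(np.mean(train_labels[nearest] == label))
    return float(np.mean(precisions))
```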
However, even with longer hashing codes, 2047 Word weapons medical companies define israel book NASH gun treatment company definition israeli books guns disease market defined arabs english weapon drugs afford explained arab references armed health products discussion jewish learning assault medicine money knowledge jews reference NVDM guns medicine expensive defined israeli books weapon health industry definition arab reference gun treatment company printf arabs guide militia disease market int lebanon writing armed patients buy sufficient lebanese pages Table 2: The five nearest words in the semantic space learned by NASH, compared with the results from NVDM (Miao et al., 2016). Method 8 bits 16 bits 32 bits 64 bits 128 bits Unsupervised Hashing LSH 0.0578 0.0597 0.0666 0.0770 0.0949 S-RBM 0.0594 0.0604 0.0533 0.0623 0.0642 SpH 0.2545 0.3200 0.3709 0.3196 0.2716 STH 0.3664 0.5237 0.5860 0.5806 0.5443 VDSH 0.3643 0.3904 0.4327 0.1731 0.0522 NASH 0.3786 0.5108 0.5671 0.5071 0.4664 NASH-N 0.3903 0.5213 0.5987 0.5143 0.4776 NASH-DN 0.4040 0.5310 0.6225 0.5377 0.4945 Supervised Hashing KSH 0.4257 0.5559 0.6103 0.6488 0.6638 SHTTM 0.2690 0.3235 0.2357 0.1411 0.1299 VDSH-S 0.6586 0.6791 0.7564 0.6850 0.6916 VDSH-SP 0.6609 0.6551 0.7125 0.7045 0.7117 NASH-DN-S 0.6247 0.6973 0.8069 0.8213 0.7840 Table 3: Precision of the top 100 retrieved documents on 20Newsgroups dataset. Method 8 bits 16 bits 32 bits 64 bits 128 bits Unsupervised Hashing LSH 0.4388 0.4393 0.4514 0.4553 0.4773 S-RBM 0.4846 0.5108 0.5166 0.5190 0.5137 SpH 0.5807 0.6055 0.6281 0.6143 0.5891 STH 0.3723 0.3947 0.4105 0.4181 0.4123 VDSH 0.4330 0.6853 0.7108 0.4410 0.5847 NASH 0.5849 0.6573 0.6921 0.6548 0.5998 NASH-N 0.6233 0.6759 0.7201 0.6877 0.6314 NASH-DN 0.6358 0.6956 0.7327 0.7010 0.6325 Supervised Hashing KSH 0.6608 0.6842 0.7047 0.7175 0.7243 SHTTM 0.6299 0.6571 0.6485 0.6893 0.6474 VDSH-S 0.7387 0.7887 0.7883 0.7967 0.8018 VDSH-SP 0.7498 0.7798 0.7891 0.7888 0.7970 NASH-DN-S 0.7438 0.7946 0.7987 0.8014 0.8139 Table 4: Precision of the top 100 retrieved documents on TMC dataset. our NASH models perform stronger than the baselines in most cases (except for the 20Newsgroups dataset), suggesting that NASH can effectively allocate documents to informative/meaningful hashing codes even with limited training data. We also evaluate the effectiveness of NASH in a supervised scenario on the Reuters dataset, where the label or topic information is utilized during training. As shown in Figure 2, our NASHDN-S model consistently outperforms several supervised semantic hashing baselines, with various choices of hashing bits. Notably, our model exhibits higher Top-100 retrieval precision than VDSH-S and VDSH-SP, proposed by Chaidaroon and Fang (2017). This may be attributed to the fact that in VDSH models, the continuous embeddings are not optimized with their future binarization in mind, and thus could hurt the relevance of learned binary codes. On the contrary, our model is optimized in an end-to-end manner, where the gradients are directly backpropagated to the inference network (through the binary/discrete latent variable), and thus gives rise to a more robust hash function. 5.2 Ablation study 5.2.1 The effect of stochastic sampling As described in Section 3, the binary latent variables z in NASH can be either deterministically (via (1)) or stochastically (via (2)) sampled. We compare these two types of binarization functions in the case of unsupervised hashing. 
As illustrated in Figure 3, stochastic sampling shows stronger retrieval results on all three datasets, indicating that endowing the sampling process of latent variables with more stochasticity improves the learned representations. 5.2.2 The effect of encoder/decoder networks Under the variational framework introduced here, the encoder network, i.e., hash function, and decoder network are jointly optimized to abstract semantic features from documents. An interesting question concerns what types of network should be leveraged for each part of our NASH model. In this regard, we further investigate the effect of 2048 Category Title/Subject 8-bit code 16-bit code Baseball Dave Kingman for the hall of fame 1 1 1 0 1 0 0 1 0 0 1 0 1 1 0 1 0 0 0 0 0 1 1 0 Time of game 1 1 1 1 1 0 0 1 0 0 1 0 1 0 0 1 0 0 0 0 0 1 1 1 Game score report 1 1 1 0 1 0 0 1 0 0 1 0 1 1 0 1 0 0 0 0 0 1 1 0 Why is Barry Bonds not batting 4th? 1 1 1 0 1 1 0 1 0 0 1 1 1 1 0 1 0 0 0 0 0 1 1 0 Electronics Building a UV flashlight 1 0 1 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 1 0 1 1 How to drive an array of LEDs 1 0 1 1 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 0 1 0 0 1 2% silver solder 1 1 0 1 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 0 1 0 1 1 Subliminal message flashing on TV 1 0 1 1 0 1 0 0 0 0 1 0 0 1 1 0 0 0 1 0 1 0 0 1 Table 5: Examples of learned compact hashing codes on 20Newsgroups dataset. Reuters 20Newsgroups TMC Dataset 0.55 0.60 0.65 0.70 0.75 0.80 0.85 Precison Stochastic Deterministic Figure 3: The precisions of the top 100 retrieved documents for NASH-DN with stochastic or deterministic binary latent variables. using an encoder or decoder with different nonlinearity, ranging from a linear transformation to two-layer MLPs. We employ a base model with an encoder of two-layer MLPs and a linear decoder (the setup described in Section 3), and the ablation study results are shown in Table 6. Network Encoder Decoder linear 0.5844 0.6225 one-layer MLP 0.6187 0.3559 two-layer MLP 0.6225 0.1047 Table 6: Ablation study with different encoder/decoder networks. It is observed that for the encoder networks, increasing the non-linearity by stacking MLP layers leads to better empirical results. In other words, endowing the hash function with more modeling capacity is advantageous to retrieval tasks. However, when we employ a non-linear network for the decoder, the retrieval precision drops dramatically. It is worth noting that the only difference between linear transformation and one-layer MLP is whether a non-linear activation function is employed or not. This observation may be attributed the fact that the decoder networks can be considered as a similarity measure between latent variable z and the word embeddings Ek for every word, and the probabilities for words that present in the document is maximized to ensure that z is informative. As a result, if we allow the decoder to be too expressive (e.g., a one-layer MLP), it is likely that we will end up with a very flexible similarity measure but relatively less meaningful binary representations. This finding is consistent with several image hashing methods, such as SGH (Dai et al., 2017) or binary autoencoder (Carreira-Perpin´an and Raziperchikolaei, 2015), where a linear decoder is typically adopted to obtain promising retrieval results. However, our experiments may not speak for other choices of encoder-decoder architectures, e.g., LSTM-based sequence-to-sequence models (Sutskever et al., 2014) or DCNN-based autoencoder (Zhang et al., 2017). 
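The encoder/decoder ablation in Table 6 amounts to varying how much nonlinearity each side is given; a small helper along the following lines could generate the compared configurations. This is purely illustrative — the exact layer widths and what precisely counts as a "two-layer MLP" here are assumptions, not taken from the paper.

```python
import torch.nn as nn

def make_network(in_dim, out_dim, kind, hidden_dim=500):
    """kind: 'linear' (W x + b), 'mlp1' (one layer plus ReLU),
    or 'mlp2' (two layers with ReLUs), roughly mirroring Table 6."""
    if kind == "linear":
        return nn.Linear(in_dim, out_dim)
    if kind == "mlp1":
        return nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
    if kind == "mlp2":
        return nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim), nn.ReLU(),
        )
    raise ValueError(kind)

# e.g. the base model of Section 5.2.2: two-layer MLP encoder, linear decoder
encoder = make_network(10000, 32, "mlp2")
decoder = make_network(32, 10000, "linear")
```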
5.3 Qualitative Analysis 5.3.1 Analysis of Semantic Information To understand what information has been learned in our NASH model, we examine the matrix E ∈Rd×l in (6). Similar to (Miao et al., 2016; Larochelle and Lauly, 2012), we select the 5 nearest words according to the word vectors learned from NASH and compare with the corresponding results from NVDM. As shown in Table 2, although our NASH model contains a binary latent variable, rather than a continuous one as in NVDM, it also effectively group semantically-similar words together in the learned vector space. This further demonstrates that the proposed generative framework manages to bypass the binary/discrete constraint and is able to abstract useful semantic information from documents. 5.3.2 Case Study In Table 5, we show some examples of the learned binary hashing codes on 20Newsgroups 2049 dataset. We observe that for both 8-bit and 16bit cases, NASH typically compresses documents with shared topics into very similar binary codes. On the contrary, the hashing codes for documents with different topics exhibit much larger Hamming distance. As a result, relevant documents can be efficiently retrieved by simply computing their Hamming distances. 6 Conclusions This paper presents a first step towards end-to-end semantic hashing, where the binary/discrete constraints are carefully handled with an effective gradient estimator. A neural variational framework is introduced to train our model. Motivated by the connections between the proposed method and rate-distortion theory, we inject data-dependent noise into the Bernoulli latent variable at the training stage. The effectiveness of our framework is demonstrated with extensive experiments. Acknowledgments We would like to thank the ACL reviewers for their insightful suggestions. This research was supported in part by DARPA, DOE, NIH, NSF and ONR. References Johannes Ball´e, Valero Laparra, and Eero P Simoncelli. 2016. End-to-end optimization of nonlinear transform codes for perceptual quality. In Picture Coding Symposium (PCS), 2016. IEEE, pages 1–5. Yoshua Bengio, Nicholas L´eonard, and Aaron Courville. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432 . Toby Berger. 1971. Rate-distortion theory. Encyclopedia of Telecommunications . Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2015. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349 . Miguel A Carreira-Perpin´an and Ramin Raziperchikolaei. 2015. Hashing with binary autoencoders. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on. IEEE, pages 557–566. Suthee Chaidaroon and Yi Fang. 2017. Variational deep semantic hashing for text documents. In Proceedings of the 40th international ACM SIGIR conference on Research and development in information retrieval. ACM. Ting Chen, Martin Renqiang Min, and Yizhou Sun. 2017. Learning k-way d-dimensional discrete code for compact embedding representations. arXiv preprint arXiv:1711.03067 . Bo Dai, Ruiqi Guo, Sanjiv Kumar, Niao He, and Le Song. 2017. Stochastic generative hashing. arXiv preprint arXiv:1701.02815 . Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab S Mirrokni. 2004. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the twentieth annual symposium on Computational geometry. ACM, pages 253–262. Geoffrey Hinton. 2012. Neural networks for machine learning, coursera. 
URL: http://coursera. org/course/neuralnets . Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2016. Binarized neural networks. In Advances in neural information processing systems. pages 4107–4115. Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144 . Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114 . Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems. pages 3294–3302. Yehuda Koren. 2008. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pages 426–434. Hugo Larochelle and Stanislas Lauly. 2012. A neural autoregressive topic model. In Advances in Neural Information Processing Systems. pages 2708–2716. Michael S Lew, Nicu Sebe, Chabane Djeraba, and Ramesh Jain. 2006. Content-based multimedia information retrieval: State of the art and challenges. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM) 2(1):1–19. Qi Li, Zhenan Sun, Ran He, and Tieniu Tan. 2017. Deep supervised discrete hashing. arXiv preprint arXiv:1705.10999 . Wei Liu, Jun Wang, Rongrong Ji, Yu-Gang Jiang, and Shih-Fu Chang. 2012. Supervised hashing with kernels. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, pages 2074–2081. 2050 Chris J Maddison, Andriy Mnih, and Yee Whye Teh. 2016. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712 . Christopher D Manning, Prabhakar Raghavan, Hinrich Sch¨utze, et al. 2008. Introduction to information retrieval, volume 1. Cambridge university press Cambridge. Yishu Miao, Edward Grefenstette, and Phil Blunsom. 2017. Discovering discrete latent topics with neural variational inference. arXiv preprint arXiv:1706.00359 . Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In International Conference on Machine Learning. pages 1727–1736. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119. Sandeep Pandey, Andrei Broder, Flavio Chierichetti, Vanja Josifovski, Ravi Kumar, and Sergei Vassilvitskii. 2009. Nearest-neighbor caching for contentmatch applications. In Proceedings of the 18th international conference on World wide web. ACM, pages 441–450. Ruslan Salakhutdinov and Geoffrey Hinton. 2009. Semantic hashing. International Journal of Approximate Reasoning 50(7):969–978. Dinghan Shen, Martin Renqiang Min, Yitong Li, and Lawrence Carin. 2017a. Adaptive convolutional filter generation for natural language understanding. arXiv preprint arXiv:1709.08294 . Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, and Lawrence Carin. 2018. Baseline needs more love: On simple wordembedding-based models and associated pooling mechanisms. In ACL. Dinghan Shen, Yizhe Zhang, Ricardo Henao, Qinliang Su, and Lawrence Carin. 2017b. 
Deconvolutional latent-variable model for text sequence matching. AAAI . Raphael Shu and Hideki Nakayama. 2017. Compressing word embeddings via deep compositional code learning. arXiv preprint arXiv:1711.01068 . Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1):1929–1958. Benno Stein, Sven Meyer zu Eissen, and Martin Potthast. 2007. Strategies for retrieving plagiarized documents. In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, pages 825–826. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Lucas Theis, Wenzhe Shi, Andrew Cunningham, and Ferenc Husz´ar. 2017. Lossy image compression with compressive autoencoders. ICLR . Aaron van den Oord, Oriol Vinyals, et al. 2017. Neural discrete representation learning. In Advances in Neural Information Processing Systems. pages 6309–6318. Jingdong Wang, Heng Tao Shen, Jingkuan Song, and Jianqiu Ji. 2014. Hashing for similarity search: A survey. arXiv preprint arXiv:1408.2927 . Jun Wang, Sanjiv Kumar, and Shih-Fu Chang. 2012. Semi-supervised hashing for large-scale search. IEEE Transactions on Pattern Analysis and Machine Intelligence 34(12):2393–2406. Qifan Wang, Dan Zhang, and Luo Si. 2013. Semantic hashing using tags and topic modeling. In Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval. ACM, pages 213–222. Wenlin Wang, Zhe Gan, Wenqi Wang, Dinghan Shen, Jiaji Huang, Wei Ping, Sanjeev Satheesh, and Lawrence Carin. 2018. Topic compositional neural language model. In AISTATS. Yair Weiss, Antonio Torralba, and Rob Fergus. 2009. Spectral hashing. In Advances in neural information processing systems. pages 1753–1760. Jiaming Xu, Peng Wang, Guanhua Tian, Bo Xu, Jun Zhao, Fangyuan Wang, and Hongwei Hao. 2015. Convolutional neural networks for text hashing. In IJCAI. pages 1369–1375. Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved variational autoencoders for text modeling using dilated convolutions. arXiv preprint arXiv:1702.08139 . Dell Zhang, Jun Wang, Deng Cai, and Jinsong Lu. 2010. Self-taught hashing for fast similarity search. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval. ACM, pages 18–25. Yizhe Zhang, Dinghan Shen, Guoyin Wang, Zhe Gan, Ricardo Henao, and Lawrence Carin. 2017. Deconvolutional paragraph representation learning. In Advances in Neural Information Processing Systems. pages 4172–4182.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2051–2060 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2051 Large-Scale QA-SRL Parsing Nicholas FitzGerald∗ Julian Michael∗ Luheng He Luke Zettlemoyer∗ Paul G. Allen School of Computer Science and Engineering University of Washington, Seattle, WA {nfitz,julianjm,luheng,lsz}@cs.washington.edu Abstract We present a new large-scale corpus of Question-Answer driven Semantic Role Labeling (QA-SRL) annotations, and the first high-quality QA-SRL parser. Our corpus, QA-SRL Bank 2.0, consists of over 250,000 question-answer pairs for over 64,000 sentences across 3 domains and was gathered with a new crowd-sourcing scheme that we show has high precision and good recall at modest cost. We also present neural models for two QA-SRL subtasks: detecting argument spans for a predicate and generating questions to label the semantic relationship. The best models achieve question accuracy of 82.6% and span-level accuracy of 77.6% (under human evaluation) on the full pipelined QASRL prediction task. They can also, as we show, be used to gather additional annotations at low cost. 1 Introduction Learning semantic parsers to predict the predicateargument structures of a sentence is a long standing, open challenge (Palmer et al., 2005; Baker et al., 1998). Such systems are typically trained from datasets that are difficult to gather,1 but recent research has explored training nonexperts to provide this style of semantic supervision (Abend and Rappoport, 2013; Basile et al., 2012; Reisinger et al., 2015; He et al., 2015). In this paper, we show for the first time that it is possible to go even further by crowdsourcing a large ∗Much of this work was done while these authors were at the Allen Institute for Artificial Intelligence. 1The PropBank (Bonial et al., 2010) and FrameNet (Ruppenhofer et al., 2016) annotation guides are 89 and 119 pages, respectively. In 1950 Alan M. Turing published "Computing machinery and intelligence" in Mind, in which he proposed that machines could be tested for intelligence using questions and answers. Predicate Question Answer published 1 Who published something? Alan M. Turing 2 What was published? “Computing Machinery and Intelligence” 3 When was something published? In 1950 proposed 4 Who proposed something? Alan M. Turing 5 What did someone propose? that machines could be tested for intelligent using questions and answers 6 When did someone propose something? In 1950 tested 7 What can be tested? machines 8 What can something be tested for? intelligence 9 How can something be tested? using questions and answers using 10 What was being used? questions and answers 11 Why was something being used? tested for intelligence Figure 1: An annotated sentence from our dataset. Question 6 was not produced by crowd workers in the initial collection, but was produced by our parser as part of Data Expansion (see Section 5.) scale dataset that can be used to train high quality parsers at modest cost. We adopt the Question-Answer-driven Semantic Role Labeling (QA-SRL) (He et al., 2015) annotation scheme. QA-SRL is appealing because it is intuitive to non-experts, has been shown to closely match the structure of traditional predicate-argument structure annotation schemes (He et al., 2015), and has been used for end tasks such as Open IE (Stanovsky and Dagan, 2016). 
In QA-SRL, each predicate-argument relationship is labeled with a question-answer pair (see Figure 1). He et al. (2015) showed that high precision QA-SRL annotations can be gathered with limited training but that high recall is challenging to achieve; it is relatively easy to gather answerable questions, but difficult to ensure that every possible question is labeled for every verb. For this reason, they hired and trained hourly annotators and only labeled a relatively small dataset (3000 sentences). Our first contribution is a new, scalable approach for crowdsourcing QA-SRL. We introduce a streamlined web interface (including an autosuggest mechanism and automatic quality control to boost recall) and use a validation stage to en2052 sure high precision (i.e. all the questions must be answerable). With this approach, we produce QA-SRL Bank 2.0, a dataset with 133,479 verbs from 64,018 sentences across 3 domains, totaling 265,140 question-answer pairs, in just 9 days. Our analysis shows that the data has high precision with good recall, although it does not cover every possible question. Figure 1 shows example annotations. Using this data, our second contribution is a comparison of several new models for learning a QA-SRL parser. We follow a pipeline approach where the parser does (1) unlabeled span detection to determine the arguments of a given verb, and (2) question generation to label the relationship between the predicate and each detected span. Our best model uses a span-based representation similar to that introduced by Lee et al. (2016) and a custom LSTM to decode questions from a learned span encoding. Our model does not require syntactic information and can be trained directly from the crowdsourced span labels. Experiments demonstrate that the model does well on our new data, achieving up to 82.2% spandetection F1 and 47.2% exact-match question accuracy relative to the human annotations. We also demonstrate the utility of learning to predict easily interpretable QA-SRL structures, using a simple data bootstrapping approach to expand our dataset further. By tuning our model to favor recall, we over-generate questions which can be validated using our annotation pipeline, allowing for greater recall without requiring costly redundant annotations in the question writing step. Performing this procedure on the training and development sets grows them by 20% and leads to improvements when retraining our models. Our final parser is highly accurate, achieving 82.6% question accuracy and 77.6% span-level precision in an human evaluation. Our data, code, and trained models will be made publicly available.2 2 Data Annotation A QA-SRL annotation consists of a set of question-answer pairs for each verbal predicate in a sentence, where each answer is a set of contiguous spans from the sentence. QA-SRL questions are defined by a 7-slot template shown in Table 1. We introduce a crowdsourcing pipeline to collect annotations rapidly, cheaply, and at large scale. 2http://qasrl.org Figure 2: Interface for the generation step. Autocomplete shows completions of the current QASRL slot, and auto-suggest shows fully-formed questions (highlighted green) based on the previous questions. Pipeline Our crowdsourcing pipeline consists of a generation and validation step. In the generation step, a sentence with one of its verbs marked is shown to a single worker, who must write QASRL questions for the verb and highlight their answers in the sentence. 
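As a toy illustration of the 7-slot format that the autocomplete operates over (this is not the authors' NFA implementation, and the slot vocabularies below are heavily abridged), a QA-SRL question can be represented and rendered as follows.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QASRLQuestion:
    """A QA-SRL question as 7 slots: Wh, Aux, Subj, Verb, Obj, Prep, Misc."""
    wh: str
    aux: Optional[str]
    subj: Optional[str]
    verb: str
    obj: Optional[str]
    prep: Optional[str]
    misc: Optional[str]

    def render(self) -> str:
        slots = [self.wh, self.aux, self.subj, self.verb,
                 self.obj, self.prep, self.misc]
        # drop empty slots and append the question mark
        return " ".join(s for s in slots if s) + "?"

q = QASRLQuestion("What", "did", "someone", "blame", "something", "on", None)
print(q.render())  # What did someone blame something on?
```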
The questions are passed to the validation step, where n workers answer each question or mark it as invalid. In each step, no two answers to distinct questions may overlap with each other, to prevent redundancy. Instructions Workers are instructed that a valid question-answer pair must satisfy three criteria: 1) the question is grammatical, 2) the questionanswer pair is asking about the time, place, participants, etc., of the target verb, and 3) all correct answers to each question are given. Autocomplete We provide an autocomplete drop-down to streamline question writing. Autocomplete is implemented as a Non-deterministic Finite Automaton (NFA) whose states correspond to the 7 QA-SRL slots paired with a partial representation of the question’s syntax. We use the NFA to make the menu more compact by disallowing obviously ungrammatical combinations (e.g., What did been appeared?), and the syntactic representation to auto-suggest complete questions about arguments that have not yet been covered (see Figure 2). The auto-suggest feature significantly reduces the number of keystrokes required to enter new questions after the first one, speeding up the annotation process and making it easier for annotators to provide higher recall. 2053 Wh Aux Subj Verb Obj Prep Misc Who blamed someone What did someone blame something on Who refused to do something When did someone refuse to do something Who might put something somewhere Where might someone put something Table 1: Example QA-SRL questions, decomposed into their slot-based representation. See He et al. (2015) for the full details. All slots draw from a small, deterministic set of options, including verb tense (present, pastparticiple, etc.) Here we have replaced the verb-tense slot with its conjugated form. Wikipedia Wikinews Science Sentences 15,000 14,682 46,715 Verbs 32,758 34,026 66,653 Questions 75,867 80,081 143,388 Valid Qs 67,146 70,555 127,455 Table 2: Statistics for the dataset with questions written by workers across three domains. Payment and quality control Generation pays 5c for the first QA pair (required), plus 5c, 6c, etc. for each successive QA pair (optional), to boost recall. The validation step pays 8c per verb, plus a 2c bonus per question beyond four. Generation workers must write at least 2 questions per verb and have 85% of their questions counted valid, and validators must maintain 85% answer span agreement with others, or they are disqualified from further work. A validator’s answer is considered to agree with others if their answer span overlaps with answer spans provided by a majority of workers. Preprocessing We use the Stanford CoreNLP tools (Manning et al., 2014) for sentence segmentation, tokenizing, and POS-tagging. We identify verbs by POS tag, with heuristics to filter out auxiliary verbs while retaining non-auxiliary uses of “have” and “do.” We identify conjugated forms of each verb for the QA-SRL templates by finding them in Wiktionary.3 Dataset We gathered annotations for 133,479 verb mentions in 64,018 sentences (1.27M tokens) across 3 domains: Wikipedia, Wikinews, and science textbook text from the Textbook Question Answering (TQA) dataset (Kembhavi et al., 2017). We partitioned the source documents into train, dev, and test, sampled paragraph-wise from each document with an 80/10/10 split by sentence. Annotation in our pipeline with n = 2 valida3www.wiktionary.org tors took 9 days on Amazon Mechanical Turk.4 1,165 unique workers participated, annotating a total of 299,308 questions. 
Of these, 265,140 (or 89%) were considered valid by both validators, for an average of 1.99 valid questions per verb and 4.14 valid questions per sentence. See Table 2 for a breakdown of dataset statistics by domain. The total cost was $43,647.33, for an average of 32.7c per verb mention, 14.6c per question, or 16.5c per valid question. For comparison, He et al. (2015) interviewed and hired contractors to annotate data at much smaller scale for a cost of about 50c per verb. Our annotation scheme is cheaper, far more scalable, and provides more (though noisier) supervision for answer spans. To allow for more careful evaluation, we validated 5,205 sentences at a higher density (up to 1,000 for each domain in dev and test), re-running the generated questions through validation with n = 3 for a total of 6 answer annotations for each question. Quality Judgments of question validity had moderate agreement. About 89.5% of validator judgments rated a question as valid, and the agreement rate between judgments of the same question on whether the question is invalid is 90.9%. This gives a Fleiss’s Kappa of 0.51. In the higherdensity re-run, validators were primed to be more critical: 76.5% of judgments considered a question valid, and agreement was at 83.7%, giving a Fleiss’s Kappa of 0.55. Despite being more critical in the denser annotation round, questions marked valid in the original dataset were marked valid by the new annotators in 86% of cases, showing our data’s relatively high precision. The high precision of our annotation pipeline is also backed up by our small-scale manual evaluation (see Coverage below). Answer spans for each question also exhibit 4www.mturk.com 2054 P R F He et al. (2015) 97.5 86.6 91.7 This work 95.7 72.4 82.4 This work (unfiltered) 94.9 85.4 89.9 Table 3: Precision and recall of our annotation pipeline on a merged and validated subset of 100 verbs. The unfiltered number represents relaxing the restriction that none of 2 validators marked the question as invalid. good agreement. On the original dataset, each answer span has a 74.8% chance to exactly match one provided by another annotator (up to two), and on the densely annotated subset, each answer span has an 83.1% chance to exactly match one provided by another annotator (up to five). Coverage Accurately measuring recall for QASRL annotations is an open challenge. For example, question 6 in Figure 1 reveals an inferred temporal relation that would not be annotated as part of traditional SRL. Exhaustively enumerating the full set of such questions is difficult, even for experts. However, we can compare to the original QASRL dataset (He et al., 2015), where Wikipedia sentences were annotated with 2.43 questions per verb. Our data has lower—but loosely comparable—recall, with 2.05 questions per verb in Wikipedia. In order to further analyze the quality of our annotations relative to (He et al., 2015), we reannotate a 100-verb subset of their data both manually (aiming for exhaustivity) and with our crowdsourcing pipeline. We merge the three sets of annotations, manually remove bad questions (and their answers), and calculate the precision and recall of the crowdsourced annotations and those of He et al. (2015) against this pooled, filtered dataset (using the span detection metrics described in Section 4). Results, shown in Table 3, show that our pipeline produces comparable precision with only a modest decrease in recall. 
Interestingly, readding the questions rejected in the validation step greatly increases recall with only a small decrease in precision, showing that validators sometimes rejected questions considered valid by the authors. However, we use the filtered dataset for our experiments, and in Section 5, we show how another crowdsourcing step can further improve recall. 3 Models Given a sentence X = x0, . . . , xn, the goal of a QA-SRL parser is to produce a set of tuples (vi, Qi, Si), where v ∈{0, . . . , n} is the index of a verbal predicate, Qi is a question, and Si ∈ {(i, j) | i, j ∈[0, n], j ≥i} is a set of spans which are valid answers. Our proposed parsers construct these tuples in a three-step pipeline: 1. Verbal predicates are identified using the same POS-tags and heuristics as in data collection (see Section 2). 2. Unlabeled span detection selects a set Sv of spans as arguments for a given verb v. 3. Question generation predicts a question for each span in Sv. Spans are then grouped by question, giving each question a set of answers. We describe two models for unlabeled span detection in section 3.1, followed by question generation in section 3.2. All models are built on an LSTM encoding of the sentence. Like He et al. (2017), we start with an input Xv = {x0 . . . xn}, where the representation xi at each time step is a concatenation of the token wi’s embedding and an embedded binary feature (i = v) which indicates whether wi is the predicate under consideration. We then compute the output representation Hv = BILSTM(Xv) using a stacked alternating LSTM (Zhou and Xu, 2015) with highway connections (Srivastava et al., 2015) and recurrent dropout (Gal and Ghahramani, 2016). Since the span detection and question generation models both use an LSTM encoding, this component could in principle be shared between them. However, in preliminary experiments we found that sharing hurt performance, so for the remainder of this work each model is trained independently. 3.1 Span Detection Given an encoded sentence Hv, the goal of span detection is to select the spans Sv that correspond to arguments of the given predicate. We explore two models: a sequence-tagging model with BIO encoding, and a span-based model which assigns a probability to every possible span. 3.1.1 BIO Sequence Model Our BIO model predicts a set of spans via a sequence y where each yi ∈{B, I, O}, representing a token at the beginning, interior, or outside of any span, respectively. Similar to He et al. 2055 (2017), we make independent predictions for each token at training time, and use Viterbi decoding to enforce hard BIO-constraints5 at test time. The resulting sequences are in one-to-one correspondence with sets Sv of spans which are pairwise non-overlapping. The locally-normalized BIO-tag distributions are computed from the BiLSTM outputs Hv = {hv0, . . . , hvn}: p(yt | x) ∝exp(w⊺ tagMLP(hvt) + btag) (1) 3.1.2 Span-based Model Our span-based model makes independent binary decisions for all O(n2) spans in the sentence. Following Lee et al. (2016), the representation of a span (i, j) is the concatenation of the BiLSTM output at each endpoint: svij = [hvi, hvj]. (2) The probability that the span is an argument of predicate v is computed by the sigmoid function: p(yij | Xv) = σ(w⊺ spanMLP(svij) + bspan) (3) At training time, we minimize the binary cross entropy summed over all n2 possible spans, counting a span as a positive example if it appears as an answer to any question. 
At test time, we choose a threshold τ and select every span that the model assigns probability greater than τ, allowing us to trade off precision and recall. 3.2 Question Generation We introduce two question generation models. Given a span representation svij defined in subsubsection 3.1.2, our models generate questions by picking a word for each question slot (see Section 2). Each model calculates a joint distribution p(y | Xv, svij) over values y = (y1, . . . , y7) for the question slots given a span svij, and is trained to minimize the negative log-likelihood of gold slot values. 3.2.1 Local Model The local model predicts the words for each slot independently: p(yk | Xv, svij) ∝exp(w⊺ kMLP(svij) + bk). (4) 5E.g., an I-tag should only follow a B-tag. 3.2.2 Sequence Model The sequence model uses the machinery of an RNN to share information between slots. At each slot k, we apply a multiple layers of LSTM cells: hl,k, cl,k = LSTMCELLl,k(hl−1,k, hl,k−1, cl,k−1) (5) where the initial input at each slot is a concatenation of the span representation and the embedding of the previous word of the question: h0,k = [svij; yk−1]. Since each question slot predicts from a different set of words, we found it beneficial to use separate weights for the LSTM cells at each slot k. During training, we feed in the gold token at the previous slot, while at test time, we use the predicted token. The output distribution at slot k is computed via the final layers’ output vector hLk: p(yk | Xv, svij) ∝exp(w⊺ kMLP(hLk) + bk) (6) 4 Initial Results Automatic evaluation for QA-SRL parsing presents multiple challenges. In this section, we introduce automatic metrics that can help us compare models. In Section 6, we will report human evaluation results for our final system. 4.1 Span Detection Metrics We evaluate span detection using a modified notion of precision and recall. We count predicted spans as correct if they match any of the labeled spans in the dataset. Since each predicted span could potentially be a match to multiple questions (due to overlapping annotations) we map each predicted span to one matching question in the way that maximizes measured recall using maximum bipartite matching. We use both exact match and intersection-over-union (IOU) greater than 0.5 as matching criteria. Results Table 4 shows span detection results on the development set. We report results for the span-based models at two threshold values τ: τ = 0.5, and τ = τ ∗maximizing F1. The span-based model significantly improves over the BIO model in both precision and recall, although the difference is less pronounced under IOU matching. 4.2 Question Generation Metrics Like all generation tasks, evaluation metrics for question generation must contend with 2056 Exact Match P R F BIO 69.0 75.9 72.2 Span (τ = 0.5) 81.7 80.9 81.3 Span (τ = τ∗) 80.0 84.7 82.2 IOU ≥0.5 P R F BIO 80.4 86.0 83.1 Span (τ = 0.5) 87.5 84.2 85.8 Span (τ = τ∗) 83.8 93.0 88.1 Table 4: Results for Span Detection on the dense development dataset. Span detection results are given with the cutoff threshold τ at 0.5, and at the value which maximizes F-score. The top chart lists precision, recall and F-score with exact span match, while the bottom reports matches where the intersection over union (IOU) is ≥0.5. EM PM SA Local 44.2 62.0 83.2 Seq. 47.2 62.3 82.9 Table 5: Question Generation results on the dense development set. 
EM - Exact Match accuracy, PM - Partial Match Accuracy, SA - Slot-level accuracy the fact that there are in general multiple possible valid questions for a given predicate-argument pair. For instance, the question “Who did someone blame something on?” may be rephrased as “Who was blamed for something?” However, due to the constrained space of possible questions defined by QA-SRL’s slot format, accuracy-based metrics can still be informative. In particular, we report the rate at which the predicted question exactly matches the gold question, as well as a relaxed match where we only count the question word (WH), subject (SBJ), object (OBJ) and Miscellaneous (Misc) slots (see Table 1). Finally, we report average slot-level accuracy. Results Table 5 shows the results for question generation on the development set. The sequential model’s exact match accuracy is significantly higher, while word-level accuracy is roughly comparable, reflecting the fact that the local model learns the slot-level posteriors. 4.3 Joint results Table 6 shows precision and recall for joint span detection and question generation, using exact P R F Span + Local 37.8 43.7 40.6 Span + Seq. (τ = 0.5) 39.6 45.8 42.4 Table 6: Joint span detection and question generation results on the dense development set, using exact-match for both spans and questions. match for both. This metric is exceedingly hard, but it shows that almost 40% of predictions are exactly correct in both span and question. In Section 6, we use human evaluation to get a more accurate assessment of our model’s accuracy. 5 Data Expansion Since our trained parser can produce full QASRL annotations, its predictions can be validated by the same process as in our original annotation pipeline, allowing us to focus annotation efforts towards filling potential data gaps. By detecting spans at a low probability cutoff, we over-generate QA pairs for already-annotated sentences. Then, we filter out QA pairs whose answers overlap with answer spans in the existing annotations, or whose questions match existing questions. What remains are candidate QA pairs which fill gaps in the original annotation. We pass these questions to the validation step of our crowdsourcing pipeline with n = 3 validators, resulting in new labels. We run this process on the training and development partitions of our dataset. For the development set, we use the trained model described in the previous section. For the training set, we use a relaxed version of jackknifing, training 5 models over 5 different folds. We generate 92,080 questions at a threshold of τ = 0.2. Since in this case many sentences have only one question, we restructure the pay to a 2c base rate with a 2c bonus per question after the first (still paying no less than 2c per question). Data statistics 46,017 (50%) of questions run through the expansion step were considered valid by all three annotators. In total, after filtering, the expansion step increased the number of valid questions in the train and dev partitions by 20%. However, for evaluation, since our recall metric identifies a single question for each answer span (via bipartite matching), we filter out likely question paraphrases by removing questions in the ex2057 Exact Match P R F AUC Original 80.8 86.8 83.7 .906 Expanded 82.9 86.4 84.6 .910 IOU ≥0.5 P R F AUC Original 87.1 93.2 90.1 .946 Expanded 87.9 93.1 90.5 .949 (a) Span Detection results with τ∗. 
EM PM WA Original 50.5 64.4 84.1 Expanded 50.8 64.9 84.1 (b) Question Generation results P R F Original 47.5 46.9 47.2 Expanded 44.3 55.0 49.1 (c) Joint span detection and question generation results with τ = 0.5 Table 7: Results on the expanded development set comparing the full model trained on the original data, and with the expanded data. panded development set whose answer spans have two overlaps with the answer spans of one question in the original annotations. After this filtering, the expanded development set we use for evaluation has 11.5% more questions than the original development set. The total cost including MTurk fees was $8,210.66, for a cost of 8.9c per question, or 17.8c per valid question. While the cost per valid question was comparable to the initial annotation, we gathered many more negative examples (which may serve useful in future work), and this method allowed us to focus on questions that were missed in the first round and improve the exhaustiveness of the annotation (whereas it is not obvious how to make fully crowdsourced annotation more exhaustive at a comparable cost per question). Retrained model We retrained our final model on the training set extended with the new valid questions, yielding modest improvements on both span detection and question generation in the development set (see Table 7). The span detection numbers are higher than on the original dataset, because the expanded development data captures true positives produced by the original model (and the resulting increase in precision can be traded off for recall as well). 6 Final Evaluation We use the crowdsourced validation step to do a final human evaluation of our models. We test 3 parsers: the span-based span detection model paired with each of the local and sequential question generation models trained on the initial dataset, and our final model (span-based span detection and sequential question generation) trained with the expanded data. Methodology On the 5,205 sentence densely annotated subset of dev and test, we generate QASRL labels with all of the models using a span detection threshold of τ = 0.2 and combine the questions with the existing data. We filter out questions that fail the autocomplete grammaticality check (counting them invalid) and pass the data into the validation step, annotating each question to a total of 6 validator judgments. We then compute question and span accuracy as follows: A question is considered correct if 5 out of 6 annotators consider it valid, and a span is considered correct if its generated question is correct and the span is among those selected for the question by validators. We rank all questions and spans by the threshold at which they are generated, which allows us to compute accuracy at different levels of recall. Results Figure 3 shows the results. As expected, the sequence-based question generation models are much more accurate than the local model; this is largely because the local model generated many questions that failed the grammaticality check. Furthermore, training with our expanded data results in more questions and spans generated at the same threshold. If we choose a threshold value which gives a similar number of questions per sentence as were labeled in the original data annotation (2 questions / verb), question and span accuracy are 82.64% and 77.61%, respectively. Table 8 shows the output of our best system on 3 randomly selected sentences from our development set (one from each domain). 
The model was overall highly accurate—only one question and 3 spans are considered incorrect, and each mistake is nearly correct,6 even when the sentence contains a negation. 6The incorrect question “When did someone appear?” would be correct if the Prep and Misc slots were corrected to read “When did someone appear to do something?” 2058 (a) Question accuracy on Dev (b) Question accuracy on Test (c) Span accuracy on Dev (d) Span accuracy on Test Figure 3: Human evaluation accuracy for questions and spans, as each model’s span detection threshold is varied. Questions are considered correct if 5 out of 6 annotators consider it valid. Spans are considered correct if their question was valid, and the span was among those labeled by human annotators for that question. The vertical line indicates a threshold value where the number of questions per sentence matches that of the original labeled data (2 questions / verb). 7 Related Work Resources and formalisms for semantics often require expert annotation and underlying syntax (Palmer et al., 2005; Baker et al., 1998; Banarescu et al., 2013). Some more recent semantic resources require less annotator training, or can be crowdsourced (Abend and Rappoport, 2013; Reisinger et al., 2015; Basile et al., 2012; Michael et al., 2018). In particular, the original QA-SRL (He et al., 2015) dataset is annotated by freelancers, while we developed streamlined crowdsourcing approaches for more scalable annotation. Crowdsourcing has also been used for indirectly annotating syntax (He et al., 2016; Duan et al., 2016), and to complement expert annotation of SRL (Wang et al., 2018). Our crowdsourcing approach draws heavily on that of Michael et al. (2018), with automatic two-stage validation for the collected question-answer pairs. More recently, models have been developed for these newer semantic resources, such as UCCA (Teichert et al., 2017) and Semantic Proto-Roles (White et al., 2017). Our work is the first highquality parser for QA-SRL, which has several unique modeling challenges, such as its highly structured nature and the noise in crowdsourcing. Several recent works have explored neural models for SRL tasks (Collobert and Weston, 2007; FitzGerald et al., 2015; Swayamdipta et al., 2017; Yang and Mitchell, 2017), many of which employ a BIO encoding (Zhou and Xu, 2015; He et al., 2017). Recently, span-based models have proven to be useful for question answering (Lee et al., 2016) and coreference resolution (Lee et al., 2017), and PropBank SRL (He et al., 2018). 2059 Produced What produced something? A much larger super eruption Where did something produce something? in Colorado What did something produce? over 5,000 cubic kilometers of material A much larger super eruption in Colorado produced over 5,000 cubic kilometers of material. appeared Where didn’t someone appear to do something? In the video Who didn’t appear to do something? the perpetrators When did someone appear? never What didn’t someone appear to do? look at the camera to look at the camera look Where didn't someone look at something? In the video Who didn’t look? the perpetrators What didn’t someone look at? the camera In the video, the perpetrators never appeared to look at the camera. met Who met someone? Some of the vegetarians vegetarians Who met? he What did someone meet? members of the Theosophical Society founded What had been founded? members of the Theosophical Society the Theosophical Society When was something founded? in 1875 1875 Why has something been founded? 
to further universal brotherhood devoted What was devoted to something? members of the Theosophical Society What was something devoted to? the study of Buddhist and Hindu literature Some of the vegetarians he met were members of the Theosophical Society, which had been founded in 1875 to further universal brotherhood, and which was devoted to the study of Buddhist and Hindu literature. Table 8: System output on 3 randomly sampled sentences from the development set (1 from each of the 3 domains). Spans were selected with τ = 0.5. Questions and spans with a red background were marked incorrect during human evaluation. 8 Conclusion In this paper, we demonstrated that QA-SRL can be scaled to large datasets, enabling a new methodology for labeling and producing predicate-argument structures at a large scale. We presented a new, scalable approach for crowdsourcing QA-SRL, which allowed us to collect QA-SRL Bank 2.0, a new dataset covering over 250,000 question-answer pairs from over 64,000 sentences, in just 9 days. We demonstrated the utility of this data by training the first parser which is able to produce high-quality QA-SRL structures. Finally, we demonstrated that the validation stage of our crowdsourcing pipeline, in combination with our parser tuned for recall, can be used to add new annotations to the dataset, increasing recall. Acknowledgements The crowdsourcing funds for QA-SRL Bank 2.0 was provided by the Allen Institute for Artificial Intelligence. This research was supported in part by the ARO (W911NF-16-1-0121) the NSF (IIS1252835, IIS-1562364), a gift from Amazon, and an Allen Distinguished Investigator Award. We would like to thank Gabriel Stanovsky and Mark Yatskar for their helpful feedback. References Omri Abend and Ari Rappoport. 2013. Universal conceptual cognitive annotation (UCCA). In ACL 2013. Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The Berkeley Framenet project. In ICCL 1998. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In 7th Linguistic Annotation Workshop and Interoperability with Discourse. Valerio Basile, Johan Bos, Kilian Evang, and Noortje Venhuizen. 2012. Developing a large semantically annotated corpus. In LREC 2012. Claire Bonial, Olga Babko-Malaya, Jinho D Choi, Jena Hwang, and Martha Palmer. 2010. Propbank annotation guidelines. Ronan Collobert and Jason Weston. 2007. Fast semantic extraction using a novel neural network architecture. In ACL 2007. 2060 Manjuan Duan, Ethan Hill, and Michael White. 2016. Generating disambiguating paraphrases for structurally ambiguous sentences. In 10th Linguistic Annotation Workshop. Nicholas FitzGerald, Oscar T¨ackstr¨om, Kuzman Ganchev, and Dipanjan Das. 2015. Semantic role labeling with neural network factors. In EMNLP 2015. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In NIPS 2016. Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. In ACL 2018. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and whats next. In ACL 2017. Luheng He, Mike Lewis, and Luke S. Zettlemoyer. 2015. Question-answer driven semantic role labeling: Using natural language to annotate natural language. In EMNLP 2015. 
Luheng He, Julian Michael, Mike Lewis, and Luke Zettlemoyer. 2016. Human-in-the-loop parsing. In EMNLP 2016. Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Are you smarter than a sixth grader? textbook question answering for multimodal machine comprehension. In CVPR 2017. Kenton Lee, Luheng He, Mike Lewis, and Luke S. Zettlemoyer. 2017. End-to-end neural coreference resolution. In EMNLP 2017. Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, and Jonathan Berant. 2016. Learning recurrent span representations for extractive question answering. arXiv preprint arXiv:1611.01436. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In ACL 2014, pages 55–60. Julian Michael, Gabriel Stanovsky, Luheng He, Ido Dagan, and Luke Zettlemoyer. 2018. Crowdsourcing question-answer meaning representations. In NAACL 2018. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics. Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015. Semantic proto-roles. TACL. Josef Ruppenhofer, Michael Ellsworth, Miriam RL Petruck, Christopher R Johnson, and Jan Scheffczyk. 2016. FrameNet II: Extended theory and practice. Institut f¨ur Deutsche Sprache, Bibliothek. Rupesh K Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Training very deep networks. In NIPS 2015. Gabriel Stanovsky and Ido Dagan. 2016. Creating a large benchmark for open information extraction. In EMNLP 2016. Swabha Swayamdipta, Sam Thomson, Chris Dyer, and Noah A Smith. 2017. Frame-semantic parsing with softmax-margin segmental rnns and a syntactic scaffold. arXiv preprint arXiv:1706.09528. Adam R Teichert, Adam Poliak, Benjamin Van Durme, and Matthew R Gormley. 2017. Semantic proto-role labeling. In AAAI 2017, pages 4459–4466. Chenguang Wang, Alan Akbik, Laura Chiticariu, Yunyao Li, Fei Xia, and Anbang Xu. 2018. Crowd-inthe-loop: A hybrid approach for annotating semantic roles. In EMNLP 2017. Aaron Steven White, Kyle Rawlins, and Benjamin Van Durme. 2017. The semantic proto-role linking model. In ACL 2017. Bishan Yang and Tom Mitchell. 2017. A joint sequential and relational model for frame-semantic parsing. In EMNLP 2017, pages 1247–1256. Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In ACL 2015.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2061–2071 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2061 Syntax for Semantic Role Labeling, To Be, Or Not To Be Shexia He1,2,∗, Zuchao Li1,2,∗, Hai Zhao1,2,†, Hongxiao Bai1,2, Gongshen Liu3 1Department of Computer Science and Engineering, Shanghai Jiao Tong University 2Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, Shanghai, China 3School of Cyber Security, Shanghai Jiao Tong University, China {heshexia, charlee}@sjtu.edu.cn, [email protected], {baippa, lgshen}@sjtu.edu.cn Abstract Semantic role labeling (SRL) is dedicated to recognizing the predicate-argument structure of a sentence. Previous studies have shown syntactic information has a remarkable contribution to SRL performance. However, such perception was challenged by a few recent neural SRL models which give impressive performance without a syntactic backbone. This paper intends to quantify the importance of syntactic information to dependency SRL in deep learning framework. We propose an enhanced argument labeling model companying with an extended korder argument pruning algorithm for effectively exploiting syntactic information. Our model achieves state-of-the-art results on the CoNLL-2008, 2009 benchmarks for both English and Chinese, showing the quantitative significance of syntax to neural SRL together with a thorough empirical survey over existing models. 1 Introduction Semantic role labeling (SRL), namely semantic parsing, is a shallow semantic parsing task, which aims to recognize the predicate-argument structure of each predicate in a sentence, such as who did what to whom, where and when, etc. Specifically, we seek to identify arguments and label their semantic roles given a predicate. SRL is an impor∗These authors made equal contribution.† Corresponding author. This paper was partially supported by National Key Research and Development Program of China (No. 2017YFB0304100), National Natural Science Foundation of China (No. 61672343 and No. 61733011), Key Project of National Society Science Foundation of China (No. 15ZDA041), The Art and Science Interdisciplinary Funds of Shanghai Jiao Tong University (No. 14JCRZ04). tant method to obtain semantic information beneficial to a wide range of natural language processing (NLP) tasks, including machine translation (Shi et al., 2016), question answering (Berant et al., 2013; Yih et al., 2016) and discourse relation sense classification (Mihaylov and Frank, 2016). There are two formulizations for semantic predicate-argument structures, one is based on constituents (i.e., phrase or span), the other is based on dependencies. The latter proposed by the CoNLL-2008 shared task (Surdeanu et al., 2008) is also called semantic dependency parsing, which annotates the heads of arguments rather than phrasal arguments. Generally, SRL is decomposed into multi-step classification subtasks in pipeline systems, consisting of predicate identification and disambiguation, argument identification and classification. In prior work of SRL, considerable attention has been paid to feature engineering that struggles to capture sufficient discriminative information, while neural network models are capable of extracting features automatically. 
In particular, syntactic information, including syntactic tree feature, has been show extremely beneficial to SRL since a larger scale of empirical verification of Punyakanok et al. (2008). However, all the work had to take the risk of erroneous syntactic input, leading to an unsatisfactory performance. To alleviate the above issues, Marcheggiani et al. (2017) propose a simple but effective model for dependency SRL without syntactic input. It seems that neural SRL does not have to rely on syntactic features, contradicting with the belief that syntax is a necessary prerequisite for SRL as early as Gildea and Palmer (2002). This dramatic contradiction motivates us to make a thorough exploration on syntactic contribution to SRL. This paper will focus on semantic dependency parsing and formulate SRL as one or two se2062 quence tagging tasks with predicate-specific encoding. With the help of the proposed k-order argument pruning algorithm over syntactic tree, our model obtains state-of-the-art scores on the CoNLL benchmarks for both English and Chinese. In order to quantitatively evaluate the contribution of syntax to SRL, we adopt the ratio between labeled F1 score for semantic dependencies (Sem-F1) and the labeled attachment score (LAS) for syntactic dependencies introduced by CoNLL2008 Shared Task1 as evaluation metric. Considering that various syntactic parsers contribute different syntactic inputs with various range of quality levels, the ratio provides a fairer comparison between syntactically-driven SRL systems, which will be surveyed by our empirical study. 2 Model To fully disclose the predicate-argument structure, typical SRL systems have to step by step perform four subtasks. Since the predicates in CoNLL2009 (Hajiˇc et al., 2009) corpus have been preidentified, we need to tackle three other subtasks, which are formulized into two-step pipeline in this work, predicate disambiguation and argument labeling. Namely, we do the work of argument identification and classification in one model. Argument structure for each known predicate will be disclosed by our argument labeler over a sequence including possible arguments (candidates). There are two ways to determine the sequence, one is to simply input the entire sentence as a syntax-agnostic SRL system does, the other is to select words according to syntactic parse tree around the predicate as most previous SRL systems did. The latter strategy usually works through a syntactic tree based argument pruning algorithm. We will use the proposed k-order argument pruning algorithm (Section 2.1) to get a sequence w = (w1, . . . , wn) for each predicate. Then, we represent each word wi ∈w as xi (Section 2.2). Eventually, we obtain contextual features with sequence encoder (Section 2.3). The overall role labeling model is depicted in Figure 1. 2.1 Argument Pruning As pointed out by Punyakanok et al. (2008), syntactic information is most relevant in identifying 1CoNLL-2008 is an English-only task, while CoNLL2009 extends to a multilingual one. Their main difference is that predicates have been beforehand indicated for the latter. BiLSTM CNN + BiLSTM Word Representation Hidden Layer Softmax x ie x re x pe x ce x le x pos x de Highway ... ... ... ... ... ... Figure 1: The Argument Labeling Model the arguments, and the most crucial contribution of full parsing is in the pruning stage. In this paper, we propose a k-order argument pruning algorithm inspired by Zhao et al. (2009b). 
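As a preview of the procedure formalized below (Algorithm 1), the pruning can be rendered as the following minimal Python sketch. The tree representation (a head map from each token to its syntactic head) and all names are our own assumptions rather than the authors' released code; the sketch simply collects, for the predicate and each of its ancestors up to the root, all descendants within distance k.

```python
from collections import defaultdict
from typing import Dict, Optional, Set

def k_order_prune(predicate: int, head: Dict[int, Optional[int]], root: int, k: int) -> Set[int]:
    # Assumes head[root] is None and the predicate is not the artificial root.
    assert predicate != root

    # Build child lists from the head map.
    children = defaultdict(list)
    for node, h in head.items():
        if h is not None:
            children[h].append(node)

    def descendants_within(node: int, order: int) -> Set[int]:
        # All descendants nd of `node` with tree distance D(node, nd) <= order.
        found, frontier = set(), [node]
        for _ in range(order):
            frontier = [c for n in frontier for c in children[n]]
            found.update(frontier)
        return found

    candidates, current = set(), predicate
    while True:
        candidates |= descendants_within(current, k)  # k-order traversal from the current node
        current = head[current]                       # reset the current node to its syntactic head
        if current == root:                           # collect the root and stop
            candidates.add(root)
            return candidates
```

As noted later in Section 3.1, once k exceeds 19 on the English training set this candidate set covers every token, so the pruning degrades into the syntax-agnostic setting.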
First of all, for node n and its descendant nd in a syntactic dependency tree, we define the order to be the distance between the two nodes, denoted as D(n, nd). Then we define k-order descendants of given node satisfying D(n, nd) = k, and k-order traversal that visits each node from the given node to its descendant nodes within k-th order. Note that the definition of k-order traversal is somewhat different from tree traversal in terminology. A brief description of the proposed k-order pruning algorithm is given as follow. Initially, we set a given predicate as the current node in a syntactic dependency tree. Then, collect all its argument candidates by the strategy of k-order traversal. Afterwards, reset the current node to its syntactic head and repeat the previous step till the root of the tree. Finally, collect the root and stop. The k-order argument algorithm is presented in Algorithm 1 in detail. An example of a syntactic dependency tree for sentence She began to trade the art for money is shown in Figure 2. The main reasons for applying the extended korder argument pruning algorithm are two-fold. 2063 Algorithm 1 k-order argument pruning algorithm Input: A predicate p, the root node r given a syntactic dependency tree T, the order k Output: The set of argument candidates S 1: initialization set p as current node c, c = p 2: for each descendant ni of c in T do 3: if D(c, ni) ≤k and ni /∈S then 4: S = S + ni 5: end if 6: end for 7: find the syntactic head ch of c, and let c = ch 8: if c = r then 9: S = S + r 10: else 11: goto step 2 12: end if 13: return argument candidates set S First, previous standard pruning algorithm may hurt the argument coverage too much, even though indeed arguments usually tend to surround their predicate in a close distance. As a sequence tagging model has been applied, it can effectively handle the imbalanced distribution between arguments and non-arguments, which is hardly tackled by early argument classification models that commonly adopt the standard pruning algorithm. Second, the extended pruning algorithm provides a better trade-off between computational cost and performance by carefully tuning k. 2.2 Word Representation We produce a predicate-specific word representation xi for each word wi, where i stands for the word position in an input sequence, following Marcheggiani et al. (2017). However, we differ by (1) leveraging a predicate-specific indicator embedding, (2) using deeper refined representation, including character and dependency relation embeddings, and (3) applying recent advances in RNNs, such as highway connections (Srivastava et al., 2015). In this work, word representation xi is the concatenation of four types of features: predicatespecific feature, character-level, word-level and linguistic features. Unlike previous work, we leverage a predicate-specific indicator embedding xie i rather than directly using a binary flag either 0 or 1. At character level, we exploit convolutional neural network (CNN) with bidirectional LSTM (BiLSTM) to learn character embedding ROOT the She began to trade art for money SBJ OPRD IM OBJ NMOD NMOD PMOD 1st-order 2nd-order 3rd-order Figure 2: An example of first-order, second-order and third-order argument pruning. Shadow part indicates the given predicate. xce i . As shown in Figure 1, the representation calculated by the CNN is fed as input to BiLSTM. At word level, we use a randomly initialized word embedding xre i and a pre-trained word embedding xpe i . 
For linguistic features, we employ a randomly initialized lemma embedding xle i and a randomly initialized POS tag embedding xpos i . In order to incorporate more syntactic information, we adopt an additional feature, the dependency relation to syntactic head. Likewise, it is a randomly initialized embedding xde i . The resulting word representation is concatenated as xi = [xie i , xce i , xre i , xpe i , xle i , xpos i , xde i ]. 2.3 Sequence Encoder As Long short-term memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) have shown significant representational effectiveness to NLP tasks, we thus use BiLSTM as the sentence encorder. Given an input sequence x = (x1, . . . , xn), BiLSTM processes the sequence in both forward and backward direction to obtain two separated hidden states, −→h i which handles data from x1 to xi and ←−h i which tackles data from xn to xi for each word representation. Finally, we get a contextual representation hi = [−→h i, ←−h i] by concatenating the states of BiLSTM networks. To get the final predicted semantic roles, we exploit a multi-layer perceptron (MLP) with highway connections on the top of BiLSTM networks, which takes as input the hidden representation hi 2064 Hyperparameter values die (indicator embedding) 16 dpe (pre-trained embedding) 100 dce (character embedding) 300 dre (word embedding) 100 dle (lemma embedding) 100 dpos (POS tag embedding) 32 dde (dependency label embedding) 64 LSTM hidden sizes 512 BiLSTM layers 4 Hidden layers 10 Learning rate 0.001 Word dropout 0.1 Table 1: Hyperparameter values. of all time steps. The MLP network consists of 10 layers with highway connections and we employ ReLU activations for the hidden layers. Finally, we use a softmax layer over the outputs to maximize the likelihood of labels. 2.4 Predicate Disambiguation Although predicates have been identified given a sentence, predicate disambiguation is an indispensable task, which aims to determine the predicate-argument structure for an identified predicate in a particular context. Here, we also use the identical model (BiLSTM composed with MLP) for predicate disambiguation, in which the only difference is that we remove the syntactic dependency relation feature in corresponding word representation (Section 2.2). Exactly, given a predicate p, the resulting word representation is pi = [pie i , pce i , pre i , ppe i , ple i , ppos i ]. 3 Experiments Our model2 is evaluated on the CoNLL-2009 shared task both for English and Chinese datasets, following the standard training, development and test splits. The hyperparameters in our model were selected based on the development set, and are summarized in Table 1. Note that the parameters of predicate model are the same as these in argument model. All real vectors are randomly initialized, and the pre-trained word embeddings for English are GloVe vectors (Pennington et al., 2014). For Chinese, we exploit Wikipedia documents to train Word2Vec embeddings (Mikolov 2The code is available at https://github.com/ bcmi220/srl_syn_pruning. 0 5 10 15 20 k 0 20 40 60 80 100 Percentage (%) Coverage Reduction Figure 3: Changing curves of coverage and reduction with different k value on English training set. The coverage rate is the proportion of true arguments in pruning output, while the reduction is the one of pruned argument candidates in total tokens. et al., 2013). During training procedures, we use the categorical cross-entropy as objective, with Adam optimizer (Kingma and Ba, 2015). 
We train models for a maximum of 20 epochs and obtain the nearly best model based on development results. For argument labeling, we preprocess corpus with k-order argument pruning algorithm. In addition, we use four CNN layers with singlelayer BiLSTM to induce character representations derived from sentences. For English3, to further enhance the representation, we adopt CNNBiLSTM character embedding structure from AllenNLP toolkit (Peters et al., 2018). 3.1 Preprocessing During the pruning of argument candidates, we use the officially predicted syntactic parses provided by CoNLL-2009 shared-task organizers on both English and Chinese. Figure 3 shows changing curves of coverage and reduction following k on the English train set. According to our statistics, the number of non-arguments is ten times more than that of arguments, where the data distribution is fairly unbalanced. However, a proper pruning strategy could alleviate this problem. Accordingly, the first-order pruning reduces more than 50% candidates at the cost of missing 5.5% true ones on average, and the second-order prunes about 40% candidates with nearly 2.0% loss. The coverage of third-order has achieved 99% and it reduces approximately 1/3 corpus size. It is worth noting that as k is larger than 19, 3For Chinese, we do not use character embedding. 2065 System (syntax-aware) P R F1 Single model Zhao et al. (2009a) − − 86.2 Zhao et al. (2009c) − − 85.4 Bj¨orkelund et al. (2010) 87.1 84.5 85.8 Lei et al. (2015) − − 86.6 FitzGerald et al. (2015) − − 86.7 Roth and Lapata (2016) 88.1 85.3 86.7 Marcheggiani and Titov (2017) 89.1 86.8 88.0 Ours 89.7 89.3 89.5 Ensemble model FitzGerald et al. (2015) − − 87.7 Roth and Lapata (2016) 90.3 85.7 87.9 Marcheggiani and Titov (2017) 90.5 87.7 89.1 System (syntax-agnostic) P R F1 Marcheggiani et al. (2017) 88.7 86.8 87.7 Ours 89.5 87.9 88.7 Table 2: Results on the English test set (WSJ). there will come full coverage on all argument candidates for English training set, which let our high order pruning algorithm degrade into a syntaxagnostic setting. In this work, we use the tenthorder pruning for pursuing the best performance. 3.2 Results Our system performance is measured with the official script from CoNLL-2009 benchmarks, combining the output of our predicate disambiguation with our semantic role labeling. Our predicate disambiguation model achieves the accuracy of 95.01% and 95.58%4 on development and test sets, respectively. We compare our model performance with the state-of-the-art models for dependency SRL.5 Noteworthily, our model is local and single without reranking, which neither includes global inference nor combines multiple models. The experimental results on the English in-domain (WSJ) and out-of-domain (Brown) test sets are shown in Tables 2 and 3, respectively. For English, our syntax-aware model outperforms previously published best single model, scoring 89.5% F1 with 1.5% absolute improvement on the in-domain (WSJ) test data. Compared 4Note that we give a slightly better predicate model than Roth and Lapata (2016), with 94.77% and 95.47% accuracy on development and test sets, respectively. 5Here, we do not compare against span-based SRL models, which annotate roles for entire argument spans instead of semantic dependencies. System (syntax-aware) P R F1 Single model Zhao et al. (2009a) − − 74.6 Zhao et al. (2009c) − − 73.3 Bj¨orkelund et al. (2010) 75.7 72.2 73.9 Lei et al. (2015) − − 75.6 FitzGerald et al. 
(2015) − − 75.2 Roth and Lapata (2016) 76.9 73.8 75.3 Marcheggiani and Titov (2017) 78.5 75.9 77.2 Ours 81.9 76.9 79.3 Ensemble model FitzGerald et al. (2015) − − 75.5 Roth and Lapata (2016) 79.7 73.6 76.5 Marcheggiani and Titov (2017) 80.8 77.1 78.9 System (syntax-agnostic) P R F1 Marcheggiani et al. (2017) 79.4 76.2 77.7 Ours 81.7 76.1 78.8 Table 3: Results on English out-of-domain test set (Brown). System (syntax-aware) P R F1 Zhao et al. (2009a) 80.4 75.2 77.7 Bj¨orkelund et al. (2009) 82.4 75.1 78.6 Roth and Lapata (2016) 83.2 75.9 79.4 Marcheggiani and Titov (2017) 84.6 80.4 82.5 Ours 84.2 81.5 82.8 System (syntax-agnostic) P R F1 Marcheggiani et al. (2017) 83.4 79.1 81.2 Ours 84.5 79.3 81.8 Table 4: Results on the Chinese test set. with ensemble models, our single model even provides better performance (+0.4% F1) than the system (Marcheggiani and Titov, 2017), and significantly surpasses all the rest models. In the syntaxagnostic setting (without pruning and dependency relation embedding), we also reach the new stateof-the-art, achieving a performance gain of 1% F1. On the out-of-domain (Brown) test set, we achieve the new best results of 79.3% (syntaxaware) and 78.8% (syntax-agnostic) in F1 scores. Moreover, our syntax-aware model performs better than the syntax-agnostic one. Table 4 presents the results on Chinese test set. Even though we use the same parameters as for English, our model also outperforms the best reported results by 0.3% (syntax-aware) and 0.6% (syntax-agnostic) in F1 scores. 2066 System(without predicate sense) P R F1 1st-order 84.4 82.6 83.5 2nd-order 84.8 83.0 83.9 3rd-order 85.1 83.3 84.2 Marcheggiani and Titov (2017) 85.2 81.6 83.3 Table 5: SRL results without predicate sense. Our system P R F1 BiLSTM 86.5 85.1 85.8 basic model 86.3 85.7 86.0 + indicator embedding 86.8 85.8 86.3 + character embedding 87.2 86.6 86.9 + both 87.7 87.0 87.3 BiLSTM + both 87.3 86.7 87.0 Table 6: Ablation on development set. The “+” denotes a specific version over the basic model. 3.3 Analysis To evaluate the contributions of key factors in our method, a series of ablation studies are performed on the English development set. In order to demonstrate the effectiveness of our k-order pruning algorithm, we report the SRL performance excluding predicate senses in evaluation, eliminating the performance gain from predicate disambiguation. Table 5 shows the results from our syntax-aware model with lower order argument pruning. Compared to the best previous model, our system still yields an increment in recall by more than 1%, leading to improvements in F1 score. It demonstrates that refining syntactic parser tree based candidate pruning does help in argument recognition. Table 6 presents the performance of our syntaxagnostic SRL system with a basic configuration, which removes components, including indicator and character embeddings. Note that the first row is the results of BiLSTM (removing MLP from basic model), whose encoding is the same as Marcheggiani et al. (2017). Experiments show that both enhanced representations improve over our basic model, and our adopted labeling model is superior to the simple BiLSTM. Figure 4 shows F1 scores in different k-order pruning together with our syntax-agnostic model. 
It also indicates that the least first-order pruning fails to give satisfactory performance, the best performing setting coming from a moderate setting of k = 10, and the largest k shows that our argu0 5 15 20 10 k 85 86 87 88 89 F1 (%) syntax-aware syntax-agnostic Figure 4: F1 scores by k-order pruning and the syntax-agnostic result on English development set. ment pruning falls back to syntax-agnostic type. Meanwhile, from the best k setting to the lower order pruning, we receive a much faster performance drop, compared to the higher order pruning until the complete syntax-agnostic case. The proposed k-order pruning algorithm always works even it reaches the syntax-agnostic setting, which empirically explains why the current syntax-aware and syntax-agnostic SRL models hold little performance difference, as maximum k-order pruning actually removes few words just like syntaxagnostic model. 3.4 End-to-end SRL In this work, we consider additional model that integrates predicate disambiguation and argument labeling into one sequence labeling model. In order to implement an end-to-end model, we introduce a virtual root (VR) for predicate disambiguation similar to Zhao et al. (2013) who handled the entire SRL task as word pair classification. Concretely, we add a predicate sense feature to the input sequence by concatenating a VR. The word representation of VR is randomly initialized during training. In Figure 5, we give an example sequence with the labels for the given sentence. We also report results of our end-to-end model on CoNLL-2009 test set with syntax-aware and syntax-agnostic settings. As shown in Table 7, our end-to-end model yields slightly weaker performance compared with our pipeline. A reasonable account for performance degradation is that the training data has completely different genre distributions over predicate senses and argument roles, which may be somewhat confusing for integrative model to make classification decisions. 2067 A2 A0 02 <VR> Someone makes you happy NONE A1 Figure 5: An example sequence with labels of endto-end model (makes is the given predicate). Our system P R F1 syntax-aware (end-to-end) 89.3 88.7 89.0 syntax-aware (pipeline) 89.7 89.3 89.5 syntax-agnostic (end-to-end) 88.9 87.9 88.4 syntax-agnostic (pipeline) 89.5 87.9 88.7 Table 7: Comparison of results on CoNLL-2009 data between our end-to-end and pipeline models. 3.5 CoNLL-2008 SRL Setting For a full SRL task, the predicate identification subtask is also indispensable, which has been included in CoNLL-2008 shared task. We thus evaluate our model in terms of data and setting of the CoNLL-2008 benchmark (WSJ). To identify predicates, we train the BiLSTMMLP sequence labeling model with same parameters in Section 2.4 to tackle the predicate identification and disambiguation subtasks in one shot, and the only difference is that we remove the predicate-specific indicator feature. The F1 score of our predicate labeling model is 90.53% on indomain (WSJ) data. Compared with the best reported results, we observe absolute improvements in semantic F1 of 0.8% (in Table 8). Note that as predicate identification is introduced, our same model shows about 6% performance loss for either syntax-agnostic or syntax-aware case, which indicates that predicate identification should be carefully handled, as it is very needed in a complete practical SRL system. 4 Syntactic Contribution Syntactic information plays an informative role in semantic role labeling. 
However, few studies were done to quantitatively evaluate the syntactic contribution to SRL. Furthermore, we observe that most of the above compared neural SRL systems took the syntactic parser of (Bj¨orkelund et al., 2010) as syntactic inputs instead of the one from CoNLL-2009 shared task, which adopted a much weaker syntactic parser. Especially (Marcheggiani and Titov, 2017), adopted an external syntactic System LAS Sem-F1 Johansson and Nugues (2008) 90.13 81.75 Zhao and Kit (2008) 87.52 77.67 Zhao et al. (2009b) 88.39 82.1 (80.53) 89.28 82.5 (80.94) Zhao et al. (2013) 88.39 82.5 (80.91) 89.28 82.4 (80.88) Ours (syntax-agnostic) − 82.9 Ours (syntax-aware) 86.0 83.3 Table 8: Results on the CoNLL-2008 in-domain (WSJ) test set. The results in parenthesis are on WSJ + Brown test set. parser with even higher parsing accuracy. Contrarily, our SRL model is based on the automatically predicted parse with moderate performance provided by CoNLL-2009 shared task, but outperforms their models. This section thus attempts to explore how much syntax contributes to dependency-based SRL in deep learning framework and how to effectively evaluate relative performance of syntax-based SRL. To this end, we conduct experiments for empirical analysis with different syntactic inputs. Syntactic Input In order to obtain different syntactic inputs, we design a faulty syntactic tree generator (refer to STG hereafter), which is able to produce random errors in the output parse tree like a true parser does. To simplify implementation, we construct a new syntactic tree based on the gold standard parse tree. Given an input error probability distribution estimated from a true parser output, our algorithm presented in Algorithm 2 stochastically modifies the syntactic heads of nodes on the premise of a valid tree. Evaluation Measure For SRL task, the primary evaluation measure is the semantic labeled F1 score. However, the score is influenced by the quality of syntactic input to some extent, leading to unfaithfully reflecting the competence of syntax-based SRL system. Namely, this is not the outcome of a true and fair quantitative comparison for these types of SRL models. To normalize the semantic score relative to syntactic parse, we take into account additional evaluation measure to estimate the actual overall performance of SRL. Here, we use the ratio between labeled F1 score for semantic dependencies (Sem-F1) and the labeled attachment score (LAS) for syntactic dependencies 2068 System LAS (%) P (%) R (%) Sem-F1 (%) Sem-F1/LAS (%) Zhao et al. (2009c) [SRL-only] 86.0 − − 85.4 99.3 Zhao et al. (2009a) [Joint] 89.2 − − 86.2 96.6 Bj¨orkelund et al. (2010) 89.8 87.1 84.5 85.8 95.6 Lei et al. (2015) 90.4 − − 86.6 95.8 Roth and Lapata (2016) 89.8 88.1 85.3 86.7 96.5 Marcheggiani and Titov (2017) 90.3∗ 89.1 86.8 88.0 97.5 Ours + CoNLL-2009 predicted 86.0 89.7 89.3 89.5 104.0 Ours + Auto syntax 90.0 90.5 89.3 89.9 99.9 Ours + Gold syntax 100 91.0 89.7 90.3 90.3 Table 9: Results on English test set, in terms of labeled attachment score for syntactic dependencies (LAS), semantic precision (P), semantic recall (R), semantic labeled F1 score (Sem-F1), the ratio SemF1/LAS. A superscript * indicates LAS results from our personal communication with the authors. 
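To make the normalization concrete, the ratio in the last column of Table 9 is simply Sem-F1 divided by the LAS of the syntactic input. A minimal sketch (ours, not part of the paper) that reproduces two of the table's entries:

```python
def sem_f1_over_las(sem_f1: float, las: float) -> float:
    # Normalize semantic F1 by the quality of the syntactic input (reported in %).
    return 100.0 * sem_f1 / las

# Values copied from Table 9; small differences from the reported ratios
# come from rounding in the table.
print(round(sem_f1_over_las(88.0, 90.3), 1))  # Marcheggiani and Titov (2017): 97.5
print(round(sem_f1_over_las(89.5, 86.0), 1))  # Ours + CoNLL-2009 predicted: 104.1 (reported as 104.0)
```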
Algorithm 2 Faulty Syntactic Tree Generator Input: A gold standard syntactic tree GT, the specific error probability p Output: The new generative syntactic tree NT 1: N denotes the number of nodes in GT 2: for each node n ∈GT do 3: r = random(0, 1), a random number 4: if r < p then 5: h = random(0, N), a random integer 6: find the syntactic head nh of n in GT 7: modify nh = h, and get a new tree NT 8: if NT is a valid tree then 9: break 10: else 11: goto step 5 12: end if 13: end if 14: end for 15: return the new generative tree NT proposed by Surdeanu et al. (2008) as evaluation metric.6 The benefits of this measure are twofold: quantitatively evaluating syntactic contribution to SRL and impartially estimating the true performance of SRL, independent of the performance of the input syntactic parser. Table 9 reports the performance of existing models7 in term of Sem-F1/LAS ratio on CoNLL2009 English test set. Interestingly, even though our system has significantly lower scores than others by 3.8% LAS in syntactic components, we 6The idea of ratio score in Surdeanu et al. (2008) actually was from author of this paper, Hai Zhao, which has been indicated in the acknowledgement part of Surdeanu et al. (2008). 7Note that several SRL systems without providing syntactic information are not listed in the table. 85 90 95 100 LAS (%) 84 86 88 90 92 Sem-F1 (%) 1st-order SRL 10th-order SRL GCNs Figure 6: The Sem-F1 scores of our models with different quality of syntactic inputs vs. GCNs (Marcheggiani and Titov, 2017) on test set. obtain the highest results both on Sem-F1 and the Sem-F1/LAS ratio, respectively. These results show that our SRL component is relatively much stronger. Moreover, the ratio comparison in Table 9 also shows that since the CoNLL-2009 shared task, most SRL works actually benefit from the enhanced syntactic component rather than the improved SRL component itself. All post-CoNLL SRL systems, either traditional or neural types, did not exceed the top systems of CoNLL-2009 shared task, (Zhao et al., 2009c) (SRL-only track using the provided predicated syntax) and (Zhao et al., 2009a) (Joint track using self-developed parser). We believe that this work for the first time reports both higher Sem-F1 and higher Sem-F1/LAS ratio since CoNLL-2009 shared task. We also perform our first and tenth order pruning models with different erroneous syntactic inputs generated from STG and evaluate their per2069 formance using the Sem-F1/LAS ratio. Figure 6 shows Sem-F1 scores at different quality of syntactic parse inputs on the English test set whose LAS varies from 85% to 100%. Compared to previous state-of-the-arts (Marcheggiani and Titov, 2017). Our tenth-order pruning model gives quite stable SRL performance no matter the syntactic input quality varies in a broad range, while our firstorder pruning model yields overall lower results (1-5% F1 drop), owing to missing too many true arguments. These results show that high-quality syntactic parses may indeed enhance dependency SRL. Furthermore, it indicates that our model with an accurate enough syntactic input as Marcheggiani and Titov (2017), namely, 90% LAS, will give a Sem-F1 exceeding 90% for the first time in the research timeline of semantic role labeling. 5 Related Work Semantic role labeling was pioneered by Gildea and Jurafsky (2002). Most traditional SRL models rely heavily on feature templates (Pradhan et al., 2005; Zhao et al., 2009b; Bj¨orkelund et al., 2009). Among them, Pradhan et al. 
(2005) combined features derived from different syntactic parses based on SVM classifier, while Zhao et al. (2009b) presented an integrative approach for dependency SRL by greedy feature selection algorithm. Later, Collobert et al. (2011) proposed a convolutional neural network model of inducing word embeddings substituting for hand-crafted features, which was a breakthrough for SRL task. With the impressive success of deep neural networks in various NLP tasks (Zhang et al., 2016; Qin et al., 2017; Cai et al., 2017), a series of neural SRL systems have been proposed. Foland and Martin (2015) presented a dependency semantic role labeler using convolutional and time-domain neural networks, while FitzGerald et al. (2015) exploited neural network to jointly embed arguments and semantic roles, akin to the work (Lei et al., 2015), which induced a compact feature representation applying tensor-based approach. Recently, researchers consider multiple ways to effectively integrate syntax into SRL learning. Roth and Lapata (2016) introduced dependency path embedding to model syntactic information and exhibited a notable success. Marcheggiani and Titov (2017) leveraged the graph convolutional network to incorporate syntax into neural models. Differently, Marcheggiani et al. (2017) proposed a syntax-agnostic model using effective word representation for dependency SRL, which for the first time achieves comparable performance as stateof-the-art syntax-aware SRL models. However, most neural SRL works seldom pay much attention to the impact of input syntactic parse over the resulting SRL performance. This work is thus more than proposing a high performance SRL model through reviewing the highlights of previous models, and presenting an effective syntactic tree based argument pruning. Our work is also closely related to (Punyakanok et al., 2008; He et al., 2017). Under the traditional methods, Punyakanok et al. (2008) investigated the significance of syntax to SRL system and shown syntactic information most crucial in the pruning stage. He et al. (2017) presented extensive error analysis with deep learning model for span SRL, including discussion of how constituent syntactic parser could be used to improve SRL performance. 6 Conclusion and Future Work This paper presents a simple and effective neural model for dependency-based SRL, incorporating syntactic information with the proposed extended k-order pruning algorithm. With a large enough setting of k, our pruning algorithm will result in a syntax-agnostic setting for the argument labeling model, which smoothly unifies syntax-aware and syntax-agnostic SRL in a consistent way. Experimental results show that with the help of deep enhanced representation, our model outperforms the previous state-of-the-art models in both syntaxaware and syntax-agnostic situations. In addition, we consider the Sem-F1/LAS ratio as a mean of evaluating syntactic contribution to SRL, and true performance of SRL independent of the quality of syntactic parser. Though we again confirm the importance of syntax to SRL with empirical experiments, we are aware that since (Pradhan et al., 2005), the gap between syntax-aware and syntax-agnostic SRL has been greatly reduced, from as high as 10% to only 1-2% performance loss in this work. 
However, maybe we will never reach a satisfying conclusion, as whenever one proposes a syntax-agnostic SRL system which can outperform all syntax-aware ones at then, always there comes argument that you have never fully explored creative new method to effectively exploit the syntax input. 2070 References Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP). Seattle, Washington, USA, pages 1533–1544. Anders Bj¨orkelund, Bohnet Bernd, Love Hafdell, and Pierre Nugues. 2010. A high-performance syntactic and semantic dependency parser. In Proceedings of the 23rd International Conference on Computational Linguistics (CoLING 2010). Beijing, China, pages 33–36. Anders Bj¨orkelund, Love Hafdell, and Pierre Nugues. 2009. Multilingual semantic role labeling. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task. Boulder, Colorado, pages 43–48. Deng Cai, Hai Zhao, Zhisong Zhang, Yuan Xin, Yongjian Wu, and Feiyue Huang. 2017. Fast and accurate neural word segmentation for Chinese. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL). Vancouver, Canada, pages 608–615. Ronan Collobert, Jason Weston, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(1):2493–2537. Nicholas FitzGerald, Oscar Tckstrm, Kuzman Ganchev, and Dipanjan Das. 2015. Semantic role labeling with neural network factors. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 960–970. William Foland and James Martin. 2015. Dependencybased semantic role labeling using convolutional neural networks. In Joint Conference on Lexical and Computational Semantics. pages 279–288. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational linguistics 28(3):245–288. Daniel Gildea and Martha Palmer. 2002. The necessity of parsing for predicate argument recognition. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics (ACL). Philadelphia, Pennsylvania, USA, pages 239–246. Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStˇep´anek, Pavel Straˇn´ak, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task. Boulder, Colorado, pages 1–18. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what’s next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL). Vancouver, Canada, pages 473–483. Sepp Hochreiter and Jrgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780. Richard Johansson and Pierre Nugues. 2008. Dependency-based syntactic-semantic analysis with propbank and nombank. In Proceedings of the Twelfth Conference on Computational Natural Language Learning (CoNLL). pages 183–187. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR). 
Tao Lei, Yuan Zhang, Llu´ıs M`arquez, Alessandro Moschitti, and Regina Barzilay. 2015. High-order lowrank tensors for semantic role labeling. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL: HLT). pages 1150–1160. Diego Marcheggiani, Anton Frolov, and Ivan Titov. 2017. A simple and accurate syntax-agnostic neural model for dependency-based semantic role labeling. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017). Vancouver, Canada, pages 411–420. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP). Copenhagen, Denmark, pages 1506–1515. Todor Mihaylov and Anette Frank. 2016. Discourse relation sense classification using cross-argument semantic similarity based on word embeddings. In Proceedings of the Twentieth Conference on Computational Natural Language Learning - Shared Task (CoNLL). Berlin, Germany, pages 100–107. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems (NIPS). pages 3111–3119. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar, pages 1532– 1543. 2071 Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL: HLT). New Orleans, Louisiana. Sameer Pradhan, Wayne Ward, Kadri Hacioglu, James Martin, and Daniel Jurafsky. 2005. Semantic role labeling using different syntactic views. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL). Ann Arbor, Michigan, pages 581–588. Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics 34(2):257–287. Lianhui Qin, Zhisong Zhang, Hai Zhao, Zhiting Hu, and Eric Xing. 2017. Adversarial connectiveexploiting networks for implicit discourse relation classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL). Vancouver, Canada, pages 1006– 1017. Michael Roth and Mirella Lapata. 2016. Neural semantic role labeling with dependency path embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Berlin, Germany, pages 1192–1202. Chen Shi, Shujie Liu, Shuo Ren, Shi Feng, Mu Li, Ming Zhou, Xu Sun, and Houfeng Wang. 2016. Knowledge-based semantic embedding for machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Berlin, Germany, pages 2245–2254. Rupesh K Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Training very deep networks. In Advances in neural information processing systems. pages 2377–2385. Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu´ıs M`arquez, and Joakim Nivre. 2008. 
The conll 2008 shared task on joint parsing of syntactic and semantic dependencies. In Proceedings of the Twelfth Conference on Computational Natural Language Learning - Shared Task (CoNLL). Manchester, England, pages 159–177. Wen-tau Yih, Matthew Richardson, Chris Meek, MingWei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Berlin, Germany, pages 201–206. Zhisong Zhang, Hai Zhao, and Lianhui Qin. 2016. Probabilistic graph-based dependency parsing with convolutional neural network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Berlin, Germany, pages 1382–1392. Hai Zhao, Wenliang Chen, Jun’ichi Kazama, Kiyotaka Uchimoto, and Kentaro Torisawa. 2009a. Multilingual dependency learning: Exploiting rich features for tagging syntactic and semantic dependencies. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning - Shared Task (CoNLL). Boulder, Colorado, pages 61–66. Hai Zhao, Wenliang Chen, and Chunyu Kit. 2009b. Semantic dependency parsing of NomBank and PropBank: An efficient integrated approach via a largescale feature selection. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP). Singapore, pages 30– 39. Hai Zhao, Wenliang Chen, and Guodong Zhou. 2009c. Multilingual dependency learning: A huge feature engineering method to semantic dependency parsing. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning Shared Task (CoNLL). Boulder, Colorado, pages 55– 60. Hai Zhao and Chunyu Kit. 2008. Parsing syntactic and semantic dependencies with two single-stage maximum entropy models. In Proceedings of the Twelfth Conference on Computational Natural Language Learning (CoNLL). pages 203–207. Hai Zhao, Xiaotian Zhang, and Chunyu Kit. 2013. Integrative semantic dependency parsing via efficient large-scale feature selection. Journal of Artificial Intelligence Research 46:203–233.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2072–2082 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2072 Situated Mapping of Sequential Instructions to Actions with Single-step Reward Observation Alane Suhr and Yoav Artzi Department of Computer Science and Cornell Tech Cornell University New York, NY, 10044 {suhr, yoav}@cs.cornell.edu Abstract We propose a learning approach for mapping context-dependent sequential instructions to actions. We address the problem of discourse and state dependencies with an attention-based model that considers both the history of the interaction and the state of the world. To train from start and goal states without access to demonstrations, we propose SESTRA, a learning algorithm that takes advantage of singlestep reward observations and immediate expected reward maximization. We evaluate on the SCONE domains, and show absolute accuracy improvements of 9.8%25.3% across the domains over approaches that use high-level logical representations. 1 Introduction An agent executing a sequence of instructions must address multiple challenges, including grounding the language to its observed environment, reasoning about discourse dependencies, and generating actions to complete high-level goals. For example, consider the environment and instructions in Figure 1, in which a user describes moving chemicals between beakers and mixing chemicals together. To execute the second instruction, the agent needs to resolve sixth beaker and last one to objects in the environment. The third instruction requires resolving it to the rightmost beaker mentioned in the second instruction, and reasoning about the set of actions required to mix the colors in the beaker to brown. In this paper, we describe a model and learning approach to map sequences of instructions to actions. Our model considers previous utterances and the world state to select actions, learns to combine simple actions to achieve complex goals, and can be trained using Start Goal throw out first beaker POP 1, STOP pour sixth beaker into last one POP 6, POP 6, PUSH 7 O, PUSH 7 O, STOP it turns brown POP 7, POP 7, POP 7, PUSH 7 B, PUSH 7 B, PUSH 7 B, STOP pour purple beaker into yellow one POP 3, PUSH 5 P, STOP throw out two units of brown one POP 7, POP 7, STOP Start Goal Figure 1: Example from the SCONE (Long et al., 2016) ALCHEMY domain, including a start state (top), sequence of instructions, and a goal state (bottom). Each instruction is annotated with a sequence of actions from the set of actions we define for ALCHEMY. goal states without access to demonstrations. The majority of work on executing sequences of instructions focuses on mapping instructions to high-level formal representations, which are then evaluated to generate actions (e.g., Chen and Mooney, 2011; Long et al., 2016). For example, the third instruction in Figure 1 will be mapped to mix(prev_arg1), indicating that the mix action should be applied to first argument of the previous action (Long et al., 2016; Guu et al., 2017). In contrast, we focus on directly generating the sequence of actions. This requires resolving references without explicitly modeling them, and learning the sequences of actions required to complete high-level actions; for example, that mixing requires removing everything in the beaker and replacing with the same number of brown items. 
A key challenge in executing sequences of instructions is considering contextual cues from both the history of the interaction and the state of the world. Instructions often refer to previously 2073 mentioned objects (e.g., it in Figure 1) or actions (e.g., do it again). The world state provides the set of objects the instruction may refer to, and implicitly determines the available actions. For example, liquid can not be removed from an empty beaker. Both types of contexts continuously change during an interaction. As new instructions are given, the instruction history expands, and as the agent acts the world state changes. We propose an attentionbased model that takes as input the current instruction, previous instructions, the initial world state, and the current state. At each step, the model computes attention encodings of the different inputs, and predicts the next action to execute. We train the model given instructions paired with start and goal states without access to the correct sequence of actions. During training, the agent learns from rewards received through exploring the environment with the learned policy by mapping instructions to sequences of actions. In practice, the agent learns to execute instructions gradually, slowly correctly predicting prefixes of the correct sequences of increasing length as learning progress. A key challenge is learning to correctly select actions that are only required later in execution sequences. Early during learning, these actions receive negative updates, and the agent learns to assign them low probabilities. This results in an exploration problem in later stages, where actions that are only required later are not sampled during exploration. For example, in the ALCHEMY domain shown in Figure 1, the agent behavior early during execution of instructions can be accomplished by only using POP actions. As a result, the agent quickly learns a strong bias against PUSH actions, which in practice prevents the policy from exploring them again. We address this with a learning algorithm that observes the reward for all possible actions for each visited state, and maximizes the immediate expected reward. We evaluate our approach on SCONE (Long et al., 2016), which includes three domains, and is used to study recovering predicate logic meaning representations for sequential instructions. We study the problem of generating a sequence of low-level actions, and re-define the set of actions for each domain. For example, we treat the beakers in the ALCHEMY domain as stacks and use only POP and PUSH actions. Our approach robustly learns to execute sequential instructions with up to 89.1% task-completion accuracy for single instruction, and 62.7% for complete sequences. Our code is available at https://github.com/clic-lab/scone. 2 Technical Overview Task and Notation Let S be the set of all possible world states, X be the set of all natural language instructions, and A be the set of all actions. An instruction ¯x ∈X of length |¯x| is a sequence of tokens ⟨x1, ...x|¯x|⟩. Executing an action modifies the world state following a transition function T : S × A →S. For example, the ALCHEMY domain includes seven beakers that contain colored liquids. The world state defines the content of each beaker. We treat each beaker as a stack. The actions are POP N and PUSH N C, where 1 ≤N ≤7 is the beaker number and C is one of six colors. There are a total of 50 actions, including the STOP action. Section 6 describes the domains in detail. 
Given a start state s_1 and a sequence of instructions ⟨x̄_1, . . . , x̄_n⟩, our goal is to generate the sequence of actions specified by the instructions starting from s_1. We treat the execution of a sequence of instructions as executing each instruction in turn. The execution ē of an instruction x̄_i, starting at a state s_1 and given the history of the instruction sequence ⟨x̄_1, . . . , x̄_{i−1}⟩, is a sequence of state-action pairs ē = ⟨(s_1, a_1), . . . , (s_m, a_m)⟩, where a_k ∈ A and s_{k+1} = T(s_k, a_k). The final action a_m is the special action STOP, which indicates that the execution has terminated. The final state is then s_m, as T(s_k, STOP) = s_k. Executing a sequence of instructions in order generates a sequence ⟨ē_1, . . . , ē_n⟩, where ē_i is the execution of instruction x̄_i. When referring to states and actions in an indexed execution ē_i, the k-th state and action are s_{i,k} and a_{i,k}. We execute instructions one after the other: ē_1 starts at the interaction initial state s_1, and s_{i+1,1} = s_{i,|ē_i|}, where s_{i+1,1} is the start state of ē_{i+1} and s_{i,|ē_i|} is the final state of ē_i.

Model We model the agent with a neural network policy (Section 4). At step k of executing the i-th instruction, the model input is the current instruction x̄_i, the previous instructions ⟨x̄_1, . . . , x̄_{i−1}⟩, the world state s_1 at the beginning of executing x̄_i, and the current state s_k. The model predicts the next action a_k to execute. If a_k = STOP, we switch to the next instruction, or, if at the end of the instruction sequence, terminate. Otherwise, we update the state to s_{k+1} = T(s_k, a_k). The model uses attention to process the different inputs and a recurrent neural network (RNN) decoder to generate actions (Bahdanau et al., 2015).

Learning We assume access to a set of N instruction sequences, where each instruction in each sequence is paired with its start and goal states. During training, we create an example for each instruction. Formally, the training set is {(x̄^(j)_i, s^(j)_{i,1}, ⟨x̄^(j)_1, . . . , x̄^(j)_{i−1}⟩, g^(j)_i)} for j = 1, . . . , N and i = 1, . . . , n^(j), where x̄^(j)_i is an instruction, s^(j)_{i,1} is a start state, ⟨x̄^(j)_1, . . . , x̄^(j)_{i−1}⟩ is the instruction history, g^(j)_i is the goal state, and n^(j) is the length of the j-th instruction sequence. This training data contains no evidence about the actions and intermediate states required to execute each instruction.¹ We use a learning method that maximizes the expected immediate reward for a given state (Section 5). The reward accounts for task completion and distance to the goal via potential-based reward shaping.

Evaluation We evaluate exact task completion for sequences of instructions on a test set {(s^(j)_1, ⟨x̄^(j)_1, . . . , x̄^(j)_{n_j}⟩, g^(j))} for j = 1, . . . , N, where g^(j) is the oracle goal state of executing the instructions x̄^(j)_1, . . . , x̄^(j)_{n_j} in order starting from s^(j)_1. We also evaluate single-instruction task completion using per-instruction annotated start and goal states.

3 Related Work

Executing instructions has been studied using the SAIL corpus (MacMahon et al., 2006), with a focus on navigation using high-level logical representations (Chen and Mooney, 2011; Chen, 2012; Artzi and Zettlemoyer, 2013; Artzi et al., 2014) and low-level actions (Mei et al., 2016). While SAIL includes sequences of instructions, the data demonstrates limited discourse phenomena, and instructions are often processed in isolation. Approaches that consider the entire sequence as input have focused on segmentation (Andreas and Klein, 2015).
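As an informal illustration of the execution semantics and the exact-match evaluation described above, here is a small sketch; `policy` and `transition` are placeholders for the learned policy and the domain transition function, and the names and step limit are not from the paper's code.

```python
def execute_instruction(policy, transition, instruction, history, start_state,
                        max_steps=50):
    """Roll out one instruction: generate actions until STOP or a step limit."""
    state, execution = start_state, []
    for _ in range(max_steps):
        action = policy(instruction, history, start_state, state, execution)
        execution.append((state, action))
        if action == "STOP":
            break
        state = transition(state, action)
    return state, execution

def execute_sequence(policy, transition, instructions, initial_state):
    """Execute instructions in order; each starts from the previous final state."""
    state = initial_state
    for i, instruction in enumerate(instructions):
        state, _ = execute_instruction(policy, transition, instruction,
                                       instructions[:i], state)
    return state

def task_completion_accuracy(policy, transition, examples):
    """Exact-match accuracy: the final state must equal the annotated goal state."""
    correct = sum(
        execute_sequence(policy, transition, instrs, start) == goal
        for start, instrs, goal in examples
    )
    return correct / len(examples)
```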
Recently, other navigation tasks were proposed with a focus on single instructions (Anderson et al., 2018; Janner et al., 2018). We focus on sequences of environment manipulation instructions and on modeling contextual cues from both the changing environment and the instruction history. Manipulation using single-sentence instructions has been studied using the Blocks domain (Bisk et al., 2016, 2018; Misra et al., 2017; Tan and Bansal, 2018). Our work is related to the work of Branavan et al. (2009) and Vogel and Jurafsky (2010). While both study executing sequences of instructions, similar to SAIL, the data includes limited discourse dependencies. In addition, both learn with rewards computed from surface-form similarity between text in the environment and the instruction. We do not rely on such similarities, but instead use a state distance metric.

¹This training set is a subset of the data used in previous work (Section 6, Guu et al., 2015), in which training uses all instruction sequences of length 1 and 2.

Language understanding in interactive scenarios that include multiple turns has been studied with a focus on dialogue for querying database systems using the ATIS corpus (Hemphill et al., 1990; Dahl et al., 1994). Tür et al. (2010) surveys work on ATIS. Miller et al. (1996), Zettlemoyer and Collins (2009), and Suhr et al. (2018) modeled context dependence in ATIS for generating formal representations. In contrast, we focus on environments that change during execution and on directly generating environment actions, a scenario that is more related to robotic agents than database querying. The SCONE corpus (Long et al., 2016) was designed to reflect a broad set of discourse context-dependence phenomena. It was studied extensively using logical meaning representations (Long et al., 2016; Guu et al., 2017; Fried et al., 2018). In contrast, we are interested in directly generating actions that modify the environment. This requires generating lower-level actions and learning procedures that are otherwise hardcoded in the logic (e.g., the mixing action in Figure 1). Except for Fried et al. (2018), previous work on SCONE assumes access only to the initial and final states during training. This form of supervision does not require operating the agent manually to acquire the correct sequence of actions, a difficult task for robotic agents with complex control. Goal state supervision has been studied for instructional language (e.g., Branavan et al., 2009; Artzi and Zettlemoyer, 2013; Bisk et al., 2016), and more extensively in question answering when learning with answer annotations only (e.g., Clarke et al., 2010; Liang et al., 2011; Kwiatkowski et al., 2013; Berant et al., 2013; Berant and Liang, 2014, 2015; Liang et al., 2017).

4 Model
We map sequences of instructions ⟨x̄_1, . . . , x̄_n⟩ to actions by executing the instructions in order. The model generates an execution ē = ⟨(s_1, a_1), . . . , (s_{m_i}, a_{m_i})⟩ for each instruction x̄_i. The agent context, the information available to the agent at step k, is s̃_k = (x̄_i, ⟨x̄_1, . . . , x̄_{i−1}⟩, s_k, ē[: k]), where ē[: k] is the execution up until, but not including, step k. In contrast to the world state, the agent context also includes the instructions and the execution so far. The agent policy π_θ(s̃_k, a) is modeled as a probabilistic neural network parametrized by θ, where s̃_k is the agent context at step k and a is an action. To generate executions, we generate one action at a time, execute the action, and observe the new world state. In step k of executing the i-th instruction, the network inputs are the current utterance x̄_i, the previous instructions ⟨x̄_1, . . . , x̄_{i−1}⟩, the initial state s_1 at the beginning of executing x̄_i, and the current state s_k. When executing a sequence of instructions, the initial state s_1 is either the state at the beginning of executing the sequence or the final state of the execution of the previous instruction.

[Figure 2: Illustration of the model architecture while generating the third action a_3 in the third utterance x̄_3 from Figure 1. Context vectors computed using attention are highlighted in blue. The model takes as input vector encodings from the current and previous instructions x̄_1, x̄_2, and x̄_3, the initial state s_1, the current state s_3, and the previous action a_2. Instruction encodings are computed with a bidirectional RNN. We attend over the previous and current instructions and the initial and current states. We use an MLP to select the next action.]

Figure 2 illustrates our architecture. We generate continuous vector representations for all inputs. Each input is represented as a set of vectors that are then processed with an attention function to generate a single vector representation (Luong et al., 2015). We assume access to a domain-specific encoding function ENC(s) that, given a state s, generates a set of vectors S representing the objects in the state. For example, in the ALCHEMY domain, a vector is generated for each beaker using an RNN. Section 6 describes the different domains and their encoding functions.
We use a single bidirectional RNN with a long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) recurrence to encode the instructions. All instructions x̄_1, . . . , x̄_i are encoded with a single RNN by concatenating them into x̄′. We use two delimiter tokens: one separates previous instructions, and the other separates the previous instructions from the current one. The forward LSTM hidden states are computed as:²

  h⃗_{j+1} = LSTM_E→([φ^I(x′_{j+1}); h⃗_j]),

where φ^I is a learned word embedding function and LSTM_E→ is the forward LSTM recurrence function. We use a similar computation to compute the backward hidden states h⃖_j. For each token x′_j in x̄′, a vector representation h′_j = [h⃗_j; h⃖_j] is computed. We then create two sets of vectors, one for the vectors of the current instruction and one for the previous instructions:

  X^c = {h′_j : J ≤ j ≤ J + |x̄_i|}      X^p = {h′_j : 0 ≤ j < J},

where J is the index in x̄′ where the current instruction x̄_i begins. Separating the vectors into two sets allows computing separate attention over the current instruction and the previous ones.

²To simplify the notation, we omit the memory cell (often denoted c_j) from all LSTM descriptions. We use only the hidden state h_j to compute the intended representations (e.g., for the input text tokens). All LSTMs in this paper use zero vectors as the initial hidden state h_0 and initial cell memory c_0.

To compute each input representation during decoding, we use a bi-linear attention function (Luong et al., 2015). Given a set of vectors H, a query vector h_q, and a weight matrix W, the attention function ATTEND(H, h_q, W) computes a context vector z:

  α_i ∝ exp(h_i^⊤ W h_q),  i = 1, . . . , |H|
  z = Σ_{i=1}^{|H|} α_i h_i.

We use a decoder to generate actions. At each time step k, we compute an input representation using the attention function, update the decoder state, and compute the next action to execute. Attention is first computed over the vectors of the current instruction, which is then used to attend over the other inputs. We compute the context vectors z^c_k and z^p_k for the current instruction and the previous instructions:

  z^c_k = ATTEND(X^c, h^d_{k−1}, W^c)
  z^p_k = ATTEND(X^p, [h^d_{k−1}; z^c_k], W^p),

where h^d_{k−1} is the decoder hidden state for step k − 1, and X^c and X^p are the sets of vector representations for the current instruction and previous instructions. Two attention heads are used over both the initial and current states. This allows the model to attend to more than one location in a state at once, for example when transferring items from one beaker to another in ALCHEMY. The current state is computed by the transition function s_k = T(s_{k−1}, a_{k−1}), where s_{k−1} and a_{k−1} are the state and action at step k − 1. The context vectors for the initial state s_1 and the current state s_k are:

  z^s_{1,k} = [ATTEND(ENC(s_1), [h^d_{k−1}; z^c_k], W^{sb,1}); ATTEND(ENC(s_1), [h^d_{k−1}; z^c_k], W^{sb,2})]
  z^s_{k,k} = [ATTEND(ENC(s_k), [h^d_{k−1}; z^c_k], W^{sc,1}); ATTEND(ENC(s_k), [h^d_{k−1}; z^c_k], W^{sc,2})],

where all W^{∗,∗} are learned weight matrices. We concatenate all computed context vectors with an embedding of the previous action a_{k−1} to create the input for the decoder:

  h_k = tanh([z^c_k; z^p_k; z^s_{1,k}; z^s_{k,k}; φ^O(a_{k−1})] W^d + b^d)
  h^d_k = LSTM_D(h_k; h^d_{k−1}),

where φ^O is a learned action embedding function and LSTM_D is the LSTM decoder recurrence. Given the decoder state h^d_k, the next action a_k is predicted with a multi-layer perceptron (MLP).
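The bi-linear attention function ATTEND above can be written compactly. The following numpy sketch is only illustrative (a single fixed weight matrix, no batching), not the authors' implementation.

```python
import numpy as np

def attend(H: np.ndarray, h_q: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Bi-linear attention: scores h_i^T W h_q, softmax weights, weighted sum.

    H   : (n, d_h) matrix whose rows are the vectors attended over
    h_q : (d_q,)   query vector
    W   : (d_h, d_q) learned weight matrix
    Returns the context vector z of shape (d_h,).
    """
    scores = H @ W @ h_q                     # (n,)
    scores = scores - scores.max()           # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha @ H                         # weighted sum of the rows of H

# Tiny usage example with random vectors.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))    # e.g., encodings of the current instruction
h_q = rng.normal(size=(6,))    # e.g., previous decoder hidden state
W = rng.normal(size=(8, 6))
print(attend(H, h_q, W).shape)  # (8,)
```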
The actions in our domains decompose into an action type and at most two arguments.³ For example, the action PUSH 1 B in ALCHEMY has the type PUSH and two arguments: a beaker number and a color. Section 6 describes the actions of each domain. The probability of an action is:

  h^a_k = tanh(h^d_k W^a)
  s_{k,a_T} = h^a_k b_{a_T}
  s_{k,a_1} = h^a_k b_{a_1}
  s_{k,a_2} = h^a_k b_{a_2}
  p(a_k = a_T(a_1, a_2) | s̃_k; θ) ∝ exp(s_{k,a_T} + s_{k,a_1} + s_{k,a_2}),

where a_T, a_1, and a_2 are an action type, first argument, and second argument. If the predicted action is STOP, the execution is complete. Otherwise, we execute the action a_k to generate the next state s_{k+1}, and update the agent context s̃_k to s̃_{k+1} by appending the pair (s_k, a_k) to the execution ē and replacing the current state with s_{k+1}.

³We use a NULL argument for unused arguments.

The model parameters θ include: the embedding functions φ^I and φ^O; the recurrence parameters for LSTM_E→, LSTM_E←, and LSTM_D; W^c, W^p, W^{sb,1}, W^{sb,2}, W^{sc,1}, W^{sc,2}, W^d, W^a, and b^d; and the domain-dependent parameters, including the parameters of the encoding function ENC and the action type, first argument, and second argument weights b_{a_T}, b_{a_1}, and b_{a_2}.

5 Learning

We estimate the policy parameters θ using an exploration-based learning algorithm that maximizes the immediate expected reward. Broadly speaking, during learning, we observe the agent behavior given the current policy, and for each visited state compute the expected immediate reward by observing rewards for all actions. We assume access to a set of training examples {(x̄^(j)_i, s^(j)_{i,1}, ⟨x̄^(j)_1, . . . , x̄^(j)_{i−1}⟩, g^(j)_i)} for j = 1, . . . , N and i = 1, . . . , n^(j), where each instruction x̄^(j)_i is paired with a start state s^(j)_{i,1}, the previous instructions in the sequence ⟨x̄^(j)_1, . . . , x̄^(j)_{i−1}⟩, and a goal state g^(j)_i.

Reward The reward R^(j)_i : S × A × S → R is defined for each example j and instruction i:

  R^(j)_i(s, a, s′) = P^(j)_i(s, a, s′) + φ^(j)_i(s′) − φ^(j)_i(s),

where s is a source state, a is an action, and s′ is a target state.⁴ P^(j)_i(s, a, s′) is a problem reward and φ^(j)_i(s′) − φ^(j)_i(s) is a shaping term. The problem reward P^(j)_i(s, a, s′) is positive for stopping at the goal g^(j)_i and negative for stopping in an incorrect state or taking an invalid action:

  P^(j)_i(s, a, s′) =
    1.0         if a = STOP and s′ = g^(j)_i
    −1.0        if a = STOP and s′ ≠ g^(j)_i
    −1.0 − δ    if s = s′
    −δ          otherwise,

where δ is a verbosity penalty. The case s = s′ indicates that a was invalid in state s, as in our domains all valid actions except STOP modify the state. We use a potential-based shaping term φ^(j)_i(s′) − φ^(j)_i(s) (Ng et al., 1999), where φ^(j)_i(s) = −||s − g^(j)_i|| and ||s − g^(j)_i|| is the edit distance between the state s and the goal, measured over the objects in each state. The shaping term densifies the reward, providing a meaningful signal for learning in non-terminal states.

⁴While the reward function is defined for any state-action-state tuple, in practice it is used during learning with tuples that follow the system dynamics, s′ = T(s, a).

Objective We maximize the immediate expected reward over all actions and use entropy regularization. The gradient is approximated by sampling an execution ē = ⟨(s_1, a_1), . . . , (s_k, a_k)⟩ using our current policy:

  ∇_θ J = (1/k) Σ_{k′=1}^{k} [ Σ_{a∈A} R(s_{k′}, a, T(s_{k′}, a)) ∇_θ π(s̃_{k′}, a) + λ ∇_θ H(π(s̃_{k′}, ·)) ],

where H(π(s̃_{k′}, ·)) is the entropy term.

Algorithm 1 SESTRA: Single-step Reward Observation
Input: Training data {(x̄^(j)_i, s^(j)_{i,1}, ⟨x̄^(j)_1, . . . , x̄^(j)_{i−1}⟩, g^(j)_i)}, learning rate µ, entropy regularization coefficient λ, episode limit horizon M.
Definitions: π_θ is a policy parameterized by θ, BEG is a special action to use for the first decoder step, and STOP indicates the end of an execution. T(s, a) is the state transition function, H is an entropy function, R^(j)_i(s, a, s′) is the reward function for example j and instruction i, and RMSPROP divides each weight by a running average of its squared gradient (Tieleman and Hinton, 2012).
Output: Parameters θ defining a learned policy π_θ.
 1: for t = 1, . . . , T, j = 1, . . . , N do
 2:   for i = 1, . . . , n^(j) do
 3:     ē ← ⟨⟩, k ← 0, a_0 ← BEG
 4:     » Rollout up to STOP or episode limit.
 5:     while a_k ≠ STOP ∧ k < M do
 6:       k ← k + 1
 7:       s̃_k ← (x̄_i, ⟨x̄_1, . . . , x̄_{i−1}⟩, s_k, ē[: k])
 8:       » Sample an action from the policy.
 9:       a_k ∼ π_θ(s̃_k, ·)
10:       s_{k+1} ← T(s_k, a_k)
11:       ē ← [ē; ⟨(s_k, a_k)⟩]
12:     ∆ ← 0̄
13:     for k′ = 1, . . . , k do
14:       » Compute the entropy of π_θ(s̃_{k′}, ·).
15:       ∆ ← ∆ + λ∇_θ H(π_θ(s̃_{k′}, ·))
16:       for a ∈ A do
17:         s′ ← T(s_{k′}, a)
18:         » Compute the gradient for action a.
19:         ∆ ← ∆ + R^(j)_i(s_{k′}, a, s′) ∇_θ π_θ(s̃_{k′}, a)
20:     θ ← θ + µ RMSPROP(∆/k)
21: return θ

Algorithm Algorithm 1 shows the Single-step Reward Observation (SESTRA) learning algorithm. We iterate over the training data T times (line 1). For each example j and turn i, we first perform a rollout by sampling an execution ē from π_θ with at most M actions (lines 5–11). If the rollout reaches the horizon without predicting STOP, we set the problem reward P^(j)_i to −1.0 for the last step. Given the sampled states visited, we compute the entropy (line 15) and observe the immediate reward for all actions (line 19) for each step. Entropy and rewards are used to accumulate the gradient, which is applied to the parameters using RMSPROP (Dauphin et al., 2015) (line 20).
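To make the single-step reward observation concrete, here is a schematic sketch of one SESTRA-style update for a toy linear softmax policy over hand-crafted features. It is not the paper's neural model: the feature function, transition, and reward are placeholders, the gradients are computed analytically for the toy policy, and plain SGD stands in for the RMSProp update of Algorithm 1.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def sestra_update(theta, featurize, actions, transition, reward,
                  rollout_states, lr=0.1, lam=0.01):
    """One update: for every visited state, branch over ALL actions and
    accumulate the gradient of the immediate expected reward plus entropy."""
    delta = np.zeros_like(theta)          # theta: (n_actions, n_features)
    for s in rollout_states:              # states from an on-policy rollout
        f = featurize(s)                  # (n_features,)
        probs = softmax(theta @ f)        # pi(a | s) for every action
        n = len(actions)
        for a_idx, a in enumerate(actions):
            r = reward(s, a, transition(s, a))   # observe reward for EVERY action
            # d pi_a / d logits = pi_a * (one_hot(a) - probs)
            dpi_dlogits = probs[a_idx] * ((np.arange(n) == a_idx) - probs)
            delta += r * np.outer(dpi_dlogits, f)
        # Entropy regularization: dH/dlogit_b = -p_b * (log p_b + H).
        H = -np.sum(probs * np.log(probs))
        dH_dlogits = -probs * (np.log(probs) + H)
        delta += lam * np.outer(dH_dlogits, f)
    # The paper applies RMSProp to delta / k; plain SGD is used here for brevity.
    return theta + lr * delta / max(len(rollout_states), 1)
```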
Discussion Observing the rewards for all actions for each visited state addresses an on-policy learning exploration problem. Actions that consistently receive negative reward early during learning will be visited with very low probability later on, and in practice are often not explored at all. Because the network is randomly initialized, these early negative rewards are translated into strong general biases that are not grounded well in the observed context. Our algorithm exposes the agent to such actions later on, when they receive positive rewards, even though the agent does not explore them during rollout. For example, in ALCHEMY, POP actions are sufficient to complete the first steps of good executions. As a result, early during learning, the agent learns a strong bias against PUSH actions. In practice, the agent then will not explore PUSH actions again. In our algorithm, as the agent learns to roll out the correct POP prefix, it is then exposed to the reward for the first PUSH even though it likely sampled another POP. It then unlearns its bias towards predicting POP. Our learning algorithm can be viewed as a cost-sensitive variant of the oracle in DAGGER (Ross et al., 2011), where it provides the rewards for all actions instead of an oracle action. It is also related to Locally Optimal Learning to Search (LOLS; Chang et al., 2015), with two key distinctions: (a) instead of using different roll-in and roll-out policies, we use the model policy; and (b) we branch at each step, instead of once, but do not roll out from branched actions, since we only optimize the immediate reward. Figure 3 illustrates the comparison. Our summation over immediate rewards for all actions is related to the summation of estimated Q-values for all actions in the Mean Actor-Critic algorithm (Asadi et al., 2017). Finally, our approach is related to Misra et al. (2017), who also maximize the immediate reward, but do not observe rewards for all actions for each state.

[Figure 3: Illustration of LOLS (left; Chang et al., 2015) and our learning algorithm (SESTRA, right). LOLS branches a single time, and samples a complete rollout for each branch to obtain the trajectory loss. SESTRA uses a complete on-policy rollout and single-step branching over all actions in each sampled state.]

6 SCONE Domains and Data

SCONE has three domains: ALCHEMY, SCENE, and TANGRAMS. Each interaction contains five instructions. Table 1 shows data statistics.

Table 1: Data statistics for ALCHEMY (ALC), SCENE (SCE), and TANGRAMS (TAN).
                              ALC        SCE        TAN
  # Sequences (train)        3657       3352       4189
  # Sequences (dev)           245        198        199
  # Sequences (test)          899       1035        800
  Mean instruction length   8.0±3.2   10.5±5.5    5.4±2.4
  Vocabulary size             695        816        475

Table 2: Counts of discourse phenomena in SCONE from 30 randomly selected development interactions for each domain. We count occurrences of coreference between instructions (e.g., he leaves in SCENE) and ellipsis (e.g., then, drain 2 units in ALCHEMY), when the last explicit mention of the referent was 1, 2, 3, or 4 turns in the past. We also report the average number of multi-turn references per interaction (Refs/Ex).
              Refs/Ex                1     2     3     4
  ALCHEMY       1.4     Coref.      28     7     2     0
                        Ellipsis     0     0     3     1
  SCENE         2.4     Coref.      49    16     5     3
                        Ellipsis     0     0     0     0
  TANGRAMS      1.7     Coref.      25    14     2     1
                        Ellipsis     4     0     0     0
Table 2 shows discourse reference analysis. State encodings are detailed in the Supplementary Material.

ALCHEMY Each environment in ALCHEMY contains seven numbered beakers, each containing up to four colored chemicals in order. Figure 1 shows an example. Instructions describe pouring chemicals between and out of beakers, and mixing beakers. We treat all beakers as stacks. There are two action types: PUSH and POP. POP takes a beaker index, and removes the top color. PUSH takes a beaker index and a color, and adds the color at the top of the beaker. To encode a state, we encode each beaker with an RNN, and concatenate the last output with the beaker index embedding. The set of vectors is the state embedding.

SCENE Each environment in SCENE contains ten positions, each containing at most one person defined by a shirt color and an optional hat color. Instructions describe adding or removing people, moving a person to another position, and moving a person’s hat to another person. There are four action types: ADD_PERSON, ADD_HAT, REMOVE_PERSON, and REMOVE_HAT. ADD_PERSON and ADD_HAT take a position to place the person or hat and the color of the person’s shirt or hat. REMOVE_PERSON and REMOVE_HAT take the position to remove a person or hat from. To encode a state, we use a bidirectional RNN over the ordered positions. The input for each position is a concatenation of the color embeddings for the person and hat. The set of RNN hidden states is the state embedding.

TANGRAMS Each environment in TANGRAMS is a list containing at most five unique objects. Instructions describe removing or inserting an object into a position in the list, or swapping the positions of two items. There are two action types: INSERT and REMOVE. INSERT takes the position to insert an object, and the object identifier. REMOVE takes an object position. We embed each object by concatenating embeddings for its type and position. The resulting set is the state embedding.

7 Experimental Setup

Evaluation Following Long et al. (2016), we evaluate task completion accuracy using exact match between the final state and the annotated goal state. We report accuracy for complete interactions (5utts), the first three utterances of each interaction (3utts), and single instructions (Inst). For single instructions, execution starts from the annotated start state of the instruction.

Systems We report the performance of ablations and two baseline systems: POLICYGRADIENT: policy gradient with cumulative episodic reward without a baseline, and CONTEXTUALBANDIT: the contextual bandit approach of Misra et al. (2017). Both systems use the reward with the shaping term and our model. We also report supervised learning results (SUPERVISED) obtained by heuristically generating correct executions and computing a maximum-likelihood estimate using context-action demonstration pairs. Only the supervised approach uses the heuristically generated labels.

Figure 4: Instruction-level training accuracy per epoch when training five models on SCENE, demonstrating the effect of randomization in the learning method. Three of five experiments fail to learn effective models. The red and blue learning trajectories are overlapping. (x-axis: # Epochs; y-axis: Accuracy.)

Although the results are not comparable, we also report the performance of previous approaches to SCONE. All three approaches generate logical representations based on lambda calculus.
In contrast to our approach, this requires an ontology of hand built symbols and rules to evaluate the logical forms. Fried et al. (2018) uses supervised learning with annotated logical forms. Training Details For test results, we run each experiment five times and report results for the model with best validation interaction accuracy. For ablations, we do the same with three experiments. We use a batch size of 20. We stop training using a validation set sampled from the training data. We hold the validation set constant for each domain for all experiments. We use patience over the average reward, and select the best model using interaction-level (5utts) validation accuracy. We tune λ, δ, and M on the development set. The selected values and other implementation details are described in the Supplementary Material. 8 Results Table 3 shows test results. Our approach significantly outperforms POLICYGRADIENT and CONTEXTUALBANDIT, both of which suffer due to biases learned early during learning, hindering later exploration. This problem does not appear in TANGRAMS, where no action type is dominant at the beginning of executions, and all methods perform well. POLICYGRADIENT completely fails to learn ALCHEMY and SCENE due to observing only negative total rewards early during learning. Using a baseline, for example with an actor-critic method, will potentially close the gap to CONTEXTUALBANDIT. However, it is unlikely to address the on-policy exploration problem. Table 4 shows development results, including model ablation studies. Removing previous instructions (– previous instructions) or both states (– current and initial state) reduces performance across all domains. Removing only the initial state (– initial state) or the current state (– current state) shows mixed results across the domains. Providing access to both initial and current states increases performance for ALCHEMY, but reduces performance on the other domains. We hypothesize that this is due to the increase in the number of parameters outweighing what is relatively marginal information for these domains. In our development and test results we use a single architecture across the three domains, the full approach, which has the highest interactive-level accuracy when averaged across the three domains (62.7 5utts). We also report mean and standard deviation for our approach over five trials. We observe exceptionally high variance in performance on SCENE, where some experiments fail to learn and training performance remains exceptionally low (Figure 4). This highlights the sensitivity of the model to the random effects of initialization, dropout, and ordering of training examples. We analyze the instruction-level errors made by our best models when the agent is provided the correct initial state for the instruction. We study fifty examples in each domain to identify the type of failures. Table 5 shows the counts of major error categories. We consider multiple reference resolution errors. State reference errors indicate a failure to resolve a reference to the world state. For example, in ALCHEMY, the phrase leftmost red beaker specifies a beaker in the environment. If the model picked the correct action, but the wrong beaker, we count it as a state reference. We distinguish between multi-turn reference errors that should be feasible, and these that that are impossible to solve without access to states before executing previous utterances, which are not provided to our model. 
For example, in TANGRAMS, the instruction put it back in the same place refers to a previouslyremoved item. Because the agent only has access to the world state after following this instruction, it does not observe what kind of item was previously removed, and cannot identify the item to add. We 2080 ALCHEMY SCENE TANGRAMS System Inst 3utts 5utts Inst 3utts 5utts Inst 3utts 5utts Long et al. (2016) – 56.8 52.3 – 23.2 14.7 – 64.9 27.6 Guu et al. (2017) – 66.9 52.9 – 64.8 46.2 – 65.8 37.1 Fried et al. (2018) – – 72.0 – – 72.7 – – 69.6 SUPERVISED 89.4 73.3 62.3 88.8 78.9 66.4 86.6 81.4 60.1 POLICYGRADIENT 0.0 0.0 0.0 0.0 1.3 0.2 84.1 77.4 54.9 CONTEXTUALBANDIT 73.8 36.0 25.7 15.1 2.9 4.4 84.8 76.9 57.9 Our approach 89.1 74.2 62.7 87.1 73.9 62.0 86.6 80.8 62.4 Table 3: Test accuracies for single instructions (Inst), first-three instructions (3utts), and full interactions (5utts). ALCHEMY SCENE TANGRAMS System Inst 3utts 5utts Inst 3utts 5utts Inst 3utts 5utts SUPERVISED 92.0 83.3 71.4 85.3 72.7 60.6 86.1 81.9 58.3 POLICYGRADIENT 0.0 0.0 0.0 0.9 1.0 0.5 85.2 74.9 52.3 CONTEXTUALBANDIT 58.8 6.9 5.7 12.0 0.5 1.5 85.6 78.4 52.6 Our approach 92.1 82.9 71.8 83.9 68.7 56.1 88.5 82.4 60.3 – previous instructions 90.1 77.1 66.1 79.3 60.6 45.5 76.4 55.8 27.6 – current and initial state 25.7 4.5 3.3 17.5 0.0 0.0 45.4 15.1 3.5 – current state 89.8 78.0 62.9 83.0 68.7 54.0 87.6 78.4 60.8 – initial state 81.1 68.6 42.9 82.7 67.7 57.1 88.6 82.9 63.3 Our approach (µ ± σ) 91.5 ±1.4 80.4 ±2.6 69.5 ±5.0 62.9 ±17.7 37.8 ±23.5 29.0 ±21.1 88.2 ±0.6 80.8 ±2.8 59.2 ±2.3 Table 4: Development results, including model ablations. We also report mean µ and standard deviation σ for all metrics for our approach across five experiments. We bold the best performing variations of our model. Class ALC SCE TAN State reference 23 13 7 Multi-turn reference 12 5 13 Impossible multi-turn reference 2 5 13 Ambiguous or incorrect label 2 19 12 Table 5: Common error counts in the three domains. also find a significant number of errors due to ambiguous or incorrect instructions. For example, the SCENE instruction person in green appears on the right end is ambiguous. In the annotated goal, it is interpreted as referring to a person already in the environment, who moves to the 10th position. However, it can also be interpreted as a new person in green appearing in the 10th position. We also study performance with respect to multi-turn coreference by observing whether the model was able to identify the correct referent for each occurrence included in the analysis in Table 2. The models were able to correctly resolve 92.3%, 88.7%, and 76.0% of references in ALCHEMY, SCENE, and TANGRAMS respectively. Finally, we include attention visualization for examples from the three domains in the Supplementary Material. 9 Discussion We propose a model to reason about contextdependent instructional language that display strong dependencies both on the history of the interaction and the state of the world. Future modeling work may include using intermediate world states from previous turns in the interaction, which is required for some of the most complex references in the data. We propose to train our model using SESTRA, a learning algorithm that takes advantage of single-step reward observations to overcome learned biases in on-policy learning. Our learning approach requires additional reward observations in comparison to conventional reinforcement learning. 
However, it is particularly suitable to recovering from biases acquired early during learning, for example due to biased action spaces, which is likely to lead to incorrect blame assignment in neural network policies. When the domain and model are less susceptible to such biases, the benefit of the additional reward observations is less pronounced. One possible direction for future work is to use an estimator to predict rewards for all actions, rather than observing them. Acknowledgements This research was supported by the NSF (CRII1656998), Schmidt Sciences, and cloud computing credits from Amazon. We thank John Langford and Dipendra Misra for helpful and insightful discussions with regards to our learning algorithm. We also thank the anonymous reviewers for their helpful comments. 2081 References Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. 2018. Visionand-Language Navigation: Interpreting visuallygrounded navigation instructions in real environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Jacob Andreas and Dan Klein. 2015. Alignmentbased compositional semantics for instruction following. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Yoav Artzi, Dipanjan Das, and Slav Petrov. 2014. Learning compact lexicons for CCG semantic parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Yoav Artzi and Luke S. Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association of Computational Linguistics, 1:49–62. Kavosh Asadi, Cameron Allen, Melrose Roderick, Abdel-rahman Mohamed, George Konidaris, and Michael L. Littman. 2017. Mean actor critic. CoRR, abs/1709.00503. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Jonathan Berant and Percy Liang. 2015. Imitation learning of agenda-based semantic parsers. Transactions of the Association for Computational Linguistics, 3:545–558. Yonatan Bisk, Kevin Shih, Yejin Choi, and Daniel Marcu. 2018. Learning interpretable spatial operations in a rich 3D blocks world. In Proceedings of the Thirty-Second Conference on Artificial Intelligence. Yonatan Bisk, Deniz Yuret, and Daniel Marcu. 2016. Natural language communication with robots. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. S.R.K. Branavan, Harr Chen, Luke S. Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proceedings of the Joint Conference of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing of the AFNLP. Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daumé, and John Langford. 2015. Learning to search better than your teacher. 
In Proceedings of the International Conference on Machine Learning. David Chen. 2012. Fast online lexicon learning for grounded language acquisition. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. David L. Chen and Raymond J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Proceedings of the National Conference on Artificial Intelligence. James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning. Deborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the ATIS task: The ATIS-3 corpus. In Proceedings of the Workshop on Human Language Technology. Yann Dauphin, Harm de Vries, , and Yoshua Bengio. 2015. Equilibrated adaptive learning rates for nonconvex optimization. CoRR, abs/1502.04390. Daniel Fried, Jacob Andreas, and Dan Klein. 2018. Unified pragmatic models for generating and following instructions. Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing knowledge graphs in vector space. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Kelvin Guu, Panupong Pasupat, Evan Liu, and Percy Liang. 2017. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS spoken language systems pilot corpus. In Proceedings of the DARPA speech and natural language workshop. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9. 2082 Michael Janner, Karthik Narasimhan, and Regina Barzilay. 2018. Representation learning for grounded spatial reasoning. Transactions of the Association for Computational Linguistics, 6:49–61. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. 2017. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Percy Liang, Michael Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Reginald Long, Panupong Pasupat, and Percy Liang. 2016. Simpler context-dependent logical forms via model projections. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Matthew MacMahon, Brian Stankiewics, and Benjamin Kuipers. 2006. Walk the talk: Connecting language, knowledge, action in route instructions. In Proceedings of the National Conference on Artificial Intelligence. 
Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In Association for the Advancement of Artificial Intelligence. Scott Miller, David Stallard, Robert Bobrow, and Richard Schwartz. 1996. A fully statistical approach to natural language interfaces. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Dipendra Misra, John Langford, and Yoav Artzi. 2017. Mapping instructions and visual observations to actions with reinforcement learning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Andrew Y. Ng, Daishi Harada, and Stuart J. Russell. 1999. Policy invariance under reward transformations: Theory and application to reward shaping. In Proceedings of the International Conference on Machine Learning. Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the International Conference on Artificial Intelligence and Statistics. Alane Suhr, Srinivasan Iyer, and Yoav Artzi. 2018. Learning to map context-dependent sentences to executable formal queries. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Hao Tan and Mohit Bansal. 2018. Source-target inference models for spatial instruction understanding. In AAAI Conference on Artificial Intelligence. Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2):26–31. Gökhan Tür, Dilek Hakkani-Tür, and Larry Heck. 2010. What is left to be understood in ATIS? In Proceedings of the Spoken Language Technology Workshop. Adam Vogel and Daniel Jurafsky. 2010. Learning to follow navigational directions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Luke S. Zettlemoyer and Michael Collins. 2009. Learning context-dependent mappings from sentences to logical form. In Proceedings of the Joint Conference of the Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP.
2018
193
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2083–2093 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2083 Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding Bingfeng Luo1, Yansong Feng∗1, Zheng Wang2, Songfang Huang3, Rui Yan1 and Dongyan Zhao1 1ICST, Peking University, China 2MetaLab, Lancaster University, UK 3IBM China Research Lab, China {bf luo,fengyansong,ruiyan,zhaody}@pku.edu.cn [email protected], [email protected] Abstract The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: “Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?”. In answer, we develop novel methods to exploit the rich expressiveness of REs at different levels within a NN, showing that the combination significantly enhances the learning effectiveness when a small number of training examples are available. We evaluate our approach by applying it to spoken language understanding for intent detection and slot filling. Experimental results show that our approach is highly effective in exploiting the available training data, giving a clear boost to the RE-unaware NN. 1 Introduction Regular expressions (REs) are widely used in various natural language processing (NLP) tasks like pattern matching, sentence classification, sequence labeling, etc. (Chang and Manning, 2014). As a technique based on human-crafted rules, it is concise, interpretable, tunable, and does not rely on much training data to generate. As such, it is commonly used in industry, especially when the available training examples are limited – a problem known as few-shot learning (GC et al., 2015). While powerful, REs have a poor generalization ability because all synonyms and variations in a RE must be explicitly specified. As a result, REs are often ensembled with data-driven methods, such as neural network (NN) based techniques, where a set of carefully-written REs are used to handle certain cases with high precision, leaving the rest for data-driven methods. We believe the use of REs can go beyond simple pattern matching. In addition to being a separate classifier to be ensembled, a RE also encodes a developer’s knowledge for the problem domain. The knowledge could be, for example, the informative words (clue words) within a RE’s surface form. We argue that such information can be utilized by data-driven methods to achieve better prediction results, especially in few-shot learning. This work investigates the use of REs to improve NNs – a learning framework that is widely used in many NLP tasks (Goldberg, 2017). The combination of REs and a NN allows us to exploit the conciseness and effectiveness of REs and the strong generalization ability of NNs. This also provides us an opportunity to learn from various kinds of REs, since NNs are known to be good at tolerating noises (Xie et al., 2016). This paper presents novel approaches to combine REs with a NN at different levels. At the input layer, we propose to use the evaluation outcome of REs as the input features of a NN (Sec.3.2). At the network module level, we show how to exploit the knowledge encoded in REs to guide the attention mechanism of a NN (Sec. 3.3). 
At the output layer, we combine the evaluation outcome of a RE with the NN output in a learnable manner (Sec. 3.4). We evaluate our approach by applying it to two spoken language understanding (SLU) tasks, namely intent detection and slot filling, which respectively correspond to two fundamental NLP tasks: sentence classification and sequence labeling. To demonstrate the usefulness of REs in real-world scenarios where the available amount of annotated data can vary, we explore both the few-shot learning setting and the one with full training data. Experimental results show that our approach is highly effective in utilizing the available annotated data, yielding significantly better learning performance over the RE-unaware method.

Figure 1: A sentence from the ATIS dataset. REs can be used to detect the intent and label slots. Sentence: flights from Boston to Miami; slot labels: O O B-fromloc.city O B-toloc.city; intent label: flight. Intent RE: /^flights? from/ (REtag: flight). Slot RE: /from (__CITY) to (__CITY)/ (REtags: city / fromloc.city and city / toloc.city).

Our contributions are as follows. (1) We present the first work to systematically investigate methods for combining REs with NNs. (2) The proposed methods are shown to clearly improve the NN performance in both the few-shot learning and the full annotation settings. (3) We provide a set of guidelines on how to combine REs with NNs and on RE annotation.

2 Background

2.1 Typesetting

In this paper, we use italic for emphasis like intent detection, the Courier typeface for abbreviations like RE, bold italic for the first appearance of a concept like clue words, Courier surrounded by / for regular expressions like /list( the)? AIRLINE/, and underlined italic for words of sentences in our dataset like Boston.

2.2 Problem Definition

Our work targets two SLU tasks: intent detection and slot filling. The former is a sentence classification task where we learn a function to map an input sentence of n words, $x = [x_1, ..., x_n]$, to a corresponding intent label, c. The latter is a sequence labeling task for which we learn a function to take in an input query sentence of n words, $x = [x_1, ..., x_n]$, and produce a corresponding labeling sequence, $y = [y_1, ..., y_n]$, where $y_i$ is the slot label of the corresponding word, $x_i$. Take the sentence in Fig. 1 as an example. A successful intent detector would suggest the intent of the sentence as flight, i.e., querying about flight-related information. A slot filler, on the other hand, should identify the slots fromloc.city and toloc.city by labeling Boston and Miami, respectively, using the begin-inside-outside (BIO) scheme.

2.3 The Use of Regular Expressions

In this work, a RE defines a mapping from a text pattern to several REtags which are the same as, or related to, the target labels (i.e., intent and slot labels). A search function takes in a RE, applies it to all sentences, and returns any texts that match the pattern. We then assign the REtag(s) associated with the matching RE to either the matched sentence (for intent detection) or some matched phrases (for slot filling). Specifically, our REtags for intent detection are the same as the intent labels. For example, in Fig. 1, we get a REtag of flight that is the same as the intent label flight. For slot filling, we use two different sets of REs. Given the group functionality of RE, we can assign REtags to the RE groups we are interested in (i.e., the expressions defined inside parentheses).
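To make the REtag assignment just described concrete, here is a minimal sketch of such a search function using Python's re module (the paper's own REs are written in Perl syntax). The city word list, helper names, and return formats are illustrative assumptions rather than the authors' implementation; the patterns themselves follow Fig. 1.

```python
import re

# Hypothetical, simplified stand-in for the __CITY word list shown in Fig. 1;
# in the paper the list is built from the training data.
CITY = r"(?:Boston|Miami|Denver|Atlanta)"

# Intent REs map a sentence-level pattern to an intent REtag.
INTENT_RES = [
    (re.compile(r"^flights? from", re.I), "flight"),
]

# Slot REs assign one REtag per RE group (the parenthesized expressions).
SLOT_RES = [
    (re.compile(rf"from ({CITY}) to ({CITY})", re.I), ["fromloc.city", "toloc.city"]),
]

def intent_retags(sentence):
    """Return the REtag of every intent RE that matches the sentence."""
    return [tag for rex, tag in INTENT_RES if rex.search(sentence)]

def slot_retags(sentence):
    """Return (matched phrase, REtag) pairs for every matched slot RE group."""
    tags = []
    for rex, group_tags in SLOT_RES:
        for m in rex.finditer(sentence):
            for i, tag in enumerate(group_tags, start=1):
                tags.append((m.group(i), tag))
    return tags

print(intent_retags("flights from Boston to Miami"))
# ['flight']
print(slot_retags("flights from Boston to Miami"))
# [('Boston', 'fromloc.city'), ('Miami', 'toloc.city')]
```

Whether a group is tagged with the full slot label (as in this sketch) or with a simplified tag like city depends on where the REs are used, as described next.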
The translation from REtags to slot labels depends on how the corresponding REs are used. (1) When REs are used at the network module level (Sec. 3.3), the corresponding REtags are the same as the target slot labels. For instance, the slot RE in Fig. 1 will assign fromloc.city to the first RE group and toloc.city to the second one. Here, __CITY is a list of city names, which can be replaced with a RE string like /Boston|Miami|LA|.../. (2) If REs are used in the input (Sec. 3.2) and the output layers (Sec. 3.4) of a NN, the corresponding REtag would be different from the target slot labels. In this context, the two RE groups in Fig. 1 would simply be tagged as city to capture the commonality of three related target slot labels: fromloc.city, toloc.city, stoploc.city. Note that we could use the target slot labels as REtags for all the settings. The purpose of abstracting REtags to a simplified version of the target slot labels here is to show that REs can still be useful when their evaluation outcome does not exactly match our learning objective. Further, as shown in Sec. 4.2, using simplified REtags can also make the development of REs easier in our tasks.

Intuitively, complicated REs can lead to better performance but require more effort to generate. Generally, there are two aspects affecting RE complexity most: the number of RE groups1 and the number of or clauses (i.e., expressions separated by the disjunction operator |) in a RE group. Having a larger number of RE groups often leads to better precision but lower coverage on pattern matching, while a larger number of or clauses usually gives a higher coverage but slightly lower precision.

1 When discussing complexity, we consider each semantically independent consecutive word sequence as a RE group (excluding clauses, such as \w+, that can match any word). For instance, the RE /how long( \w+){1,2}? (it take|flight)/ has two RE groups: (how long) and (it take|flight).

3 Our Approach

As depicted in Fig. 2, we propose to combine NNs and REs from three different angles.

3.1 Base Models

We use the Bi-directional LSTM (BLSTM) as our base NN model because it is effective in both intent detection and slot filling (Liu and Lane, 2016).

Intent Detection. As shown in Fig. 2, the BLSTM takes as input the word embeddings $[x_1, ..., x_n]$ of an n-word sentence, and produces a vector $h_i$ for each word i. A self-attention layer then takes in the vectors produced by the BLSTM to compute the sentence embedding s:

$s = \sum_i \alpha_i h_i, \quad \alpha_i = \frac{\exp(h_i^\intercal W c)}{\sum_i \exp(h_i^\intercal W c)}$   (1)

where $\alpha_i$ is the attention for word i, c is a randomly initialized trainable vector used to select informative words for classification, and W is a weight matrix. Finally, s is fed to a softmax classifier for intent classification.

Slot Filling. The model for slot filling is straightforward – the slot label prediction is generated by a softmax classifier which takes in the BLSTM’s output $h_i$ and produces the slot label of word i. Note that attention aggregation in Fig. 2 is only employed by the network-module-level method presented in Sec. 3.3.

3.2 Using REs at the Input Level

At the input level, we use the evaluation outcomes of REs as features which are fed to NN models.

Intent Detection. Our REtag for intent detection is the same as our target intent label. Because real-world REs are unlikely to be perfect, one sentence may be matched by more than one RE. This may result in several REtags that conflict with each other.
For instance, the sentence list the Delta airlines flights to Miami can match a RE: /list( the)? AIRLINE/ that outputs tag airline, and another RE: /list( \w+){0,3} flights?/ that outputs tag flight. To resolve the conflicting situations illustrated above, we average the randomly initialized trainable tag embeddings to form an aggregated embedding as the NN input. There are two ways to use the aggregated embedding. We can append the aggregated embedding to either the embedding of every input word, or the input of the softmax classifier (see 1 in Fig. 2(a)). To determine which strategy works best, we perform a pilot study. We found that the first method causes the tag embedding to be copied many times; consequently, the NN tends to rely heavily on the REtags, and the resulting performance is similar to the one given by using REs alone in few-shot settings. Thus, we adopt the second approach.

Slot Filling. Since the evaluation outcomes of slot REs are word-level tags, we can simply embed and average the REtags into a vector $f_i$ for each word, and append it to the corresponding word embedding $w_i$ (as shown in 1 in Fig. 2(b)). Note that we also extend the slot REtags into the BIO format, e.g., the REtags of the phrase New York are B-city and I-city if its original tag is city.

3.3 Using REs at the Network Module Level

At the network module level, we explore ways to utilize the clue words in the surface form of a RE (bold blue arrows and words in 2 of Fig. 2) to guide the attention module in NNs.

Intent Detection. Taking the sentence in Fig. 1 as an example, the RE /^flights? from/ that leads to intent flight means that flights from are the key words to decide the intent flight. Therefore, the attention module in NNs should leverage these two words to get the correct prediction. To this end, we extend the base intent model by making two changes to incorporate the guidance from REs.

First, since each intent has its own clue words, using a single sentence embedding for all intent labels would make the attention less focused. Therefore, we let each intent label k use a different attention $a_k$, which is then used to generate the sentence embedding $s_k$ for that intent:

$s_k = \sum_i \alpha_{ki} h_i, \quad \alpha_{ki} = \frac{\exp(h_i^\intercal W_a c_k)}{\sum_i \exp(h_i^\intercal W_a c_k)}$   (2)

where $c_k$ is a trainable vector for intent k which is used to compute the attention $a_k$, $h_i$ is the BLSTM output for word i, and $W_a$ is a weight matrix. The probability $p_k$ that the input sentence expresses intent k is computed by:

$p_k = \frac{\exp(\mathrm{logit}_k)}{\sum_k \exp(\mathrm{logit}_k)}, \quad \mathrm{logit}_k = w_k s_k + b_k$   (3)

where $w_k$, $\mathrm{logit}_k$, and $b_k$ are the weight vector, logit, and bias for intent k, respectively.

Figure 2: Overview of our methods. (a) Intent Detection; (b) Slot Filling (predicting the slot label for Boston). 1, 2, 3 refer to the methods in Sec. 3.2, 3.3, and 3.4, respectively.

Second, apart from indicating that a sentence expresses intent k (positive REs), a RE can also indicate that a sentence does not express intent k (negative REs). We thus use a new set of attentions (negative attentions, in contrast to positive attentions) to compute another set of logits for each intent with Eqs. 2 and 3.
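As a concrete illustration of Eqs. 2 and 3, the following NumPy sketch computes the per-intent attentions, sentence embeddings, and intent probabilities for randomly initialized parameters. The tensor shapes, random initialization, and variable names are assumptions made for illustration; in the model these parameters are trained jointly with the BLSTM, and the same computation with a second set of parameters produces the negative-attention logits mentioned above.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative sizes: n words, hidden size d, K intents.
n, d, K = 5, 8, 3
rng = np.random.default_rng(0)
H = rng.normal(size=(n, d))      # BLSTM outputs h_1, ..., h_n
W_a = rng.normal(size=(d, d))    # weight matrix W_a in Eq. 2
C = rng.normal(size=(K, d))      # one trainable vector c_k per intent
w = rng.normal(size=(K, d))      # per-intent output weights w_k in Eq. 3
b = np.zeros(K)                  # per-intent biases b_k

# Eq. 2: per-intent attention over words and sentence embedding s_k.
scores = H @ W_a @ C.T           # (n, K), entry (i, k) = h_i^T W_a c_k
alpha = softmax(scores, axis=0)  # normalize over words for each intent
S = alpha.T @ H                  # (K, d), s_k = sum_i alpha_ki h_i

# Eq. 3: per-intent logits and intent probabilities.
logits = (w * S).sum(axis=1) + b # logit_k = w_k . s_k + b_k
p = softmax(logits)              # p_k
print(np.round(p, 3), float(p.sum()))
```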
We denote the logits computed by positive attentions as $\mathrm{logit}^p_k$, and those computed by negative attentions as $\mathrm{logit}^n_k$; the final logit for intent k can then be calculated as:

$\mathrm{logit}_k = \mathrm{logit}^p_k - \mathrm{logit}^n_k$   (4)

To use REs to guide attention, we add an attention loss to the final loss:

$\mathrm{loss}_{att} = -\sum_k \sum_i t_{ki} \log(\alpha_{ki})$   (5)

where $t_{ki}$ is set to 0 when none of the matched REs (that lead to intent k) marks word i as a clue word – otherwise $t_{ki}$ is set to $1/l_k$, where $l_k$ is the number of clue words for intent k (if no matched RE leads to intent k, then $t_{k*} = 0$). We use Eq. 5 to compute the positive attention loss, $\mathrm{loss}_{att}^p$, for positive REs and the negative attention loss, $\mathrm{loss}_{att}^n$, for negative ones. The final loss is computed as:

$\mathrm{loss} = \mathrm{loss}_c + \beta_p \mathrm{loss}_{att}^p + \beta_n \mathrm{loss}_{att}^n$   (6)

where $\mathrm{loss}_c$ is the original classification loss, and $\beta_p$ and $\beta_n$ are weights for the two attention losses.

Slot Filling. The two-side attention (positive and negative attention) mechanism introduced for intent prediction is unsuitable for slot filling, because for slot filling we need to compute attention for each word, which demands more computational and memory resources than doing so for intent detection.2 For this reason, we use a simplified version of the two-side attention, where all the slot labels share the same set of positive and negative attention. Specifically, to predict the slot label of word i, we use the following equations, which are similar to Eq. 1, to generate a sentence embedding $s^p_i$ with regard to word i from positive attention:

$s^p_i = \sum_j \alpha^p_{ij} h_j, \quad \alpha^p_{ij} = \frac{\exp(h_j^\intercal W_{sp} h_i)}{\sum_j \exp(h_j^\intercal W_{sp} h_i)}$   (7)

where $h_i$ and $h_j$ are the BLSTM outputs for words i and j respectively, $W_{sp}$ is a weight matrix, and $\alpha^p_{ij}$ is the positive attention value for word j with respect to word i. Further, by replacing $W_{sp}$ with $W_{sn}$, we use Eq. 7 again to compute the negative attention and generate the corresponding sentence embedding $s^n_i$. Finally, the prediction $p_i$ for word i can be calculated as:

$p_i = \mathrm{softmax}\big((W_p [s^p_i; h_i] + b_p) - (W_n [s^n_i; h_i] + b_n)\big)$   (8)

where $W_p$, $W_n$, $b_p$, $b_n$ are weight matrices and bias vectors for the positive and negative attention, respectively. Here we append the BLSTM output $h_i$ to $s^p_i$ and $s^n_i$ because the word i itself also plays a crucial part in identifying its slot label.

2 Since we need to assign a label to each word, if we still compute attention for each slot label, we will have to compute 2 × L × n² attention values for one sentence. Here, L is the number of tags and n is the sentence length. The BIO tagging format will further double the number of tags.

3.4 Using REs at the Output Level

At the output level, REs are used to amend the output of NNs. At this level, we take the same approach used for intent detection and slot filling (see 3 in Fig. 2). As mentioned in Sec. 2.3, the slot REs used at the output level only produce a simplified version of the target slot labels, for which we can further annotate their corresponding target slot labels. For instance, a RE that outputs city can lead to three slot labels: fromloc.city, toloc.city, stoploc.city. Let $z_k$ be a 0-1 indicator of whether there is at least one matched RE that leads to target label k (intent or slot label); the final logit of label k for a sentence (or for a specific word in slot filling) is:

$\mathrm{logit}_k = \mathrm{logit}'_k + w_k z_k$   (9)

where $\mathrm{logit}'_k$ is the logit produced by the original NN, and $w_k$ is a trainable weight indicating the overall confidence for REs that lead to target label k.
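For illustration, here is a minimal NumPy sketch of the output-level combination in Eq. 9. The label set, logit values, and weights are made-up numbers; in the model, each $w_k$ is trained jointly with the network, and $z_k$ comes from running the REs on the input.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy example with K = 4 target labels (intents here; the same applies per word for slots).
nn_logits = np.array([2.1, -0.3, 0.4, -1.2])  # logit'_k produced by the NN
w = np.array([0.5, 1.8, 0.5, 0.5])            # trainable per-label RE confidence w_k (made-up values)
z = np.array([0.0, 1.0, 0.0, 0.0])            # z_k = 1 iff some matched RE leads to label k

logits = nn_logits + w * z                    # Eq. 9: amend the NN logits with RE evidence
print(softmax(nn_logits))                     # prediction without REs
print(softmax(logits))                        # prediction after the RE correction
```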
Here we do not assign a trainable weight for each RE because often only a few sentences match a RE. We modify the logit instead of the final probability because a logit is an unconstrained real value, which matches the property of $w_k z_k$ better than a probability does. In fact, when performing model ensemble, ensembling with logits is often empirically better than ensembling with the final probability.3 This is also the reason why we choose to operate on logits in Sec. 3.3.

3 An example can be found in the ensemble version that Juan et al. (2016) used in the Avazu Kaggle competition.

4 Evaluation Methodology

Our experiments aim to answer three questions: Q1: Does the use of REs enhance the learning quality when the number of annotated instances is small? Q2: Does the use of REs still help when using the full training data? Q3: How can we choose from different combination methods?

4.1 Datasets

We use the ATIS dataset (Hemphill et al., 1990) to evaluate our approach. This dataset is widely used in SLU research. It includes queries about flights, meals, etc. We follow the setup of Liu and Lane (2016) by using 4,978 queries for training and 893 for testing, with 18 intent labels and 127 slot labels. We also split words like Miami’s into Miami ’s during the tokenization phase to reduce the number of words that do not have a pre-trained word embedding. This strategy is useful for few-shot learning.

To answer Q1, we also explore the full few-shot learning setting. Specifically, for intent detection, we randomly select 5, 10, or 20 training instances for each intent to form the few-shot training set; for slot filling, we also explore 5, 10, and 20 shot settings. However, since a sentence typically contains multiple slots, the number of mentions of frequent slot labels may inevitably exceed the target shot count. To better approximate the target shot count, we select sentences for each slot label in ascending order of label frequency. That is, the k1-shot dataset will contain the k2-shot dataset if k1 > k2. All settings use the original test set.

Since most existing few-shot learning methods require either many few-shot classes or some classes with enough data for training, we also explore the partial few-shot learning setting for intent detection to provide a fair comparison with existing few-shot learning methods. Specifically, we let the 3 most frequent intents have 300 training instances, and the rest remain untouched. This is also a common scenario in the real world, where we often have several frequent classes and many classes with limited data. As for slot filling, however, since the number of mentions of frequent slot labels already exceeds the target shot count, the original slot filling few-shot dataset can be directly used to train existing few-shot learning methods. Therefore, we do not distinguish full and partial few-shot learning for slot filling.

4.2 Preparing REs

We use the Perl syntax for REs in this work. Our REs are written by a paid annotator who is familiar with the domain. It took the annotator in total less than 10 hours to develop all the REs, while a domain expert can accomplish the task faster. We use the 20-shot training data to develop the REs, but word lists like cities are obtained from the full training set. The development of REs is considered complete when the REs cover most of the cases in the 20-shot training data with reasonable precision. After that, the REs are fixed throughout the experiments.
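The stopping criterion above ("cover most of the cases ... with reasonable precision") can be checked mechanically. The sketch below is a hypothetical helper for doing so on the 20-shot intent data; the RE list, data format, and the idea of scripting this check are illustrative assumptions, not the authors' documented procedure.

```python
import re

# One toy intent RE; in practice this would be the full set being developed.
INTENT_RES = [(re.compile(r"^flights? from", re.I), "flight")]

def re_coverage_and_precision(examples):
    """examples: list of (sentence, gold_intent) pairs from the 20-shot training data."""
    matched = correct = 0
    for sentence, gold in examples:
        tags = {tag for rex, tag in INTENT_RES if rex.search(sentence)}
        if tags:
            matched += 1
            correct += gold in tags
    coverage = matched / len(examples) if examples else 0.0
    precision = correct / matched if matched else 0.0
    return coverage, precision

toy_20shot = [("flights from Boston to Miami", "flight"),
              ("what meals are served on the flight", "meal")]
print(re_coverage_and_precision(toy_20shot))  # (0.5, 1.0)
```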
The majority of the time for writing the REs is proportional to the number of RE groups. It took about 1.5 hours to write the 54 intent REs with on average 2.2 groups per RE. It is straightforward to write the slot REs for the input and output level methods, for which it took around 1 hour to write the 60 REs with 1.7 groups on average. By con2088 trast, writing slot REs to guide attention requires more efforts as the annotator needs to carefully select clue words and annotate the full slot label. As a result, it took about 5.5 hours to generate 115 REs with on average 3.3 groups. The performance of the REs can be found in the last line of Table 1. In practice, a positive RE for intent (or slot) k can often be treated as negative REs for other intents (or slots). As such, we use the positive REs for intent (or slot) k as the negative REs for other intents (or slots) in our experiments. 4.3 Experimental Setup Hyper-parameters. Our hyper-parameters for the BLSTM are similar to the ones used by Liu and Lane (2016). Specifically, we use batch size 16, dropout probability 0.5, and BLSTM cell size 100. The attention loss weight is 16 (both positive and negative) for full few-shot learning settings and 1 for other settings. We use the 100d GloVe word vectors (Pennington et al., 2014) pre-trained on Wikipedia and Gigaword (Parker et al., 2011), and the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001. Evaluation Metrics. We report accuracy and macro-F1 for intent detection, and micro/macroF1 for slot filling. Micro/macro-F1 are the harmonic mean of micro/macro precision and recall. Macro-precision/recall are calculated by averaging precision/recall of each label, and microprecision/recall are averaged over each prediction. Competitors and Naming Conventions. Here, a bold Courier typeface like BLSTM denotes the notations of the models that we will compare in Sec. 5. Specifically, we compare our methods with the baseline BLSTM model (Sec. 3.1). Since our attention loss method (Sec. 3.3) uses two-side attention, we include the raw two-side attention model without attention loss (+two) for comparison as well. Besides, we also evaluate the RE output (REO), which uses the REtags as prediction directly, to show the quality of the REs that we will use in the experiments.4 As for our methods for combinging REs with NN, +feat refers to using REtag as input features (Sec. 3.2), +posi and +neg refer to using positive and negative attention loss respectively, 4 For slot filling, we evaluate the REs that use the target slot labels as REtags. +both refers to using both postive and negative attention losses (Sec. 3.3), and +logit means using REtag to modify NN output (Sec. 3.4). Moverover, since the REs can also be formatted as first-order-logic (FOL) rules, we also compare our methods with the teacher-student framework proposed by Hu et al. (2016a), which is a general framework for distilling knowledge from FOL rules into NN (+hu16). Besides, since we consider few-short learning, we also include the memory module proposed by Kaiser et al. (2017), which performs well in various few-shot datasets (+mem)5. Finally, the state-of-art model on the ATIS dataset is also included (L&L16), which jointly models the intent detection and slot filling in a single network (Liu and Lane, 2016). 5 Experimental Results 5.1 Full Few-Shot Learning To answer Q1 , we first explore the full few-shot learning scenario. Intent Detection. As shown in Table 1, except for 5-shot, all approaches improve the baseline BLSTM. 
Our network-module-level methods give the best performance because our attention module directly receives signals from the clue words in REs that contain more meaningful information than the REtag itself used by other methods. We also observe that since negative REs are derived from positive REs with some noises, posi performs better than neg when the amount of available data is limited. However, neg is slightly better in 20-shot, possibly because negative REs significantly outnumbers the positive ones. Besides, two alone works better than the BLSTM when there are sufficient data, confirming the advantage of our two-side attention architecture. As for other proposed methods, the output level method (logit) works generally better than the input level method (feat), except for the 5-shot case. We believe this is due to the fewer number of RE related parameters and the shorter distance that the gradient needs to travel from the loss to these parameters – both make logit easier to train. However, since logit directly modifies the output, the final prediction is more sensitive to the insufficiently trained weights in logit, leading to the inferior results in the 5-shot setting. 5 We tune C and π0 of hu16, and choose (0.1, 0.3) for intent, and (1, 0.3) for slot. We tune memory-size and k of mem, and choose (1024, 64) for intent, and (2048, 64) for slot. 2089 Model Type Model Name Intent Slot 5-shot 10-shot 20-shot 5-shot 10-shot 20-shot Macro-F1 / Accuracy Macro-F1 / Accuracy Base Model BLSTM 45.28 / 60.02 60.62 / 64.61 63.60 / 80.52 60.78 / 83.91 74.28 / 90.19 80.57 / 93.08 Input Level +feat 49.40 / 63.72 64.34 / 73.46 65.16 / 83.20 66.84 / 88.96 79.67 / 93.64 84.95 / 95.00 +logit 46.01 / 58.68 63.51 / 77.83 69.22 / 89.25 63.68 / 86.18 76.12 / 91.64 83.71 / 94.43 Output Level +hu16 47.22 / 56.22 61.83 / 68.42 67.40 / 84.10 63.37 / 85.37 75.67 / 91.06 80.85 / 93.47 Network Module Level +two 40.44 / 57.22 60.72 / 75.14 62.88 / 83.65 60.38 / 83.63 73.22 / 90.08 79.58 / 92.57 +two+posi 50.90 / 74.47 68.69 / 84.66 72.43 / 85.78 59.59 / 83.47 73.62 / 89.28 78.94 / 92.21 +two+neg 49.01 / 68.31 64.67 / 79.17 72.32 / 86.34 59.51 / 83.23 72.92 / 89.11 78.83 / 92.07 +two+both 54.86 / 75.36 71.23 / 85.44 75.58 / 88.80 59.47 / 83.35 73.55 / 89.54 79.02 / 92.22 +mem 61.25 / 83.45 77.83 / 90.57 82.98 / 93.49 Few-Shot Model +mem+feat 65.08 / 88.07 80.64 / 93.47 85.45 / 95.39 RE Output REO 70.31 / 68.98 42.33 / 70.79 Table 1: Results on Full Few-Shot Learning Settings. For slot filling, we do not distinguish full and partial few-shot learning settings (see Sec. 4.1). To compare with existing methods of combining NN and rules, we also implement the teacherstudent network (Hu et al., 2016a). This method lets the NN learn from the posterior label distribution produced by FOL rules in a teacher-student framework, but requires considerable amounts of data. Therefore, although both hu16 and logit operate at the output level, logit still performs better than hu16 in these few-shot settings, since logit is easier to train. It can also be seen that starting from 10-shot, two+both significantly outperforms pure REO. This suggests that by using our attention loss to connect the distributional representation of the NN and the clue words of REs, we can generalize RE patterns within a NN architecture by using a small amount of annotated data. Slot Filling. Different from intent detection, as shown in Table 1, our attention loss does not work for slot filling. 
The reason is that the slot label of a target word (the word for which we are trying to predict a slot label) is decided mainly by the semantic meaning of the word itself, together with 03 phrases in the context to provide supplementary information. However, our attention mechanism can only help in recognizing clue words in the context, which is less important than the word itself and have already been captured by the BLSTM, to some extent. Therefore, the attention loss and the attention related parameters are more of a burden than a benefit. As is shown in Fig. 1, the model recognizes Boston as fromloc.city mainly because Boston itself is a city, and its context word from may have already been captured by the BLSTM and our attention mechanism does not help much. By examining the attention values of +two trained on the full dataset, we find that instead of marking informative context words, the attention tends to concentrate on the target word itself. This observation further reinforces our hypothesis on the attention loss. On the other hand, since the REtags provide extra information, such as type, about words in the sentence, logit and feat generally work better. However, different from intent detection, feat only outperforms logit by a margin. This is because feat can use the REtags of all words to generate better context representations through the NN, while logit can only utilize the REtag of the target word before the final output layer. As a result, feat actually gathers more information from REs and can make better use of them than logit. Again, hu16 is still outperformed by logit, possibly due to the insufficient data support in this few-shot scenario. We also see that even the BLSTM outperforms REO in 5-shot, indicating while it is hard to write high-quality RE patterns, using REs to boost NNs is still feasible. Summary. The amount of extra information that a NN can utilize from the combined REs significantly affects the resulting performance. Thus, the attention loss methods work best for intent detection and feat works best for slot filling. We also see that the improvements from REs decreases as having more training data. This is not surprising because the implicit knowledge embedded in the REs are likely to have already been captured by a sufficient large annotated dataset and in this scenario using the REs will bring in fewer benefits. 5.2 Partial Few-Shot Learning To better understand the relationship between our approach and existing few-shot learning methods, we also implement the memory network method 2090 Model 5-shot 10-shot 20-shot Macro-F1 / Accuracy BLSTM 64.73 / 91.71 78.55 / 96.53 82.05 / 97.20 +hu16 65.22 / 91.94 84.49 / 96.75 84.80 / 97.42 +two 65.59 / 91.04 77.92 / 95.52 81.01 / 96.86 +two+both 66.62 / 92.05 85.75 / 96.98 87.97 / 97.76 +mem 67.54 / 91.83 82.16 / 96.75 84.69 / 97.42 +mem+posi 70.46 / 93.06 86.03 / 97.09 86.69 / 97.65 Table 2: Intent Detection Results on Partial FewShot Learning Setting. Model Intent Slot Macro-F1/Accuracy Macro-F1/Micro-F1 BLSTM 92.50 / 98.77 85.01 / 95.47 +feat 91.86 / 97.65 86.7 / 95.55 +logit 92.48 / 98.77 86.94 / 95.42 +hu16 93.09 / 98.77 85.74 / 95.33 +two 93.64 / 98.88 84.45 / 95.05 +two+both 96.20 / 98.99 85.44 / 95.27 +mem 93.42 / 98.77 85.72 / 95.37 +mem+posi/feat 94.36 / 98.99 87.82 / 95.90 L&L16 - / 98.43 - / 95.98 Table 3: Results on Full Dataset. The left side of ‘/’ applies for intent, and the right side for slot. (Kaiser et al., 2017) which achieves good results in various few-shot datasets. 
We adapt their opensource code, and add their memory module (mem) to our BLSTM model. Since the memory module requires to be trained on either many few-shot classes or several classes with extra data, we expand our full few-shot dataset for intent detection, so that the top 3 intent labels have 300 sentences (partial few-shot). As shown in Table 2, mem works better than BLSTM, and our attention loss can be further combined with the memory module (mem+posi), with even better performance. hu16 also works here, but worse than two+both. Note that, the memory module requires the input sentence to have only one embedding, thus we only use one set of positive attention for combination. As for slot filling, since we already have extra data for frequent tags in the original few-shot data (see Sec. 4.1), we use them directly to run the memory module. As shown in the bottom of Table 1, mem also improves the base BLSTM, and gains further boost when it is combined with feat6. 5.3 Full Dataset To answer Q2, we also evaluate our methods on the full dataset. As seen in Table 3, for intent detection, while two+both still works, feat and logit no longer give improvements. This shows 6For compactness, we only combine the best method in each task with mem, but others can also be combined. Model Intent Slot Macro-F1 / Accuracy Macro-F1 / Micro-F1 Complex Simple Complex Simple BLSTM 63.60 / 80.52 80.57 / 93.08 +feat 65.16/83.20 66.51/80.40 84.95/95.00 83.88/94.71 +logit 69.22/89.25 65.09/83.09 83.71/94.43 83.22/93.94 +both 75.58/88.80 74.51/87.46 Table 4: Results on 20-Shot Data with Simple REs. +both refers to +two +both for short. that since both REtag and annotated data provide intent labels for the input sentence, the value of the extra noisy tag from RE become limited as we have more annotated data. However, as there is no guidance on attention in the annotations, the clue words from REs are still useful. Further, since feat concatenates REtags at the input level, the powerful NN makes it more likely to overfit than logit, therefore feat performs even worse when compared to the BLSTM. As for slot filling, introducing feat and logit can still bring further improvements. This shows that the word type information contained in the REtags is still hard to be fully learned even when we have more annotated data. Moreover, different from few-shot settings, two+both has a better macro-F1 score than the BLSTM for this task, suggesting that better attention is still useful when the base model is properly trained. Again, hu16 outperforms the BLSTM in both tasks, showing that although the REtags are noisy, their teacher-student network can still distill useful information. However, hu16 is a general framework to combine FOL rules, which is more indirect in transferring knowledge from rules to NN than our methods. Therefore, it is still inferior to attention loss in intent detection and feat in slot filling, which are designed to combine REs. Further, mem generally works in this setting, and can receive further improvement by combining our fusion methods. We can also see that two+both works clearly better than the stateof-art method (L&L16) in intent detection, which jointly models the two tasks. And mem+feat is comparative to L&L16 in slot filling. 5.4 Impact of the RE Complexity We now discuss how the RE complexity affects the performance of the combination. We choose to control the RE complexity by modifying the number of groups. Specifically, we reduce the number of groups for existing REs to decrease RE complexity. 
To mimic the process of writing simple 2091 REs from scratch, we try our best to keep the key RE groups. For intent detection, all the REs are reduced to at most 2 groups. As for slot filling, we also reduce the REs to at most 2 groups, and for some simples case, we further reduce them into word-list patterns, e.g., ( CITY). As shown in Table 4, the simple REs already deliver clear improvements to the base NN models, which shows the effectiveness of our methods, and indicates that simple REs are quite costefficient since these simple REs only contain 1-2 RE groups and thus very easy to produce. We can also see that using complex REs generally leads to better results compared to using simple REs. This indicates that when considering using REs to improve a NN model, we can start with simple REs, and gradually increase the RE complexity to improve the performance over time7. 6 Related Work Our work builds upon the following techniques, while qualitatively differing from each NN with Rules. On the initialization side, Li et al. (2017) uses important n-grams to initialize the convolution filters. On the input side, Wang et al. (2017a) uses knowledge base rules to find relevant concepts for short texts to augment input. On the output side, Hu et al. (2016a; 2016b) and Guo et al. (2017) use FOL rules to rectify the output probability of NN, and then let NN learn from the rectified distribution in a teacher-student framework. Xiao et al. (2017), on the other hand, modifies the decoding score of NN by multiplying a weight derived from rules. On the loss function side, people modify the loss function to model the relationship between premise and conclusion (Demeester et al., 2016), and fit both human-annotated and rule-annotated labels (Alashkar et al., 2017). Since fusing in initialization or in loss function often require special properties of the task, these approaches are not applicable to our problem. Our work thus offers new ways to exploit RE rules at different levels of a NN. NNs and REs. As for NNs and REs, previous work has tried to use RE to speed up the decoding phase of a NN (Strauß et al., 2016) and generating REs from natural language specifications of the 7We do not include results of both for slot filling since its REs are different from feat and logit, and we have already shown that the attention loss method does not work for slot filling. RE (Locascio et al., 2016). By contrast, our work aims to use REs to improve the prediction ability of a NN. Few-Shot Learning. Prior work either considers few-shot learning in a metric learning framework (Koch et al., 2015; Vinyals et al., 2016), or stores instances in a memory (Santoro et al., 2016; Kaiser et al., 2017) to match similar instances in the future. Wang et al. (2017b) further uses the semantic meaning of the class name itself to provide extra information for few-shot learning. Unlike these previous studies, we seek to use the humangenerated REs to provide additional information. Natural Language Understanding. Recurrent neural networks are proven to be effective in both intent detection (Ravuri and Stoicke, 2015) and slot filling (Mesnil et al., 2015). Researchers also find ways to jointly model the two tasks (Liu and Lane, 2016; Zhang and Wang, 2016). However, no work so far has combined REs and NNs to improve intent detection and slot filling. 7 Conclusions In this paper, we investigate different ways to combine NNs and REs for solving typical SLU tasks. 
Our experiments demonstrate that the combination clearly improves the NN performance in both the few-shot learning and the full dataset settings. We show that by exploiting the implicit knowledge encoded within REs, one can significantly improve the learning performance. Specifically, we observe that using REs to guide the attention module works best for intent detection, and using REtags as features is an effective approach for slot filling. We provide interesting insights on how REs of various forms can be employed to improve NNs, showing that while simple REs are very cost-effective, complex REs generally yield better results. Acknowledgement This work is supported by the National High Technology R&D Program of China (Grant No. 2015AA015403), the National Natural Science Foundation of China (Grant Nos. 61672057 and 61672058); the UK Engineering and Physical Sciences Research Council (EPSRC) under grants EP/M01567X/1 (SANDeRs) and EP/M015793/1 (DIVIDEND); and the Royal Society International Collaboration Grant (IE161012). For any correspondence, please contact Yansong Feng. 2092 References Taleb Alashkar, Songyao Jiang, Shuyang Wang, and Yun Fu. 2017. Examples-rules guided deep neural network for makeup recommendation. In AAAI, pages 941–947. Angel X Chang and Christopher D Manning. 2014. Tokensregex: Defining cascaded regular expressions over tokens. Tech. Rep. CSTR 2014-02. Thomas Demeester, Tim Rockt¨aschel, and Sebastian Riedel. 2016. Lifted rule injection for relation embeddings. arXiv preprint arXiv:1606.08359. Paul Suganthan GC, Chong Sun, Haojun Zhang, Frank Yang, Narasimhan Rampalli, Shishir Prasad, Esteban Arcaute, Ganesh Krishnan, Rohit Deep, Vijay Raghavendra, et al. 2015. Why big data industrial systems need rules and what we can do about it. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, pages 265–276. ACM. Yoav Goldberg. 2017. Neural network methods for natural language processing. Synthesis Lectures on Human Language Technologies, 10(1):1–309. Shu Guo, Quan Wang, Lihong Wang, Bin Wang, and Li Guo. 2017. Knowledge graph embedding with iterative guidance from soft rules. arXiv preprint arXiv:1711.11231. Charles T Hemphill, John J Godfrey, George R Doddington, et al. 1990. The atis spoken language systems pilot corpus. In Proceedings of the DARPA speech and natural language workshop, pages 96– 101. Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. 2016a. Harnessing deep neural networks with logic rules. arXiv preprint arXiv:1603.06318. Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, and Eric P Xing. 2016b. Deep neural networks with massive learned knowledge. In EMNLP, pages 1670–1679. Yuchin Juan, Yong Zhuang, Wei-Sheng Chin, and Chih-Jen Lin. 2016. Field-aware factorization machines for ctr prediction. In Proceedings of the 10th ACM Conference on Recommender Systems, pages 43–50. ACM. Łukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio. 2017. Learning to remember rare events. arXiv preprint arXiv:1703.03129. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. 2015. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, volume 2. Shen Li, Zhe Zhao, Tao Liu, Renfen Hu, and Xiaoyong Du. 2017. Initializing convolutional filters with semantic features for text classification. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1885–1890. Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. arXiv preprint arXiv:1609.01454. Nicholas Locascio, Karthik Narasimhan, Eduardo DeLeon, Nate Kushman, and Regina Barzilay. 2016. Neural generation of regular expressions from natural language with minimal domain knowledge. arXiv preprint arXiv:1608.03000. Gr´egoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu, et al. 2015. Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 23(3):530–539. Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English gigaword fifth edition, linguistic data consortium. Google Scholar. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Suman Ravuri and Andreas Stoicke. 2015. A comparative study of neural network models for lexical intent classification. In Automatic Speech Recognition and Understanding (ASRU), 2015 IEEE Workshop on, pages 368–374. IEEE. Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. 2016. Metalearning with memory-augmented neural networks. In International conference on machine learning, pages 1842–1850. Tobias Strauß, Gundram Leifert, Tobias Gr¨uning, and Roger Labahn. 2016. Regular expressions for decoding of neural network outputs. Neural Networks, 79:1–11. Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. 2016. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630–3638. Jin Wang, Zhongyuan Wang, Dawei Zhang, and Jun Yan. 2017a. Combining knowledge with deep convolutional neural networks for short text classification. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 2915– 2921. AAAI Press. Peng Wang, Lingqiao Liu, Chunhua Shen, Zi Huang, Anton van den Hengel, and Heng Tao Shen. 2017b. Multi-attention network for one shot learning. In 2093 2017 IEEE conference on computer vision and pattern recognition, CVPR, pages 22–25. Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2017. Symbolic priors for rnn-based semantic parsing. In wenty-sixth International Joint Conference on Artificial Intelligence (IJCAI-17), pages 4186– 4192. Lingxi Xie, Jingdong Wang, Zhen Wei, Meng Wang, and Qi Tian. 2016. Disturblabel: Regularizing cnn on the loss layer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4753–4762. Xiaodong Zhang and Houfeng Wang. 2016. A joint model of intent determination and slot filling for spoken language understanding. In IJCAI, pages 2993–2999.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2094–2103 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2094 Token-level and sequence-level loss smoothing for RNN language models Maha Elbayad1,2 Laurent Besacier1 Jakob Verbeek2 Univ. Grenoble Alpes, CNRS, Grenoble INP, Inria, LIG, LJK, F-38000 Grenoble France 1 [email protected] 2 [email protected] Abstract Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from “exposure bias”: during training tokens are predicted given ground-truth sequences, while at test time prediction is conditioned on generated output sequences. To overcome these limitations we build upon the recent reward augmented maximum likelihood approach i.e. sequence-level smoothing that encourages the model to predict sentences close to the ground truth according to a given performance metric. We extend this approach to token-level loss smoothing, and propose improvements to the sequence-level smoothing approach. Our experiments on two different tasks, image captioning and machine translation, show that token-level and sequence-level loss smoothing are complementary, and significantly improve results. 1 Introduction Recurrent neural networks (RNNs) have recently proven to be very effective sequence modeling tools, and are now state of the art for tasks such as machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015), image captioning (Kiros et al., 2014; Vinyals et al., 2015; Anderson et al., 2017) and automatic speech recognition (Chorowski et al., 2015; Chiu et al., 2017). The basic principle of RNNs is to iteratively compute a vectorial sequence representation, by applying at each time-step the same trainable function to compute the new network state from the previous state and the last symbol in the sequence. These models are typically trained by maximizing the likelihood of the target sentence given an encoded source (text, image, speech). Maximum likelihood estimation (MLE), however, has two main limitations. First, the training signal only differentiates the ground-truth target output from all other outputs. It treats all other output sequences as equally incorrect, regardless of their semantic proximity from the ground-truth target. While such a “zero-one” loss is probably acceptable for coarse grained classification of images, e.g. across a limited number of basic object categories (Everingham et al., 2010) it becomes problematic as the output space becomes larger and some of its elements become semantically similar to each other. This is in particular the case for tasks that involve natural language generation (captioning, translation, speech recognition) where the number of possible outputs is practically unbounded. For natural language generation tasks, evaluation measures typically do take into account structural similarity, e.g. based on n-grams, but such structural information is not reflected in the MLE criterion. The second limitation of MLE is that training is based on predicting the next token given the input and preceding ground-truth output tokens, while at test time the model predicts conditioned on the input and the so-far generated output sequence. 
Given the exponentially large output space of natural language sentences, it is not obvious that the learned RNNs generalize well beyond the relatively sparse distribution of ground-truth sequences used during MLE optimization. This phenomenon is known as “exposure bias” (Ranzato et al., 2016; Bengio et al., 2015). MLE minimizes the KL divergence between a target Dirac distribution on the ground-truth sentence(s) and the model’s distribution. In this pa2095 per, we build upon the “loss smoothing” approach by Norouzi et al. (2016), which smooths the Dirac target distribution over similar sentences, increasing the support of the training data in the output space. We make the following main contributions: • We propose a token-level loss smoothing approach, using word-embeddings, to achieve smoothing among semantically similar terms, and we introduce a special procedure to promote rare tokens. • For sequence-level smoothing, we propose to use restricted token replacement vocabularies, and a “lazy evaluation” method that significantly speeds up training. • We experimentally validate our approach on the MSCOCO image captioning task and the WMT’14 English to French machine translation task, showing that on both tasks combining token-level and sequence-level loss smoothing improves results significantly over maximum likelihood baselines. In the remainder of the paper, we review the existing methods to improve RNN training in Section 2. Then, we present our token-level and sequence-level approaches in Section 3. Experimental evaluation results based on image captioning and machine translation tasks are laid out in Section 4. 2 Related work Previous work aiming to improve the generalization performance of RNNs can be roughly divided into three categories: those based on regularization, data augmentation, and alternatives to maximum likelihood estimation. Regularization techniques are used to increase the smoothness of the function learned by the network, e.g. by imposing an ℓ2 penalty on the network weights, also known as “weight decay”. More recent approaches mask network activations during training, as in dropout (Srivastava et al., 2014) and its variants adapted to recurrent models (Pham et al., 2014; Krueger et al., 2017). Instead of masking, batch-normalization (Ioffe and Szegedy, 2015) rescales the network activations to avoid saturating the network’s non-linearities. Instead of regularizing the network parameters or activations, it is also possible to directly regularize based on the entropy of the output distribution (Pereyra et al., 2017). Data augmentation techniques improve the robustness of the learned models by applying transformations that might be encountered at test time to the training data. In computer vision, this is common practice, and implemented by, e.g., scaling, cropping, and rotating training images (LeCun et al., 1998; Krizhevsky et al., 2012; Paulin et al., 2014). In natural language processing, examples of data augmentation include input noising by randomly dropping some input tokens (Iyyer et al., 2015; Bowman et al., 2015; Kumar et al., 2016), and randomly replacing words with substitutes sampled from the model (Bengio et al., 2015). Xie et al. (2017) introduced data augmentation schemes for RNN language models that leverage n-gram statistics in order to mimic KneserNey smoothing of n-grams models. In the context of machine translation, Fadaee et al. 
(2017) modify sentences by replacing words with rare ones when this is plausible according to a pretrained language model, and substitutes its equivalent in the target sentence using automatic word alignments. This approach, however, relies on the availability of additional monolingual data for language model training. The de facto standard way to train RNN language models is maximum likelihood estimation (MLE) (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015). The sequential factorization of the sequence likelihood generates an additive structure in the loss, with one term corresponding to the prediction of each output token given the input and the preceding ground-truth output tokens. In order to directly optimize for sequence-level structured loss functions, such as measures based on n-grams like BLEU or CIDER, Ranzato et al. (2016) use reinforcement learning techniques that optimize the expectation of a sequence-level reward. In order to avoid early convergence to poor local optima, they pre-train the model using MLE. Leblond et al. (2018) build on the learning to search approach to structured prediction (Daumé III et al., 2009; Chang et al., 2015) and adapts it to RNN training. The model generates candidate sequences at each time-step using all possible tokens, and scores these at sequence-level to derive a training signal for each time step. This leads to an approach that is structurally close to MLE, but computationally expensive. Norouzi et al. (2016) introduce a reward augmented maximum likelihood (RAML) approach, that incorpo2096 rates a notion of sequence-level reward without facing the difficulties of reinforcement learning. They define a target distribution over output sentences using a soft-max over the reward over all possible outputs. Then, they minimize the KL divergence between the target distribution and the model’s output distribution. Training with a general reward distribution is similar to MLE training, except that we use multiple sentences sampled from the target distribution instead of only the ground-truth sentences. In our work, we build upon the work of Norouzi et al. (2016) by proposing improvements to sequence-level smoothing, and extending it to token-level smoothing. Our token-level smoothing approach is related to the label smoothing approach of Szegedy et al. (2016) for image classification. Instead of maximizing the probability of the correct class, they train the model to predict the correct class with a large probability and all other classes with a small uniform probability. This regularizes the model by preventing overconfident predictions. In natural language generation with large vocabularies, preventing such “narrow” over-confident distributions is imperative, since for many tokens there are nearly interchangeable alternatives. 3 Loss smoothing for RNN training We briefly recall standard recurrent neural network training, before presenting sequence-level and token-level loss smoothing below. 3.1 Maximum likelihood RNN training We are interested in modeling the conditional probability of a sequence y = (y1, . . . , yT ) given a conditioning observation x, pθ(y|x) = T Y t=1 pθ(yt|x, y<t), (1) where y<t = (y1, . . . , yt−1), the model parameters are given by θ, and x is a source sentence or an image in the contexts of machine translation and image captioning, respectively. 
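A minimal Python sketch of this factorisation, with the per-step conditionals p_theta(y_t | x, y_<t) supplied as precomputed softmax outputs (a hypothetical stand-in for the RNN parameterisation described next), shows how the sequence negative log-likelihood decomposes into per-token terms:

import math

def sequence_nll(step_distributions, target_tokens, vocab):
    # Chain-rule factorisation of Eq. (1): the sequence log-probability is
    # the sum of per-token conditional log-probabilities, so the negative
    # log-likelihood is a sum of per-token cross-entropy terms.
    index = {w: i for i, w in enumerate(vocab)}
    return -sum(math.log(dist[index[tok]])
                for dist, tok in zip(step_distributions, target_tokens))

# Toy usage: a 4-word vocabulary and three decoding steps.
vocab = ["<eos>", "a", "cat", "sat"]
dists = [
    [0.1, 0.2, 0.6, 0.1],   # p(y_1 | x)
    [0.1, 0.1, 0.1, 0.7],   # p(y_2 | x, y_1)
    [0.7, 0.1, 0.1, 0.1],   # p(y_3 | x, y_<3)
]
print(sequence_nll(dists, ["cat", "sat", "<eos>"], vocab))  # ~1.22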
In a recurrent neural network, the sequence y is predicted based on a sequence of states ht, pθ(yt|x, y<t) = pθ(yt|ht), (2) where the RNN state is computed recursively as ht = ( fθ(ht−1, yt−1, x) for t ∈{1, ..T}, gθ(x) for t = 0. (3) The input is encoded by gθ and used to initialize the state sequence, and fθ is a non-linear function that updates the state given the previous state ht−1, the last output token yt−1, and possibly the input x. The state update function can take different forms, the ones including gating mechanisms such as LSTMs (Hochreiter and Schmidhuber, 1997) and GRUs (Chung et al., 2014) are particularly effective to model long sequences. In standard teacher-forced training, the hidden states will be computed by forwarding the ground truth sequence y∗i.e. in Eq. (3), the RNN has access to the true previous token y∗ t−1. In this case we will note the hidden states h∗ t . Given a ground-truth target sequence y∗, maximum likelihood estimation (MLE) of the network parameters θ amounts to minimizing the loss ℓMLE(y∗, x) = −ln pθ(y∗|x) (4) = − T X t=1 ln pθ(y∗ t |h∗ t ). (5) The loss can equivalently be expressed as the KLdivergence between a Dirac centered on the target output (with δa(x) = 1 at x = a and 0 otherwise) and the model distribution, either at the sequencelevel or at the token-level: ℓMLE(y∗, x) = DKL δy∗||pθ(y|x)  (6) = T X t=1 DKL δy∗ t ||pθ(yt|h∗ t )  . (7) Loss smoothing approaches considered in this paper consist in replacing the Dirac on the groundtruth sequence with distributions with larger support. These distributions can be designed in such a manner that they reflect which deviations from ground-truth predictions are preferred over others. 3.2 Sequence-level loss smoothing The reward augmented maximum likelihood approach of Norouzi et al. (2016) consists in replacing the sequence-level Dirac δy∗in Eq. (6) with a distribution r(y|y∗) ∝exp r(y, y∗)/τ, (8) where r(y, y∗) is a “reward” function that measures the quality of sequence y w.r.t. y∗, e.g. metrics used for evaluation of natural language processing tasks can be used, such as BLEU (Papineni et al., 2002) or CIDER (Vedantam et al., 2097 2015). The temperature parameter τ controls the concentration of the distribution around y∗. When m > 1 ground-truth sequences are paired with the same input x, the reward function can be adapted to fit this setting and be defined as r(y, {y∗(1), . . . , y∗(m)}). The sequence-level smoothed loss function is then given by ℓSeq(y∗, x) = DKL r(y|y∗)||pθ(y|x)  = H(r(y|y∗)) −Er[ln pθ(y|x)] , (9) where the entropy term H(r(y|y∗)) does not depend on the model parameters θ. In general, expectation in Eq. (9) is intractable due to the exponentially large output space, and replaced with a Monte-Carlo approximation: Er[−ln pθ(y|x)] ≈− L X l=1 ln pθ(yl|x). (10) Stratified sampling. Norouzi et al. (2016) show that when using the Hamming or edit distance as a reward, we can sample directly from r(y|y∗) using a stratified sampling approach. In this case sampling proceeds in three stages. (i) Sample a distance d from {0, . . . , T} from a prior distribution on d. (ii) Uniformly select d positions in the sequence to be modified. (iii) Sample the d substitutions uniformly from the token vocabulary. Details on the construction of the prior distribution on d for a reward based on the Hamming distance can be found in Appendix A. Importance sampling. 
For a reward based on BLEU or CIDER , we cannot directly sample from r(y|y∗) since the normalizing constant, or “partition function”, of the distribution is intractable to compute. In this case we can resort to importance sampling. We first sample L sequences yl from a tractable proposal distribution q(y|y∗). We then compute the importance weights ωl ≈ r(yl|y∗)/q(yl|y∗) PL k=1 r(yk|y∗)/q(yk|y∗) , (11) where r(yk|y∗) is the un-normalized reward distribution in Eq. (8). We finally approximate the expectation by reweighing the samples in the Monte Carlo approximation as Er[−ln pθ(y|x)] ≈− L X l=1 ωl ln pθ(yl|x). (12) In our experiments we use a proposal distribution based on the Hamming distance, which allows for tractable stratified sampling, and generates sentences that do not stray away from the ground truth. We propose two modifications to the sequencelevel loss smoothing of Norouzi et al. (2016): sampling to a restricted vocabulary (described in the following paragraph) and lazy sequence-level smoothing (described in section 3.4). Restricted vocabulary sampling. In the stratified sampling method for Hamming and edit distance rewards, instead of drawing from the large vocabulary V, containing typically in the order of 104 words or more, we can restrict ourselves to a smaller subset Vsub more adapted to our task. We considered three different possibilities for Vsub. V : the full vocabulary from which we sample uniformly (default), or draw from our token-level smoothing distribution defined below in Eq. (13). Vrefs: uniformly sample from the set of tokens that appear in the ground-truth sentence(s) associated with the current input. Vbatch: uniformly sample from the tokens that appear in the ground-truth sentences across all inputs that appear in a given training mini-batch. Uniformly sampling from Vbatch has the effect of boosting the frequencies of words that appear in many reference sentences, and thus approximates to some extent sampling substitutions from the uni-gram statistics of the training set. 3.3 Token-level loss smoothing While the sequence-level smoothing can be directly based on performance measures of interest such as BLEU or CIDEr, the support of the smoothed distribution is limited to the number of samples drawn during training. We propose smoothing the token-level Diracs δy∗ t in Eq. (7) to increase its support to similar tokens. Since we apply smoothing to each of the tokens independently, this approach implicitly increases the support to an exponential number of sequences, unlike the sequence-level smoothing approach. This comes at the price, however, of a naive token-level independence assumption in the smoothing. We define the smoothed token-level distribution, similar as the sequence-level one, as a softmax over a token-level “reward” function, r(yt|y∗ t ) ∝exp r(yt, y∗ t )/τ, (13) 2098 where τ is again a temperature parameter. As a token-level reward r(yt, y∗ t ) we use the cosine similarity between yt and y∗ t in a semantic wordembedding space. In our experiments we use GloVe (Pennington et al., 2014); preliminary experiments with word2vec (Mikolov et al., 2013) yielded somewhat worse results. Promoting rare tokens. We can further improve the token-level smoothing by promoting rare tokens. To do so, we penalize frequent tokens when smoothing over the vocabulary, by subtracting β freq(yt) from the reward, where freq(·) denotes the term frequency and β is a non-negative weight. This modification encourages frequent tokens into considering the rare ones. 
We experimentally found that it is also beneficial for rare tokens to boost frequent ones, as they tend to have mostly rare tokens as neighbors in the wordembedding space. With this in mind, we define a new token-level reward as: rfreq(yt, y∗ t ) = r(yt, y∗ t ) (14) −β min  freq(yt) freq(y∗ t ), freq(y∗ t ) freq(yt)  , where the penalty term is strongest if both tokens have similar frequencies. 3.4 Combining losses In both loss smoothing methods presented above, the temperature parameter τ controls the concentration of the distribution. As τ gets smaller the distribution peaks around the ground-truth, while for large τ the uniform distribution is approached. We can, however, not separately control the spread of the distribution and the mass reserved for the ground-truth output. We therefore introduce a second parameter α ∈[0, 1] to interpolate between the Dirac on the ground-truth and the smooth distribution. Using ¯α = 1 −α, the sequence-level and token-level loss functions are then defined as ℓα Seq(y∗, x) = αℓSeq(y∗, x) + ¯αℓMLE(y∗, x) (15) = αEr[ℓMLE(y, x)] + ¯αℓMLE(y∗, x) ℓα Tok(y∗, x) = αℓTok(y∗, x) + ¯αℓMLE(y∗, x) (16) To benefit from both sequence-level and tokenlevel loss smoothing, we also combine them by applying token-level smoothing to the different sequences sampled for the sequence-level smoothing. We introduce two mixing parameters α1 and α2. The first controls to what extent sequencelevel smoothing is used, while the second controls to what extent token-level smoothing is used. The combined loss is defined as ℓα1,α2 Seq, Tok(y∗, x, r) = α1Er[ℓTok(y, x)] + ¯α1ℓTok(y∗, x) = α1Er[α2ℓTok(y, x) + ¯α2ℓMLE(y, x)] + ¯α1(α2ℓTok(y∗, x) + ¯α2ℓMLE(y∗, x)). (17) In our experiments, we use held out validation data to set mixing and temperature parameters. Algorithm 1 Sequence-level smoothing algorithm Input: x, y∗ Output: ℓα seq(x, y∗) Encode x to initialize the RNN Forward y∗in the RNN to compute the hidden states h∗ t Compute the MLE loss ℓMLE(y∗, x) for l ∈{1, . . . , L} do Sample yl ∼r(˙|y∗) if Lazy then Compute ℓ(yl, x) = −P t log pθ(yl t|h∗ t ) else Forward yl in the RNN to get its hidden states hl t Compute ℓ(yl, x) = ℓMLE(yl, x) end if end for ℓα Seq(x, y∗) = ¯αℓMLE(y∗, x) + α L P l ℓ(yl, x) Lazy sequence smoothing. Although sequencelevel smoothing is computationally efficient compared to reinforcement learning approaches (Ranzato et al., 2016; Rennie et al., 2017), it is slower compared to MLE. In particular, we need to forward each of the samples yl through the RNN in teacher-forcing mode so as to compute its hidden states hl t, which are used to compute the sequence MLE loss as ℓMLE(yl, x) = − T X t=1 ln pθ(yl t|hl t). (18) To speed up training, and since we already forward the ground truth sequence in the RNN to evaluate the MLE part of ℓα Seq(y∗, x), we propose to use the same hidden states h∗ t to compute both the MLE and the sequence-level smoothed loss. In this case: ℓlazy(yl, x) = − T X t=1 ln pθ(yl t|h∗ t ) (19) In this manner, we only have a single instead of L + 1 forwards-passes in the RNN. We provide the pseudo-code for training in Algorithm 1. 
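The following Python sketch mirrors Algorithm 1 under simplifying assumptions: the model is abstracted as a callable log_p(target, conditioning) returning the summed token log-probabilities of target when the RNN is teacher-forced on conditioning (a hypothetical interface), and the prior over Hamming distances is a simple exponential stand-in for the exact prior of Appendix A. The restricted vocabulary vocab_sub can be the full vocabulary V, the reference tokens Vrefs, or the mini-batch tokens Vbatch.

import math
import random

def sample_hamming_neighbour(y_star, vocab_sub, tau=1.0):
    # Stratified sampling from r(y | y*) with a Hamming-distance reward:
    # (i) sample a distance d, (ii) pick d positions uniformly,
    # (iii) substitute tokens drawn uniformly from the restricted vocabulary.
    T = len(y_star)
    weights = [math.exp(-d / tau) for d in range(T + 1)]
    d = random.choices(range(T + 1), weights=weights, k=1)[0]
    positions = random.sample(range(T), d)
    y = list(y_star)
    for t in positions:
        y[t] = random.choice(vocab_sub)
    return y

def smoothed_sequence_loss(log_p, y_star, vocab_sub, num_samples=5,
                           alpha=0.7, lazy=True):
    # Sequence-level smoothed loss of Eq. (15), with the lazy variant of
    # Eq. (19): sampled sentences are scored against the ground-truth
    # hidden states instead of being forwarded through the RNN again.
    mle = -log_p(y_star, y_star)
    smoothed = 0.0
    for _ in range(num_samples):
        y_l = sample_hamming_neighbour(y_star, vocab_sub)
        conditioning = y_star if lazy else y_l
        smoothed += -log_p(y_l, conditioning)
    smoothed /= num_samples
    return alpha * smoothed + (1.0 - alpha) * mle

# Toy usage with a uniform "model" over a 4-word vocabulary.
vocab = ["a", "cat", "sat", "down"]
uniform_log_p = lambda target, cond: len(target) * math.log(1.0 / len(vocab))
print(smoothed_sequence_loss(uniform_log_p, ["a", "cat", "sat"], vocab))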
2099 Without attention Loss Reward Vsub BLEU-1 BLEU-4 CIDER MLE 70.63 30.14 93.59 MLE + γH 70.79 30.29 93.61 Tok Glove sim 71.94 31.27 95.79 Tok Glove sim rfreq 72.39 31.76 97.47 Seq Hamming V 71.76 31.16 96.37 Seq Hamming Vbatch 71.46 31.15 96.53 Seq Hamming Vrefs 71.80 31.63 96.22 Seq, lazy Hamming V 70.81 30.43 94.26 Seq, lazy Hamming Vbatch 71.85 31.13 96.65 Seq, lazy Hamming Vrefs 71.96 31.23 95.34 Seq CIDER V 71.05 30.46 94.40 Seq CIDER Vbatch 71.51 31.17 95.78 Seq CIDER Vrefs 71.93 31.41 96.81 Seq, lazy CIDER V 71.43 31.18 96.32 Seq, lazy CIDER Vbatch 71.47 31.00 95.56 Seq, lazy CIDER Vrefs 71.82 31.06 95.66 Tok-Seq Hamming V 70.79 30.43 96.34 Tok-Seq Hamming Vbatch 72.28 31.65 96.73 Tok-Seq Hamming Vrefs 72.69 32.30 98.01 Tok-Seq CIDER V 70.80 30.55 96.89 Tok-Seq CIDER Vbatch 72.13 31.71 96.92 Tok-Seq CIDER Vrefs 73.08 32.82 99.92 With attention BLEU-1 BLEU-4 CIDER 73.40 33.11 101.63 72.68 32.15 99.77 73.49 32.93 102.33 74.01 33.25 102.81 73.12 32.71 101.25 73.26 32.73 101.90 73.53 32.59 102.33 73.29 32.81 101.58 73.43 32.95 102.03 73.53 33.09 101.89 73.08 32.51 101.84 73.50 33.04 102.98 73.42 32.91 102.23 73.55 33.19 102.94 73.18 32.60 101.30 73.92 33.10 102.64 73.68 32.87 101.11 73.86 33.32 102.90 73.56 33.00 101.72 73.31 32.40 100.33 73.61 32.67 101.41 74.28 33.34 103.81 Table 1: MS-COCO ’s test set evaluation measures. 4 Experimental evaluation In this section, we compare sequence prediction models trained with maximum likelihood (MLE) with our token and sequence-level loss smoothing on two different tasks: image captioning and machine translation. 4.1 Image captioning 4.1.1 Experimental setup. We use the MS-COCO datatset (Lin et al., 2014), which consists of 82k training images each annotated with five captions. We use the standard splits of Karpathy and Li (2015), with 5k images for validation, and 5k for test. The test set results are generated via beam search (beam size 3) and are evaluated with the MS-COCO captioning evaluation tool. We report CIDER and BLEU scores on this internal test set. We also report results obtained on the official MS-COCO server that additionally measures METEOR (Denkowski and Lavie, 2014) and ROUGE-L (Lin, 2004). We experiment with both non-attentive LSTMs (Vinyals et al., 2015) and the ResNet baseline of the stateof-the-art top-down attention (Anderson et al., 2017). The MS-COCO vocabulary consists of 9,800 words that occur at least 5 times in the training set. Additional details and hyperparameters can be found in Appendix B.1. 4.1.2 Results and discussion Restricted vocabulary sampling In this section, we evaluate the impact of the vocabulary subset from which we sample the modified sentences for sequence-level smoothing. We experiment with two rewards: CIDER , which scores w.r.t. all five available reference sentences, and Hamming distance reward taking only a single reference into account. For each reward we train our (Seq) models with each of the three subsets detailed previously in Section 3.2, Restricted vocabulary sampling. From the results in Table 1 we note that for the inattentive models, sampling from Vrefs or Vbatch has a better performance than sampling from the full vocabulary on all metrics. In fact, using these subsets introduces a useful bias to the model and improves performance. This improvement is most notable using the CIDER reward that scores candidate sequences w.r.t. to multiple references, which stabilizes the scoring of the candidates. 
With an attentive decoder, no matter the reward, re-sampling sentences with words from Vref rather than the full vocabulary V is better for both reward functions, and all metrics. Additional experimental results, presented in Appendix B.2, obtained with a BLEU-4 reward, in its single and 2100 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L CIDER SPICE c5 c40 c5 c40 c5 c40 c5 c40 c5 c40 c5 c40 c5 c40 c5 c40 Google NIC+ (Vinyals et al., 2015) 71.3 89.5 54.2 80.2 40.7 69.4 30.9 58.7 25.4 34.6 53.0 68.2 94.3 94.6 18.2 63.6 Hard-Attention (Xu et al., 2015) 70.5 88.1 52.8 77.9 38.3 65.8 27.7 53.7 24.1 32.2 51.6 65.4 86.5 89.3 17.2 59.8 ATT-FCN+ (You et al., 2016) 73.1 90.0 56.5 81.5 42.4 70.9 31.6 59.9 25.0 33.5 53.5 68.2 94.3 95.8 18.2 63.1 Review Net+ (Yang et al., 2016) 72.0 90.0 55.0 81.2 41.4 70.5 31.3 59.7 25.6 34.7 53.3 68.6 96.5 96.9 18.5 64.9 Adaptive+ (Lu et al., 2017) 74.8 92.0 58.4 84.5 44.4 74.4 33.6 63.7 26.4 35.9 55.0 70.5 104.2 105.9 19.7 67.3 SCST:Att2all+† (Rennie et al., 2017) 78.1 93.7 61.9 86.0 47.0 75.9 35.2 64.5 27.0 35.5 56.3 70.7 114.7 116.7 LSTM-A3+†◦(Yao et al., 2017) 78.7 93.7 62.7 86.7 47.6 76.5 35.6 65.2 27.0 35.4 56.4 70.5 116 118 Up-Down+†◦(Anderson et al., 2017) 80.2 95.2 64.1 88.8 49.1 79.4 36.9 68.5 27.6 36.7 57.1 72.4 117.9 120.5 Ours: Tok-Seq CIDER 72.6 89.7 55.7 80.9 41.2 69.8 30.2 58.3 25.5 34.0 53.5 68.0 96.4 99.4 Ours: Tok-Seq CIDER + 74.9 92.4 58.5 84.9 44.8 75.1 34.3 64.7 26.5 36.1 55.2 71.1 103.9 104.2 Table 2: MS-COCO ’s server evaluation . (+) for ensemble submissions, (†) for submissions with CIDEr optimization and (◦) for models using additional data. multiple references variants, further corroborate this conclusion. Lazy training. From the results of Table 1, we see that lazy sequence-level smoothing is competitive with exact non-lazy sequence-level smoothing, while requiring roughly equivalent training time as MLE. We provide detailed timing results in Appendix B.3. Overall For reference, we include in Table 1 baseline results obtained using MLE, and our implementation of MLE with entropy regularization (MLE+γH) (Pereyra et al., 2017), as well as the RAML approach of Norouzi et al. (2016) which corresponds to sequence-level smoothing based on the Hamming reward and sampling replacements from the full vocabulary (Seq, Hamming, V) We observe that entropy smoothing is not able to improve performance much over MLE for the model without attention, and even deteriorates for the attention model. We improve upon RAML by choosing an adequate subset of vocabulary for substitutions. We also report the performances of token-level smoothing, where the promotion of rare tokens boosted the scores in both attentive and nonattentive models. For sequence-level smoothing, choosing a taskrelevant reward with importance sampling yielded better results than plain Hamming distance. Moreover, we used the two smoothing schemes (Tok-Seq) and achieved the best results with CIDER as a reward for sequence-level smoothing combined with a token-level smoothing that promotes rare tokens improving CIDER from 93.59 (MLE) to 99.92 for the model without attention, and improving from 101.63 to 103.81 with attention. Qualitative results. In Figure 1 we showcase captions obtained with MLE and our three variants of smoothing i.e. token-level (Tok), sequencelevel (Seq) and the combination (Tok-Seq). We note that the sequence-level smoothing tend to generate lengthy captions overall, which is maintained in the combination. 
On the other hand, the token-level smoothing allows for a better recognition of objects in the image that stems from the robust training of the classifier e.g. the ’cement block’ in the top right image or the carrots in the bottom right. More examples are available in Appendix B.4 Comparison to the state of the art. We compare our model to state-of-the-art systems on the MS-COCO evaluation server in Table 2. We submitted a single model (Tok-Seq, CIDER , Vrefs) as well as an ensemble of five models with different initializations trained on the training set plus 35k images from the dev set (a total of 117k images) to the MS-COCO server. The three best results on the server (Rennie et al., 2017; Yao et al., 2017; Anderson et al., 2017) are trained in two stages where they first train using MLE, before switching to policy gradient methods based on CIDEr. Anderson et al. (2017) reported an increase of 5.8% of CIDER on the test split after the CIDER optimization. Moreover, Yao et al. (2017) uses additional information about image regions to train the attributes classifiers, while Anderson et al. (2017) pre-trains its bottom-up attention model on the Visual Genome dataset (Krishna et al., 2017). Lu et al. (2017); Yao et al. (2017) use the same CNN encoder as ours (ResNet152), (Vinyals et al., 2015; Yang et al., 2016) use Inception-v3 (Szegedy et al., 2016) for image encoding and Rennie et al. (2017); Anderson et al. 2101 Figure 1: Examples of generated captions with the baseline MLE and our models with attention. (2017) use Resnet-101, both of which have similar performances to ResNet-152 on ImageNet classification (Canziani et al., 2016). 4.2 Machine translation 4.2.1 Experimental setup. For this task we validate the effectiveness of our approaches on two different datasets. The first is WMT’14 English to French, in its filtered version, with 12M sentence pairs obtained after dynamically selecting a “clean” subset of 348M words out of the original “noisy” 850M words (Bahdanau et al., 2015; Cho et al., 2014; Sutskever et al., 2014). The second benchmark is IWSLT’14 German to English consisting of around 150k pairs for training. In all our experiments we use the attentive model of (Bahdanau et al., 2015) The hyperparameters of each of these models as well as any additional pre-processing can be found in Appendix C.1 To assess the translation quality we report the BLEU-4 metric. 4.2.2 Results and analysis Loss Reward Vsub WMT’14 IWSLT’14 MLE 30.03 27.55 tok Glove sim 30.16 27.69 tok Glove sim rfreq 30.19 27.83 Seq Hamming V 30.85 27.98 Seq Hamming Vbatch 31.18 28.54 Seq BLEU-4 Vbatch 31.29 28.56 Tok-Seq Hamming Vbatch 31.36 28.70 Tok-Seq BLEU-4 Vbatch 31.39 28.74 Table 3: Tokenized BLEU score on WMT’14 En-Fr evaluated on the news-test-2014 set. And Tokenzied, case-insensitive BLEU on IWSLT’14 De-En. We present our results in Table 3. On both benchmarks, we improve on both MLE and RAML approach of Norouzi et al. (2016) (Seq, Hamming, V): using the smaller batch-vocabulary for replacement improves results, and using importance sampling based on BLEU-4 further boosts results. In this case, unlike in the captioning experiment, token-level smoothing brings smaller improvements. The combination of both smoothing approaches gives best results, similar to what was observed for image captioning, improving the MLE BLEU-4 from 30.03 to 31.39 on WMT’14 and from 27.55 to 28.74 on IWSLT’14. The outputs of our best model are compared to the MLE in some examples showcased in Appendix C. 
5 Conclusion We investigated the use of loss smoothing approaches to improve over maximum likelihood estimation of RNN language models. We generalized the sequence-level smoothing RAML approach of Norouzi et al. (2016) to the tokenlevel by smoothing the ground-truth target across semantically similar tokens. For the sequencelevel, which is computationally expensive, we introduced an efficient “lazy” evaluation scheme, and introduced an improved re-sampling strategy. Experimental evaluation on image captioning and machine translation demonstrates the complementarity of sequence-level and token-level loss smoothing, improving over both the maximum likelihood and RAML. Acknowledgment. This work has been partially supported by the grant ANR-16-CE23-0006 “Deep in France” and LabEx PERSYVAL-Lab (ANR-11-LABX-0025-01). 2102 References P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang. 2017. Bottomup and top-down attention for image captioning and visual question answering. arXiv preprint arXiv:1707.07998. D. Bahdanau, K. Cho, and Y. Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In NIPS. S. Bowman, L. Vilnis, O. Vinyals, A. Dai, R. Jozefowicz, and S. Bengio. 2015. Generating sentences from a continuous space. In CoNLL. A. Canziani, A. Paszke, and E. Culurciello. 2016. An analysis of deep neural network models for practical applications. arXiv preprint arXiv:1605.07678. K.-W. Chang, A. Krishnamurthy, A. Agarwal, H. Daumé III, and J. Langford. 2015. Learning to search better than your teacher. In ICML. C.-C. Chiu, T. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R.-J. Weiss, K. Rao, E. Gonina, N. Jaitly, B. Li, J. Chorowski, and M. Bacchiani. 2017. State-of-the-art speech recognition with sequence-to-sequence models. arXiv preprint arXiv:1712.01769. K. Cho, B. van Merrienboer, Ç. Gülçehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Empirical Methods in Natural Language Processing. J. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio. 2015. Attention-based models for speech recognition. In NIPS. J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS Deep Learning Workshop. H. Daumé III, J. Langford, and D. Marcu. 2009. Search-based structured prediction. Machine Learning, 75(3):297–325. M. Denkowski and A. Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Workshop on statistical machine translation. M. Everingham, L. van Gool, C. Williams, J. Winn, and A. Zisserman. 2010. The pascal visual object classes (VOC) challenge. IJCV, 88(2):303–338. M. Fadaee, A. Bisazza, and C. Monz. 2017. Data augmentation for low-resource neural machine translation. In ACL. K. He, X. Zhang, S. Ren, and J. Sun. 2016. Deep residual learning for image recognition. In CVPR. S. Hochreiter and J. Schmidhuber. 1997. Long shortterm memory. Neural Computation, 9(8):1735– 1780. S. Ioffe and C. Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML. M. Iyyer, V. Manjunatha, J. Boyd-Graber, and H. Daumé III. 2015. Deep unordered composition rivals syntactic methods for text classification. In ACL. A. 
Karpathy and Fei-Fei Li. 2015. Deep visualsemantic alignments for generating image descriptions. In CVPR. D. Kingma and J. Ba. 2015. Adam: A method for stochastic optimization. In ICLR. R. Kiros, R. Salakhutdinov, and R. Zemel. 2014. Multimodal neural language models. In ICML. R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. Shamma, M. Bernstein, and L. Fei-Fei. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 123(1):32–73. A. Krizhevsky, I. Sutskever, and G. Hinton. 2012. Imagenet classification with deep convolutional neural networks. In NIPS. D. Krueger, T. Maharaj, J. Kramár, M. Pezeshki, N. Ballas, N. Ke, A. Goyal, Y. Bengio, H. Larochelle, A. Courville, and C. Pal. 2017. Zoneout: Regularizing RNNs by randomly preserving hidden activations. In ICLR. A. Kumar, O. Irsoy, P. Ondruska, M. Iyyer, J. Bradbury, I. Gulrajani, V. Zhong, R. Paulus, and R. Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In ICML. R. Leblond, J.-B. Alayrac, A. Osokin, and S. LacosteJulien. 2018. SeaRnn: Training RNNs with globallocal losses. In ICLR. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, pages 2278–2324. C.-Y. Lin. 2004. Rouge: a package for automatic evaluation of summaries. In ACL Workshop Text Summarization Branches Out. T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. Zitnick. 2014. Microsoft COCO: common objects in context. In ECCV. 2103 J. Lu, C. Xiong, D. Parikh, and R. Socher. 2017. Knowing when to look: Adaptive attention via a visual sentinel for image captioning. In CVPR. T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013. Efficient estimation of word representations in vector space. In ICLR. M. Norouzi, S. Bengio, Z. Chen, N. Jaitly, M. Schuster, Y. Wu, and D. Schuurmans. 2016. Reward augmented maximum likelihood for neural structured prediction. In NIPS. K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. M. Paulin, J. Revaud, Z. Harchaoui, F. Perronnin, and C. Schmid. 2014. Transformation pursuit for image classification. In CVPR. M. Pedersoli, T. Lucas, C. Schmid, and J. Verbeek. 2017. Areas of attention for image captioning. In ICCV. J. Pennington, R. Socher, and C. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing. G. Pereyra, G. Tucker, J. Chorowski, L. Kaiser, and G. Hinton. 2017. Regularizing neural networks by penalizing confident output distributions. In ICLR. V. Pham, T. Bluche, C. Kermorvant, and J. Louradour. 2014. Dropout improves recurrent neural networks for handwriting recognition. In Frontiers in Handwriting Recognition. M. Ranzato, S. Chopra, M. Auli, and W. Zaremba. 2016. Sequence level training with recurrent neural networks. In ICLR. S. Rennie, E. Marcheret, Y. Mroueh, J. Ross, and V. Goel. 2017. Self-critical sequence training for image captioning. In CVPR. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR. I. Sutskever, O. Vinyals, and Q. Le. 2014. Sequence to sequence learning with neural networks. In NIPS. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. 
Wojna. 2016. Rethinking the inception architecture for computer vision. In CVPR. R. Vedantam, C. Zitnick, and D. Parikh. 2015. CIDEr: Consensus-based image description evaluation. In CVPR. O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. 2015. Show and tell: A neural image caption generator. In CVPR. Z. Xie, S. Wang, J. Li, D. Lévy, A. Nie, D. Jurafsky, and A. Ng. 2017. Data noising as smoothing in neural network language models. In ICLR. K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In ICML. Z. Yang, Y. Yuan, Y. Wu, R. Salakhutdinov, and W. Cohen. 2016. Encode, review, and decode: Reviewer module for caption generation. In NIPS. T. Yao, Y. Pan, Y. Li, Z. Qiu, and T. Mei. 2017. Boosting image captioning with attributes. In ICLR. Q. You, H. Jin, Z. Wang, C. Fang, and J. Luo. 2016. Image captioning with semantic attention. In CVPR.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2104–2115 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2104 Numeracy for Language Models: Evaluating and Improving their Ability to Predict Numbers Georgios P. Spithourakis Department of Computer Science University College London [email protected] Sebastian Riedel Department of Computer Science University College London [email protected] Abstract Numeracy is the ability to understand and work with numbers. It is a necessary skill for composing and understanding documents in clinical, scientific, and other technical domains. In this paper, we explore different strategies for modelling numerals with language models, such as memorisation and digit-by-digit composition, and propose a novel neural architecture that uses a continuous probability density function to model numerals from an open vocabulary. Our evaluation on clinical and scientific datasets shows that using hierarchical models to distinguish numerals from words improves a perplexity metric on the subset of numerals by 2 and 4 orders of magnitude, respectively, over nonhierarchical models. A combination of strategies can further improve perplexity. Our continuous probability density function model reduces mean absolute percentage errors by 18% and 54% in comparison to the second best strategy for each dataset, respectively. 1 Introduction Language models (LMs) are statistical models that assign a probability over sequences of words. Language models can often help with other tasks, such as speech recognition (Mikolov et al., 2010; Prabhavalkar et al., 2017), machine translation (Luong et al., 2015; Gülçehre et al., 2017), text summarisation (Filippova et al., 2015; Gambhir and Gupta, 2017), question answering (Wang et al., 2017), semantic error detection (Rei and Yannakoudakis, 2017; Spithourakis et al., 2016a), and fact checking (Rashkin et al., 2017). Numeracy and literacy refer to the ability to comprehend, use, and attach meaning to numbers and words, respectively. Language models exhibit literacy by being able to assign higher probabilities to sentences that Figure 1: Modelling numerals with a categorical distribution over a fixed vocabulary maps all out-ofvocabulary numerals to the same type, e.g. UNK, and does not reflect the smoothness of the underlying continuous distribution of certain attributes. are both grammatical and realistic, as in this example: ‘I eat an apple’ (grammatical and realistic) ‘An apple eats me’ (unrealistic) ‘I eats an apple’ (ungrammatical) Likewise, a numerate language model should be able to rank numerical claims based on plausibility: ’John’s height is 1.75 metres’ (realistic) ’John’s height is 999.999 metres’ (unrealistic) Existing approaches to language modelling treat numerals similarly to other words, typically using categorical distributions over a fixed vocabulary. 2105 However, this maps all unseen numerals to the same unknown type and ignores the smoothness of continuous attributes, as shown in Figure 1. In that respect, existing work on language modelling does not explicitly evaluate or optimise for numeracy. Numerals are often neglected and low-resourced, e.g. they are often masked (Mitchell and Lapata, 2009), and there are only 15,164 (3.79%) numerals among GloVe’s 400,000 embeddings pretrained on 6 billion tokens (Pennington et al., 2014). 
Yet, numbers appear ubiquitously, from children’s magazines (Joram et al., 1995) to clinical reports (Bigeard et al., 2015), and grant objectivity to sciences (Porter, 1996). Previous work finds that numerals have higher out-of-vocabulary rates than other words and proposes solutions for representing unseen numerals as inputs to language models, e.g. using numerical magnitudes as features (Spithourakis et al., 2016b,a). Such work identifies that the perplexity of language models on the subset of numerals can be very high, but does not directly address the issue. This paper focuses on evaluating and improving the ability of language models to predict numerals. The main contributions of this paper are as follows: 1. We explore different strategies for modelling numerals, such as memorisation and digit-bydigit composition, and propose a novel neural architecture based on continuous probability density functions. 2. We propose the use of evaluations that adjust for the high out-of-vocabulary rate of numerals and account for their numerical value (magnitude). 3. We evaluate on a clinical and a scientific corpus and provide a qualitative analysis of learnt representations and model predictions. We find that modelling numerals separately from other words can drastically improve the perplexity of LMs, that different strategies for modelling numerals are suitable for different textual contexts, and that continuous probability density functions can improve the LM’s prediction accuracy for numbers. 2 Language Models Let s1,s2,...,sL denote a document, where st is the token at position t. A language model estimates the probability of the next token given previous tokens, i.e. p(st|s1,...,st−1). Neural LMs estimate this probability by feeding embeddings, i.e. vectors that represent each token, into a Recurrent Neural Network (RNN) (Mikolov et al., 2010). Token Embeddings Tokens are most commonly represented by a D-dimensional dense vector that is unique for each word from a vocabulary V of known words. This vocabulary includes special symbols (e.g. ‘UNK’) to handle out-of-vocabulary tokens, such as unseen words or numerals. Let ws be the one-hot representation of token s, i.e. a sparse binary vector with a single element set to 1 for that token’s index in the vocabulary, and E∈RD×|V| be the token embeddings matrix. The token embedding for s is the vector etoken s =Ews. Character-Based Embeddings A representation for a token can be build from its constituent characters (Luong and Manning, 2016; Santos and Zadrozny, 2014). Such a representation takes into account the internal structure of tokens. Let d1,d2,...,dN be the characters of token s. A character-based embedding for s is the final hidden state of a D-dimensional character-level RNN: echars s =RNN(d0,d1,...dL). Recurrent and Output Layer The computation of the conditional probability of the next token involves recursively feeding the embedding of the current token est and the previous hidden state ht−1 into a D-dimensional token-level RNN to obtain the current hidden state ht. The output probability is estimated using the softmax function, i.e. p(st|ht)=softmax(ψ(st))= 1 Zeψ(st) Z = P s′∈V eψ(s′), (1) where ψ(.) is a score function. Training and Evaluation Neural LMs are typically trained to minimise the cross entropy on the training corpus: Htrain=−1 N X st∈train logp(st|s<t) (2) A common performance metric for LMs is per token perplexity (Eq. 3), evaluated on a test corpus. 
It can also be interpreted as the branching factor: the size of an equally weighted distribution with equivalent uncertainty, i.e. how many sides you need on a fair die to get the same uncertainty as the model distribution. PPtest=exp(Htest) (3) 3 Strategies for Modelling Numerals In this section we describe models with different strategies for generating numerals and propose the 2106 use of number-specific evaluation metrics that adjust for the high out-of-vocabulary rate of numerals and account for numerical values. We draw inspiration from theories of numerical cognition. The triple code theory (Dehaene et al., 2003) postulates that humans process quantities through two exact systems (verbal and visual) and one approximate number system that semantically represents a number on a mental number line. Tzelgov et al. (2015) identify two classes of numbers: i) primitives, which are holistically retrieved from long-term memory; and ii) non-primitives, which are generated online. An in-depth review of numerical and mathematical cognition can be found in Kadosh and Dowker (2015) and Campbell (2005). 3.1 Softmax Model and Variants This class of models assumes that numerals come from a finite vocabulary that can be memorised and retrieved later. The softmax model treats all tokens (words and numerals) alike and directly uses Equation 1 with score function: ψ(st)=hT t etoken st =hT t Eoutwst, (4) where Eout ∈RD×|V| is an output embeddings matrix. The summation in Equation 1 is over the complete target vocabulary, which requires mapping any out-of-vocabulary tokens to special symbols, e.g. ‘UNKword’ and ‘UNKnumeral’. Softmax with Digit-Based Embeddings The softmax+rnn variant considers the internal syntax of a numeral’s digits by adjusting the score function: ψ(st)=hT t etoken st +hT t echars st =hT t Eoutwst+hT t ERNN out wst, (5) where the columns of ERNN out are composed of character-based embeddings for in-vocabulary numerals and token embeddings for the remaining vocabulary. The character set comprises digits (0-9), the decimal point, and an end-of-sequence character. The model still requires normalisation over the whole vocabulary, and the special unknown tokens are still needed. Hierarchical Softmax A hierarchical softmax (Morin and Bengio, 2005a) can help us decouple the modelling of numerals from that of words. The probability of the next token st is decomposed to that of its class ct and the probability of the exact token from within the class: p(st|ht)= P ct∈C p(ct|ht)p(st|ct,ht) p(ct|ht)=σ hT t b  (6) where the valid token classes are C = {word, numeral}, σ is the sigmoid function and b is a D-dimensional vector. Each of the two branches of p(st|ct,ht) can now be modelled by independently normalised distributions. The hierarchical variants (h-softmax and h-softmax+rnn) use two independent softmax distributions for words and numerals. The two branches share no parameters, and thus words and numerals will be embedded into separate spaces. The hierarchical approach allows us to use any well normalised distribution to model each of its branches. In the next subsections, we examine different strategies for modelling the branch of numerals, i.e. p(st|ct = numeral,ht). For simplicity, we will abbreviate this to p(s). 3.2 Digit-RNN Model Let d1,d2...dN be the digits of numeral s. 
A digit-bydigit composition strategy estimates the probability of the numeral from the probabilities of its digits: p(s)=p(d1)p(d2|d1)...p(dN|d<N) (7) The d-RNN model feeds the hidden state ht of the token-level RNN into a character-level RNN (Graves, 2013; Sutskever et al., 2011) to estimate this probability. This strategy can accommodate an open vocabulary, i.e. it eliminates the need for an UNKnumeral symbol, as the probability is normalised one digit at a time over the much smaller vocabulary of digits (digits 0-9, decimal separator, and end-of-sequence). 3.3 Mixture of Gaussians Model Inspired by the approximate number system and the mental number line (Dehaene et al., 2003), our proposed MoG model computes the probability of numerals from a probability density function (pdf) over real numbers, using a mixture of Gaussians for the underlying pdf: q(v)= K X k=1 πkNk(v;µk,σ2 k) πk =softmax BTht  , (8) where K is the number of components, πk are mixture weights that depend on hidden state ht of the token-level RNN, Nk is the pdf of the normal distribution with mean µk ∈R and variance σ2 k ∈R, and B∈RD×K is a matrix. The difficulty with this approach is that for any continuous random variable, the probability that it equals a specific value is always zero. To resolve this, 2107 Figure 2: Mixture of Gaussians model. The probability of a numeral is decomposed into the probability of its decimal precision and the probability that an underlying number will produce the numeral when rounded at the given precision. we consider a probability mass function (pmf) that discretely approximates the pdf: eQ(v|r)= v+ϵr Z v−ϵr q(u)du=F(v+ϵr)−F(v−ϵr), (9) where F(.) is the cumulative density function of q(.), and ϵr = 0.5×10−r is the number’s precision. The level of discretisation r, i.e. how many decimal digits to keep, is a random variable in N with distribution p(r). The mixed joint density is: p(s)=p(v,r)=p(r)eQ(v|r) (10) Figure 2 summarises this strategy, where we model the level of discretisation by converting the numeral into a pattern and use a RNN to estimate the probability of that pattern sequence: p(r)=p(SOS INT_PART . r decimal digits z }| { \d ... \d EOS) (11) 3.4 Combination of Strategies Different mechanisms might be better for predicting numerals in different contexts. We propose a combination model that can select among different strategies for modelling numerals: p(s)= X ∀m∈M αmp(s|m) αm=softmax ATht  , (12) where M={h-softmax, d-RNN, MoG}, and A∈RD×|M|. Since both d-RNN and MoG are openvocabulary models, the unknown numeral token can now be removed from the vocabulary of h-softmax. 3.5 Evaluating the Numeracy of LMs Numeracy skills are centred around the understanding of numbers and numerals. A number is a mathematical object with a specific magnitude, whereas a numeral is its symbolic representation, usually in the positional decimal Hindu–Arabic numeral system (McCloskey and Macaruso, 1995). In humans, the link between numerals and their numerical values boosts numerical skills (Griffin et al., 1995). Perplexity Evaluation Test perplexity evaluated only on numerals will be informative of the symbolic component of numeracy. However, model comparisons based on naive evaluation using Equation 3 might be problematic: perplexity is sensitive to outof-vocabulary (OOV) rate, which might differ among models, e.g. it is zero for open-vocabulary models. 
As an extreme example, in a document where all words are out of vocabulary, the best perplexity is achieved by a trivial model that predicts everything as unknown. Ueberla (1994) proposed Adjusted Perplexity (APP; Eq. 14), also known as unknown-penalised perplexity (Ahn et al., 2016), to cancel the effect of the out-of-vocabulary rate on perplexity. The APP is the perplexity of an adjusted model that uniformly redistributes the probability of each out-of-vocabulary class over all different types in that class:

$p'(s) = \begin{cases} p(s)\,\frac{1}{|OOV_c|} & \text{if } s \in OOV_c \\ p(s) & \text{otherwise} \end{cases}$  (13)

where $OOV_c$ is an out-of-vocabulary class (e.g. words and numerals), and $|OOV_c|$ is the cardinality of each OOV set. Equivalently, adjusted perplexity can be calculated as:

$APP_{test} = \exp\Big(H_{test} + \sum_c H^c_{adjust}\Big), \qquad H^c_{adjust} = -\frac{|s_t \in OOV_c|}{N} \log\frac{1}{|OOV_c|}$  (14)

where N is the total number of tokens in the test set and $|s_t \in OOV_c|$ is the count of test-set tokens belonging to each OOV set.

Evaluation on the Number Line. While perplexity looks at symbolic performance on numerals, this evaluation focuses on numbers and particularly on their numerical value, which is their most prominent semantic content (Dehaene et al., 2003; Dehaene and Cohen, 1995). Let $v_t$ be the numerical value of token $s_t$ from the test corpus, and let $\hat{v}_t$ be the value of the most probable numeral under the model, $\hat{s}_t = \arg\max p(s_t|h_t, c_t = \text{num})$. To evaluate on the number line, we can use any evaluation metric from the regression literature to measure the model's performance. In reverse order of tolerance to extreme errors, some of the most popular are Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Median Absolute Error (MdAE):

$e_i = v_i - \hat{v}_i, \qquad RMSE = \sqrt{\tfrac{1}{N}\sum_{i=1}^{N} e_i^2}, \qquad MAE = \tfrac{1}{N}\sum_{i=1}^{N} |e_i|, \qquad MdAE = \mathrm{median}\{|e_i|\}$  (15)

The above are sensitive to the scale of the data. If the data contains values from different scales, percentage metrics are often preferred, such as the Mean/Median Absolute Percentage Error (MAPE/MdAPE):

$pe_i = \frac{v_i - \hat{v}_i}{v_i}, \qquad MAPE = \tfrac{1}{N}\sum_{i=1}^{N} |pe_i|, \qquad MdAPE = \mathrm{median}\{|pe_i|\}$  (16)

4 Data

To evaluate our models, we created two datasets with documents from the clinical and scientific domains, where numbers abound (Bigeard et al., 2015; Porter, 1996). Furthermore, to ensure that the numbers will be informative of some attribute, we only selected texts that reference tables.

Clinical Data. Our clinical dataset comprises clinical records from the London Chest Hospital. The records were accompanied by tables with 20 numeric attributes (age, heart volumes, etc.) that they partially describe, while also including numbers not found in the tables. Numeric tokens constitute only a small proportion of each sentence (4.3%), but account for a large part of the vocabulary of unique tokens (>40%) and suffer high OOV rates.

Scientific Data. Our scientific dataset comprises paragraphs from Cornell's ARXIV1 repository of scientific articles, with more than half a million converted papers in 37 scientific sub-fields. We used the preprocessed ARXMLIV2 version (Stamerjohanns et al., 2010; Stamerjohanns and Kohlhase, 2008), where papers have been converted from LaTeX into a custom XML format using the LaTeXML3 tool. We then kept all paragraphs with at least one reference to a table and a number.
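For orientation, statistics of the kind reported in Table 1 below (paragraph counts, the share of numeral tokens, and descriptive statistics of their values) can be derived from a tokenised corpus along the following lines. The toy paragraphs are invented for illustration; this is a sketch, not the preprocessing pipeline used for the actual datasets.

```python
# Sketch: Table 1-style descriptive statistics for a tokenised corpus.
import statistics

def is_numeral(token):
    try:
        float(token)
        return True
    except ValueError:
        return False

def corpus_stats(paragraphs):
    tokens = [t for p in paragraphs for t in p.split()]
    values = [float(t) for t in tokens if is_numeral(t)]
    lengths = [len(p.split()) for p in paragraphs]
    return {
        "#inst": len(paragraphs),
        "maxLen": max(lengths),
        "avgLen": sum(lengths) / len(lengths),
        "%nums": 100.0 * len(values) / len(tokens),
        "min": min(values), "median": statistics.median(values),
        "mean": statistics.mean(values), "max": max(values),
    }

toy = ["stroke volume 46.1 ml", "late enhancement > 75 %", "adenosine stress perfusion 140 mcg"]
print(corpus_stats(toy))
```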
Clinical Scientific Train Dev Test Train Dev Test #inst 11170 1625 3220 14694 2037 4231 maxLen 667 594 666 2419 1925 1782 avgLen 210.1 209.1 206.9 210.1 215.9 212.1 %word 95.7 95.7 95.7 96.1 96.1 96.0 %nums 4.3 4.3 4.3 3.9 3.9 4.0 min 0.0 0.0 0.0 0.0 0.0 0.0 median 59.5 59.0 60.0 5.0 4.0 4.5 mean 300.6 147.7 464.8 ∼1021 ∼107 ∼107 max ∼107 ∼105 ∼107 ∼1026 ∼1011 ∼1011 Table 1: Statistical description of the clinical and scientific datasets: Number of instances (i.e. paragraphs), maximum and average lengths, proportions of words and numerals, descriptive statistics of numbers. For both datasets, we lowercase tokens and normalise numerals by omitting the thousands separator ("2,000" becomes "2000") and leading zeros ("007" becomes "7"). Special mathematical symbols are tokenised separately, e.g. negation (“-1” as “-”, “1”), fractions (“3/4” as “3”, “/”, “4”), etc. For this reason, all numbers were non-negative. Table 1 shows descriptive statistics for both datasets. 5 Experimental Results and Discussion We set the vocabularies to the 1,000 and 5,000 most frequent token types for the clinical and scientific datasets, respectively. We use gated token-character embeddings (Miyamoto and Cho, 2016) for the input of numerals and token embeddings for the input and output of words, since the scope of our paper is numeracy. We set the models’ hidden dimensions to D = 50 and initialise all token embeddings to pretrained GloVe (Pennington et al., 2014). All our 1ARXIV.ORG. Cornell University Library at http://arxiv.org/, visited December 2016 2ARXMLIV. Project home page at http://arxmliv.kwarc.info/, visited December 2016 3LATEXML. http://dlmf.nist.gov, visited December 2016 2109 Clinical Scientific words numerals total words numerals total Model PP APP PP APP PP APP PP APP PP APP PP APP softmax 4.08 5.99 12.04 58443.72 4.28 8.91 33.96 51.83 127.12 3505856.25 35.79 80.62 softmax+rnn 4.03 5.91 11.57 56164.81 4.21 8.77 33.54 51.20 119.68 3300688.50 35.28 79.47 h-softmax 4.00 4.96 11.78 495.95 4.19 6.05 34.73 49.81 122.67 550.98 36.51 54.80 h-softmax+rnn 4.03 4.99 11.65 490.14 4.22 6.09 34.04 48.83 120.83 542.70 35.80 53.73 d-RNN 3.99 4.95 263.22 263.22 4.79 5.88 34.08 48.89 519.80 519.80 37.98 53.70 MoG 4.03 4.99 226.46 226.46 4.79 5.88 34.14 48.97 683.16 683.16 38.45 54.37 combination 4.01 4.96 197.59 197.59 4.74 5.82 33.64 48.25 520.95 520.95 37.50 53.03 Table 2: Test set perplexities for the clinical and scientific data. Adjusted perplexities (APP) are directly comparable across all data and models, but perplexities (PP) are sensitive to the varying out-of-vocabulary rates. Clinical Scientific Model RMSE MAE MdAE MAPE% MdAPE% MdAE MAPE% MdAPE% mean 1043.68 294.95 245.59 2353.11 409.47 ∼1020 ∼1023 ∼1022 median 1036.18 120.24 34.52 425.81 52.05 4.20 8039.15 98.65 softmax 997.84 80.29 12.70 621.78 22.41 3.00 1947.44 80.62 softmax+rnn 991.38 74.44 13.00 503.57 23.91 3.50 15208.37 80.00 h-softmax 1095.01 167.19 14.00 746.50 25.00 3.00 1652.21 80.00 h-softmax+rnn 1001.04 83.19 12.30 491.85 23.44 3.00 2703.49 80.00 d-RNN 1009.34 70.21 9.00 513.81 17.90 3.00 1287.27 52.45 MoG 998.78 57.11 6.92 348.10 13.64 2.10 590.42 90.00 combination 989.84 69.47 9.00 552.06 17.86 3.00 2332.50 88.89 Table 3: Test set regression evaluation for the clinical and scientific data. Mean absolute percentage error (MAPE) is scale independent and allows for comparison across data, whereas root mean square and mean absolute errors (RMSE, MAE) are scale dependent. Medians (MdAE, MdAPE) are informative of the distribution of errors. 
RNNs are LSTMs (Hochreiter and Schmidhuber, 1997) with the biases of LSTM forget gate were initialised to 1.0 (Józefowicz et al., 2015). We train using mini-batch gradient decent with the Adam optimiser (Kingma and Ba, 2014) and regularise with early stopping and 0.1 dropout rate (Srivastava, 2013) in the input and output of the token-based RNN. For the mixture of Gaussians, we select the mean and variances to summarise the data at different granularities by fitting 7 separate mixture of Gaussian models on all numbers, each with twice as many components as the previous, for a total of 27+1 −1 = 256 components. These models are initialised at percentile points from the data and trained with the expectation-minimisation algorithm. The means and variances are then fixed and not updated when we train the language model. 5.1 Quantitative Results Perplexities Table 2 shows perplexities evaluated on the subsets of words, numerals and all tokens of the test data. Overall, all models performed better on the clinical than on the scientific data. On words, all models achieve similar perplexities in each dataset. On numerals, softmax variants perform much better than other models in PP, which is an artefact of the high OOV-rate of numerals. APP is significantly worse, especially for non-hierarchical variants, which perform about 2 and 4 orders of magnitude worse than hierarchical ones. For open-vocabulary models, i.e. d-RNN, MoG, and combination, PP is equivalent to APP. On numerals, d-RNN performed better than softmax variants in both datasets. The MoG model performed twice as well as softmax variants on the clinical dataset, but had the third worse performance in the scientific dataset. The combination model had the best overall APP results for both datasets. Evaluations on the Number Line To factor out model specific decoding processes for finding the best next numeral, we use our models to rank a set 2110 of candidate numerals: we compose the union of in-vocabulary numbers and 100 percentile points from the training set, and we convert numbers into numerals by considering all formats up to n decimal points. We select n to represent 90% of numerals seen at training, which yields n=3 and n=4 for the clinical and scientific data, respectively. Table 3 shows evaluation results, where we also include two naive baselines of constant predictions: with the mean and median of the training data. For both datasets, RMSE and MAE were too sensitive to extreme errors to allow drawing safe conclusions, particularly for the scientific dataset, where both metrics were in the order of 109. MdAE can be of some use, as 50% of the errors are absolutely smaller than that. Along percentage metrics, MoG achieved the best MAPE in both datasets (18% and 54% better that the second best) and was the only model to perform better than the median baseline for the clinical data. However, it had the worst MdAPE, which means that MoG mainly reduced larger percentage errors. The d-RNN model came third and second in the clinical and scientific datasets, respectively. In the latter it achieved the best MdAPE, i.e. it was effective at reducing errors for 50% of the numbers. The combination model did not perform better than its constituents. This is possibly because MoG is the only strategy that takes into account the numerical magnitudes of the numerals. 
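The two families of numbers reported above, adjusted perplexity (Eq. 14) and the percentage errors (Eq. 16), can be reproduced from model outputs with a short script like the following. It is a minimal sketch under stated assumptions (per-token log-probabilities, OOV counts, and numeric predictions are already available), not the authors' evaluation code.

```python
# Sketch: adjusted perplexity (Eq. 14) and MAPE/MdAPE (Eq. 16) from model outputs.
import math
import statistics

def adjusted_perplexity(logprobs, oov_counts, oov_sizes):
    """logprobs: natural-log p(s_t) for every test token (OOV tokens scored via their UNK symbol).
    oov_counts[c]: number of test tokens falling in OOV class c.
    oov_sizes[c]: number of distinct types in OOV class c."""
    n = len(logprobs)
    h_test = -sum(logprobs) / n                      # average negative log-likelihood
    h_adjust = sum(-(oov_counts[c] / n) * math.log(1.0 / oov_sizes[c])
                   for c in oov_counts)              # penalty terms of Eq. 14
    return math.exp(h_test + h_adjust)

def percentage_errors(values, predictions):
    pe = [abs((v - p) / v) for v, p in zip(values, predictions)]
    return {"MAPE%": 100 * sum(pe) / len(pe), "MdAPE%": 100 * statistics.median(pe)}

# Toy usage with invented numbers
print(adjusted_perplexity([-2.1, -0.4, -5.0], {"numeral": 1}, {"numeral": 250}))
print(percentage_errors([59.5, 140.0], [60.0, 75.0]))
```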
5.2 Learnt Representations Softmax versus Hierarchical Softmax Figure 3 visualises the cosine similarities of the output token embeddings of numerals for the softmax and h-softmax models. Simple softmax enforced high similarities among all numerals and the unknown numeral token, so as to make them more dissimilar to words, since the model embeds both in the same space. This is not the case for h-softmax that uses two different spaces: similarities are concentrated along the diagonal and fan out as the magnitude grows, with the exception of numbers with special meaning, e.g. years and percentile points. Digit embeddings Figure 4 shows the cosine similarities between the digits of the d-RNN output mode. We observe that each primitive digit is mostly similar to its previous and next digit. Similar behaviour was found for all digit embeddings of all models. 5.3 Predictions from the Models Next Numeral Figure 5 shows the probabilities of different numerals under each model for two Figure 3: Numeral embeddings for the softmax (top) and h-softmax (bottom) models on the clinical data. Numerals are sorted by value. Figure 4: Cosine similarities for d-RNN’s output digit embeddings trained on the scientific data. examples from the clinical development set. Numerals are grouped by number of decimal points. The h-softmax model’s probabilities are spiked, d-RNNs are saw-tooth like and MoG’s are smooth, with the occasional spike, whenever a narrow component allows for it. Probabilities rapidly decrease for more decimal digits, which is reminiscent of the theoretical expectation that the probability of en exact value for a continuous variable is zero. Selection of Strategy in Combination Model Table 4 shows development set examples with high selection probabilities for each strategy of the combination model, along with numerals with the highest average selection per mode. The h-softmax model is responsible for mostly integers with special functions, 2111 Clinical Scientific h-softmax Examples: “late enhancement ( > 75 %)”, “late gadolinium enhancement ( < 25 %)”, “infarction ( 2 out of 17 segments )”, “infarct with 4 out of 17 segments nonviable”, “adenosine stress perfusion @ 140 mcg”, “stress perfusion ( adenosine 140 mcg” Numerals: 50, 17, 100, 75, 25, 1, 140, 2012, 2010, 2011, 8, 5, 2009, 2013, 7, 6, 2, 3, 2008, 4... Examples: “sharp et al . 2004”, “li et al . 2003”, “3.5 × 10ˆ4”, “0.3 × 10ˆ16” Numerals: 1992, 2001, 1995, 2003, 2009, 1993, 2010, 1994, 1998, 2002, 2006, 1997, 2005, 1990, 10, 2008, 2007, 2004, 1983, 1991... d-RNN Examples: “aortic root is dilated ( measured 37 x 37 mm”, “ascending aorta is not dilated ( 32 x 31 mm” Numerals: 42, 33, 31, 43, 44, 21, 38, 36, 46, 37, 32, 39, 26, 28, 23, 29, 45, 40, 49, 94... Examples: “ngc 6334 stars”, “ngc 2366 shows a wealth of small structures” Numerals: 294, 4000, 238, 6334, 2363, 1275, 2366, 602, 375, 1068, 211, 6.4, 8.7, 600, 96, 0.65, 700, 1.17, 4861, 270... MoG Examples: “stroke volume 46.1 ml”, “stroke volume 65.6 ml”, “stroke volume 74.5 ml”, “end diastolic volume 82.6 ml”, “end diastolic volume 99.09 ml”, “end diastolic volume 138.47 ml” Numerals: 74.5, 69.3, 95.9, 96.5, 72.5, 68.6, 82.1, 63.7, 78.6, 69.6, 69.5, 82.2, 68.3, 73.2, 63.2, 82.6, 77.7, 80.7, 70.7, 70.4... Examples: “hip 12961 and gl 676 a are orbited by giant planets,” “velocities of gl 676”, “velocities of hip 12961” Numerals: 12961, 766, 7409, 4663, 44.3, 1819, 676, 1070, 5063, 323, 264, 163296, 2030, 77, 1.15, 196, 0.17, 148937, 0.43, 209458... 
Table 4: Examples of numerals with highest probability in each strategy of the combination model.

Figure 5: Example model predictions for the h-softmax (top), d-RNN (middle) and MoG (bottom) models. Examples from the clinical development set.

e.g. years, typical drug dosages, percentile points, etc. In the clinical data, d-RNN picks up two-digit integers (mostly dimensions) and MoG is activated for continuous attributes, which are mostly out of vocabulary. In the scientific data, d-RNN and MoG showed affinity to different indices from catalogues of astronomical objects: d-RNN mainly to NGC (Dreyer, 1888) and MoG to various other indices, such as GL (Gliese, 1988) and HIP (Perryman et al., 1997). In this case, MoG was wrongly selected for numerals with a labelling function, which also highlights a limitation of evaluating on the number line, when a numeral is not used to represent its magnitude.

Figure 6: Distributions of significant digits from d-RNN model, data, and theoretical expectation (Benford's law).

Significant Digits. Figure 6 shows the distributions of the most significant digits under the d-RNN model and from data counts. The theoretical estimate has been overlayed, according to Benford's law (Benford, 1938), also called the first-digit law, which applies to many real-life numerals. The law predicts that the first digit is 1 with higher probability (about 30%) than 9 (< 5%) and weakens towards uniformity at higher digits. Model probabilities closely follow estimates from the data. Violations from Benford's law can be due to rounding (Beer, 2009) and can be used as evidence for fraud detection (Lu et al., 2006).

6 Related Work

Numerical quantities have been recognised as important for textual entailment (Lev et al., 2004; Dagan et al., 2013). Roy et al. (2015) proposed a quantity entailment sub-task that focused on whether a given quantity can be inferred from a given text and, if so, what its value should be. A common framework for acquiring common sense about numerical attributes of objects has been to collect a corpus of numerical values in pre-specified templates and then model attributes as a normal distribution (Aramaki et al., 2007; Davidov and Rappoport, 2010; Iftene and Moruz, 2010; Narisawa et al., 2013; de Marneffe et al., 2010). Our model embeds these approaches into a LM that has a sense for numbers. Other tasks that deal with numerals are numerical information extraction and solving mathematical problems. Numerical relations have at least one argument that is a number and the aim of the task is to extract all such relations from a corpus, which can range from identifying a few numerical attributes (Nguyen and Moschitti, 2011; Intxaurrondo et al., 2015) to generic numerical relation extraction (Hoffmann et al., 2010; Madaan et al., 2016). Our model does not extract values, but rather produces a probabilistic estimate. Much work has been done in solving arithmetic (Mitra and Baral, 2016; Hosseini et al., 2014; Roy and Roth, 2016), geometric (Seo et al., 2015), and algebraic problems (Zhou et al., 2015; Koncel-Kedziorski et al., 2015; Upadhyay et al., 2016; Upadhyay and Chang, 2016; Shi et al., 2015; Kushman et al., 2014) expressed in natural language.
Such models often use mathematical background knowledge, such as linear system solvers. The output of our model is not based on such algorithmic operations, but could be extended to do so in future work. In language modelling, generating rare or unknown words has been a challenge, similar to our unknown numeral problem. Gulcehre et al. (2016) and Gu et al. (2016) adopted pointer networks (Vinyals et al., 2015) to copy unknown words from the source in translation and summarisation tasks. Merity et al. (2016) and Lebret et al. (2016) have models that copy from context sentences and from Wikipedia’s infoboxes, respectively. Ahn et al. (2016) proposed a LM that retrieves unknown words from facts in a knowledge graph. They draw attention to the inappropriateness of perplexity when OOV-rates are high and instead propose an adjusted perplexity metric that is equivalent to APP. Other methods aim at speeding up LMs to allow for larger vocabularies (Chen et al., 2015), such as hierarchical softmax (Morin and Bengio, 2005b), target sampling (Jean et al., 2014), etc., but still suffer from the unknown word problem. Finally, the problem is resolved when predicting one character at a time, as done by the character-level RNN (Graves, 2013; Sutskever et al., 2011) used in our d-RNN model. 7 Conclusion In this paper, we investigated several strategies for LMs to model numerals and proposed a novel openvocabulary generative model based on a continuous probability density function. We provided the first thorough evaluation of LMs on numerals on two corpora, taking into account their high out-of-vocabulary rate and numerical value (magnitude). We found that modelling numerals separately from other words through a hierarchical softmax can substantially improve the perplexity of LMs, that different strategies are suitable for different contexts, and that a combination of these strategies can help improve the perplexity further. Finally, we found that using a continuous probability density function can improve prediction accuracy of LMs for numbers by substantially reducing the mean absolute percentage metric. Our approaches in modelling and evaluation can be used in future work in tasks such as approximate information extraction, knowledge base completion, numerical fact checking, numerical question answering, and fraud detection. Our code and data are available at: https://github.com/uclmr/ numerate-language-models. Acknowledgments The authors would like to thank the anonymous reviewers for their insightful comments and also Steffen Petersen for providing the clinical dataset and advising us on the clinical aspects of this work. This research was supported by the Farr Institute of Health Informatics Research and an Allen Distinguished Investigator award. 2113 References Sungjin Ahn, Heeyoul Choi, Tanel Pärnamaa, and Yoshua Bengio. 2016. A neural knowledge language model. arXiv preprint arXiv:1608.00318 . Eiji Aramaki, Takeshi Imai, Kengo Miyo, and Kazuhiko Ohe. 2007. Uth: Svm-based semantic relation classification using physical sizes. In Proceedings of the 4th International Workshop on Semantic Evaluations. Association for Computational Linguistics, pages 464–467. TW Beer. 2009. Terminal digit preference: beware of benford’s law. Journal of clinical pathology 62(2):192–192. Frank Benford. 1938. The law of anomalous numbers. Proceedings of the American philosophical society pages 551–572. Elise Bigeard, Vianney Jouhet, Fleur Mougin, Frantz Thiessard, and Natalia Grabar. 2015. 
Automatic extraction of numerical values from unstructured data in ehrs. In MIE. pages 50–54. Jamie ID Campbell. 2005. Handbook of mathematical cognition. Psychology Press. Welin Chen, David Grangier, and Michael Auli. 2015. Strategies for training large vocabulary neural language models. arXiv preprint arXiv:1512.04906 . Ido Dagan, Dan Roth, Mark Sammons, and Fabio Massimo Zanzotto. 2013. Recognizing textual entailment: Models and applications. Synthesis Lectures on Human Language Technologies 6(4):1–220. Dmitry Davidov and Ari Rappoport. 2010. Extraction and approximation of numerical attributes from the web. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 1308–1317. Marie-Catherine de Marneffe, Christopher D Manning, and Christopher Potts. 2010. Was it good? it was provocative. learning the meaning of scalar adjectives. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 167–176. Stanislas Dehaene and Laurent Cohen. 1995. Towards an anatomical and functional model of number processing. Mathematical cognition 1(1):83–120. Stanislas Dehaene, Manuela Piazza, Philippe Pinel, and Laurent Cohen. 2003. Three parietal circuits for number processing. Cognitive neuropsychology 20(3-6):487–506. John Louis Emil Dreyer. 1888. A new general catalogue of nebulæ and clusters of stars, being the catalogue of the late sir john fw herschel, bart, revised, corrected, and enlarged. Memoirs of the Royal Astronomical Society 49:1. Katja Filippova, Enrique Alfonseca, Carlos A. Colmenares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with lstms. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015. pages 360–368. Mahak Gambhir and Vishal Gupta. 2017. Recent automatic text summarization techniques: a survey. Artif. Intell. Rev. 47(1):1–66. Wilhelm Gliese. 1988. The third catalogue of nearby stars. Stand. Star Newsl. 13, 13 13. Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 . Sharon Griffin, Robbie Case, and Allesandra Capodilupo. 1995. Teaching for understanding: The importance of the central conceptual structures in the elementary mathematics curriculum. . Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequenceto-sequence learning. arXiv preprint arXiv:1603.06393 . Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. arXiv preprint arXiv:1603.08148 . Çaglar Gülçehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, and Yoshua Bengio. 2017. On integrating a language model into neural machine translation. Computer Speech & Language 45:137–148. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735– 1780. Raphael Hoffmann, Congle Zhang, and Daniel S Weld. 2010. Learning 5000 relational extractors. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 286–295. Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 523–533. Adrian Iftene and Mihai-Alex Moruz. 2010. 
Uaic participation at rte-6 . Ander Intxaurrondo, Eneko Agirre, Oier Lopez De Lacalle, and Mihai Surdeanu. 2015. Diamonds in the rough: Event extraction from imperfect microblog data. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 641–650. 2114 Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2014. On using very large target vocabulary for neural machine translation. arXiv preprint arXiv:1412.2007 . Elana Joram, Lauren B Resnick, and Anthony J Gabriele. 1995. Numeracy as cultural practice: An examination of numbers in magazines for children, teenagers, and adults. Journal for Research in Mathematics Education pages 346–361. Rafal Józefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015. pages 2342–2350. Roi Cohen Kadosh and Ann Dowker. 2015. The Oxford handbook of numerical cognition. Oxford Library of Psychology. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations. Transactions of the Association for Computational Linguistics 3:585–597. Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 271–281. Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. arXiv preprint arXiv:1603.07771 . Iddo Lev, Bill MacCartney, Christopher D Manning, and Roger Levy. 2004. Solving logic puzzles: From robust processing to precise semantics. In Proceedings of the 2nd Workshop on Text Meaning and Interpretation. Association for Computational Linguistics, pages 9–16. Fletcher Lu, J. Efrim Boritz, and H. Dominic Covvey. 2006. Adaptive fraud detection using benford’s law. In Advances in Artificial Intelligence, 19th Conference of the Canadian Society for Computational Studies of Intelligence, Canadian AI 2006. pages 347–358. Minh-Thang Luong and Christopher D Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 1054–1063. Thang Luong, Michael Kayser, and Christopher D. Manning. 2015. Deep neural language models for machine translation. In Proceedings of the 19th Conference on Computational Natural Language Learning, CoNLL 2015. pages 305–309. Aman Madaan, Ashish Mittal, Ganesh Ramakrishnan, Sunita Sarawagi, et al. 2016. Numerical relation extraction with minimal supervision. In Thirtieth AAAI Conference on Artificial Intelligence. Michael McCloskey and Paul Macaruso. 1995. Representing and using numerical information. American Psychologist 50(5):351. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843 . Tomas Mikolov, Martin Karafiát, Lukás Burget, Jan Cernocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. 
In INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association. pages 1045–1048. Jeff Mitchell and Mirella Lapata. 2009. Language models based on semantic composition. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1. Association for Computational Linguistics, pages 430–439. Arindam Mitra and Chitta Baral. 2016. Learning to use formulas to solve simple arithmetic problems. In ACL. Yasumasa Miyamoto and Kyunghyun Cho. 2016. Gated word-character recurrent language model. arXiv preprint arXiv:1606.01700 . Frederic Morin and Yoshua Bengio. 2005a. Hierarchical probabilistic neural network language model. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, AISTATS 2005. Frederic Morin and Yoshua Bengio. 2005b. Hierarchical probabilistic neural network language model. In Aistats. Citeseer, volume 5, pages 246–252. Katsuma Narisawa, Yotaro Watanabe, Junta Mizuno, Naoaki Okazaki, and Kentaro Inui. 2013. Is a 204 cm man tall or small? acquisition of numerical common sense from the web. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 382–391. Truc-Vien T Nguyen and Alessandro Moschitti. 2011. End-to-end relation extraction using distant supervision from external semantic repositories. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2. Association for Computational Linguistics, pages 277–282. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). pages 1532–1543. Michael AC Perryman, L Lindegren, J Kovalevsky, E Hoeg, U Bastian, PL Bernacca, M Crézé, F Donati, M Grenon, M Grewing, et al. 1997. The hipparcos catalogue. Astronomy and Astrophysics 323:L49–L52. 2115 Theodore M Porter. 1996. Trust in numbers: The pursuit of objectivity in science and public life. Princeton University Press. Rohit Prabhavalkar, Kanishka Rao, Tara N. Sainath, Bo Li, Leif Johnson, and Navdeep Jaitly. 2017. A comparison of sequence-to-sequence models for speech recognition. In Interspeech 2017, 18th Annual Conference of the International Speech Communication Association. pages 939–943. Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017. pages 2931–2937. Marek Rei and Helen Yannakoudakis. 2017. Auxiliary objectives for neural error detection models. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, BEA@EMNLP 2017. pages 33–43. Subhro Roy and Dan Roth. 2016. Solving general arithmetic word problems. arXiv preprint arXiv:1608.01413 . Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reasoning about quantities in natural language. Transactions of the Association for Computational Linguistics 3:1–13. Cicero D Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In Proceedings of the 31st International Conference on Machine Learning (ICML-14). pages 1818–1826. Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. 2015. 
Solving geometry problems: Combining text and diagram interpretation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 1466–1476. Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. 2015. Automatically solving number word problems by semantic parsing and reasoning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Lisbon, Portugal. Georgios P. Spithourakis, Isabelle Augenstein, and Sebastian Riedel. 2016a. Numerically grounded language models for semantic error correction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016. pages 987–992. Georgios P Spithourakis, Steffen E Petersen, and Sebastian Riedel. 2016b. Clinical text prediction with numerically grounded conditional language models. EMNLP 2016 page 6. Nitish Srivastava. 2013. Improving neural networks with dropout. University of Toronto 182. Heinrich Stamerjohanns and Michael Kohlhase. 2008. Transforming the arχiv to xml. In International Conference on Intelligent Computer Mathematics. Springer, pages 574–582. Heinrich Stamerjohanns, Michael Kohlhase, Deyan Ginev, Catalin David, and Bruce Miller. 2010. Transforming large collections of scientific publications to xml. Mathematics in Computer Science 3(3):299–307. Ilya Sutskever, James Martens, and Geoffrey E Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11). pages 1017–1024. Joseph Tzelgov, Dana Ganor-Stern, Arava Y Kallai, and Michal Pinhas. 2015. Primitives and non-primitives of numerical representations. Oxford library of psychology. The Oxford handbook of numerical cognition pages 45–66. Joerg Ueberla. 1994. Analysing a simple language model· some general conclusions for language models for speech recognition. Computer Speech & Language 8(2):153–176. Shyam Upadhyay and Ming-Wei Chang. 2016. Annotating derivations: A new evaluation strategy and dataset for algebra word problems. arXiv preprint arXiv:1609.07197 . Shyam Upadhyay, Ming-Wei Chang, Kai-Wei Chang, and Wen-tau Yih. 2016. Learning from explicit and implicit supervision jointly for algebra word problems. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 297–306. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems. pages 2692–2700. Tong Wang, Xingdi Yuan, and Adam Trischler. 2017. A joint model for question answering and question generation. CoRR abs/1706.01450. Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015. Learn to solve algebra word problems using quadratic programming. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics (Lisbon, Portugal. pages 817–822.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2116–2125 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2116 To Attend or not to Attend: A Case Study on Syntactic Structures for Semantic Relatedness Amulya Gupta Iowa State University [email protected] Zhu Zhang Iowa State University [email protected] Abstract With the recent success of Recurrent Neural Networks (RNNs) in Machine Translation (MT), attention mechanisms have become increasingly popular. The purpose of this paper is two-fold; firstly, we propose a novel attention model on Tree Long Short-Term Memory Networks (Tree-LSTMs), a tree-structured generalization of standard LSTM. Secondly, we study the interaction between attention and syntactic structures, by experimenting with three LSTM variants: bidirectionalLSTMs, Constituency Tree-LSTMs, and Dependency Tree-LSTMs. Our models are evaluated on two semantic relatedness tasks: semantic relatedness scoring for sentence pairs (SemEval 2012, Task 6 and SemEval 2014, Task 1) and paraphrase detection for question pairs (Quora, 2017).1 1 Introduction Recurrent Neural Networks (RNNs), in particular Long Short-Term Memory Networks (LSTMs) (Hochreiter and Schmidhuber, 1997), have demonstrated remarkable accomplishments in Natural Language Processing (NLP) in recent years. Several tasks such as information extraction, question answering, and machine translation have benefited from them. However, in their vanilla forms, these networks are constrained by the sequential order of tokens in a sentence. To mitigate this limitation, structural (dependency or constituency) information in a sentence was exploited and witnessed partial success in various tasks (Goller and Kuchler, 1996; Yamada and 1Our code for experiments on the SICK dataset is publicly available at https://github.com/amulyahwr/ acl2018 Knight, 2001; Quirk et al., 2005; Socher et al., 2011; Tai et al., 2015). On the other hand, alignment techniques (Brown et al., 1993) and attention mechanisms (Bahdanau et al., 2014) act as a catalyst to augment the performance of classical Statistical Machine Translation (SMT) and Neural Machine Translation (NMT) models, respectively. In short, both approaches focus on sub-strings of source sentence which are significant for predicting target words while translating. Currently, the combination of linear RNNs/LSTMs and attention mechanisms has become a de facto standard architecture for many NLP tasks. At the intersection of sentence encoding and attention models, some interesting questions emerge: Can attention mechanisms be employed on tree structures, such as Tree-LSTMs (Tai et al., 2015)? If yes, what are the possible tree-based attention models? Do different tree structures (in particular constituency vs. dependency) have different behaviors in such models? With these questions in mind, we present our investigation and findings in the context of semantic relatedness tasks. 2 Background 2.1 Long Short-Term Memory Networks (LSTMs) Concisely, an LSTM network (Hochreiter and Schmidhuber, 1997) (Figure 1) includes a memory cell at each time step which controls the amount of information being penetrated into the cell, neglected, and yielded by the cell. Various LSTM networks (Greff et al., 2017) have been explored till now; we focus on one representative form. To be more precise, we consider a LSTM memory cell involving: an input gate it, a forget gate ft, and an output gate ot at time step t. 
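Before the gate equations are spelled out below (Eqs. 1-6), a minimal NumPy sketch of a single memory-cell update may help fix the notation. The dimensions (D = 300, d = 150) and the random parameters are illustrative placeholders only, not trained weights or values taken from this paper's experiments.

```python
# Sketch of one LSTM memory-cell update as summarised in Eqs. 1-6 below.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(w_t, h_prev, c_prev, W, R, b):
    """W, R, b are dicts keyed by gate name: 'i', 'f', 'o', 'u'."""
    i = sigmoid(w_t @ W["i"] + h_prev @ R["i"] + b["i"])   # input gate,  Eq. 1
    f = sigmoid(w_t @ W["f"] + h_prev @ R["f"] + b["f"])   # forget gate, Eq. 2
    o = sigmoid(w_t @ W["o"] + h_prev @ R["o"] + b["o"])   # output gate, Eq. 3
    u = np.tanh(w_t @ W["u"] + h_prev @ R["u"] + b["u"])   # candidate,   Eq. 4
    c = i * u + f * c_prev                                  # memory cell, Eq. 5
    h = o * np.tanh(c)                                      # hidden state, Eq. 6
    return h, c

D, d = 300, 150  # illustrative embedding and hidden sizes
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((D, d)) for k in "ifou"}
R = {k: rng.standard_normal((d, d)) for k in "ifou"}
b = {k: np.zeros(d) for k in "ifou"}
h, c = lstm_step(rng.standard_normal(D), np.zeros(d), np.zeros(d), W, R, b)
print(h.shape, c.shape)
```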
Apart from 2117 w0 w1 h0 c0 h1 c1 y0 y1 h2 c2 ... Figure 1: A linear LSTM network. wt is the word embedding, ht is the hidden state vector, ct is the memory cell vector and yt is the final processed output at time step t. the hidden state ht−1 and input embedding wt of the current word, the recursive function in LSTM also takes the previous time’s memory cell state, ct−1, into account, which is not the case in simple RNN. The following equations summarize a LSTM memory cell at time step t: it = σ(wtW i + ht−1Ri + bi) (1) ft = σ(wtW f + ht−1Rf + bf) (2) ot = σ(wtW o + ht−1Ro + bo) (3) ut = tanh(wtW u + ht−1Ru + bu) (4) ct = it ⊙ut + ft ⊙ct−1 (5) ht = ot ⊙tanh(ct) (6) where: • (W i, W f, W o, W u) ∈RD x d represent input weight matrices, where d is the dimension of the hidden state vector and D is the dimension of the input word embedding, wt . • (Ri, Rf, Ro, Ru) ∈Rd x d represent recurrent weight matrices and (bi, bf, bo, bu) ∈ Rd represent biases. • ct ∈Rd is the new memory cell vector at time step t. As can be seen in Eq. 5, the input gate it limits the new information, ut, by employing the element wise multiplication operator ⊙. Moreover, the forget gate ft regulates the amount of information from the previous state ct−1. Therefore, the current memory state ct includes both new and previous time step’s information but partially. John ate an apple nsubj dobj det Figure 2: a. Left: A constituency tree; b. Right: A dependency tree A natural extension of LSTM network is a bidirectional LSTM (bi-LSTM), which lets the sequence pass through the architecture in both directions and aggregate the information at each time step. Again, it strictly preserves the sequential nature of LSTMs. 2.2 Linguistically Motivated Sentence Structures Most computational linguists have developed a natural inclination towards hierarchical structures of natural language, which follow guidelines collectively referred to as syntax. Typically, such structures manifest themselves in parse trees. We investigate two popular forms: Constituency and Dependency trees. 2.2.1 Constituency structure Briefly, constituency trees (Figure 2:a) indicate a hierarchy of syntactic units and encapsulate phrase grammar rules. Moreover, these trees explicitly demonstrate groups of phrases (e.g., Noun Phrases) in a sentence. Additionally, they discriminate between terminal (lexical) and non-terminal nodes (non-lexical) tokens. 2.2.2 Dependency structure In short, dependency trees (Figure 2:b) describe the syntactic structure of a sentence in terms of the words (lemmas) and associated grammatical relations among the words. Typically, these dependency relations are explicitly typed, which makes the trees valuable for practical applications such as information extraction, paraphrase detection and semantic relatedness. 2.3 Tree Long Short-Term Memory Network (Tree-LSTM) Child-Sum Tree-LSTM (Tai et al., 2015) is an epitome of structure-based neural network which explicitly capture the structural information in a sentence. Tai et al. 
demonstrated that information at 2118 HC0 IP OP UP HC1 wP CC0 CC1 fC1 fC0 CP HP IP UP OP Parent node Child node Child node wP : word embedding of parent node HP ,HC0, HC1: hidden state vectors of parent, first child and second child respectively CP ,CC0, CC1: memory cell state vectors of parent, first child and second child respectively IP, OP: Input and Output gate vectors for parent node respectively fC0, fC1 : Forget gate vectors for first and second child respectively Figure 3: A compositional view of parent node in Tree-LSTM network. a parent node can be consolidated selectively from each of its child node. Architecturally, each gated vector and memory state update of the head node is dependent on the hidden states of its children in the Tree-LSTM. Assuming a good tree structure of a sentence, each node j of the structure incorporates the following equations.: ˜hj = X k∈C(j) hk (7) ij = σ(wjW i + ˜hjRi + bi) (8) fjk = σ(wjW f + hkRf + bf) (9) oj = σ(wjW o + ˜hjRo + bo) (10) uj = tanh(wjW u + ˜hjRu + bu) (11) cj = ij ⊙uj + X k∈C(j) fjk ⊙ck (12) hj = oj ⊙tanh(cj) (13) where: • wj ∈RD represents word embedding of all nodes in Dependency structure and only terminal nodes in Constituency structure. 2 • (W i, W f, W o, W u) ∈RD x d represent input weight matrices. • (Ri, Rf, Ro, Ru) ∈Rd x d represent recurrent weight matrices, and (bi, bf, bo, bu) ∈ Rd represent biases. 2wj is ignored for non-terminal nodes in a Constituency structure by removing the wW terms in Equations 8-11. w0 w1 h0 c0 h1 c1 w’0 w’1 h’1 c’1 a1 (Global align weights) c1 (context vector) ĥ’1 Attention layer Figure 4: Global attention model • cj ∈Rd is the new memory state vector of node j. • C(j) is the set of children of node j. • fjk ∈Rd is the forget gate vector for child k of node j. Referring to Equation 12, the new memory cell state, cj of node j, receives new information, uj, partially. More importantly, it includes the partial information from each of its direct children, set C(j), by employing the corresponding forget gate, fjk. When the Child-Sum Tree model is deployed on a dependency tree, it is referred to as Dependency Tree-LSTM, whereas a constituency-treebased instantiation is referred to as Constituency Tree-LSTM. 2.4 Attention Mechanisms Alignment models were first introduced in statistical machine translation (SMT) (Brown et al., 1993), which connect sub-strings in the source sentence to sub-strings in the target sentence. Recently, attention techniques (which are effectively soft alignment models) in neural machine translation (NMT) (Bahdanau et al., 2014) came into prominence, where attention scores are calculated by considering words of source sentence while decoding words in target language. Although effective attention mechanisms (Luong et al., 2015) such as Global Attention Model (GAM) (Figure 4) and Local Attention Model (LAM) have been developed, such techniques have not been explored over Tree-LSTMs. 3 Inter-Sentence Attention on Tree-LSTMs We present two types of tree-based attention models in this section. With trivial adaptation, they can 2119 be deployed in the sequence setting (degenerated trees). 3.1 Modified Decomposable Attention (MDA) Parikh et al. (2016)’s original decomposable intersentence attention model only used word embeddings to construct the attention matrix, without any structural encoding of sentences. Essentially, the model incorporated three components: Attend: Input representations (without sequence or structural encoding) of both sentences, L and R, are soft-aligned. 
Compare: A set of vectors is produced by separately comparing each sub-phrase of L to subphrases in R. Vector representation of each subphrase in L is a non-linear combination of representation of word in sentence L and its aligned sub-phrase in sentence R. The same holds true for the set of vectors for sentence R. Aggregate: Both sets of sub-phrases vectors are summed up separately to form final sentence representation of sentence L and sentence R. We decide to augment the original decomposable inter-sentence attention model and generalize it into the tree (and sequence) setting. To be more specific, we consider two input sequences: L = (l1, l2....llenL), R = (r1, r2....rlenR) and their corresponding input representations: ¯L = (¯l1, ¯l2....¯llenL), ¯R = (¯r1, ¯r2....¯rlenR); where lenL and lenR represents number of words in L and R, respectively. 3.1.1 MDA on dependency structure Let’s assume sequences L and R have dependency tree structures DL and DR. In this case, lenL and lenR represents number of nodes in DL and DR, respectively. After using a Tree-LSTM to encode tree representations, which results in: D ′ L = (¯l ′ 1, ¯l ′ 2....¯l ′ lenL), D ′ R = (¯r ′ 1, ¯r ′ 2....¯r ′ lenR), we gather unnormalized attention weights, eij and normalize them as follows: eij = ¯l ′ i(¯r ′ j)T (14) βi = lenR X j=1 exp(eij) PlenR k=1 exp(eik) ∗¯r ′ j (15) αj = lenL X i=1 exp(eij) PlenL k=1 exp(ekj) ∗¯l ′ i (16) From the equations above, we can infer that the attention matrix will have a dimension lenL x lenR. In contrast to the original model, we compute the final representations of the each sentence by concatenating the LSTM-encoded representation of root with the attention-weighted representation of the root 3: h ′′ L = G([¯l ′ rootL; βrootL]) (17) h ′′ R = G([¯r ′ rootR; αrootR]) (18) where G is a feed-forward neural network. h ′′ L and h ′′ R are final vector representations of input sequences L and R, respectively. 3.1.2 MDA on constituency structure Let’s assume sequences L and R have constituency tree structures CL and CR. Moreover, assume CL and CR have total number of nodes as NL (> lenL) and NR (> lenR), respectively. As in 3.1.1, the attention mechanism is employed after encoding the trees CL and CR. While encoding trees, terminal and non-terminal nodes are handled in the same way as in the original TreeLSTM model (see 2.3). It should be noted that we collect hidden states of all the nodes (NL and NR) individually in CL and CR during the encoding process. Hence, hidden states matrix will have dimension NL x d for tree CL whereas for tree CR, it will have dimension NR x d; where d is dimension of each hidden state. Therefore, attention matrix will have a dimension NL x NR. Finally, we employ Equations 14-18 to compute the final representations of sequences L and R. 3.2 Progressive Attention (PA) In this section, we propose a novel attention mechanism on Tree-LSTM, inspired by (Quirk et al., 2005) and (Yamada and Knight, 2001). 3.2.1 PA on dependency structure Let’s assume a dependency tree structure of sentence L = (l1, l2....llenL) is available as DL; where lenL represents number of nodes in DL. Similarly, tree DR corresponds to the sentence R = (r1, r2....rlenR); where lenR represents number of nodes in DR. In PA, the objective is to produce the final vector representation of tree DR conditional on the hidden state vectors of all nodes of DL. Similar to 3In the sequence setting, we compute the corresponding representations for the last word in the sentence. 
2120 the encoding process in NMT, we encode R by attending each node of DR to all nodes in DL. Let’s name this process Phase1. Next, Phase2 is performed where L is encoded in the similar way to get the final vector representation of DL. Referring to Figure 5 and assuming Phase1 is being executed, a hidden state matrix, HL, is obtained by concatenating the hidden state vector of every node in tree DL, where the number of nodes in DL = 3. Next, tree DR is processed by calculating the hidden state vector at every node. Assume that the current node being processed is nR2 of DR, which has a hidden state vector, hR2. Before further processing, normalized weights are calculated based on hR2 and HL. Formally, Hpj = stack[hpj] (19) conpj = concat[Hpj, Hq] (20) apj = softmax(tanh(conpjWc +b)∗Wa) (21) where: • p, q ∈{L, R} and q ̸= p • Hq ∈Rx x d represents a matrix obtained by concatenating hidden state vectors of nodes in tree Dq; x is lenq of sentence q. • Hpj ∈Rx x d represents a matrix obtained by stacking hidden state, hpj, vertically x times. • conpj ∈Rx x 2d represents the concatenated matrix. • apj ∈Rx represents the normalized attention weights at node j of tree Dp; where Dp is the dependency structure of sentence p. • Wc ∈R2d x d and Wa ∈Rd represent learned weight matrices. The normalized attention weights in above equations provide an opportunity to align the subtree at the current node, nR2, in DR to sub-trees available at all nodes in DL. Next, a gated mechanism is employed to compute the final vector representation at node nR2. Formally, h ′ pj = (x−1) X 0 ((1 −apj) ∗Hq + (apj) ∗Hpj) (22) where: • h ′ pj ∈Rd represents the final vector representation of node j in tree Dp • P(x−1) 0 represents column-wise sum Assuming the final vector representation of tree DR is h ′ R, the exact same steps are followed for Phase2 with the exception that the entire process is now conditional on tree DR. As a result, the final vector representation of tree DL, h ′ L, is computed. Lastly, the following equations are applied to vectors h ′ L and h ′ R, before calculating the angle and distance similarity (see Section 4). h ′′ L = tanh(h ′ L + hL) (23) h ′′ R = tanh(h ′ R + hR) (24) where: • hL ∈Rd represents the vector representation of tree DL without attention. • hR ∈Rd represents the vector representation of tree DR without attention. 3.2.2 PA on constituency structure Let CL and CR represent constituency trees of L and R, respectively; where CL and CR have total number of nodes NL (> lenL) and NR (> lenR). Additionally, let’s assume that trees CL and CR have the same configuration of nodes as in Section 3.1.2, and the encoding of terminal and nonterminal nodes follow the same process as in Section 3.1.2. Assuming we have already encoded all NL nodes of tree CL using Tree-LSTM, we will have the hidden state matrix, HL, with dimension NL x d. Next, while encoding any node of CR, we consider HL which results in an attention vector having shape NL. Using Equations 19-22 4, we retrieve the final hidden state of the current node. Finally, we compute the representation of sentence R based on attention to sentence L. We perform Phase2 with the same process, except that we now condition on sentence R. In summary, the progressive attention mechanism refers to all nodes in the other tree while encoding a node in the current tree, instead of waiting till the end of the structural encoding to establish cross-sentence attention, as was done in the decomposable attention model. 
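To make the per-node attention step concrete, the sketch below applies Equations 19-22 to a single node, given the hidden states of the x nodes of the other tree. The shapes, the random placeholder parameters, and the function name are assumptions for illustration; this does not reproduce the authors' released implementation.

```python
# Sketch of one progressive-attention step (Eqs. 19-22) for node j of the current tree.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def progressive_attention_step(h_pj, H_q, W_c, b, W_a):
    x, d = H_q.shape
    H_pj = np.tile(h_pj, (x, 1))                      # Eq. 19: stack h_pj x times -> (x, d)
    con = np.concatenate([H_pj, H_q], axis=1)         # Eq. 20: concatenation -> (x, 2d)
    a = softmax(np.tanh(con @ W_c + b) @ W_a)         # Eq. 21: normalised weights -> (x,)
    # Eq. 22: gated combination, column-wise sum over the x aligned nodes -> (d,)
    h_prime = ((1 - a)[:, None] * H_q + a[:, None] * H_pj).sum(axis=0)
    return h_prime, a

d, x = 150, 3  # illustrative hidden size and number of nodes in the other tree
rng = np.random.default_rng(0)
h_pj = rng.standard_normal(d)
H_q = rng.standard_normal((x, d))
W_c, b, W_a = rng.standard_normal((2 * d, d)), rng.standard_normal(d), rng.standard_normal(d)
h_prime, a = progressive_attention_step(h_pj, H_q, W_c, b, W_a)
print(h_prime.shape, a)
```

Running this step for every node of one tree, and then repeating the whole pass with the roles of the two trees swapped, corresponds to Phase 1 and Phase 2 described above.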
4At this point, we will consider Cq and Cp instead of Dq and Dp, respectively, in Equations 19-22. Additionally, x will be equal to total number of nodes in the constituency tree. 2121 nL1 nL0 nL2 hL2 hL0 hL1 hL0 hL2 hL1 3 x 150 HL Sentence L nR1 nR0 nR2 hR2 hR0 hR1 Sentence R HL wtL0 wtL1 wtL2 3 x 1 aR2 (normalized) Phase 1 Phase 2 h+ hx output Start 1-wtL0 1-wtL1 1-wtL2 h’R2 nR0 nR1 nR2 hR2 hR1 hR0 hR1 hR2 hR0 3 x 150 HR Sentence R nL0 nL1 nL2 hL2 hL1 hL0 Sentence L HR wtR1 wtR0 wtR2 3 x 1 aL2 (normalized) 1-wtR1 1-wtR0 1-wtR2 h’L2 h’’L h’’R Start hL hR h’R h’L Figure 5: Progressive Attn-Tree-LSTM model 4 Evaluation Tasks We evaluate our models on two tasks: (1) semantic relatedness scoring for sentence pairs (SemEval 2012, Task 6 and SemEval 2014, Task 1) and (2) paraphrase detection for question pairs (Quora, 2017). 4.1 Semantic Relatedness for Sentence Pairs In SemEval 2012, Task 6 and SemEval 2014, Task 1, every sentence pair has a real-valued score that depicts the extent to which the two sentences are semantically related to each other. Higher score implies higher semantic similarity between the two sentences. Vector representations h ′′ L and h ′′ R are produced by using our Modified Decomp-Attn or Progressive-Attn models. Next, a similarity score, ˆy between h ′′ L and h ′′ R is computed using the same neural network (see below), for the sake of fair comparison between our models and the original Tree-LSTM (Tai et al., 2015). hx = h ′′ L ⊙h ′′ R (25) h+ = |h ′′ L −h ′′ R| (26) hs = σ(hxW x + h+W + + bh) (27) ˆpθ = softmax(hsW p + bp) (28) ˆy = rT ˆpθ (29) where: • rT = [1, 2..S] • hx ∈Rd measures the sign similarity between h ′′ L and h ′′ R • h+ ∈Rd measures the absolute distance between h ′′ L and h ′′ R Following (Tai et al., 2015), we convert the regression problem into a soft classification. We also use the same sparse distribution, p, which was defined in the original Tree-LSTM to transform the gold rating for a sentence pair, such that y = rT p and ˆy = rT ˆpθ ≈y. The loss function is the KLdivergence between p and ˆp: J(θ) = Pm k=1 KL(pk||ˆpk θ) m + λ||θ||2 2 2 (30) • m is the number of sentence pairs in the dataset. • λ represents the regularization penalty. 4.2 Paraphrase Detection for Question Pairs In this task, each question pair is labeled as either paraphrase or not, hence the task is binary classification. We use Eqs. 25 - 28 to compute the 2122 predicted distribution ˆpθ. The predicted label, ˆy, will be: ˆy = arg maxy ˆpθ (31) The loss function is the negative log-likelihood: J(θ) = − Pm k=1 yk log ˆyk m + λ||θ||2 2 2 (32) 5 Experiments 5.1 Semantic Relatedness for Sentence Pairs We utilized two different datasets: • The Sentences Involving Compositional Knowledge (SICK) dataset (Marelli et al. (2014)), which contains a total of 9,927 sentence pairs. Specifically, the dataset has a split of 4500/500/4927 among training, dev, and test. Each sentence pair has a score S ∈[1,5], which represents an average of 10 different human judgments collected by crowd-sourcing techniques. • The MSRpar dataset (Agirre et al., 2012), which consists of 1,500 sentence pairs. In this dataset, each pair is annotated with a score S ∈[0,5] and has a split of 750/750 between training and test. We used the Stanford Parsers (Chen and Manning, 2014; Bauer) to produce dependency and constituency parses of sentences. Moreover, we initialized the word embeddings with 300dimensional Glove vectors (Pennington et al., 2014); the word embeddings were held fixed during training. 
We experimented with different optimizers, among which AdaGrad performed the best. We incorporated a learning rate of 0.025 and regularization penalty of 10−4 without dropout. 5.2 Paraphrase Detection for Question Pairs For this task, we utilized the Quora dataset (Iyer; Kaggle, 2017). Given a pair of questions, the objective is to identify whether they are semantic duplicates. It is a binary classification problem where a duplicate question pair is labeled as 1 otherwise as 0. The training set contains about 400,000 labeled question pairs, whereas the test set consists of 2.3 million unlabeled question pairs. Moreover, the training dataset has only 37% positive samples; average length of a question is 10 words. Due to hardware and time constraints, we extracted 50,000 pairs from the original training while maintaining the same positive/negative ratio. A stratified 80/20 split was performed on this subset to produce the training/test set. Finally, 5% of the training set was used as a validation set in our experiments. We used an identical training configuration as for the semantic relatedness task since the essence of both the tasks is practically the same. We also performed pre-processing to clean the data and then parsed the sentences using Stanford Parsers. 6 Results 6.1 Semantic Relatedness for Sentence Pairs Table 1 summarizes our results. According to (Marelli et al., 2014), we compute three evaluation metrics: Pearson’s r, Spearman’s ρ and Mean Squared Error (MSE). We compare our attention models against the original Tree-LSTM (Tai et al., 2015), instantiated on both constituency trees and dependency trees. We also compare earlier baselines with our models, and the best results are in bold. Since Tree-LSTM is a generalization of Linear LSTM, we also implemented our attention models on Linear Bidirectional LSTM (BiLSTM). All results are average of 5 runs. It is witnessed that the Progressive-Attn mechanism combined with Constituency Tree-LSTM is overall the strongest contender, but PA failed to yield any performance gain on Dependency Tree-LSTM in either dataset. 6.2 Paraphrase Detection for Question Pairs Table 2 summarizes our results where best results are highlighted in bold within each category. It should be noted that Quora is a new dataset and we have done our analysis on only 50,000 samples. Therefore, to the best of our knowledge, there is no published baseline result yet. For this task, we considered four standard evaluation metrics: Accuracy, F1-score, Precision and Recall. The Progressive-Attn + Constituency Tree-LSTM model still exhibits the best performance by a small margin, but the Progressive-Attn mechanism works surprisingly well on the linear bi-LSTM. 6.3 Effect of the Progressive Attention Model Table 3 illustrates how various models operate on two sentence pairs from SICK test dataset. As we can infer from the table, the first pair demonstrates an instance of the active-passive voice phenomenon. In this case, the linear LSTM and vanilla Tree-LSTMs really struggle to perform. 2123 Table 1: Results on test dataset for SICK and MSRpar semantic relatedness task. Mean scores are presented based on 5 runs (standard deviation in parenthesis). 
Categories of results: (1) Previous models (2) Dependency structure (3) Constituency structure (4) Linear structure Dataset Model Pearson’s r Spearman’s ρ MSE SICK Illinois-LH (2014) 0.7993 0.7538 0.3692 UNAL-NLP (2014) 0.8070 0.7489 0.3550 Meaning factory (2014) 0.8268 0.7721 0.3224 ECNU (2014) 0.8414 Dependency Tree-LSTM (2015) 0.8676 (0.0030) 0.8083 (0.0042) 0.2532 (0.0052) Decomp-Attn (Dependency) 0.8239 (0.0120) 0.7614 (0.0103) 0.3326 (0.0223) Progressive-Attn (Dependency) 0.8424 (0.0042) 0.7733 (0.0066) 0.2963 (0.0077) Constituency Tree-LSTM (2015) 0.8582 (0.0038) 0.7966 (0.0053) 0.2734 (0.0108) Decomp-Attn (Constituency) 0.7790 (0.0076) 0.7074 (0.0091) 0.4044 (0.0152) Progressive-Attn (Constituency) 0.8625 (0.0032) 0.7997 (0.0035) 0.2610 (0.0057) Linear Bi-LSTM 0.8398 (0.0020) 0.7782 (0.0041) 0.3024 (0.0044) Decomp-Attn (Linear) 0.7899 (0.0055) 0.7173 (0.0097) 0.3897 (0.0115) Progressive-Attn (Linear) 0.8550 (0.0017) 0.7873 (0.0020) 0.2761 (0.0038) MSRpar ParagramPhrase (2015) 0.426 Projection (2015) 0.437 GloVe (2015) 0.477 PSL (2015) 0.416 ParagramPhrase-XXL (2015) 0.448 Dependency Tree-LSTM 0.4921 (0.0112) 0.4519 (0.0128) 0.6611 (0.0219) Decomp-Attn (Dependency) 0.4016 (0.0124) 0.3310 (0.0118) 0.7243 (0.0099) Progressive-Attn (Dependency) 0.4727 (0.0112) 0.4216 (0.0092) 0.6823 (0.0159) Constituency Tree-LSTM 0.3981 (0.0176) 0.3150 (0.0204) 0.7407 (0.0170) Decomp-Attn (Constituency) 0.3991 (0.0147) 0.3237 (0.0355) 0.7220 (0.0185) Progressive-Attn (Constituency) 0.5104 (0.0191) 0.4764 (0.0112) 0.6436 (0.0346) Linear Bi-LSTM 0.3270 (0.0303) 0.2205 (0.0111) 0.8098 (0.0579) Decomp-Attn (Linear) 0.3763 (0.0332) 0.3025 (0.0587) 0.7290 (0.0206) Progressive-Attn (Linear) 0.4773 (0.0206) 0.4453 (0.0250) 0.6758 (0.0260) Table 2: Results on test dataset for Quora paraphrase detection task. Mean scores are presented based on 5 runs (standard deviation in parenthesis). Categories of results: (1) Dependency structure (2) Constituency structure (3) Linear structure Model Accuracy F-1 score Precision Recall (class=1) (class=1) (class=1) Dependency Tree-LSTM 0.7897 (0.0009) 0.7060 (0.0050) 0.7298 (0.0055) 0.6840 (0.0139) Decomp-Attn (Dependency) 0.7803 (0.0026) 0.6977 (0.0074) 0.7095 (0.0083) 0.6866 (0.0199) Progressive-Attn (Dependency) 0.7896 (0.0025) 0.7113 (0.0087) 0.7214 (0.0117) 0.7025 (0.0266) Constituency Tree-LSTM 0.7881 (0.0042) 0.7065 (0.0034) 0.7192 (0.0216) 0.6846 (0.0380) Decomp-Attn (Constituency) 0.7776 (0.0004) 0.6942 (0.0050) 0.7055 (0.0069) 0.6836 (0.0164) Progressive-Attn (Constituency) 0.7956 (0.0020) 0.7192 (0.0024) 0.7300 (0.0079) 0.7089 (0.0104) Linear Bi-LSTM 0.7859 (0.0024) 0.7097 (0.0047) 0.7112 (0.0129) 0.7089 (0.0219) Decomp-Attn (Linear) 0.7861 (0.0034) 0.7074 (0.0109) 0.7151 (0.0135) 0.7010 (0.0315) Progressive-Attn (Linear) 0.7949 (0.0031) 0.7182 (0.0162) 0.7298 (0.0115) 0.7092 (0.0469) However, when our progressive attention mechanism is integrated into syntactic structures (dependency or constituency), we witness a boost in the semantic relatedness score. Such desirable behavior is consistently observed in multiple activepassive voice pairs. The second pair points to a possible issue in data annotation. Despite the presence of strong negation, the gold-standard score is 4 out of 5 (indicating high relatedness). Interestingly, the Progressive-Attn + Dependency TreeLSTM model favors the negation facet and outputs a low relatedness score. 7 Discussion In this section, let’s revisit our research questions in light of the experimental results. 
First, can attention mechanisms be built for Tree-LSTMs? Does it work? The answer is yes. Our novel progressive-attention Tree-LSTM model, when instantiated on constituency trees, 2124 Table 3: Effect of the progressive attention model Test Pair Gold BiLSTM Const. Tree Dep. Tree ID (no attn) (PA) (no attn) (PA) (no attn) (PA) 1 S1: The badger is burrowing a hole. S2: A hole is being burrowed by the badger. 4.9 2.60 3.02 3.52 4.34 3.41 4.63 2 S1: There is no man screaming. S2: A man is screaming. 4 3.44 3.20 3.65 3.50 3.51 2.15 significantly outperforms its counterpart without attention. The same model can also be deployed on sequences (degenerated trees) and achieve quite impressive results. Second, the performance gap between the two attention models is quite striking, in the sense that the progressive model completely dominate its decomposable counterpart. The difference between the two models is the pacing of attention, i.e., when to refer to nodes in the other tree while encoding a node in the current tree. The progressive attention model garners it’s empirical superiority by attending while encoding, instead of waiting till the end of the structural encoding to establish cross-sentence attention. In retrospect, this may justify why the original decomposable attention model in (Parikh et al., 2016) achieved competitive results without any LSTM-type encoding. Effectively, they implemented a naive version of our progressive attention model. Third, do structures matter/help? The overall trend in our results is quite clear: the tree-based models exhibit convincing empirical strength; linguistically motivated structures are valuable. Admittedly though, on the relatively large Quora dataset, we observe some diminishing returns of incorporating structural information. It is not counter-intuitive that the sheer size of data can possibly allow structural patterns to emerge, hence lessen the need to explicitly model syntactic structures in neural architectures. Last but not least, in trying to assess the impact of attention mechanisms (in particular the progressive attention model), we notice that the extra mileage gained on different structural encodings is different. Specifically, performance lift on Linear Bi-LSTM > performance lift on Constituency Tree-LSTM, and PA struggles to see performance lift on dependency Tree-LSTM. Interestingly enough, this observation is echoed by an earlier study (Gildea, 2004), which showed that tree-based alignment models work better on constituency trees than on dependency trees. In summary, our results and findings lead to several intriguing questions and conjectures, which call for investigation beyond the scope of our study: • Is it reasonable to conceptualize attention mechanisms as an implicit form of structure, which complements the representation power of explicit syntactic structures? • If yes, does there exist some trade-off between the modeling efforts invested into syntactic and attention structures respectively, which seemingly reveals itself in our empirical results? • The marginal impact of attention on dependency Tree-LSTMs suggests some form of saturation effect. Does that indicate a closer affinity between dependency structures (relative to constituency structures) and compositional semantics (Liang et al., 2013)? • If yes, why is dependency structure a better stepping stone for compositional semantics? Is it due to the strongly lexicalized nature of the grammar? 
Or is it because the dependency relations (grammatical functions) embody more semantic information? 8 Conclusion In conclusion, we proposed a novel progressive attention model on syntactic structures, and demonstrated its superior performance in semantic relatedness tasks. Our work also provides empirical ingredients for potentially profound questions and debates on syntactic structures in linguistics. 2125 References Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 385–393. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. ICLR’2015. John Bauer. Shift-reduce constituency parser. Johannes Bjerva, Johan Bos, Rob Van der Goot, and Malvina Nissim. 2014. The meaning factory: Formal semantics for recognizing textual entailment and determining semantic similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 642–646. Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational linguistics, 19(2):263–311. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 740–750. Daniel Gildea. 2004. Dependencies vs. constituents for tree-based alignment. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Christoph Goller and Andreas Kuchler. 1996. Learning task-dependent distributed representations by backpropagation through structure. In Neural Networks, 1996., IEEE International Conference on, volume 1, pages 347–352. IEEE. Klaus Greff, Rupesh K Srivastava, Jan Koutn´ık, Bas R Steunebrink, and J¨urgen Schmidhuber. 2017. Lstm: A search space odyssey. IEEE transactions on neural networks and learning systems. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Csernai Iyer, Dandekar. First quora dataset release: Question pairs. Sergio Jimenez, George Duenas, Julia Baquero, and Alexander Gelbukh. 2014. Unal-nlp: Combining soft cardinality features for semantic textual similarity, relatedness and entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 732–742. Kaggle. 2017. Quora question pairs. Alice Lai and Julia Hockenmaier. 2014. Illinois-lh: A denotational and distributional approach to semantics. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 329–334. Percy Liang, Michael I. Jordan, and Dan Klein. 2013. Learning dependency-based compositional semantics. Comput. Linguist., 39(2):389–446. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, Roberto Zamparelli, et al. 2014. A sick cure for the evaluation of compositional distributional semantic models. In LREC, pages 216–223. 
Ankur P Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. arXiv preprint arXiv:1606.01933. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal smt. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 271–279. Association for Computational Linguistics. Richard Socher, Eric H Huang, Jeffrey Pennington, Andrew Y Ng, and Christopher D Manning. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In NIPS, volume 24, pages 801–809. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Towards universal paraphrastic sentence embeddings. arXiv preprint arXiv:1511.08198. Kenji Yamada and Kevin Knight. 2001. A syntaxbased statistical translation model. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, pages 523–530. Association for Computational Linguistics. Jiang Zhao, Tiantian Zhu, and Man Lan. 2014. Ecnu: One stone two birds: Ensemble of heterogenous measures for semantic relatedness and textual entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 271–277.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2126–2136 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2126 What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties Alexis Conneau Facebook AI Research Université Le Mans [email protected] German Kruszewski Facebook AI Research [email protected] Guillaume Lample Facebook AI Research Sorbonne Universités [email protected] Loïc Barrault Université Le Mans [email protected] Marco Baroni Facebook AI Research [email protected] Abstract Although much effort has recently been devoted to training high-quality sentence embeddings, we still have a poor understanding of what they are capturing. “Downstream” tasks, often based on sentence classification, are commonly used to evaluate the quality of sentence representations. The complexity of the tasks makes it however difficult to infer what kind of information is present in the representations. We introduce here 10 probing tasks designed to capture simple linguistic features of sentences, and we use them to study embeddings generated by three different encoders trained in eight distinct ways, uncovering intriguing properties of both encoders and training methods. 1 Introduction Despite Ray Mooney’s quip that you cannot cram the meaning of a whole %&!$# sentence into a single $&!#* vector, sentence embedding methods have achieved impressive results in tasks ranging from machine translation (Sutskever et al., 2014; Cho et al., 2014) to entailment detection (Williams et al., 2018), spurring the quest for “universal embeddings” trained once and used in a variety of applications (e.g., Kiros et al., 2015; Conneau et al., 2017; Subramanian et al., 2018). Positive results on concrete problems suggest that embeddings capture important linguistic properties of sentences. However, real-life “downstream” tasks require complex forms of inference, making it difficult to pinpoint the information a model is relying upon. Impressive as it might be that a system can tell that the sentence “A movie that doesn’t aim too high, but it doesn’t need to” (Pang and Lee, 2004) expresses a subjective viewpoint, it is hard to tell how the system (or even a human) comes to this conclusion. Complex tasks can also carry hidden biases that models might lock onto (Jabri et al., 2016). For example, Lai and Hockenmaier (2014) show that the simple heuristic of checking for explicit negation words leads to good accuracy in the SICK sentence entailment task. Model introspection techniques have been applied to sentence encoders in order to gain a better understanding of which properties of the input sentences their embeddings retain (see Section 5). However, these techniques often depend on the specifics of an encoder architecture, and consequently cannot be used to compare different methods. Shi et al. (2016) and Adi et al. (2017) introduced a more general approach, relying on the notion of what we will call probing tasks. A probing task is a classification problem that focuses on simple linguistic properties of sentences. For example, one such task might require to categorize sentences by the tense of their main verb. Given an encoder (e.g., an LSTM) pre-trained on a certain task (e.g., machine translation), we use the sentence embeddings it produces to train the tense classifier (without further embedding tuning). 
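In practice, this probing setup can be as simple as fitting a lightweight classifier on frozen sentence embeddings. The sketch below is our own illustration of the workflow, not the authors' code: `load_probing_task` and `pretrained_encoder.encode` are hypothetical helpers, and a linear probe is shown for brevity (the paper's main results use a small MLP, with logistic regression reported in the appendix).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical helpers: `pretrained_encoder.encode` maps a raw sentence to a
# fixed-size vector with the encoder kept frozen; `load_probing_task` returns
# labelled train/test sentences for one probing task (e.g. main-verb tense).
train_sents, y_train, test_sents, y_test = load_probing_task("tense")

X_train = np.stack([pretrained_encoder.encode(s) for s in train_sents])  # no fine-tuning
X_test = np.stack([pretrained_encoder.encode(s) for s in test_sents])

probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probing accuracy:", probe.score(X_test, y_test))
```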
If the classifier succeeds, it means that the pre-trained encoder is storing readable tense information into the embeddings it creates. Note that: (i) The probing task asks a simple question, minimizing interpretability problems. (ii) Because of their simplicity, it is easier to control for biases in probing tasks than in downstream tasks. (iii) The probing task methodology is agnostic with respect to the encoder architecture, as long as it produces a vector representation of sentences. We greatly extend earlier work on probing tasks as follows. First, we introduce a larger set of probing tasks (10 in total), organized by the type of linguistic properties they probe. Second, we systematize the probing task methodology, controlling for 2127 a number of possible nuisance factors, and framing all tasks so that they only require single sentence representations as input, for maximum generality and to ease result interpretation. Third, we use our probing tasks to explore a wide range of state-of-the-art encoding architectures and training methods, and further relate probing and downstream task performance. Finally, we are publicly releasing our probing data sets and tools, hoping they will become a standard way to study the linguistic properties of sentence embeddings.1 2 Probing tasks In constructing our probing benchmarks, we adopted the following criteria. First, for generality and interpretability, the task classification problem should only require single sentence embeddings as input (as opposed to, e.g., sentence and word embeddings, or multiple sentence representations). Second, it should be possible to construct large training sets in order to train parameter-rich multi-layer classifiers, in case the relevant properties are non-linearly encoded in the sentence vectors. Third, nuisance variables such as lexical cues or sentence length should be controlled for. Finally, and most importantly, we want tasks that address an interesting set of linguistic properties. We thus strove to come up with a set of tasks that, while respecting the previous constraints, probe a wide range of phenomena, from superficial properties of sentences such as which words they contain to their hierarchical structure to subtle facets of semantic acceptability. We think the current task set is reasonably representative of different linguistic domains, but we are not claiming that it is exhaustive. We expect future work to extend it. The sentences for all our tasks are extracted from the Toronto Book Corpus (Zhu et al., 2015), more specifically from the random pre-processed portion made available by Paperno et al. (2016). We only sample sentences in the 5-to-28 word range. We parse them with the Stanford Parser (2017-06-09 version), using the pre-trained PCFG model (Klein and Manning, 2003), and we rely on the part-of-speech, constituency and dependency parsing information provided by this tool where needed. For each task, we construct training sets containing 100k sentences, and 10k-sentence val1https://github.com/facebookresearch/ SentEval/tree/master/data/probing idation and test sets. All sets are balanced, having an equal number of instances of each target class. Surface information These tasks test the extent to which sentence embeddings are preserving surface properties of the sentences they encode. One can solve the surface tasks by simply looking at tokens in the input sentences: no linguistic knowledge is called for. The first task is to predict the length of sentences in terms of number of words (SentLen). 
Following Adi et al. (2017), we group sentences into 6 equal-width bins by length, and treat SentLen as a 6-way classification task. The word content (WC) task tests whether it is possible to recover information about the original words in the sentence from its embedding. We picked 1000 mid-frequency words from the source corpus vocabulary (the words with ranks between 2k and 3k when sorted by frequency), and sampled equal numbers of sentences that contain one and only one of these words. The task is to tell which of the 1k words a sentence contains (1k-way classification). This setup allows us to probe a sentence embedding for word content without requiring an auxiliary word embedding (as in the setup of Adi and colleagues). Syntactic information The next batch of tasks test whether sentence embeddings are sensitive to syntactic properties of the sentences they encode. The bigram shift (BShift) task tests whether an encoder is sensitive to legal word orders. In this binary classification problem, models must distinguish intact sentences sampled from the corpus from sentences where we inverted two random adjacent words (“What you are doing out there?”). The tree depth (TreeDepth) task checks whether an encoder infers the hierarchical structure of sentences, and in particular whether it can group sentences by the depth of the longest path from root to any leaf. Since tree depth is naturally correlated with sentence length, we de-correlate these variables through a structured sampling procedure. In the resulting data set, tree depth values range from 5 to 12, and the task is to categorize sentences into the class corresponding to their depth (8 classes). As an example, the following is a long (22 tokens) but shallow (max depth: 5) sentence: “[1 [2 But right now, for the time being, my past, my fears, and my thoughts [3 were [4 my [5business]]].]]” (the outermost brackets correspond to the ROOT and S nodes in the parse). 2128 In the top constituent task (TopConst), sentences must be classified in terms of the sequence of top constituents immediately below the sentence (S) node. An encoder that successfully addresses this challenge is not only capturing latent syntactic structures, but clustering them by constituent types. TopConst was introduced by Shi et al. (2016). Following them, we frame it as a 20-way classification problem: 19 classes for the most frequent top constructions, and one for all other constructions. As an example, “[Then] [very dark gray letters on a black screen] [appeared] [.]” has top constituent sequence: “ADVP NP VP .”. Note that, while we would not expect an untrained human subject to be explicitly aware of tree depth or top constituency, similar information must be implicitly computed to correctly parse sentences, and there is suggestive evidence that the brain tracks something akin to tree depth during sentence processing (Nelson et al., 2017). Semantic information These tasks also rely on syntactic structure, but they further require some understanding of what a sentence denotes. The Tense task asks for the tense of the main-clause verb (VBP/VBZ forms are labeled as present, VBD as past). No target form occurs across the train/dev/test split, so that classifiers cannot rely on specific words (it is not clear that Shi and colleagues, who introduced this task, controlled for this factor). The subject number (SubjNum) task focuses on the number of the subject of the main clause (number in English is more often explicitly marked on nouns than verbs). 
Again, there is no target overlap across partitions. Similarly, object number (ObjNum) tests for the number of the direct object of the main clause (again, avoiding lexical overlap). To solve the previous tasks correctly, an encoder must not only capture tense and number, but also extract structural information (about the main clause and its arguments). We grouped Tense, SubjNum and ObjNum with the semantic tasks, since, at least for models that treat words as unanalyzed input units (without access to morphology), they must rely on what a sentence denotes (e.g., whether the described event took place in the past), rather than on structural/syntactic information. We recognize, however, that the boundary between syntactic and semantic tasks is somewhat arbitrary. In the semantic odd man out (SOMO) task, we modified sentences by replacing a random noun or verb o with another noun or verb r. To make the task more challenging, the bigrams formed by the replacement with the previous and following words in the sentence have frequencies that are comparable (on a log-scale) with those of the original bigrams. That is, if the original sentence contains bigrams wn−1o and own+1, the corresponding bigrams wn−1r and rwn+1 in the modified sentence will have comparable corpus frequencies. No sentence is included in both original and modified format, and no replacement is repeated across train/dev/test sets. The task of the classifier is to tell whether a sentence has been modified or not. An example modified sentence is: “ No one could see this Hayes and I wanted to know if it was real or a spoonful (orig.: ploy).” Note that judging plausibility of a syntactically well-formed sentence of this sort will often require grasping rather subtle semantic factors, ranging from selectional preference to topical coherence. The coordination inversion (CoordInv) benchmark contains sentences made of two coordinate clauses. In half of the sentences, we inverted the order of the clauses. The task is to tell whether a sentence is intact or modified. Sentences are balanced in terms of clause length, and no sentence appears in both original and inverted versions. As an example, original “They might be only memories, but I can still feel each one” becomes: “I can still feel each one, but they might be only memories.” Often, addressing CoordInv requires an understanding of broad discourse and pragmatic factors. Row Hum. Eval. of Table 2 reports humanvalidated “reasonable” upper bounds for all the tasks, estimated in different ways, depending on the tasks. For the surface ones, there is always a straightforward correct answer that a human annotator with enough time and patience could find. The upper bound is thus estimated at 100%. The TreeDepth, TopConst, Tense, SubjNum and ObjNum tasks depend on automated PoS and parsing annotation. In these cases, the upper bound is given by the proportion of sentences correctly annotated by the automated procedure. To estimate this quantity, one linguistically-trained author checked the annotation of 200 randomly sampled test sentences from each task. Finally, the BShift, SOMO and CoordInv manipulations can accidentally generate acceptable sentences. For 2129 example, one modified SOMO sentence is: “He pulled out the large round onion (orig.: cork) and saw the amber balm inside.”, that is arguably not more anomalous than the original. 
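The frequency-matched replacement step for SOMO described earlier in this section can be approximated by a simple search over candidate words. The function below is our simplified reading of that procedure, not the authors' generation script; `bigram_logfreq` is assumed to be a precomputed table of corpus bigram log-frequencies, and `candidates` a pool of nouns or verbs of the same class as the original word.

```python
def pick_somo_replacement(prev_w, orig_w, next_w, candidates, bigram_logfreq, tol=1.0):
    """Choose a replacement r whose bigrams with the neighbouring words have
    log-frequencies close to those of the original bigrams; return None if no
    candidate is close enough."""
    target_left = bigram_logfreq.get((prev_w, orig_w), 0.0)
    target_right = bigram_logfreq.get((orig_w, next_w), 0.0)
    best, best_gap = None, float("inf")
    for r in candidates:
        if r == orig_w:
            continue
        gap = (abs(bigram_logfreq.get((prev_w, r), 0.0) - target_left)
               + abs(bigram_logfreq.get((r, next_w), 0.0) - target_right))
        if gap < best_gap:
            best, best_gap = r, gap
    return best if best_gap <= tol else None
```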
For these tasks, we ran Amazon Mechanical Turk experiments in which subjects were asked to judge whether 1k randomly sampled test sentences were acceptable or not. Reported human accuracies are based on majority voting. See Appendix for details. 3 Sentence embedding models In this section, we present the three sentence encoders that we consider and the seven tasks on which we train them. 3.1 Sentence encoder architectures A wide variety of neural networks encoding sentences into fixed-size representations exist. We focus here on three that have been shown to perform well on standard NLP tasks. BiLSTM-last/max For a sequence of T words {wt}t=1,...,T , a bidirectional LSTM computes a set of T vectors {ht}t. For t ∈[1, . . . , T], ht is the concatenation of a forward LSTM and a backward LSTM that read the sentences in two opposite directions. We experiment with two ways of combining the varying number of (h1, . . . , hT ) to form a fixed-size vector, either by selecting the last hidden state of hT or by selecting the maximum value over each dimension of the hidden units. The choice of these models are motivated by their demonstrated efficiency in seq2seq (Sutskever et al., 2014) and universal sentence representation learning (Conneau et al., 2017), respectively.2 Gated ConvNet We also consider the nonrecurrent convolutional equivalent of LSTMs, based on stacked gated temporal convolutions. Gated convolutional networks were shown to perform well as neural machine translation encoders (Gehring et al., 2017) and language modeling decoders (Dauphin et al., 2017). The encoder is composed of an input word embedding table that is augmented with positional encodings (Sukhbaatar et al., 2015), followed by a stack of temporal convolutions with small kernel size. The output of each convolutional layer is filtered by a gating mechanism, similar to the one of LSTMs. Finally, 2We also experimented with a unidirectional LSTM, with consistently poorer results. max-pooling along the temporal dimension is performed on the output feature maps of the last convolution (Collobert and Weston, 2008). 3.2 Training tasks Seq2seq systems have shown strong results in machine translation (Zhou et al., 2016). They consist of an encoder that encodes a source sentence into a fixed-size representation, and a decoder which acts as a conditional language model and that generates the target sentence. We train Neural Machine Translation systems on three language pairs using about 2M sentences from the Europarl corpora (Koehn, 2005). We pick English-French, which involves two similar languages, English-German, involving larger syntactic differences, and English-Finnish, a distant pair. We also train with an AutoEncoder objective (Socher et al., 2011) on Europarl source English sentences. Following Vinyals et al. (2015), we train a seq2seq architecture to generate linearized grammatical parse trees (see Table 1) from source sentences (Seq2Tree). We use the Stanford parser to generate trees for Europarl source English sentences. We train SkipThought vectors (Kiros et al., 2015) by predicting the next sentence given the current one (Tang et al., 2017), on 30M sentences from the Toronto Book Corpus, excluding those in the probing sets. Finally, following Conneau et al. (2017), we train sentence encoders on Natural Language Inference using the concatenation of the SNLI (Bowman et al., 2015) and MultiNLI (Bowman et al., 2015) data sets (about 1M sentence pairs). 
In this task, a sentence encoder is trained to encode two sentences, which are fed to a classifier and whose role is to distinguish whether the sentences are contradictory, neutral or entailed. Finally, as in Conneau et al. (2017), we also include Untrained encoders with random weights, which act as random projections of pre-trained word embeddings. 3.3 Training details BiLSTM encoders use 2 layers of 512 hidden units (∼4M parameters), Gated ConvNet has 8 convolutional layers of 512 hidden units, kernel size 3 (∼12M parameters). We use pre-trained fastText word embeddings of size 300 (Mikolov et al., 2018) without fine-tuning, to isolate the impact of encoder architectures and to handle words outside the training sets. Training task performance and further details are in Appendix. 2130 task source target AutoEncoder I myself was out on an island in the Swedish archipelago , at Sandhamn . I myself was out on an island in the Swedish archipelago , at Sand@ ham@ n . NMT En-Fr I myself was out on an island in the Swedish archipelago , at Sandhamn . Je me trouvais ce jour là sur une île de l’ archipel suédois , à Sand@ ham@ n . NMT En-De We really need to up our particular contribution in that regard . Wir müssen wirklich unsere spezielle Hilfs@ leistung in dieser Hinsicht aufstocken . NMT En-Fi It is too early to see one system as a universal panacea and dismiss another . Nyt on liian aikaista nostaa yksi järjestelmä jal@ usta@ lle ja antaa jollekin toiselle huono arvo@ sana . SkipThought the old sami was gone , and he was a different person now . the new sami didn ’t mind standing barefoot in dirty white , sans ra@ y-@ bans and without beautiful women following his every move . Seq2Tree Dikoya is a village in Sri Lanka . (ROOT (S (NP NNP )NP (VP VBZ (NP (NP DT NN )NP (PP IN (NP NNP NNP )NP )PP )NP )VP . )S )ROOT Table 1: Source and target examples for seq2seq training tasks. 4 Probing task experiments Baselines Baseline and human-bound performance are reported in the top block of Table 2. Length is a linear classifier with sentence length as sole feature. NB-uni-tfidf is a Naive Bayes classifier using words’ tfidf scores as features, NBbi-tfidf its extension to bigrams. Finally, BoVfastText derives sentence representations by averaging the fastText embeddings of the words they contain (same embeddings used as input to the encoders).3 Except, trivially, for Length on SentLen and the NB baselines on WC, there is a healthy gap between top baseline performance and human upper bounds. NB-uni-tfidf evaluates to what extent our tasks can be addressed solely based on knowledge about the distribution of words in the training sentences. Words are of course to some extent informative for most tasks, leading to relatively high performance in Tense, SubjNum and ObjNum. Recall that the words containing the probed features are disjoint between train and test partitions, so we are not observing a confound here, but rather the effect of the redundancies one expects in natural language data. For example, for Tense, since sentences often contain more than one verb in the same tense, NB-uni-tfidf can exploit nontarget verbs as cues: the NB features most associated to the past class are verbs in the past tense (e.g “sensed”, “lied”, “announced”), and similarly for present (e.g “uses”, “chuckles”, “frowns”). 
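A baseline of this kind can be reproduced in a few lines. The following scikit-learn sketch is our illustration rather than the authors' exact configuration; the same function covers the unigram and bigram variants through `ngram_range`.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def nb_tfidf_baseline(train_sents, y_train, test_sents, y_test, ngram_range=(1, 1)):
    """NB-uni-tfidf with ngram_range=(1, 1); NB-bi-tfidf with (1, 2) or (2, 2),
    depending on whether unigrams are kept alongside bigrams."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=ngram_range, lowercase=True),
        MultinomialNB(),
    )
    model.fit(train_sents, y_train)
    return model.score(test_sents, y_test)
```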
Using bigram features (NB-bi-tfidf) brings in general little or no improvement with respect to the unigram baseline, except, trivially, for the BShift 3Similar results are obtained summing embeddings, and using GloVe embeddings (Pennington et al., 2014). task, where NB-bi-tfidf can easily detect unlikely bigrams. NB-bi-tfidf has below-random performance on SOMO, confirming that the semantic intruder is not given away by superficial bigram cues. Our first striking result is the good overall performance of Bag-of-Vectors, confirming early insights that aggregated word embeddings capture surprising amounts of sentence information (Pham et al., 2015; Arora et al., 2017; Adi et al., 2017). BoV’s good WC and SentLen performance was already established by Adi et al. (2017). Not surprisingly, word-order-unaware BoV performs randomly in BShift and in the more sophisticated semantic tasks SOMO and CoordInv. More interestingly, BoV is very good at the Tense, SubjNum, ObjNum, and TopConst tasks (much better than the word-based baselines), and well above chance in TreeDepth. The good performance on Tense, SubjNum and ObjNum has a straightforward explanation we have already hinted at above. Many sentences are naturally “redundant”, in the sense that most tensed verbs in a sentence are in the same tense, and similarly for number in nouns. In 95.2% Tense, 75.9% SubjNum and 78.7% ObjNum test sentences, the target tense/number feature is also the majority one for the whole sentence. Word embeddings capture features such as number and tense (Mikolov et al., 2013), so aggregated word embeddings will naturally track these features’ majority values in a sentence. BoV’s TopConst and TreeDepth performance is more surprising. Accuracy is well above NB, showing that BoV is exploiting cues beyond specific words strongly associated to the target classes. We conjecture that more abstract word features captured 2131 Task SentLen WC TreeDepth TopConst BShift Tense SubjNum ObjNum SOMO CoordInv Baseline representations Majority vote 20.0 0.5 17.9 5.0 50.0 50.0 50.0 50.0 50.0 50.0 Hum. Eval. 
100 100 84.0 84.0 98.0 85.0 88.0 86.5 81.2 85.0 Length 100 0.2 18.1 9.3 50.6 56.5 50.3 50.1 50.2 50.0 NB-uni-tfidf 22.7 97.8 24.1 41.9 49.5 77.7 68.9 64.0 38.0 50.5 NB-bi-tfidf 23.0 95.0 24.6 53.0 63.8 75.9 69.1 65.4 39.9 55.7 BoV-fastText 66.6 91.6 37.1 68.1 50.8 89.1 82.1 79.8 54.2 54.8 BiLSTM-last encoder Untrained 36.7 43.8 28.5 76.3 49.8 84.9 84.7 74.7 51.1 64.3 AutoEncoder 99.3 23.3 35.6 78.2 62.0 84.3 84.7 82.1 49.9 65.1 NMT En-Fr 83.5 55.6 42.4 81.6 62.3 88.1 89.7 89.5 52.0 71.2 NMT En-De 83.8 53.1 42.1 81.8 60.6 88.6 89.3 87.3 51.5 71.3 NMT En-Fi 82.4 52.6 40.8 81.3 58.8 88.4 86.8 85.3 52.1 71.0 Seq2Tree 94.0 14.0 59.6 89.4 78.6 89.9 94.4 94.7 49.6 67.8 SkipThought 68.1 35.9 33.5 75.4 60.1 89.1 80.5 77.1 55.6 67.7 NLI 75.9 47.3 32.7 70.5 54.5 79.7 79.3 71.3 53.3 66.5 BiLSTM-max encoder Untrained 73.3 88.8 46.2 71.8 70.6 89.2 85.8 81.9 73.3 68.3 AutoEncoder 99.1 17.5 45.5 74.9 71.9 86.4 87.0 83.5 73.4 71.7 NMT En-Fr 80.1 58.3 51.7 81.9 73.7 89.5 90.3 89.1 73.2 75.4 NMT En-De 79.9 56.0 52.3 82.2 72.1 90.5 90.9 89.5 73.4 76.2 NMT En-Fi 78.5 58.3 50.9 82.5 71.7 90.0 90.3 88.0 73.2 75.4 Seq2Tree 93.3 10.3 63.8 89.6 82.1 90.9 95.1 95.1 73.2 71.9 SkipThought 66.0 35.7 44.6 72.5 73.8 90.3 85.0 80.6 73.6 71.0 NLI 71.7 87.3 41.6 70.5 65.1 86.7 80.7 80.3 62.1 66.8 GatedConvNet encoder Untrained 90.3 17.1 30.3 47.5 62.0 78.2 72.2 70.9 61.4 59.6 AutoEncoder 99.4 16.8 46.3 75.2 71.9 87.7 88.5 86.5 73.5 72.4 NMT En-Fr 84.8 41.3 44.6 77.6 67.9 87.9 88.8 86.6 66.1 72.0 NMT En-De 89.6 49.0 50.5 81.7 72.3 90.4 91.4 89.7 72.8 75.1 NMT En-Fi 89.3 51.5 49.6 81.8 70.9 90.4 90.9 89.4 72.4 75.1 Seq2Tree 96.5 8.7 62.0 88.9 83.6 91.5 94.5 94.3 73.5 73.8 SkipThought 79.1 48.4 45.7 79.2 73.4 90.7 86.6 81.7 72.4 72.3 NLI 73.8 29.2 43.2 63.9 70.7 81.3 77.5 74.4 73.3 71.0 Table 2: Probing task accuracies. Classification performed by a MLP with sigmoid nonlinearity, taking pre-learned sentence embeddings as input (see Appendix for details and logistic regression results). by the embeddings (such as the part of speech of a word) might signal different syntactic structures. For example, sentences in the “WHNP SQ .” top constituent class (e.g., “How long before you leave us again?”) must contain a wh word, and will often feature an auxiliary or modal verb. BoV can rely on this information to noisily predict the correct class. Encoding architectures Comfortingly, proper encoding architectures clearly outperform BoV. An interesting observation in Table 2 is that different encoder architectures trained with the same objective, and achieving similar performance on the training task,4 can lead to linguistically different embeddings, as indicated by the probing tasks. Coherently with the findings of Conneau et al. (2017) for the downstream tasks, this sug4See Appendix for details on training task performance. gests that the prior imposed by the encoder architecture strongly preconditions the nature of the embeddings. Complementing recent evidence that convolutional architectures are on a par with recurrent ones in seq2seq tasks (Gehring et al., 2017), we find that Gated ConvNet’s overall probing task performance is comparable to that of the best LSTM architecture (although, as shown in Appendix, the LSTM has a slight edge on downstream tasks). We also replicate the finding of Conneau et al. (2017) that BiLSTM-max outperforms BiLSTM-last both in the downstream tasks (see Appendix) and in the probing tasks (Table 2). 
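The difference between the two pooling strategies is easy to state concretely. Given the T hidden states produced by the BiLSTM, the two sentence vectors compared here are, in schematic NumPy form with toy values standing in for real states:

```python
import numpy as np

T, hidden_dim = 12, 256                   # toy sizes: sentence length, LSTM width
H = np.random.randn(T, 2 * hidden_dim)    # h_t = [forward; backward] state at step t

v_last = H[-1]           # BiLSTM-last: keep only the final hidden state h_T
v_max = H.max(axis=0)    # BiLSTM-max: per-dimension maximum over all time steps
```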
Interestingly, the latter only outperforms the former in SentLen, a task that captures a superficial aspect of sentences (how many words they contain), that could get in the way of inducing more useful linguistic knowledge. 2132 Training tasks We focus next on how different training tasks affect BiLSTM-max, but the patterns are generally representative across architectures. NMT training leads to encoders that are more linguistically aware than those trained on the NLI data set, despite the fact that we confirm the finding of Conneau and colleagues that NLI is best for downstream tasks (Appendix). Perhaps, NMT captures richer linguistic features useful for the probing tasks, whereas shallower or more adhoc features might help more in our current downstream tasks. Suggestively, the one task where NLI clearly outperforms NMT is WC. Thus, NLI training is better at preserving shallower word features that might be more useful in downstream tasks (cf. Figure 2 and discussion there). Unsupervised training (SkipThought and AutoEncoder) is not on a par with supervised tasks, but still effective. AutoEncoder training leads, unsurprisingly, to a model excelling at SentLen, but it attains low performance in the WC prediction task. This curious result might indicate that the latter information is stored in the embeddings in a complex way, not easily readable by our MLP. At the other end, Seq2Tree is trained to predict annotation from the same parser we used to create some of the probing tasks. Thus, its high performance on TopConst, Tense, SubjNum, ObjNum and TreeDepth is probably an artifact. Indeed, for most of these tasks, Seq2Tree performance is above the human bound, that is, Seq2Tree learned to mimic the parser errors in our benchmarks. For the more challenging SOMO and CoordInv tasks, that only indirectly rely on tagging/parsing information, Seq2Tree is comparable to NMT, that does not use explicit syntactic information. Perhaps most interestingly, BiLSTM-max already achieves very good performance without any training (Untrained row in Table 2). Untrained BiLSTM-max also performs quite well in the downstream tasks (Appendix). This architecture must encode priors that are intrinsically good for sentence representations. Untrained BiLSTM-max exploits the input fastText embeddings, and multiplying the latter by a random recurrent matrix provides a form of positional encoding. However, good performance in a task such as SOMO, where BoV fails and positional information alone should not help (the intruder is randomly distributed across the sentence), suggests that other architectural biases are at work. Intriguingly, a preliminary comparison of untrained BiLSTM-max and human subjects on the SOMO sentences evaluated by both reveals that, whereas humans have a bias towards finding sentences acceptable (62% sentences are rated as untampered with, vs. 48% ground-truth proportion), the model has a strong bias in the opposite direction (it rates 83% of the sentences as modified). A cursory look at contrasting errors confirms, unsurprisingly, that those made by humans are perfectly justified, while model errors are opaque. For example, the sentence “I didn’t come here to reunite (orig. undermine) you” seems perfectly acceptable in its modified form, and indeed subjects judged it as such, whereas untrained BiLSTM-max “correctly” rated it as a modified item. Conversely, it is difficult to see any clear reason for the latter tendency to rate perfectly acceptable originals as modified. 
We leave a more thorough investigation to further work. See similar observations on the effectiveness of untrained ConvNets in vision by Ulyanov et al. (2017). Probing task comparison A good encoder, such as NMT-trained BiLSTM-max, shows generally good performance across probing tasks. At one extreme, performance is not particularly high on the surface tasks, which might be an indirect sign of the encoder extracting “deeper” linguistic properties. At the other end, performance is still far from the human bounds on TreeDepth, BShift, SOMO and CoordInv. The last 3 tasks ask if a sentence is syntactically or semantically anomalous. This is a daunting job for an encoder that has not been explicitly trained on acceptability, and it is interesting that the best models are, at least to a certain extent, able to produce reasonable anomaly judgments. The asymmetry between the difficult TreeDepth and easier TopConst is also interesting. Intuitively, TreeDepth requires more nuanced syntactic information (down to the deepest leaf of the tree) than TopConst, that only requires identifying broad chunks. Figure 1 reports how probing task accuracy changes in function of encoder training epochs. The figure shows that NMT probing performance is largely independent of target language, with strikingly similar development patterns across French, German and Finnish. Note in particular the similar probing accuracy curves in French and Finnish, while the corresponding BLEU scores (in lavender) are consistently higher in the former lan2133 0 20 40 60 80 100 NMT En-Fr - BiLSTM-max NMT En-De - BiLSTM-max 1 10 20 30 40 50 0 20 40 60 80 100 NMT En-Fi - BiLSTM-max 1 10 20 30 40 50 SkipThought - BiLSTM-max Epoch Accuracy SentLen WordContent TreeDepth TopConst Tense SOMO BLEU (or PPL) Figure 1: Probing task scores after each training epoch, for NMT and SkipThought. We also report training score evolution: BLEU for NMT; perplexity (PPL) for SkipThought. guage. For both NMT and SkipThought, WC performance keeps increasing with epochs. For the other tasks, we observe instead an early flattening of the NMT probing curves, while BLEU performance keeps increasing. Most strikingly, SentLen performance is actually decreasing, suggesting again that, as a model captures deeper linguistic properties, it will tend to forget about this superficial feature. Finally, for the challenging SOMO task, the curves are mostly flat, suggesting that what BiLSTM-max is able to capture about this task is already encoded in its architecture, and further training doesn’t help much. Probing vs. downstream tasks Figure 2 reports correlation between performance on our probing tasks and the downstream tasks available in the SentEval5 suite, which consists of classification (MR, CR, SUBJ, MPQA, SST2, SST5, TREC), natural language inference (SICK-E), semantic relatedness (SICK-R, STSB), paraphrase detection (MRPC) and semantic textual similarity (STS 2012 to 2017) tasks. Strikingly, WC is significantly positively correlated with all downstream tasks. This suggests that, at least for current models, the latter do not require extracting particularly abstract knowledge from the data. Just relying on the words contained in the input sentences 5https://github.com/facebookresearch/ SentEval can get you a long way. Conversely, there is a significant negative correlation between SentLen and most downstream tasks. The number of words in a sentence is not informative about its linguistic contents. 
The more models abstract away from such information, the more likely it is they will use their capacity to capture more interesting features, as the decrease of the SentLen curve along training (see Figure 1) also suggests. CoordInv and, especially, SOMO, the tasks requiring the most sophisticated semantic knowledge, are those that positively correlate with the largest number of downstream tasks after WC. We observe intriguing asymmetries: SOMO correlates with the SICK-E sentence entailment test, but not with SICK-R, which is about modeling sentence relatedness intuitions. Indeed, logical entailment requires deeper semantic analysis than modeling similarity judgments. TopConst and the number tasks negatively correlate with various similarity and sentiment data sets (SST, STS, SICK-R). This might expose biases in these tasks: SICK-R, for example, deliberately contains sentence pairs with opposite voice, that will have different constituent structure but equal meaning (Marelli et al., 2014). It might also mirrors genuine factors affecting similarity judgments (e.g., two sentences differing only in object number are very similar). Remarkably, TREC question type classification is the downstream task correlating with most probing tasks. Question classification is certainly an outlier among our downstream tasks, but we must leave a full understanding of this behaviour to future work (this is exactly the sort of analysis our probing tasks should stimulate). 5 Related work Adi et al. (2017) introduced SentLen, WC and a word order test, focusing on a bag-of-vectors baseline, an autoencoder and skip-thought (all trained on the same data used for the probing tasks). We recast their tasks so that they only require a sentence embedding as input (two of their tasks also require word embeddings, polluting sentencelevel evaluation), we extend the evaluation to more tasks, encoders and training objectives, and we relate performance on the probing tasks with that on downstream tasks. Shi et al. (2016) also use 3 probing tasks, including Tense and TopConst. It is not clear that they controlled for the same factors we considered (in particular, lexical overlap and 2134 Figure 2: Spearman correlation matrix between probing and downstream tasks. Correlations based on all sentence embeddings we investigated (more than 40). Cells in gray denote task pairs that are not significantly correlated (after correcting for multiple comparisons). sentence length), and they use much smaller training sets, limiting classifier-based evaluation to logistic regression. Moreover, they test a smaller set of models, focusing on machine translation. Belinkov et al. (2017a), Belinkov et al. (2017b) and Dalvi et al. (2017) are also interested in understanding the type of linguistic knowledge encoded in sentence and word embeddings, but their focus is on word-level morphosyntax and lexical semantics, and specifically on NMT encoders and decoders. Sennrich (2017) also focuses on NMT systems, and proposes a contrastive test to assess how they handle various linguistic phenomena. Other work explores the linguistic behaviour of recurrent networks and related models by using visualization, input/hidden representation deletion techniques or by looking at the word-by-word behaviour of the network (e.g., Nagamine et al., 2015; Hupkes et al., 2017; Li et al., 2016; Linzen et al., 2016; Kàdàr et al., 2017; Li et al., 2017). 
These methods, complementary to ours, are not agnostic to encoder architecture, and cannot be used for general-purpose cross-model evaluation. Finally, Conneau et al. (2017) propose a largescale, multi-task evaluation of sentence embeddings, focusing entirely on downstream tasks. 6 Conclusion We introduced a set of tasks probing the linguistic knowledge of sentence embedding methods. Their purpose is not to encourage the development of ad-hoc models that attain top performance on them, but to help exploring what information is captured by different pre-trained encoders. We performed an extensive linguistic evaluation of modern sentence encoders. Our results suggest that the encoders are capturing a wide range of properties, well above those captured by a set of strong baselines. We further uncovered interesting patterns of correlation between the probing tasks and more complex “downstream” tasks, and presented a set of intriguing findings about the linguistic properties of various embedding methods. For example, we found that Bag-of-Vectors is surprisingly good at capturing sentence-level properties, thanks to redundancies in natural linguistic input. We showed that different encoder architectures trained with the same objective with similar performance can result in different embeddings, pointing out the importance of the architecture prior for sentence embeddings. In particular, we found that BiLSTM-max embeddings are already capturing interesting linguistic knowledge before training, and that, after training, they detect semantic acceptability without having been exposed to anomalous sentences before. We hope that our publicly available probing task set will become a standard benchmarking tool of the linguistic properties of new encoders, and that it will stir research towards a better understanding of what they learn. In future work, we would like to extend the probing tasks to other languages (which should be relatively easy, given that they are automatically generated), investigate how multi-task training affects probing task performance and leverage our probing tasks to find more linguistically-aware universal encoders. Acknowledgments We thank David Lopez-Paz, Holger Schwenk, Hervé Jégou, Marc’Aurelio Ranzato and Douwe Kiela for useful comments and discussions. References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In Proceedings of ICLR Conference Track. Toulon, France. Published online: https://openreview.net/group? id=ICLR.cc/2017/conference. Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In Proceedings of ICLR Conference Track. Toulon, France. Published 2135 online: https://openreview.net/group? id=ICLR.cc/2017/conference. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. Advances in neural information processing systems (NIPS) . Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017a. What do neural machine translation models learn about morphology? In Proceedings of ACL. Vancouver, Canada, pages 861–872. Yonatan Belinkov, Lluís Màrquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017b. Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks. In Proceedings of IJCNLP. Taipei, Taiwan, pages 1–10. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. 
Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. Proceedings of EMNLP . Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine learning. ACM, pages 160–167. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of EMNLP. Copenhagen, Denmark, pages 670–680. Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, and Stephan Vogel. 2017. Understanding and improving morphological learning in the neural machine translation decoder. In Proceedings of IJCNLP. Taipei, Taiwan, pages 142–151. Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. Proceedings of the 34th International Conference on Machine Learning . Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of ICML. Sydney, Australia, pages 1243–1252. Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2017. Visualisation and diagnostic classifiers reveal how recurrent and recursive neural networks process hierarchical structure. http:// arxiv.org/abs/1711.10203. Allan Jabri, Armand Joulin, and Laurens van der Maaten. 2016. Revisiting visual question answering baselines. In Proceedings of ECCV. Amsterdam, the Netherlands, pages 727–739. Àkos Kàdàr, Grzegorz Chrupała, and Afra Alishahi. 2017. Representation of linguistic form and function in recurrent neural networks. Computational Linguistics 43(4):761–780. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems. pages 3294–3302. Dan Klein and Christopher Manning. 2003. Accurate unlexicalized parsing. In Proceedings of ACL. Sapporo, Japan, pages 423–430. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit. volume 5, pages 79–86. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions. Association for Computational Linguistics, pages 177– 180. Alice Lai and Julia Hockenmaier. 2014. Illinois-LH: A denotational and distributional approach to semantics. In Proceedings of SemEval. Dublin, Ireland, pages 329–334. Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. In Proceedings of NAACL. San Diego, CA, pages 681–691. Jiwei Li, Monroe Will, and Dan Jurafsky. 2017. Efficient estimation of word representations in vector space. https://arxiv.org/abs/1612. 08220. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. 
Transactions of the Association for Computational Linguistics 4:521– 535. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of LREC. Rekjavik, Iceland, pages 216–223. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of LREC. Miyazaki, Japan. 2136 Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Proceedings of NAACL. Atlanta, Georgia, pages 746–751. Tasha Nagamine, Michael L. Seltzer, and Nima Mesgarani. 2015. Exploring how deep neural networks form phonemic categories. In Proceedings of INTERSPEECH. Dresden, Germany, pages 1912– 1916. Matthew Nelson, Imen El Karoui, Kristof Giber, Xiaofang Yang, Laurent Cohen, Hilda Koopman, Sydney Cash, Lionel Naccache, John Hale, Christophe Pallier, and Stanislas Dehaene. 2017. Neurophysiological dynamics of phrase-structure building during sentence processing. Proceedings of the National Academy of Sciences 114(18):E3669–E3678. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of ACL. Barcelona, Spain, pages 271–278. Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of ACL. Berlin, Germany, pages 1525–1534. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP. Doha, Qatar, pages 1532–1543. Nghia The Pham, Germán Kruszewski, Angeliki Lazaridou, and Marco Baroni. 2015. Jointly optimizing word representations for lexical and sentential tasks with the C-PHRASE model. In Proceedings of ACL. Beijing, China, pages 971–981. Rico Sennrich. 2017. How grammatical is characterlevel neural machine translation? assessing MT quality with contrastive translation pairs. In Proceedings of EACL (Short Papers). Valencia, Spain, pages 376–382. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In Proceedings of EMNLP. Austin, Texas, pages 1526– 1534. Richard Socher, Eric Huang, Jeffrey Pennin, Andrew Ng, and Christopher Manning. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Proceedings of NIPS. Granada, Spain, pages 801–809. Sandeep Subramanian, Adam Trischler, Yoshua Bengio, and Christopher J Pal. 2018. Learning general purpose distributed sentence representations via large scale multi-task learning. In International Conference on Learning Representations. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems. pages 2440–2448. Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS. Montreal, Canada, pages 3104–3112. Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, and Virginia R de Sa. 2017. Trimming and improving skip-thought vectors. Proceedings of the 2nd Workshop on Representation Learning for NLP . Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. 2017. Deep image prior. https://arxiv. org/abs/1711.10925. 
Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems. pages 2773–2781. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of NAACL. Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep recurrent models with fast-forward connections for neural machine translation. arXiv preprint arXiv:1606.04199 . Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of ICCV. Santiago, Chile, pages 19–27.
2018
198
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2137–2147 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2137 Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning Pengda Qin♯, Weiran Xu♯, William Yang Wang♭ ♯Beijing University of Posts and Telecommunications, China ♭University of California, Santa Barbara, USA {qinpengda, xuweiran}@bupt.edu.cn {william}@cs.ucsb.edu Abstract Distant supervision has become the standard method for relation extraction. However, even though it is an efficient method, it does not come at no cost—The resulted distantly-supervised training samples are often very noisy. To combat the noise, most of the recent state-of-theart approaches focus on selecting onebest sentence or calculating soft attention weights over the set of the sentences of one specific entity pair. However, these methods are suboptimal, and the false positive problem is still a key stumbling bottleneck for the performance. We argue that those incorrectly-labeled candidate sentences must be treated with a hard decision, rather than being dealt with soft attention weights. To do this, our paper describes a radical solution—We explore a deep reinforcement learning strategy to generate the false-positive indicator, where we automatically recognize false positives for each relation type without any supervised information. Unlike the removal operation in the previous studies, we redistribute them into the negative examples. The experimental results show that the proposed strategy significantly improves the performance of distant supervision comparing to state-of-the-art systems. 1 Introduction Relation extraction is a core task in information extraction and natural language understanding. The goal of relation extraction is to predict relations for entities in a sentence (Zelenko et al., 2003; Bunescu and Mooney, 2005; GuoDong et al., 2005). For example, given a sentence Negative set Positive set False Positive Negative set Positive set False Positive Policy Based Agent Policy Gradient Training Redistribute Training Dataset 𝑅𝑒𝑤𝑎𝑟𝑑 𝐴𝑐𝑡𝑖𝑜𝑛 Classifier 𝑇𝑟𝑎𝑖𝑛 Figure 1: Our deep reinforcement learning framework aims at dynamically recognizing false positive samples, and moving them from the positive set to the negative set during distant supervision. “Barack Obama is married to Michelle Obama.”, a relation classifier aims at predicting the relation of “spouse”. In downstream applications, relation extraction is the key module for constructing knowledge graphs, and it is a vital component of many natural language processing applications such as structured search, sentiment analysis, question answering, and summarization. A major issue encountered in the early development of relation extraction algorithms is the data sparsity issue—It is extremely expensive, and almost impossible for human annotators to go through a large corpus of millions of sentences to provide a large amount of labeled training instances. Therefore, distant supervision relation extraction (Mintz et al., 2009; Hoffmann et al., 2011; Surdeanu et al., 2012) becomes popular, because it uses entity pairs from knowledge bases to select a set of noisy instances from unlabeled data. In recent years, neural network approaches (Zeng et al., 2014, 2015) have been proposed to train the relation extractor under these noisy conditions. 
To suppress the noisy(Roth et al., 2013), recent stud2138 ies (Lin et al., 2016) have proposed the use of attention mechanisms to place soft weights on a set of noisy sentences, and select samples. However, we argue that only selecting one example or based on soft attention weights are not the optimal strategy: To improve the robustness, we need a systematic solution to make use of more instances, while removing false positives and placing them in the right place. In this paper, we investigate the possibility of using dynamic selection strategies for robust distant supervision. More specifically, we design a deep reinforcement learning agent, whose goal is to learn to choose whether to remove or remain the distantly supervised candidate instance based on the performance change of the relation classifier. Intuitively, our agent would like to remove false positives, and reconstruct a cleaned set of distantly supervised instances to maximize the reward based on the classification accuracy. Our proposed method is classifier-independent, and it can be applied to any existing distant supervision model. Empirically, we show that our method has brought consistent performance gains in various deep neural network based models, achieving strong performances on the widely used New York Times dataset (Riedel et al., 2010). Our contributions are three-fold: • We propose a novel deep reinforcement learning framework for robust distant supervision relation extraction. • Our method is model-independent, meaning that it could be applied to any state-of-the-art relation extractors. • We show that our method can boost the performances of recently proposed neural relation extractors. In Section 2, we will discuss related works on distant supervision relation extraction. Next, we will describe our robust distant supervision framework in Section 3. In Section 4, empirical evaluation results are shown. And finally, we conclude in Section 5. 2 Related Work Mintz et al. (2009) is the first study that combines dependency path and feature aggregation for distant supervision. However, this approach would introduce a lot of false positives, as the same entity pair might have multiple relations. To alleviate this issue, Hoffmann et al. (2011) address this issue, and propose a model to jointly learn with multiple relations. Surdeanu et al. (2012) further propose a multi-instance multi-label learning framework to improve the performance. Note that these early approaches do not explicitly remove noisy instances, but rather hope that the model would be able to suppress the noise. Recently, with the advance of neural network techniques, deep learning methods (Zeng et al., 2014, 2015) are introduced, and the hope is to model noisy distant supervision process in the hidden layers. However, their approach only selects one most plausible instance per entity pair, inevitably missing out a lot of valuable training instances. Recently, Lin et al. (2016) propose an attention mechanism to select plausible instances from a set of noisy instances. However, we believe that soft attention weight assignment might not be the optimal solution, since the false positives should be completely removed and placed in the negative set. Ji et al. (2017) combine the external knowledge to rich the representation of entity pair, in which way to improve the accuracy of attention weights. 
Even though these above-mentioned methods can select high-quality instances, they ignore the false positive case: all the sentences of one entity pair belongs to the false positives. In this work, we take a radical approach to solve this problem—We will make use of the distantly labeled resources as much as possible, while learning a independent false-positive indicator to remove false positives, and place them in the right place. After our ACL submission, we notice that a contemporaneous study Feng et al. (2018) also adopts reinforcement learning to learn an instance selector, but their reward is calculated from the prediction probabilities. In contrast, while in our method, the reward is intuitively reflected by the performance change of the relation classifier. Our approach is also complement to most of the approaches above, and can be directly applied on top of any existing relation extraction classifiers. 3 Reinforcement Learning for Distant Supervision We introduce a performance-driven, policy-based reinforcement learning method to heuristically recognize false positive samples. Comparing to 2139 a prior study that has underutilized the distantlysupervised samples (Lin et al., 2016), we consider an RL agent for robust distant supervision relation extraction. We first describe the definitions of our RL method, including the policy-based agent, external environment, and pre-training strategy. Next, we describe the retraining strategy for our RL agent. The goal of our agent is to determine whether to retain or remove a distantlysupervised sentence, based on the performance change of relation classifier. Finally, we describe the noisy-suppression method, where we teach our policy-based agent to make a redistribution for a cleaner distant supervision training dataset. Distant supervision relation extraction is to predict the relation type of entity pair under the automatically-generated training set. However, the issue is that these distantly-supervised sentences that mention this entity pair may not express the desired relation type. Therefore, what our RL agent should do is to determine whether the distantly-supervised sentence is a true positive instance for this relation type. For reinforcement learning, external environment and RL agent are two necessary components, and a robust agent is trained from the dynamic interaction between these two parts (Arulkumaran et al., 2017). First, the prerequisite of reinforcement learning is that the external environment should be modeled as a Markov decision process (MDP). However, the traditional setting of relation extraction cannot satisfy this condition: the input sentences are independent of each other. In other words, we cannot merely use the information of the sentence being processed as the state. Thus, we add the information from the early states into the representation of the current state, in which way to model our task as a MDP problem (Fang et al., 2017). The other component, RL agent, is parameterized with a policy network πθ(s, a) = p(a|s; θ). The probability distribution of actions A = {aremove, aremain} is calculated by policy network based on state vectors. What needs to be noted is that, Deep Q Network (DQN) (Mnih et al., 2013) is also a widelyused RL method; however, it is not suitable for our case, even if our action space is small. First, we cannot compute the immediate reward for every operation; In contrast, the accurate reward can only be obtained after finishing processing the whole training dataset. 
Second, the stochastic policy of the policy network is capable of preventing the agent from getting stuck in an intermediate state. The following subsections introduce the fundamental components of the proposed RL method in detail.

States. In order to satisfy the MDP condition, the state s includes information from the current sentence and from the sentences that have been removed in earlier states. The semantic and syntactic information of a sentence is represented by a continuous real-valued vector. Following state-of-the-art supervised relation extraction approaches (Zeng et al., 2014; Nguyen and Grishman, 2015), we use both word embeddings and position embeddings to convert each sentence into a vector. Given this sentence vector, the current state is the concatenation of the current sentence vector and the average vector of the sentences removed in earlier states. We give a relatively larger weight to the vector of the current sentence in order to magnify its dominating influence on the action decision.

Actions. At each step, the agent must determine whether the current instance is a false positive for the target relation type. Each relation type has its own agent.¹ Each agent has two actions: remove the current instance from the training set or retain it. Starting from the initial distantly supervised dataset, which is blended with incorrectly labeled instances, we expect the agent to use the policy network to filter out the noisy instances; on this cleaned dataset, distant supervision should then achieve better performance.

¹ We also tried building a single agent for all relation types, either as a binary classifier (TP/FP) or as a multi-class classifier (rela1/rela2/.../FP), but its performance was limited; our one-agent-per-relation strategy obtained better performance than the single-agent strategy.

[Figure 2: The proposed policy-based reinforcement learning framework. The agent tries to remove the wrongly labeled sentences from the distantly supervised positive dataset $P^{ori}$. In order to calculate the reward, $P^{ori}$ is split into a training part $P^{ori}_t$ and a validation part $P^{ori}_v$; their corresponding negative parts are $N^{ori}_t$ and $N^{ori}_v$. In each epoch i, the agent performs a series of actions to recognize the false positive samples in $P^{ori}_t$ and treats them as negative samples. Then a new relation classifier is trained on the new dataset $\{P^i_t, N^i_t\}$. With this relation classifier, the F1 score is calculated on the new validation set $\{P^i_v, N^i_v\}$, where $P^i_v$ is also filtered by the current agent. The current reward is then measured as the difference in F1 between adjacent epochs.]

Rewards. As previously mentioned, the intuition of our model is that the relation classifier will achieve better performance once the incorrectly labeled instances are filtered out. Therefore, we use the change in performance as the result-driven reward for a series of actions decided by the agent. Instead of accuracy, we adopt the F1 score as the evaluation criterion, since accuracy may not be an indicative metric in a multi-class classification setting where the data distribution can be imbalanced. Thus, the reward is formulated as the difference between adjacent epochs:
$R_i = \alpha(F_1^{i} - F_1^{i-1})$   (1)

As this equation shows, in epoch i the agent receives a positive reward only if F1 improves; otherwise, it receives a negative reward. Under this setting, the value of the reward is proportional to the difference in F1, and $\alpha$ converts this difference into a rational numeric range. Naturally, the value of the reward lies in a continuous space, which is more reasonable than a binary reward (−1 and 1) because it can reflect the number of wrongly labeled instances the agent has removed. In order to avoid the randomness of F1, we use the average F1 of the last five epochs to calculate the reward.

Policy Network. For each input sentence, the policy network determines whether it expresses the target relation type and takes the removal action if it is irrelevant to that type. It is therefore analogous to a binary relation classifier. CNNs are commonly used to build relation classification systems (Santos et al., 2015; Xu et al., 2015; Shen and Huang, 2016), so we adopt a simple CNN with window size $c_w$ and kernel size $c_k$ to model the policy network $\pi(s;\theta)$. We do not choose the CNN variants that are well designed for distant supervision (Zeng et al., 2015; Lin et al., 2016), because they are bag-level models (dealing with a bag of sentences simultaneously) that address a multi-class classification problem, whereas we only need a model for binary sentence-level classification; naturally, the simpler network is adopted.

3.1 Training Policy-based Agent

Unlike the goal of distant supervision relation extraction, our agent determines whether an annotated sentence expresses the target relation type rather than predicting the relationship of an entity pair, so sentences are treated independently even when they belong to the same entity pair. In the distant supervision training dataset, one relation type contains several thousand or tens of thousands of sentences; moreover, the reward R can only be calculated after the whole positive set of this relation type has been processed. If we randomly initialized the parameters of the policy network and trained it by trial and error, it would waste a lot of time and be prone to poor convergence. To overcome this problem, we adopt a supervised learning procedure to pre-train the policy network, which provides a general learning direction for our policy-based agent.

3.1.1 Pre-training Strategy

The pre-training strategy, inspired by AlphaGo (Silver et al., 2016), is a common way in RL-related work to accelerate the training of RL agents. Normally, a small part of an annotated dataset is used to train the policy network before reinforcement learning; for example, AlphaGo performs supervised learning on collected expert moves for its Go agent. In the distant supervision relation extraction task, however, there is no supervised information that can be used unless linguistic experts manually annotate part of the entity pairs, which is expensive and contrary to the original intention of distant supervision. Under this circumstance, we propose a compromise. With a well-aligned corpus, the true positive samples should have an evident advantage in quantity over the false positive samples in the distantly supervised dataset.
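Before specifying the pre-training data, the policy network $\pi(s;\theta)$ described above can be illustrated with a minimal sketch. This is a reading of the paper's description rather than the authors' code: the embedding dimensions, the padding, the exact weighting of the current sentence, and the assumption that the removed-sentence average lives in the pooled CNN space are all hypothetical choices made here for concreteness (the window and kernel sizes follow Table 1).

```python
import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    """Minimal CNN policy network pi(a|s; theta) for one relation type (a sketch)."""

    def __init__(self, vocab_size, word_dim=50, pos_dim=5, max_dist=30,
                 window=3, kernels=100, current_weight=2.0):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # two position-embedding tables, one per entity; distances are clipped to [-30, 30]
        self.pos_emb1 = nn.Embedding(2 * max_dist + 1, pos_dim)
        self.pos_emb2 = nn.Embedding(2 * max_dist + 1, pos_dim)
        in_dim = word_dim + 2 * pos_dim
        self.conv = nn.Conv1d(in_dim, kernels, kernel_size=window, padding=window // 2)
        self.current_weight = current_weight  # larger weight on the current sentence
        # state = [weighted current-sentence vector ; average vector of removed sentences]
        self.action_scores = nn.Linear(2 * kernels, 2)  # actions: 0 = remove, 1 = retain

    def encode(self, words, pos1, pos2):
        x = torch.cat([self.word_emb(words), self.pos_emb1(pos1), self.pos_emb2(pos2)], dim=-1)
        x = torch.relu(self.conv(x.transpose(1, 2)))  # (batch, kernels, seq_len)
        return x.max(dim=2).values                    # max-over-time pooling

    def forward(self, words, pos1, pos2, removed_avg):
        sent = self.current_weight * self.encode(words, pos1, pos2)
        state = torch.cat([sent, removed_avg], dim=-1)
        return torch.softmax(self.action_scores(state), dim=-1)  # p(remove), p(retain)

# toy usage: a batch of 4 sentences of length 40, with an all-zero removed-sentence average
net = PolicyNetwork(vocab_size=20000)
words = torch.randint(0, 20000, (4, 40))
pos = torch.randint(0, 61, (4, 40))
print(net(words, pos, pos, torch.zeros(4, 100)).shape)  # torch.Size([4, 2])
```

In this sketch the all-zero `removed_avg` corresponds to the initialization of the average removed-sentence vector used in the pre-training phase and in the first state of retraining.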
Concretely, for a specific relation type, we directly treat the distantly supervised positive set as the positive set and randomly extract part of the distantly supervised negative set as the negative set. To better incorporate prior information during this pre-training procedure, the number of negative samples is 10 times the number of positive samples: when learning with massive negative samples, the agent is more likely to develop in a better direction. A cross-entropy cost function is used to train this binary classifier, where the negative label corresponds to the removal action and the positive label corresponds to the retaining action:

$J(\theta) = \sum_i y_i \log[\pi(a = y_i \mid s_i; \theta)] + (1 - y_i)\log[1 - \pi(a = y_i \mid s_i; \theta)]$   (2)

Due to the noisy nature of the distantly labeled instances, if we let this pre-training process overfit the noisy dataset, the predicted probabilities of most samples tend to be close to 0 or 1, which is difficult to correct later and unnecessarily increases the training cost of reinforcement learning. Therefore, we stop training when the accuracy reaches 85%–90%. Theoretically, our approach can be explained as increasing the entropy of the policy-gradient agent and preventing the entropy of the policy from becoming too low, since a lack of exploration would otherwise be a concern.

3.1.2 Retraining Agent with Rewards

As shown in Figure 2, we introduce a policy-based RL method to discover incorrectly labeled instances without any supervised information. Our agent deals with the noisy samples from the distantly supervised positive dataset, which we call the DS positive dataset. We split it into a training positive set $P^{ori}_t$ and a validation positive set $P^{ori}_v$; naturally, both sets are noisy. Correspondingly, the training negative set $N^{ori}_t$ and the validation negative set $N^{ori}_v$ are constructed by random selection from the DS negative dataset. In every epoch, the agent removes a noisy sample set $\Psi_i$ from $P^{ori}_t$ according to the stochastic policy $\pi(a|s)$, and we obtain a new positive set $P_t = P^{ori}_t - \Psi_i$. Because $\Psi_i$ is recognized as wrongly labeled samples, we redistribute it into the negative set, $N_t = N^{ori}_t + \Psi_i$. Under this setting, the size of the training set is constant across epochs. We then use the cleaned data $\{P_t, N_t\}$ to train a relation classifier. The desirable outcome is that the RL agent increases the performance of the relation classifier by relocating incorrectly labeled false positive instances. Therefore, we use the validation set $\{P^{ori}_v, N^{ori}_v\}$ to measure the performance of the current agent: this validation set is first filtered and redistributed by the current agent into $\{P_v, N_v\}$, the F1 score of the current relation classifier is calculated on it, and the difference in F1 between the current and previous epoch is used to compute the reward. Next, we introduce several strategies for training a more robust RL agent.

Removing a fixed number of sentences in each epoch. In every epoch, we let the RL agent remove a fixed number of sentences or fewer (when the number of removed sentences in one epoch does not reach this fixed number), which prevents the agent from trying to capture more false positives simply by removing more instances. Under this fixed-number restriction, if the agent decides to remove the current state, the chance of removing other states decreases; therefore, in order to obtain a better reward, the agent should try to remove an instance set that contains more truly negative instances.
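The fixed-budget removal just described can be sketched as follows. This is a simplified illustration, not the authors' implementation: actions are sampled from the policy's removal probabilities, and only the most confident removals are kept; the update of the running average of removed sentences, which the paper feeds back into the state, is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_removals(removal_probs, budget, rng=rng):
    """Pick at most `budget` sentences to remove for one relation type.

    removal_probs[j] = pi(a=remove | s_j; theta) for sentence j in the noisy
    positive set. An action is sampled per sentence; among the sentences the
    agent chose to remove, only the `budget` most confident removals are kept.
    Returns the indices of the sentences moved to the negative set (Psi_i).
    """
    removed = [(p, j) for j, p in enumerate(removal_probs) if rng.random() < p]
    removed.sort(reverse=True)              # most confident removals first
    return [j for _, j in removed[:budget]]

# toy usage: remove at most 3 of 6 candidate sentences
probs = np.array([0.91, 0.15, 0.78, 0.40, 0.88, 0.05])
print(select_removals(probs, budget=3))
```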
Loss function. The quality of the RL agent is reflected by the quality of the removed part. After the pre-training process, the agent only possesses the ability to distinguish the obvious false positive instances; its discrimination of the harder, indistinguishable wrongly labeled instances is still ambiguous, and it is precisely this indistinguishable part that reflects the quality of the agent. Therefore, setting aside the easily distinguished instances, the difference between the removed parts in different epochs is the determinant of the change in F1 scores. We thus define two sets:

$\Omega_{i-1} = \Psi_{i-1} - (\Psi_i \cap \Psi_{i-1})$   (3)
$\Omega_i = \Psi_i - (\Psi_i \cap \Psi_{i-1})$   (4)

where $\Psi_i$ is the part removed in epoch i. $\Omega_{i-1}$ and $\Omega_i$ are shown in different colors in Figure 2. If the F1 score increases in epoch i, the actions of epoch i are more reasonable than those of epoch i−1; in other words, $\Omega_i$ is more negative than $\Omega_{i-1}$. Thus, we assign the positive reward to $\Omega_i$ and the negative reward to $\Omega_{i-1}$, and vice versa. In summary, the ultimate loss function is formulated as follows:

$J(\theta) = \sum_{\Omega_i} \log \pi(a|s;\theta)\,R + \sum_{\Omega_{i-1}} \log \pi(a|s;\theta)\,(-R)$   (5)

Algorithm 1 Retraining agent with rewards for relation k. For a clearer expression, k is omitted in the following algorithm.
Require: positive sets $\{P^{ori}_t, P^{ori}_v\}$, negative sets $\{N^{ori}_t, N^{ori}_v\}$, the fixed removal numbers $\gamma_t$, $\gamma_v$
1:  Load parameters $\theta$ from the pre-trained policy network
2:  Initialize $s^*$ as the all-zero vector with the same dimension as $s_j$
3:  for epoch i = 1 → N do
4:    for $s_j \in P^{ori}_t$ do
5:      $\tilde{s}_j$ = concatenation($s_j$, $s^*$)
6:      Randomly sample $a_j \sim \pi(a|\tilde{s}_j;\theta)$; compute $p_j = \pi(a = 0|\tilde{s}_j;\theta)$
7:      if $a_j == 0$ then
8:        Save the tuple $t_j = (\tilde{s}_j, p_j)$ in T and recompute the average vector $s^*$ of the removed sentences
9:      end if
10:   end for
11:   Rank T by $p_j$ from high to low to obtain $T_{rank}$
12:   for $t_i$ in $T_{rank}[:\gamma_t]$ do
13:     Add $t_i[0]$ to $\Psi_i$
14:   end for
15:   $P^i_t = P^{ori}_t - \Psi_i$, $N^i_t = N^{ori}_t + \Psi_i$, and generate the new validation set $\{P^i_v, N^i_v\}$ with the current agent
16:   Train the relation classifier on $\{P^i_t, N^i_t\}$
17:   Calculate $F_1^i$ on the new validation set $\{P^i_v, N^i_v\}$, and save $F_1^i$, $\Psi_i$
18:   $R = \alpha(F_1^i - F_1^{i-1})$
19:   $\Omega_{i-1} = \Psi_{i-1} - \Psi_i \cap \Psi_{i-1}$;  $\Omega_i = \Psi_i - \Psi_i \cap \Psi_{i-1}$
20:   Update $\theta$: $g \propto \nabla_\theta \sum_{\Omega_i} \log \pi(a|s;\theta)R + \nabla_\theta \sum_{\Omega_{i-1}} \log \pi(a|s;\theta)(-R)$
21: end for

3.2 Redistributing Training Dataset with Policy-based Agents

Through the above reinforcement learning procedure, we obtain, for each relation type, an agent that acts as a false-positive indicator. These agents are capable of recognizing incorrectly labeled instances of their corresponding relation types, so we use them as classifiers to recognize false positive samples in the noisy distantly supervised training dataset. For one entity pair, if all the sentences aligned from the corpus are classified as false positive, the entity pair is redistributed into the negative set.

4 Experiments

We adopt a policy-based RL method to generate a series of relation indicators and use them to redistribute the training dataset by moving false positive samples into the negative sample set. Our experiments are therefore intended to demonstrate that our RL agents possess this capability.
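Before turning to the experiments, the retraining loop of Algorithm 1 and the update of Equation 5 can be put together in a minimal end-to-end sketch. The relation classifier, its F1 computation, and the policy probabilities are placeholders (assumptions), the last-five-epoch F1 averaging is omitted, and for brevity the removed set is chosen by ranking all sentences rather than by first sampling actions.

```python
import numpy as np

def retrain_epoch(policy_probs, train_f1, prev_f1, prev_removed, budget, alpha=100.0):
    """One retraining epoch (a sketch of Algorithm 1).

    policy_probs : dict sentence_id -> pi(a=remove | s; theta) over the noisy positive set
    train_f1     : callable(removed_ids) -> F1 of a classifier trained with those sentences
                   moved to the negative set (placeholder for the CNN relation classifier)
    prev_f1      : F1 from the previous epoch; prev_removed : Psi_{i-1}
    Returns Psi_i, its F1, the reward R, and the sets Omega_i / Omega_{i-1} of Equation 5.
    """
    ranked = sorted(policy_probs, key=policy_probs.get, reverse=True)
    removed = set(ranked[:budget])              # Psi_i, capped at the fixed budget gamma_t
    f1 = train_f1(removed)
    reward = alpha * (f1 - prev_f1)             # Equation 1
    omega_new = removed - prev_removed          # Omega_i:     log pi terms weighted by +R
    omega_old = prev_removed - removed          # Omega_{i-1}: log pi terms weighted by -R
    return removed, f1, reward, omega_new, omega_old

# toy usage: pretend sentences 0 and 4 are the wrongly labeled ones that hurt the classifier
probs = {0: 0.9, 1: 0.2, 2: 0.1, 3: 0.3, 4: 0.8}
fake_f1 = lambda removed: 0.5 + 0.1 * len(removed & {0, 4}) - 0.05 * len(removed - {0, 4})
psi, f1, R, omega_i, omega_prev = retrain_epoch(probs, fake_f1, prev_f1=0.5,
                                                prev_removed={0, 3}, budget=2)
print(psi, round(f1, 2), R, omega_i, omega_prev)   # {0, 4} 0.7 20.0 {4} {3}
# Equation 5 then scales grad log pi(a|s) by +R on omega_i and by -R on omega_prev.
```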
4.1 Dataset and Evaluation Metrics

We evaluate the proposed method on a commonly used dataset², first presented in Riedel et al. (2010). This dataset is generated by aligning entity pairs from Freebase with the New York Times (NYT) corpus. Entity mentions in the NYT corpus are recognized by the Stanford named entity recognizer (Finkel et al., 2005). The sentences from the years 2005–2006 are used as the training corpus and the sentences from 2007 as the testing corpus. There are 52 actual relations and a special relation NA, which indicates that there is no relation between the head and tail entities. The NA sentences come from entity pairs that appear in the same sentences as the actual relations but do not appear in Freebase. Following previous work, we adopt held-out evaluation, which provides an approximate measure of classification ability without costly human evaluation. As in the generation of the training set, the entity pairs in the test set are also selected from Freebase, and their relations are predicted from the sentences discovered in the NYT corpus.

² http://iesl.cs.umass.edu/riedel/ecml/

4.2 Experimental Settings

4.2.1 Policy-based Agent

The action space of our RL agent includes only two actions, so the agent can be modeled as a binary classifier. We adopt a single-window CNN as this policy network; the detailed hyperparameter settings are presented in Table 1. For word embeddings, we directly use the word embedding file released by Lin et al. (2016)³, which keeps only the words that appear more than 100 times in NYT. We use the same dimension setting for the position embeddings, and the maximum relative distance is clipped to −30 and 30 ("-" and "+" represent the left and right sides of the entities). The learning rate of reinforcement learning is 2e−5. For each relation type, the fixed numbers $\gamma_t$ and $\gamma_v$ are set according to the pre-trained agent. When one relation type has too many distantly supervised positive sentences (for example, /location/location/contains has 75,768 sentences), we sample a subset of 7,500 sentences to train the agent. The average vector of the removed sentences is set to the all-zero vector in the pre-training process and in the first state of the retraining process.

³ https://github.com/thunlp/NRE

Hyperparameter        Value
Window size c_w       3
Kernel size c_k       100
Batch size            64
Regulator α           100
Table 1: Hyperparameter settings.

ID  Relation       Original  Pretrain  RL
1   /peo/per/pob   55.60     53.63     55.74
2   /peo/per/n     78.85     80.80     83.63
3   /peo/per/pl    86.65     89.62     90.76
4   /loc/loc/c     80.78     83.79     85.39
5   /loc/cou/ad    90.9      88.1      89.86
6   /bus/per/c     81.03     82.56     84.22
7   /loc/cou/c     88.10     93.78     95.19
8   /loc/adm/c     86.51     85.56     86.63
9   /loc/nei/n     96.51     97.20     98.23
10  /peo/dec/p     82.2      83.0      84.6
Table 2: Comparison of F1 scores among three cases: the relation classifier trained on the original dataset, on the redistributed dataset generated by the pre-trained agent, and on the redistributed dataset generated by our RL agent. Relation type names are abbreviated: /peo/per/pob stands for /people/person/place_of_birth.

4.2.2 Relation Classifier for Calculating Reward

In order to evaluate a series of actions by the agent, we use a simple CNN model, because a simple network is more sensitive to the quality of the training set. The proportion between $P^{ori}_t$ and $P^{ori}_v$ is 2:1, and both are derived from the training set of the Riedel dataset; the corresponding negative sample sets $N^{ori}_t$ and $N^{ori}_v$ are randomly selected from the Riedel negative dataset, with sizes twice those of their corresponding positive sets.
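As a small illustration of one settings detail above, the clipping of relative distances to [−30, 30] maps each token's signed offset from an entity mention into a position-embedding index. The shift-to-nonnegative convention below is an assumption for illustration; only the clipping range comes from the paper.

```python
def relative_position_index(token_pos, entity_pos, max_dist=30):
    """Map a token's signed distance to an entity mention into an embedding index.

    Distances are clipped to [-max_dist, max_dist] and then shifted to
    [0, 2*max_dist] so they can index an embedding table of size 2*max_dist + 1.
    """
    d = max(-max_dist, min(max_dist, token_pos - entity_pos))
    return d + max_dist

# tokens far to the left of the entity all share index 0; far to the right share 60
print([relative_position_index(t, 10) for t in (0, 9, 10, 11, 50)])  # [20, 29, 30, 31, 60]
```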
4.3 The Effectiveness of Reinforcement Learning

In Table 2, we list the F1 scores before and after adopting the proposed RL method. Even though there are 52 actual relation types in the Riedel dataset, only 10 relation types have more than 1,000 positive instances.⁴ Because of the randomness of deep neural networks on small-scale datasets, we only train policy-based agents for these 10 relation types. First, compared with the Original case, most of the Pretrain agents yield obvious improvements: this not only demonstrates the rationality of our pre-training strategy but also verifies our hypothesis that most of the positive samples in the Riedel dataset are true positives. More significantly, after retraining with the proposed policy-based RL method, the F1 scores improve further, even in the cases where the Pretrain agents perform badly. These results illustrate that the proposed policy-based RL method is capable of making agents develop in a good direction.

⁴ The supervised relation classification task SemEval-2010 Task 8 (Hendrickx et al., 2009) annotates nearly 1,000 instances for each relation type.

[Figure 3: Aggregate PR curves of the CNN-based models.]
[Figure 4: Aggregate PR curves of the PCNN-based models.]

Model       Original  +RL     p-value
CNN+ONE     0.177     0.190   1.24e-4
CNN+ATT     0.219     0.229   7.63e-4
PCNN+ONE    0.206     0.220   8.35e-6
PCNN+ATT    0.253     0.261   4.36e-3
Table 3: Comparison of AUC values between previous studies and our RL method, and the p-values of the t-test.

4.4 Impact of False Positive Samples

Zeng et al. (2015) and Lin et al. (2016) are both robust models designed to address the wrong-labeling problem of distant supervision relation extraction. Zeng et al. (2015) combine at-least-one multi-instance learning with a deep neural network to extract only one active sentence for predicting the relation between an entity pair; Lin et al. (2016) combine all sentences of one entity pair and assign soft attention weights to them, generating a composite relation representation for that entity pair. However, the false positive phenomenon also includes the case in which all the sentences of one entity pair are wrong, because the corpus is not completely aligned with the knowledge base. Our manual inspection shows that this phenomenon is also common between the Riedel dataset and Freebase. Obviously, neither of the above two methods can handle this case; the proposed RL method is designed to tackle it. We use our RL agents to redistribute the Riedel dataset by moving false positive samples into the negative sample set, then apply Zeng et al. (2015) and Lin et al. (2016) to predict relations on this cleaned dataset, and compare the performance with that on the original Riedel dataset. As shown in Figure 3 and Figure 4, with the assistance of our RL agent, the same model achieves obvious improvements given the more reasonable training dataset. To give a more intuitive comparison, we calculate the AUC value of each PR curve, which reflects the area under these curves; these results also indicate the effectiveness of our policy-based RL method. Moreover, as can be seen from the t-test evaluation, all the p-values are less than 5e-02, so the improvements are significant.
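The held-out comparison above reduces to standard precision-recall machinery. The following is a minimal sketch of how an AUC value such as those in Table 3 can be computed; the classifier scores and labels are random placeholders, not the paper's predictions.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

def pr_auc(scores, labels):
    """Area under the precision-recall curve for held-out evaluation.

    `scores` are the classifier's confidence values for each (entity pair,
    relation) prediction and `labels` are the held-out Freebase labels.
    """
    precision, recall, _ = precision_recall_curve(labels, scores)
    return auc(recall, precision)

# toy example with 1000 held-out predictions and slightly informative scores
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = 0.3 * labels + rng.random(1000)
print(round(pr_auc(scores, labels), 3))
```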
4.5 Case Study Figure 5 indicates that, for different relations, the scale of the detected false positive samples is not 2145 Relation /people/person/place of birth FP 1. GHETTO SUPERSTAR ( THE MAN THAT I AM) – Ranging from Pittsburgh to Broadway, Billy Porter performs his musical memoir. FP 1. “They are trying to create a united front at home in the face of the pressures Syria is facing,“ said Sami Moubayed, a political analyst and writer here. 2. “Iran injected Syria with a lot of confidence: stand up, show defiance,“ said Sami Moubayed, a political analyst and writer in Damascus. Relation /people/deceased person/place of death FP 1. Some New York city mayors – William O’Dwyer, Vincent R. Impellitteri and Abraham Beame – were born abroad. 2. Plenty of local officials have, too, including two New York city mayors, James J. Walker, in 1932, and William O’Dwyer, in 1950. Table 4: Some examples of the false positive samples detected by our policy-based agent. Each row denotes the annotated sentences of one entity pair. proportional to the original scale, which is in accordance with the actual accident situation. At the same time, we analyze the correlation between the false positive phenomenon and the number of sentences of entity pairs : With this the number ranging from 1 to 5, the corresponding percentages are [55.9%, 32.0%, 3.7%, 4.4%, 0.7%]. This distribution is consistent with our assumption. Because Freebase is, to some extent, not completely aligned with the NYT corpus, entity pairs with fewer sentences are more likely to be false positive, which is the major factor hindering the performance of the previous systems. In Table 4, we present some false positive examples selected by our agents. Taking entity pair (Sami Moubayed, Syria) as an example, it is obvious that there is not any valuable information reflecting relation /people/person/place of birth. Both of these sentences talks about the situation analysis of Syria from the political analyst Sami Moubayed. We also found that, for some entity pairs, even though there are multiple sentences, all of them are identical. This phenomenon also increases the probability of the appearance of false positive samples. 5 Conclusion In this work, we propose a deep reinforcement learning framework for robust distant supervision. The intuition is that, in contrast to prior works that utilize only one instance per entity pair and use soft attention weights to select plausible distantly supervised examples, we describe a policy-based framework to systematically learn to relocate the false positive samples, and better utilize the unlabeled data. More specifically, our goal is to 0 1000 2000 3000 4000 5000 6000 7000 8000 9000 1 2 3 4 5 6 7 8 9 10 ENTITY PAIR AMOUNT RELATION ID Removed Total Figure 5: This figure presents the scale of the removed part for each relation type, where the horizontal axis corresponds to the IDs in Table 2. teach the reinforcement agent to optimize the selection/redistribution strategy that maximizes the reward of boosting the performance of relation classification. An important aspect of our work is that our framework does not depend on a specific form of the relation classifier, meaning that it is a plug-and-play technique that could be potentially applied to any relation extraction pipeline. In experiments, we show that our framework boosts the performance of distant supervision relation extraction of various strong deep learning baselines on the widely used New York Times - Freebase dataset. 
Acknowledge This work was supported by National Natural Science Foundation of China (61702047), Beijing Natural Science Foundation (4174098), the Fundamental Research Funds for the Central Universities (2017RC02) and National Natural Science Foundation of China (61703234) 2146 References Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, and Anil Anthony Bharath. 2017. A brief survey of deep reinforcement learning. arXiv preprint arXiv:1708.05866. Razvan Bunescu and Raymond J Mooney. 2005. Subsequence kernels for relation extraction. In NIPS, pages 171–178. Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to active learn: A deep reinforcement learning approach. arXiv preprint arXiv:1708.02383. Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xiaoyan Zhu. 2018. Reinforcement learning for relation classification from noisy data. Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 363–370. Association for Computational Linguistics. Zhou GuoDong, Su Jian, Zhang Jie, and Zhang Min. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 427–434. Association for Computational Linguistics. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, pages 94–99. Association for Computational Linguistics. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language TechnologiesVolume 1, pages 541–550. Association for Computational Linguistics. Guoliang Ji, Kang Liu, Shizhu He, Jun Zhao, et al. 2017. Distant supervision for relation extraction with sentence-level attention and entity descriptions. In AAAI, pages 3060–3066. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In ACL (1). Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003–1011. Association for Computational Linguistics. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602. Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In ACL (2), pages 365–371. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Machine Learning and Knowledge Discovery in Databases, pages 148–163. Springer. Benjamin Roth, Tassilo Barth, Michael Wiegand, and Dietrich Klakow. 2013. 
A survey of noise reduction methods for distant supervision. In Proceedings of the 2013 workshop on Automated knowledge base construction, pages 73–78. ACM. Cicero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. arXiv preprint arXiv:1504.06580. Yatian Shen and Xuanjing Huang. 2016. Attentionbased convolutional neural network for semantic relation extraction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 455– 465. Association for Computational Linguistics. Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015. Semantic relation classification via convolutional neural networks with simple negative sampling. arXiv preprint arXiv:1506.07650. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Journal of machine learning research, 3(Feb):1083–1106. 2147 Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), Lisbon, Portugal, pages 17–21. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao, et al. 2014. Relation classification via convolutional deep neural network. In COLING, pages 2335–2344.
2018
199
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 12–22 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 12 A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors Mikhail Khodak∗, Nikunj Saunshi∗ Princeton University {mkhodak,nsaunshi}@princeton.edu Yingyu Liang University of Wisconsin-Madison [email protected] Tengyu Ma Facebook AI Research [email protected] Brandon Stewart, Sanjeev Arora Princeton University {bms4,arora}@princeton.edu Abstract Motivations like domain adaptation, transfer learning, and feature learning have fueled interest in inducing embeddings for rare or unseen words, n-grams, synsets, and other textual features. This paper introduces `a la carte embedding, a simple and general alternative to the usual word2vec-based approaches for building such representations that is based upon recent theoretical results for GloVe-like embeddings. Our method relies mainly on a linear transformation that is efficiently learnable using pretrained word vectors and linear regression. This transform is applicable “on the fly” in the future when a new text feature or rare word is encountered, even if only a single usage example is available. We introduce a new dataset showing how the `a la carte method requires fewer examples of words in context to learn high-quality embeddings and we obtain state-of-the-art results on a nonce task and some unsupervised document classification tasks. 1 Introduction Distributional word embeddings, which represent the “meaning” of a word via a low-dimensional vector, have been widely applied by many natural language processing (NLP) pipelines and algorithms (Goldberg, 2016). Following the success of recent neural (Mikolov et al., 2013) and matrixfactorization (Pennington et al., 2014) methods, researchers have sought to extend the approach to other text features, from subword elements to n-grams to sentences (Bojanowski et al., 2016; Poliak et al., 2017; Kiros et al., 2015). However, the performance of both word embeddings and their extensions is known to degrade in small corpus settings (Adams et al., 2017) or when embedding sparse, low-frequency features (Lazaridou et al., 2017). Attempts to address these issues often involve task-specific approaches (Rothe and Sch¨utze, 2015; Iacobacci et al., 2015; Pagliardini et al., 2018) or extensively tuning existing architectures such as skip-gram (Poliak et al., 2017; Herbelot and Baroni, 2017). For computational efficiency it is desirable that methods be able to induce embeddings for only those features (e.g. bigrams or synsets) needed by the downstream task, rather than having to pay a computational prix fixe to learn embeddings for all features occurring frequently-enough in a corpus. We propose an alternative, novel solution via `a la carte embedding, a method which bootstraps existing high-quality word vectors to learn a feature representation in the same semantic space via a linear transformation of the average word embeddings in the feature’s available contexts. This can be seen as a shallow extension of the distributional hypothesis (Harris, 1954), “a feature is characterized by the words in its context,” rather than the computationally more-expensive “a feature is characterized by the features in its context” that has been used implicitly by past work (Rothe and Sch¨utze, 2015; Logeswaran and Lee, 2018). 
Despite its elementary formulation, we demonstrate that the `a la carte method can learn faithful word embeddings from single examples and feature vectors improving performance on important downstream tasks. Furthermore, the approach is resource-efficient, needing only pretrained embed13 dings of common words and the text corpus used to train them, and easy to implement and compute via vector addition and linear regression. After motivating and specifying the method, we illustrate these benefits through several applications: • Embeddings of rare words: we introduce a dataset1 for few-shot learning of word vectors and achieve state-of-the-art results on the task of representing unseen words using only the definition (Herbelot and Baroni, 2017). • Synset embeddings: we show how the method can be applied to learn more finegrained lexico-semantic representations and give evidence of its usefulness for standard word-sense disambiguation tasks (Navigli et al., 2013; Moro and Navigli, 2015). • n-gram embeddings: we build seven million n-gram embeddings from large text corpora and use them to construct document embeddings that are competitive with unsupervised deep learning approaches when evaluated on linear text classification. Our experimental results2 clearly demonstrate the advantages of `a la carte embedding. For word embeddings, the approach is an easy way to get a good vector for a new word from its definition or a few examples in context. For feature embeddings, the method can embed anything that does not need labeling (such as a bigram) or occurs in an annotated corpus (such as a word-sense). Our document embeddings, constructed directly using `a la carte n-gram vectors, compete well with recent deep neural representations; this provides further evidence that simple methods can outperform modern deep learning on many NLP benchmarks (Arora et al., 2017; Mu and Viswanath, 2018; Arora et al., 2018a,b; Pagliardini et al., 2018). 2 Related Work Many methods have been proposed for extending word embeddings to semantic feature vectors, with the aim of using them as interpretable and structure-aware building blocks of NLP pipelines (Kiros et al., 2015; Yamada et al., 2016). Many exploit the structure and resources available for specific feature types, such as methods for sense, synsets, and lexemes (Rothe and Sch¨utze, 2015; 1Dataset: nlp.cs.princeton.edu/CRW 2Code: www.github.com/NLPrinceton/ALaCarte Iacobacci et al., 2015) that make heavy use of the graph structure of the Princeton WordNet (PWN) and similar resources (Fellbaum, 1998). By contrast, our work is more general, with incorporation of structure left as an open problem. Embeddings of n-grams are of special interest because they do not need annotation or expert knowledge and can often be effective on downstream tasks. Their computation has been studied both explicitly (Yin and Schutze, 2014; Poliak et al., 2017) and as an implicit part of models for document embeddings (Hill et al., 2016; Pagliardini et al., 2018), which we use for comparison. Supervised and multitask learning of text embeddings has also been attempted (Wang et al., 2017; Wu et al., 2017). A main motivation of our work is to learn good embeddings, of both words and features, from only one or a few examples. Efforts in this area can in many cases be split into contextual approaches (Lazaridou et al., 2017; Herbelot and Baroni, 2017) and morphological methods (Luong et al., 2013; Bojanowski et al., 2016; Pado et al., 2016). 
The current paper provides a more effective formulation for context-based embeddings, which are often simpler to implement, can improve with more context information, and do not require morphological annotation. Subword approaches, on the other hand, are often more compositional and flexible, and we leave the extension of our method to handle subword information to future work. Our work is also related to some methods in domain adaptation and multi-lingual correlation, such as that of Bollegala et al. (2014). Mathematically, this work builds upon the linear algebraic understanding of modern word embeddings developed by Arora et al. (2018b) via an extension to the latent-variable embedding model of Arora et al. (2016). Although there have been several other applications of this model for natural language representation (Arora et al., 2017; Mu and Viswanath, 2018), ours is the first to provide a general approach for learning semantic features using corpus context. 3 Method Specification We begin by assuming a large text corpus CV consisting of contexts c of words w in a vocabulary V, with the contexts themselves being sequences of words in V (e.g. a fixed-size window around the word or feature). We further assume that we have trained word embeddings vw ∈Rd on this collo14 cation information using a standard algorithm (e.g. word2vec / GloVe). Our goal is to construct a good embedding vf ∈Rd of a text feature f given a set Cf of contexts it occurs in. Both f and its contexts are assumed to arise via the same process that generates the large corpus CV. In many settings below, the number |Cf| of contexts available for a feature f of interest is much smaller than the number |Cw| of contexts that the typical word w ∈V occurs in. This could be because the feature is rare (e.g. unseen words, n-grams) or due to limited human annotation (e.g. word senses, named entities). 3.1 A Linear Approach A naive first approach to construct feature embeddings using context is additive, i.e. taking the average over all contexts of a feature f of the average word vector in each context: vadditive f = 1 |Cf| X c∈Cf 1 |c| X w∈c vw (1) This formulation reflects the training of commonly used embeddings, which employs additive composition to represent the context (Mikolov et al., 2013; Pennington et al., 2014). It has proved successful in the bag-of-embeddings approach to sentence representation (Wieting et al., 2016; Arora et al., 2017), which can compete with LSTM representations, and has also been given theoretical justification as the maximum a posteriori (MAP) context vector under a generative model related to popular embedding objectives (Arora et al., 2016). Lazaridou et al. (2017) use this approach to learn embeddings of unknown word amalgamations, or chimeras, given a few context examples. The additive approach has some limitations because the set of all word vectors is seen to share a few common directions. Simple addition amplifies the component in these directions, at the expense of less common directions that presumably carry more “signal.” Stop-word removal can help to ameliorate this (Lazaridou et al., 2017; Herbelot and Baroni, 2017), but does not deal with the fact that content-words also have significant components in the same direction as these deleted words. Another mathematical framework to address this lacuna is to remove the top one or top few principal components, either from the word embeddings themselves (Mu and Viswanath, 2018) or from their summations (Arora et al., 2017). 
However, this approach is liable to either not remove Change in Embedding Norm under Transform Figure 1: Plot of the ratio of embedding norms after transformation as a function of word count. While All-but-the-Top tends to affect only very frequent words, `a la carte learns to remove components even from less common words. enough noise or cause too much information loss without careful tuning (c.f. Figure 1). We now note that removing the component along the top few principal directions is tantamount to multiplying the additive composition by a fixed (but data-dependent) matrix. Thus a natural extension is to use an arbitrary linear transformation which will be learned from the data, and hence guaranteed to do at least as well as any of the above ideas. Specifically, we find the transform that can best recover existing word vectors vw —which are presumed to be of high quality— from their additive context embeddings vadditive w . This can be posed as the following linear regression problem vw ≈Avadditive w = A 1 |Cw| X c∈Cw X w′∈c vw′ ! (2) where A ∈Rd×d is learned and we assume for simplicity that 1 |c| is constant (e.g. if c has a fixed window size) and is thus subsumed by the transform. After learning the matrix, we can embed any text feature in the same semantic space as the word embeddings via the following expression: vf = Avadditive f = A  1 |Cf| X c∈Cf X w∈c vw   (3) Note that A is fixed for a given corpus and set of pretrained word embeddings and so does not need to be re-computed to embed different features or feature types. 15 Algorithm 1: The basic `a la carte feature embedding induction method. All contexts c consist of sequences of words drawn from the vocabulary V. Data: vocabulary V, corpus CV, vectors vw ∈Rd ∀w ∈V, feature f, corpus Cf of contexts of f Result: feature embedding vf ∈Rd 1 for w ∈V do 2 let Cw ⊂CV be the subcorpus of contexts of w 3 uw ← 1 |Cw| P c∈Cw P w′∈c vw′ // compute each word’s context embedding uw 4 A ←arg min A∈Rd×d P w∈V ∥vw −Auw∥2 2 // compute context-to-feature transform A 5 uf ← 1 |Cf| P c∈Cf P w∈c vw // compute feature’s context embedding uf 6 vf ←Auf // transform feature’s context embedding Theoretical Justification: As shown by Arora et al. (2018b, Theorem 1), the approximation (2) holds exactly in expectation for some matrix A when contexts c ∈C are generated by sampling a context vector vc ∈Rd from a zero-mean Gaussian with fixed covariance and drawing |c| words using P(w|vc) ∝exp⟨vc, vw⟩. The correctness (again in expectation) of (3) under this model is a direct extension. Arora et al. (2018b) use large text corpora to verify their model assumptions, providing theoretical justification for our approach. We observe that the best linear transform A can recover vectors with mean cosine similarity as high as 0.9 or more with the embeddings used to learn it, thus also justifying the method empirically. 3.2 Practical Details The basic `a la carte method, as motivated in Section 3.1 and specified in Algorithm 1, is straightforward and parameter-free (the dimension d is assumed to have been chosen beforehand, along with the other parameters of the original word embeddings). In practice we may wish to modify the regression step in an attempt to learn a better transformation matrix A. However, the standard first approach of using ℓ2-regularized (Ridge) regression instead of simple linear regression gives little benefit, even when we have more parameters than word embeddings (i.e. when d2 > |V|). 
A more useful modification is to weight each point by some non-decreasing function α of each word’s corpus count cw, i.e. to solve A = arg min A∈Rd×d X w∈V α(cw)∥vw −Auw∥2 2 (4) where uw is the additive context embedding. This reflects the fact that more frequent words likely have better pretrained embeddings. In settings where |V| is large we find that a hard threshold (α(c) = 1c≥τ for some τ ≥1) is often useful. When we do not have many embeddings we can still give more importance to words with better embeddings via a function such as α(c) = log c, which we use in Section 5.1. 4 One-Shot and Few-Shot Learning of Word Embeddings While we can use our method to embed any type of text feature, its simplicity and effectiveness is rooted in word-level semantics: the approach assumes pre-existing high quality word embeddings and only considers collocations of features with words rather than with other features. Thus to verify that our approach is reasonable we first check how it performs on word representation tasks, specifically those where word embeddings need to be learned from very few examples. In this section we first investigate how representation quality varies with number of occurrences, as measured by performance on a similarity task that we introduce. We then apply the `a la carte method to two tasks measuring the ability to learn new or synthetic words from context, achieving strong results on the nonce task of Herbelot and Baroni (2017). 4.1 Similarity Correlation vs. Sample Size Performance on pairwise word similarity tasks is a standard way to evaluate word embeddings, with success measured via the Spearman correlation between a human score and the cosine similarity between word vectors. An overview of widely used datasets is given by Faruqui and Dyer (2014). However, none of these datasets can be used directly to measure the effect of word frequency on 16 embedding quality, which would help us understand the data requirements of our approach. We address this issue by introducing the Contextual Rare Words (CRW) dataset, a subset of 562 pairs from the Rare Word (RW) dataset (Luong et al., 2013) supplemented by 255 sentences (contexts) for each rare word sampled from the Westbury Wikipedia Corpus (WWC) (Shaoul and Westbury, 2010). In addition we provide a subset of the WWC from which all sentences containing these rare words have been removed. The task is to use embeddings trained on this subcorpus to induce rare word embeddings from the sampled contexts. More specifically, the CRW dataset is constructed using all pairs from the RW dataset where the rarer word occurs between 512 and 10000 times in WWC; this yields a set of 455 distinct rare words. The lower bound ensures that we have a sufficient number of rare word contexts, while the upper bound ensures that a significant fraction of the sentences from the original WWC remain in the subcorpus we provide. In CRW, the first word in every pair is the more frequent word and occurs in the subcorpus, while the second word occurs in the 255 sampled contexts but not in the subcorpus. We provide word2vec embeddings trained on all words occurring at least 100 times in the WWC subcorpus; these vectors include those assigned to the first (non-rare) words in the evaluation pairs. Evaluation: For every rare word the method under consideration is given eight disjoint subsets containing 1, 2, 4, . . . , 128 example contexts. 
4 One-Shot and Few-Shot Learning of Word Embeddings

While we can use our method to embed any type of text feature, its simplicity and effectiveness is rooted in word-level semantics: the approach assumes pre-existing high quality word embeddings and only considers collocations of features with words rather than with other features. Thus to verify that our approach is reasonable we first check how it performs on word representation tasks, specifically those where word embeddings need to be learned from very few examples. In this section we first investigate how representation quality varies with number of occurrences, as measured by performance on a similarity task that we introduce. We then apply the à la carte method to two tasks measuring the ability to learn new or synthetic words from context, achieving strong results on the nonce task of Herbelot and Baroni (2017).

4.1 Similarity Correlation vs. Sample Size

Performance on pairwise word similarity tasks is a standard way to evaluate word embeddings, with success measured via the Spearman correlation between a human score and the cosine similarity between word vectors. An overview of widely used datasets is given by Faruqui and Dyer (2014). However, none of these datasets can be used directly to measure the effect of word frequency on embedding quality, which would help us understand the data requirements of our approach. We address this issue by introducing the Contextual Rare Words (CRW) dataset, a subset of 562 pairs from the Rare Word (RW) dataset (Luong et al., 2013) supplemented by 255 sentences (contexts) for each rare word sampled from the Westbury Wikipedia Corpus (WWC) (Shaoul and Westbury, 2010). In addition we provide a subset of the WWC from which all sentences containing these rare words have been removed. The task is to use embeddings trained on this subcorpus to induce rare word embeddings from the sampled contexts.

More specifically, the CRW dataset is constructed using all pairs from the RW dataset where the rarer word occurs between 512 and 10000 times in WWC; this yields a set of 455 distinct rare words. The lower bound ensures that we have a sufficient number of rare word contexts, while the upper bound ensures that a significant fraction of the sentences from the original WWC remain in the subcorpus we provide. In CRW, the first word in every pair is the more frequent word and occurs in the subcorpus, while the second word occurs in the 255 sampled contexts but not in the subcorpus. We provide word2vec embeddings trained on all words occurring at least 100 times in the WWC subcorpus; these vectors include those assigned to the first (non-rare) words in the evaluation pairs.

Evaluation: For every rare word the method under consideration is given eight disjoint subsets containing 1, 2, 4, . . . , 128 example contexts. The method induces an embedding of the rare word for each subset, letting us track how the quality of rare word vectors changes with more examples. We report the Spearman ρ (as described above) at each sample size, averaged over 100 trials obtained by shuffling each rare word's 255 contexts.

The results in Figure 2 show that our à la carte method significantly outperforms the additive baseline (1) and its variants, including stop-word removal, SIF-weighting (Arora et al., 2017), and top principal component removal (Mu and Viswanath, 2018). We find that combining SIF-weighting and top component removal also beats these baselines, but still does worse than our method. These experiments consolidate our intuitions from Section 3 that removing common components and frequent words is important and that learning a data-dependent transformation is an effective way to do this.

[Figure 2: Spearman correlation between cosine similarity and human scores for pairs of words in the CRW dataset given an increasing number of contexts per rare word (methods: Additive; Additive, all-but-the-top; Additive, no stop words; SIF weighted; SIF weighted + all-but-the-top; à la carte). Our à la carte method outperforms all previous approaches, even when restricted to only eight example contexts.]

However, if we train word2vec embeddings from scratch on the subcorpus together with the sampled contexts we achieve a Spearman correlation of 0.45; this gap between word2vec and our method shows that there remains room for even better approaches for few-shot learning of word embeddings.
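The CRW evaluation loop described above is short to express in code. The sketch below reuses the embed_feature helper from the earlier sketch, uses scipy's Spearman correlation, and handles a single shuffle of each rare word's contexts; averaging over 100 shuffles as in the text is left to the caller, and crw_curve is an illustrative name.

import numpy as np
from scipy.stats import spearmanr

def crw_curve(pairs, rare_contexts, vectors, vocab_index, A,
              sizes=(1, 2, 4, 8, 16, 32, 64, 128)):
    """pairs: list of (frequent_word, rare_word, human_score).
    rare_contexts: rare_word -> list of (already shuffled) contexts.
    Returns the Spearman rho at each sample size."""
    rhos = []
    for k in sizes:
        sims, gold = [], []
        for freq_w, rare_w, score in pairs:
            ctxs = rare_contexts[rare_w][:k]                          # first k sampled contexts
            v_rare = embed_feature(ctxs, vectors, vocab_index, A)     # a la carte induction
            v_freq = vectors[vocab_index[freq_w]]
            denom = np.linalg.norm(v_rare) * np.linalg.norm(v_freq) + 1e-12
            sims.append(float(v_rare @ v_freq) / denom)               # cosine similarity
            gold.append(score)
        rhos.append(spearmanr(sims, gold).correlation)
    return rhos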
4.2 Learning Embeddings of New Concepts: Nonces and Chimeras

We now evaluate our work directly on the tasks posed by Herbelot and Baroni (2017), who developed simple datasets and methods to "simulate the process by which a competent speaker encounters a new word in known contexts." The general goal will be to construct embeddings of new concepts in the same semantic space as a known embedding vocabulary using contextual information consisting of definitions or example sentences.

Nonces: We first discuss the definitional nonce dataset made by the authors themselves, which has a test-set consisting of 300 single-word concepts and their definitions. The task of learning each concept's embedding is simulated by removing or randomly re-initializing its vector and requiring the system to use the remaining embeddings and the definition to make a new vector that is close to the original. Because the embeddings were constructed using data that includes these concepts, an implicit assumption is made that including or excluding one word does not greatly affect the semantic space; this assumption is necessary in order to have a good target vector for the system to be evaluated against.

                              Nonce (Herbelot and Baroni, 2017)      Chimera (Lazaridou et al., 2017)
Method                        Mean Recip. Rank    Med. Rank          2 Sent.    4 Sent.    6 Sent.
word2vec                      0.00007             111012             0.1459     0.2457     0.2498
additive                      0.00945             3381               0.3627     0.3701     0.3595
additive, no stop words       0.03686             861                0.3376     0.3624     0.4080
nonce2vec                     0.04907             623                0.3320     0.3668     0.3890
à la carte                    0.07058             165.5              0.3634     0.3844     0.3941

Table 1: Comparison with baselines and nonce2vec (Herbelot and Baroni, 2017) on few-shot embedding tasks. Performance on the chimeras task is measured using the Spearman correlation with human ratings. Note that the additive baseline requires removing stop-words in order to improve with more data.

Using 259,376 word2vec embeddings trained on Wikipedia as the base vectors, Herbelot and Baroni (2017) heavily modify the skip-gram algorithm to successfully learn on one definition, creating the nonce2vec system. The original skip-gram algorithm and v_w^{additive} are used as baselines, with performance measured as the mean reciprocal rank and median rank of the concept's original vector among the nearest neighbors of the output.

To compare directly to their approach, we use their word2vec embeddings along with contexts from the Wikipedia corpus to construct context vectors u_w for all words w apart from the 300 nonces. We then learn the à la carte transform A, weighting the data points in the regression (4) using a hard threshold of at least 1000 occurrences in Wikipedia. An embedding for each nonce can then be constructed by multiplying A by the sum over all word embeddings in the nonce's definition. As can be seen in Table 1, this approach significantly improves over both baselines and nonce2vec; the median rank of 165.5 of the original embedding among the nearest neighbors of the nonce vector is very low considering the vocabulary size is more than 250,000, and is also significantly lower than that of all previous methods.

Chimeras: The second dataset Herbelot and Baroni (2017) consider is that of Lazaridou et al. (2017), who construct unseen concepts by combining two related words into a fake nonce word (the "chimera") and provide two, four, or six example sentences for this nonce drawn from sentences containing one of the two component words. The desired nonce embedding is then evaluated via the correlation of its cosine similarity with the embeddings of several other words, with ratings provided by human judges. We use the same approach as in the nonce task, except that the chimera embedding is the result of summing over multiple sentences. From Table 1 we see that, while our method is consistently better than both the additive baseline and nonce2vec, removing stop-words from the additive baseline leads to stronger performance for more sentences. Since the à la carte algorithm explicitly trains the transform to match the true word embedding rather than human similarity measures, it is perhaps not surprising that our approach is much more dominant on the definitional nonce task.
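For the nonce task, the induced vector is simply the transform A applied to the sum of the definition's word vectors, and evaluation reduces to ranking the held-out original vector among the nonce vector's nearest neighbors. A sketch under the same assumptions as before (tokenized input, numpy arrays); stop-word handling and other preprocessing are omitted, and both function names are illustrative.

import numpy as np

def embed_nonce(definition_tokens, vectors, vocab_index, A):
    """Embed a nonce from its definition: multiply A by the sum of the definition's word vectors."""
    acc = np.zeros(vectors.shape[1])
    for w in definition_tokens:
        if w in vocab_index:
            acc += vectors[vocab_index[w]]
    return A @ acc

def rank_of_original(nonce_vec, original_vec, vectors):
    """Rank of the held-out original embedding among the nearest neighbors of the nonce vector."""
    denom = np.linalg.norm(vectors, axis=1) * np.linalg.norm(nonce_vec) + 1e-12
    sims = vectors @ nonce_vec / denom
    target_sim = float(original_vec @ nonce_vec) / (
        np.linalg.norm(original_vec) * np.linalg.norm(nonce_vec) + 1e-12)
    return int((sims > target_sim).sum()) + 1   # rank 1 = nearest neighbor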
5 Building Feature Embeddings using Large Corpora

Having witnessed its success at representing unseen words, we now apply the à la carte method to two types of feature embeddings: synset embeddings and n-gram embeddings. Using these two examples we demonstrate the flexibility and adaptability of our approach when handling different corpora, base word embeddings, and downstream applications.

5.1 Supervised Synset Embeddings for Word-Sense Disambiguation

Embeddings of synsets, or sets of cognitive synonyms, and related entities such as senses and lexemes have been widely studied, often due to the desire to account for polysemy (Rothe and Schütze, 2015; Iacobacci et al., 2015). Such representations can be evaluated in several ways, including via their use for word-sense disambiguation (WSD), the task of determining a word's sense from context. While current state-of-the-art methods often use powerful recurrent models (Raganato et al., 2017), we will instead use a simple similarity-based approach that heavily depends on the synset embedding itself and thus serves as a more useful indicator of representation quality.

A major target for our simple systems is to beat the most-frequent sense (MFS) method, which returns for each word the sense that occurs most frequently in a corpus such as SemCor. This baseline is "notoriously hard-to-beat," routinely besting many systems in SemEval WSD competitions (Navigli et al., 2013).

Synset Embeddings: We use SemCor (Langone et al., 2004), a subset of the Brown Corpus (BC) (Francis and Kucera, 1979) annotated using PWN synsets. However, because the corpus is quite small we use GloVe trained on Wikipedia instead of on BC itself. The transform A is learned using context embeddings u_w computed with windows of size ten around occurrences of w in BC and weighting each word by the log of its count during the regression stage (4). Then we set the context embedding u_s of each synset s to be the average sum of word embeddings representation over all sentences in SemCor containing s. Finally, we apply the à la carte transform to get the synset embedding v_s = A u_s.

Sense Disambiguation: To determine the sense of a word w given its context c, we convert c into a vector using the à la carte transform A on the sum of its word embeddings and return the synset s of w whose embedding v_s is most similar to this vector. We try two different synset embeddings: those induced from SemCor as above and those obtained by embedding a synset using its gloss, or PWN-provided definition, in the same way as a nonce in Section 4.2. We also consider a combined approach in which we fall back on the gloss vector if the synset does not appear in SemCor and thus has no induced embedding.

                           SemEval-2013 Task 12    SemEval-2015 Task 13
Method                     nouns                   adj.    nouns    adv.    verbs    comb.
à la carte (SemCor)        60.0                    72.2    67.7     85.2    60.6     68.1
à la carte (glosses)       51.8                    75.3    62.5     79.0    55.8     64.2
à la carte (combined)      60.5                    74.1    70.3     86.4    59.4     69.6
MFS (SemCor)               58.8                    79.5    60.0     87.6    66.7     66.8
Raganato et al. (2017)     66.9                                                      72.4

Table 2: Application of à la carte synset embeddings to two standard WSD tasks. As all systems always return exactly one answer, performance is measured in terms of accuracy. Results due to Raganato et al. (2017), who use a bi-LSTM for this task, are given as the recent state-of-the-art result.

As shown in Table 2, synset embeddings induced from SemCor alone beat MFS overall, largely due to good noun results. The method improves further when combined with the gloss approach. While we do not match the state-of-the-art, our success in besting a difficult baseline using very little fine-tuning and exploiting none of the underlying graph structure suggests that the à la carte method can learn useful synset embeddings, even from relatively small data.
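The disambiguation rule of this section is a nearest-synset lookup in the shared semantic space. The sketch below assumes the synset vectors have already been induced (from SemCor, glosses, or the combined fallback) and are stored in a dictionary; candidate_synsets would come from a sense inventory such as PWN, which is not modeled here, and the function name is illustrative.

import numpy as np

def disambiguate(context_tokens, candidate_synsets, synset_vecs, vectors, vocab_index, A):
    """Return the candidate synset whose embedding is closest (cosine) to the
    a la carte embedding of the context sentence."""
    acc = np.zeros(vectors.shape[1])
    for w in context_tokens:
        if w in vocab_index:
            acc += vectors[vocab_index[w]]
    ctx = A @ acc
    ctx = ctx / (np.linalg.norm(ctx) + 1e-12)

    best, best_sim = None, -np.inf
    for syn in candidate_synsets:
        v = synset_vecs[syn]
        sim = float(ctx @ v) / (np.linalg.norm(v) + 1e-12)
        if sim > best_sim:
            best, best_sim = syn, sim
    return best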
5.2 N-Gram Embeddings for Classification

As some of the simplest and most useful linguistic features, n-grams have long been a focus of embedding studies. Compositional approaches, such as sums and products of unigram vectors, are often used and work well on some evaluations, but are often order-insensitive or very high-dimensional (Mitchell and Lapata, 2010). Recent work by Poliak et al. (2017) works around this while staying compositional; however, as we will see, their approach does not seem to capture a bigram's meaning much better than the sum of its word vectors.

n-gram embeddings have also gained interest for low-dimensional document representation schemes (Hill et al., 2016; Pagliardini et al., 2018; Arora et al., 2018a), largely due to the success of their sparse high-dimensional Bag-of-n-Grams (BonG) counterparts (Wang and Manning, 2012). This setting of document embeddings derived from n-gram features will be used for quantitative evaluation in this section.

We build n-gram embeddings using two corpora: 300-dimensional Wikipedia embeddings, which we evaluate qualitatively, and 1600-dimensional embeddings on the Amazon Product Corpus (McAuley et al., 2015), which we use for document classification. For both we use as source embeddings GloVe vectors trained on the respective corpora over words occurring at least a hundred times. Context embeddings are constructed using a window of size ten and a hard threshold at 1000 occurrences is used as the word-weighting function in the regression (4). Unlike Poliak et al. (2017), who can construct arbitrary embeddings but need to train at least two sets of vectors of dimension at least 2d to do so, and Yin and Schutze (2014), who determine which n-grams to represent via corpus counts, our à la carte approach allows us to train exactly those embeddings that we need for downstream tasks. This, combined with our method's efficiency, allows us to construct more than two million bigram embeddings and more than five million trigram embeddings, constrained only by their presence in the large source corpus.

Qualitative Evaluation: We first compare bigram embedding methods by picking some idiomatic and entity-related bigrams and examining the closest word vectors to their representations. These word-pairs are picked because we expect sophisticated feature embedding methods to encode a better vector than the sum of the two embeddings, which we use as a baseline. From Table 3 we see that embeddings based on corpora rather than composition are better able to embed these bigrams to be close to concepts that are semantically similar. On the other hand, as discussed in Section 3 and evident from these results, the additive context approach is liable to emphasize stop-word directions due to their high frequency.

Method                     beef up            cutting edge                harry potter         tight lipped
v_{w1} + v_{w2}            meat, out          cut, edges                  deathly, azkaban     loose, fitting
v_{(w1,w2)}^{additive}     but, however       which, both                 which, but           but, however
ECO                        meats, meat        weft, edges                 robards, keach       scaly, bristly
Sent2Vec                   add, reallocate    science, multidisciplinary  naruto, pokemon      wintel, codebase
à la carte                 need, improve      innovative, technology      deathly, hallows     worried, very

Table 3: Closest word embeddings (measured via cosine similarity) to the embeddings of four idiomatic or entity-associated bigrams. From these examples we see that purely compositional methods may struggle to construct context-aware bigram embeddings, even when the features are present in the corpus. On the other hand, adding up corpus contexts (1) is dominated by stop-word information. Sent2Vec is successful on half the examples, reflecting its focus on good sentence, not bigram, embeddings.

Document Embedding: Our main application and quantitative evaluation of n-gram vectors is to use them to construct document embeddings. Given a length-L document D = {w_1, . . . , w_L}, we define its embedding v_D as a weighted concatenation over sums of our induced n-gram embeddings, i.e.

    v_D^T = \left[ \; \sum_{t=1}^{L} v_{w_t}^T \;\; \cdots \;\; \frac{1}{n} \sum_{t=1}^{L-n+1} v_{(w_t, \dots, w_{t+n-1})}^T \; \right]

where v_{(w_t, \dots, w_{t+n-1})} is the embedding of the n-gram (w_t, . . . , w_{t+n-1}). Following Arora et al. (2018a), we weight each n-gram component by 1/n to reflect the fact that higher-order n-grams have lower quality embeddings because they occur less often in the source corpus. While we concatenate across unigram, bigram, and trigram embeddings to construct our text representations, separate experiments show that simply adding up the vectors of all features also yields a smaller but still substantial improvement over the unigram performance. The higher embedding dimension due to concatenation is in line with previous methods and can also be theoretically supported as yielding a less lossy compression of the n-gram information (Arora et al., 2018a).
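Concretely, the document representation above concatenates, for each order n, the 1/n-weighted sum of the document's n-gram vectors. A sketch assuming the induced n-gram vectors are stored per order in dictionaries keyed by token tuples; the function name is illustrative.

import numpy as np

def document_embedding(tokens, ngram_vecs, d, max_n=3):
    """Concatenate, for each n, the (1/n)-weighted sum of the document's n-gram embeddings.
    ngram_vecs[n] maps an n-gram tuple to its induced d-dimensional vector."""
    parts = []
    for n in range(1, max_n + 1):
        acc = np.zeros(d)
        for t in range(len(tokens) - n + 1):
            gram = tuple(tokens[t:t + n])
            if gram in ngram_vecs[n]:
                acc += ngram_vecs[n][gram]
        parts.append(acc / n)           # the 1/n weighting for higher-order n-grams
    return np.concatenate(parts)        # dimension d * max_n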
In Table 4 we display the result of running cross-validated, ℓ2-regularized logistic regression on documents from MR movie reviews (Pang and Lee, 2005), CR customer reviews (Hu and Liu, 2004), SUBJ subjectivity dataset (Pang and Lee, 2004), MPQA opinion polarity subtask (Wiebe et al., 2005), TREC question classification (Li and Roth, 2002), SST sentiment classification (binary and fine-grained) (Socher et al., 2013), and IMDB movie reviews (Maas et al., 2011). The first four are evaluated using tenfold cross-validation, while the others have train-test splits. Despite the simplicity of our embeddings (a concatenation over sums of à la carte n-gram vectors), we find that our results are very competitive with many recent unsupervised methods, achieving the best word-level results on two of the tested datasets.

Representation     n     d*           MR     CR     SUBJ    MPQA    TREC    SST (±1)    SST     IMDB
BonG               1     V1           77.1   77.0   91.0    85.1    86.8    80.7        36.8    88.3
BonG               2     V1+V2        77.8   78.1   91.8    85.8    90.0    80.9        39.0    90.0
BonG               3     V1+V2+V3     77.8   78.3   91.4    85.6    89.8    80.1        42.3    89.8
à la carte         1     1600         79.8   81.3   92.6    87.4    85.6    84.1        46.7    89.0
à la carte         2     3200         81.3   83.7   93.5    87.6    89.0    85.8        47.8    90.3
à la carte         3     4800         81.8   84.3   93.8    87.6    89.0    86.7        48.1    90.9
Sent2Vec[1]        1-2   700          76.3   79.1   91.2    87.2    85.8    80.2        31.0    85.5
DisC[2]            2-3   3200-4800    80.1   81.5   92.6    87.9    90.0    85.5        46.7    89.6
skip-thoughts[3]         4800         80.3   83.8   94.2    88.9    93.0    85.1        45.8
SDAE[4]                  2400         74.6   78.0   90.8    86.9    78.4
CNN-LSTM[5]              4800         77.8   82.0   93.6    89.4    92.6
MC-QT[6]                 4800         82.4   86.0   94.8    90.2    92.4    87.6
byte mLSTM[7]            4096         86.8   90.6   94.7    88.8    90.4    91.7        54.6    92.2

* Vocabulary sizes (i.e. BonG dimensions) vary by task; usually 10K-100K.
[1,3,7] (Pagliardini et al., 2018; Kiros et al., 2015; Radford et al., 2017) Evaluation conducted using latest pretrained models. Note that the latest available skip-thoughts implementation returns an error on the IMDB task.
[2,4,5,6] (Arora et al., 2018a; Hill et al., 2016; Gan et al., 2017; Logeswaran and Lee, 2018) Best results from publication.

Table 4: Performance of document embeddings built using à la carte n-gram vectors and recent unsupervised word-level approaches on classification tasks, with the character LSTM of Radford et al. (2017) shown for comparison. Top three results are bolded and the best word-level performance is underlined.

The fact that we do especially well on the sentiment tasks indicates strong exploitation of the Amazon review corpus, which was also used by DisC, CNN-LSTM, and byte mLSTM. At the same time, the fact that our results are comparable to neural approaches indicates that local word-order may contain much of the information needed to do well on these tasks. On the other hand, separate experiments do not show a substantial improvement from our approach over unigram methods such as SIF (Arora et al., 2017) on sentence similarity tasks such as STS (Cer et al., 2017). This could reflect either noise in the n-gram embeddings themselves or the comparative lower importance of local word-order for textual similarity compared to classification.
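The classification protocol is standard: fit cross-validated, ℓ2-regularized logistic regression on the document embeddings. A minimal scikit-learn sketch, reusing the document_embedding helper from the previous sketch; dataset loading and the tenfold protocol for the smaller datasets are omitted, and the function name is illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegressionCV

def evaluate_classification(train_docs, train_labels, test_docs, test_labels, ngram_vecs, d):
    """Fit cross-validated, l2-regularized logistic regression on a la carte document embeddings
    and report test accuracy."""
    X_train = np.stack([document_embedding(doc, ngram_vecs, d) for doc in train_docs])
    X_test = np.stack([document_embedding(doc, ngram_vecs, d) for doc in test_docs])
    clf = LogisticRegressionCV(Cs=10, penalty="l2", max_iter=1000)
    clf.fit(X_train, train_labels)
    return clf.score(X_test, test_labels)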
6 Conclusion

We have introduced à la carte embedding, a simple method for representing semantic features using unsupervised context information. A natural and principled integration of recent ideas for composing word vectors, the approach achieves strong performance on several tasks and promises to be useful in many linguistic settings and to yield many further research directions. Of particular interest is the replacement of simple window contexts by other structures, such as dependency parses, that could yield results in domains such as question answering or semantic role labeling. Extensions of the mathematical formulation, such as the use of word weighting when building context vectors as in Arora et al. (2018b) or of spectral information along the lines of Mu and Viswanath (2018), are also worthy of further study.

More practically, the Contextual Rare Words (CRW) dataset we provide will support research on few-shot learning of word embeddings. Both in this area and for n-grams there is great scope for combining our approach with compositional approaches (Bojanowski et al., 2016; Poliak et al., 2017) that can handle settings such as zero-shot learning. More work is needed to understand the usefulness of our method for representing (potentially cross-lingual) entities such as synsets, whose embeddings have found use in enhancing WordNet and related knowledge bases (Camacho-Collados et al., 2016; Khodak et al., 2017). Finally, there remain many language features, such as named entities and morphological forms, whose representation by our method remains unexplored.

Acknowledgments

We thank Karthik Narasimhan and our three anonymous reviewers for helpful suggestions. The work in this paper was in part supported by SRC JUMP, Mozilla Research, NSF grants CCF-1302518 and CCF-1527371, Simons Investigator Award, Simons Collaboration Grant, and ONR N00014-16-1-2329.

References

Oliver Adams, Adam Makarucha, Graham Neubig, Steven Bird, and Trevor Cohn. 2017. Cross-lingual word embeddings for low-resource language modeling. In Proc. EACL.

Sanjeev Arora, Mikhail Khodak, Nikunj Saunshi, and Kiran Vodrahalli. 2018a. A compressed sensing view of unsupervised text embeddings, bag-of-n-grams, and LSTMs. In Proc. ICLR.

Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. A latent variable model approach to PMI-based word embeddings. TACL.

Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2018b. Linear algebraic structure of word senses, with applications to polysemy. TACL.

Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In Proc. ICLR.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. ArXiv.

Danushka Bollegala, David Weir, and John Carroll. 2014. Learning to predict distributions of words across domains. In Proc. ACL.

José Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. NASARI: Integrating explicit knowledge and corpus statistics for a multilingual representation of concepts and entities. AI.

Daniel Cer, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and cross-lingual focused evaluation. In Proc. SemEval.

Manaal Faruqui and Chris Dyer. 2014.
Community evaluation and exchange of word vectors at wordvectors.org. In Proc. ACL: System Demonstrations.

Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press.

W. Nelson Francis and Henry Kucera. 1979. Brown Corpus Manual. Brown University.

Zhe Gan, Yunchen Pu, Ricardo Henao, Chunyuan Li, Xiaodong He, and Lawrence Carin. 2017. Learning generic sentence representations using convolutional neural networks. In Proc. EMNLP.

Yoav Goldberg. 2016. A primer on neural network models for natural language processing. JAIR.

Zellig Harris. 1954. Distributional structure. Word, 10:146–162.

Aurélie Herbelot and Marco Baroni. 2017. High-risk learning: Acquiring new word vectors from tiny data. In Proc. EMNLP.

Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proc. NAACL.

Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proc. KDD.

Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. SensEmbed: Learning sense embeddings for word and relational similarity. In Proc. ACL-IJCNLP.

Mikhail Khodak, Andrej Risteski, Christiane Fellbaum, and Sanjeev Arora. 2017. Automated WordNet construction using word embeddings. In Proc. Workshop on Sense, Concept and Entity Representations and their Applications.

Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. In Adv. NIPS.

Helen Langone, Benjamin R. Haskell, and George A. Miller. 2004. Annotating WordNet. In Proc. Workshop on Frontiers in Corpus Annotation.

Angeliki Lazaridou, Marco Marelli, and Marco Baroni. 2017. Multimodal word meaning induction from minimal exposure to natural text. Cognitive Science.

Xin Li and Dan Roth. 2002. Learning question classifiers. In Proc. COLING.

Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. In Proc. ICLR.

Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In Proc. CoNLL.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proc. ACL-HLT.

Julian McAuley, Rahul Pandey, and Jure Leskovec. 2015. Inferring networks of substitutable and complementary products. In Proc. KDD.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Adv. NIPS.

Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science.

Andrea Moro and Roberto Navigli. 2015. SemEval-2015 task 13: Multilingual all-words sense disambiguation and entity linking. In Proc. SemEval.

Jiaqi Mu and Pramod Viswanath. 2018. All-but-the-top: Simple and effective post-processing for word representations. In Proc. ICLR.

Roberto Navigli, David Jurgens, and Daniele Vannella. 2013. SemEval-2013 task 12: Multilingual word sense disambiguation. In Proc. SemEval.

Sebastian Pado, Aurelie Herbelot, Max Kisselew, and Jan Snajder. 2016. Predictability of distributional semantics in derivational word formation. In Proc. COLING.

Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. Unsupervised learning of sentence embeddings using compositional n-gram features. In Proc. NAACL.

Bo Pang and Lillian Lee. 2004.
A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proc. ACL.

Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proc. ACL.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proc. EMNLP.

Adam Poliak, Pushpendre Rastogi, M. Patrick Martin, and Benjamin Van Durme. 2017. Efficient, compositional, order-sensitive n-gram embeddings. In Proc. EACL.

Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. ArXiv.

Alessandro Raganato, Claudio Delli Bovi, and Roberto Navigli. 2017. Neural sequence learning models for word sense disambiguation. In Proc. EMNLP.

Sascha Rothe and Hinrich Schütze. 2015. AutoExtend: Extending word embeddings to embeddings for synsets and lexemes. In Proc. ACL-IJCNLP.

Cyrus Shaoul and Chris Westbury. 2010. The Westbury Lab Wikipedia Corpus.

Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. EMNLP.

Dingquan Wang, Nanyun Peng, and Kevin Duh. 2017. A multi-task learning approach to adapting bilingual word embeddings for cross-lingual named entity recognition. In Proc. IJCNLP.

Sida Wang and Christopher D. Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. In Proc. ACL.

Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Proc. LREC.

John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. In Proc. ICLR.

Ledell Wu, Adam Fisch, Sumit Chopra, Keith Adams, Antoine Bordes, and Jason Weston. 2017. StarSpace: Embed all the things! ArXiv.

Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. In Proc. CoNLL.

Wenpeng Yin and Hinrich Schütze. 2014. An exploration of embeddings for generalized phrases. In Proc. ACL 2014 Student Research Workshop.
Efficient Online Scalar Annotation with Bounded Support

Keisuke Sakaguchi and Benjamin Van Durme
Johns Hopkins University
{keisuke,vandurme}@cs.jhu.edu

Abstract

We describe a novel method for efficiently eliciting scalar annotations for dataset construction and system quality estimation by human judgments. We contrast direct assessment (annotators assign scores to items directly), online pairwise ranking aggregation (scores derive from annotator comparison of items), and a hybrid approach (EASL: Efficient Annotation of Scalar Labels) proposed here. Our proposal leads to increased correlation with ground truth, at far greater annotator efficiency, suggesting this strategy as an improved mechanism for dataset creation and manual system evaluation.

1 Introduction

We are concerned here with the construction of datasets and evaluation of systems within natural language processing (NLP). Specifically, humans providing responses that are used to derive graded values on natural language contexts, or in the ordering of systems corresponding to their perceived performance on some task.

Many NLP datasets involve eliciting from annotators some graded response. The most popular annotation scheme is the n-ary ordinal approach as illustrated in Figure 1(a). For example, text may be labeled for sentiment as positive, neutral or negative (Wiebe et al., 1999; Pang et al., 2002; Turney, 2002, inter alia); or under political spectrum analysis as liberal, neutral, or conservative (O’Connor et al., 2010; Bamman and Smith, 2015). A response may correspond to a likelihood judgment, e.g., how likely a predicate is factive (Lee et al., 2015), or that some natural language inference may hold (Zhang et al., 2017).
Responses may correspond to a notion of semantic similarity, e.g., whether one word can be substituted for another in context (Pavlick et al., 2015), or whether an entire sentence is more or less similar than another (Marelli et al., 2014), and so on.

[Figure 1 (panels: (a) Ordinal, (b) Scalar, (c) Unbounded (Gaussian), (d) Bounded (Beta); scale 0 to 1, very rare to very frequent): Elicitation strategies for graded response include direct assessment via ordinal or scalar judgments, and pairwise comparisons aggregated via an assumption of latent distributions such as Gaussians, or novel here: Beta distributions, providing bounded support. The example concerns subjective assessments of the lexical frequency of "dog". In pairwise comparison, we assess it by comparison such as "burrito" is less frequent (≺) than "dog".]

Less common in NLP are system comparisons based on direct human ratings, but an exception includes the annual shared task evaluations of the Conference on Machine Translation (WMT). There, MT practitioners submit system outputs based on a shared set of source sentences, which are then judged relative to other system outputs. Various aggregation strategies have been employed over the years to take these relative comparisons and derive competitive rankings between shared task entrants (Callison-Burch et al., 2012; Bojar et al., 2013, 2014, 2015, 2016, 2017).
We illustrate the improvements enabled by our proposal on three example tasks (§5): lexical frequency inference, political spectrum inference and machine translation system ranking.2 For example, we find that in the commonly employed condition of 3-way redundant annotation, our approach on multiple tasks gives similar quality with just 2way redundancy: this translates to a potential 50% increase in dataset size for the same cost. 2 Direct Assessment Direct assessment or direct annotation (DA) is a straightforward method for collecting graded response from annotators. The most popular scheme is n-ary ordinal labeling, as illustrated in Figure 1(a), where annotators are shown one instance (i.e., sample point) and asked to label one of the n-ary ordered classes. According to the level of measurement in psychometrics (Stevens, 1946, inter alia), which classifies the numerals based on certain properties (e.g., identity, order, quantity), ordinal data do not allow for degree of difference. Namely, there is no guarantee that the distance between each label 1Pronounced as “easel”. 2We release the code at http://decomp.net/. is equal, and instances in the same class are not discriminated. For example, in a typical five-level Likert scale (Likert, 1932) of likelihood – very unlikely, unlikely, unsure, likely, very likely – we cannot conclude that very likely instances are exactly twice as likely those marked likely, nor can we assume two instances with the same label have exactly the same likelihood. The issue of distance between ordinals is perhaps obviated by using scalar annotations (i.e., ratio scale in Stevens’s terminology), which directly correspond to continuous quantities (Figure 1(b)). In scalar DA,3 each instance in the collection (Si ∈SN 1 ) is annotated with values (e.g., on the range 0 to 100) often by several annotators. The notion of quantitative difference is enabled by the property of absolute zero: the scale is bounded. For example, distance, length, mass, size etc. are represented by this scale. In the annual shared task evaluation of the WMT, DA has been used for scoring adequacy and fluency of machine learning system outputs with human evaluation (Graham et al., 2013, 2014; Bojar et al., 2016, 2017), and has separately been used in creating datasets such as for factuality (Lee et al., 2015). Why perhaps obviated? Because of two concerns: (1) annotators may not have a pre-existing, well-calibrated scale for performing DA on a particular collection according to a particular task;4 and (2) it is known that people may be biased in their scalar estimates (Tversky and Kahneman, 1974). Regarding (1), this motivates us to consider RA on the intuition that annotators may give more calibrated responses when performed in the context of other elements. Regarding (2), our goal is not to correct for human bias, but simply to more efficiently converge to the same consensus judgments already being pursued by the community in their annotation protocols, biased or otherwise.5 3 Online Pairwise Ranking Aggregation 3.1 Unbounded Model Pairwise ranking aggregation (Thurstone, 1927) is a method to obtain a total ranking on instances, 3In the rest of the paper, we take DA to mean scalar annotation rather than ordinals. 
assuming that scalar value for each sample point follows a Gaussian distribution, N(µ_i, σ²). The parameters {µ_i} are interpreted as mean scalar annotation.[6] Given the parameters, the probability that S_i is preferred (≻) over S_j is defined as

    p(S_i \succ S_j) = \Phi\!\left( \frac{\mu_i - \mu_j}{\sqrt{2}\,\sigma} \right),    (1)

where Φ(·) is the cumulative distribution function of the standard normal distribution. The objective of pairwise ranking aggregation (including all the following models) is formulated as a maximum log-likelihood estimation:

    \max_{\{S_1^N\}} \sum_{S_i, S_j \in \{S_1^N\}} \log p(S_i \succ S_j).    (2)

[6] Thurstone and another popular ranking method by Elo (1978) use a fixed σ for all instances.

TrueSkill™ (Herbrich et al., 2006) extends the Thurstone model by applying a Bayesian online and active learning framework, allowing for ties. TrueSkill has been used in the Xbox Live online gaming community,[7] and has been applied for various NLP tasks, such as question difficulty estimation (Liu et al., 2013), ranking speech quality (Baumann, 2017), and ranking machine translation and grammatical error correction systems with human evaluation (Bojar et al., 2014, 2015; Sakaguchi et al., 2014, 2016).

[7] www.xbox.com/live/

In the same way as the Thurstone model, TrueSkill assumes that scalar values for each instance S_i (i.e., skill level for each player in the context of TrueSkill) follow a Gaussian distribution N(µ_i, σ_i²), where σ_i is also parameterized as the uncertainty of the scalar value for each instance. Importantly, TrueSkill uses a Bayesian online learning scheme, and the parameters are iteratively updated after each observation of pairwise comparison (i.e., game result: win (≻), tie (≡), or loss (≺)) in proportion to how surprising the outcome is. Let t_{i≻j} = µ_i − µ_j, the difference in scalar responses (skill levels) when we observe i wins j, and ϵ ⩾ 0 be a parameter to specify the tie rate. The update functions are formulated as follows:

    \mu_i = \mu_i + \frac{\sigma_i^2}{c} \cdot v\!\left( \frac{t}{c}, \frac{\epsilon}{c} \right)    (3)
    \mu_j = \mu_j - \frac{\sigma_j^2}{c} \cdot v\!\left( \frac{t}{c}, \frac{\epsilon}{c} \right),    (4)

where c² = 2γ² + σ_i² + σ_j², and v are multiplicative factors that affect the amount of change (surprisal of the outcome) in µ. In the accumulation of the variances (c²), another free parameter called "skill chain", γ, indicates the width (or difference) of skill levels that two given players have 0.8 (80%) probability of win/lose. The multiplicative factor depends on the observation (wins or ties):

    v_{i \succ j}(t, \epsilon) = \frac{\phi(-\epsilon + t)}{\Phi(-\epsilon + t)},    (5)
    v_{i \equiv j}(t, \epsilon) = \frac{\phi(-\epsilon - t) - \phi(\epsilon - t)}{\Phi(\epsilon - t) - \Phi(-\epsilon - t)},    (6)

where φ(·) is the probability density function of the standard normal distribution. As shown in Figure 2 (a) and (b), v_{i≻j} increases exponentially as t becomes smaller (i.e., the observation is unexpected), whereas v_{i≡j} becomes close to zero when |t| is close to zero. In short, v becomes larger as the outcome is more surprising.

[Figure 2: Surprisal of the outcome for µ and σ² (ϵ = 0.5). Panels: (a) v_{i≻j}, (b) v_{i≡j}, (c) w_{i≻j}, (d) w_{i≡j}, each plotted against t = µ_i − µ_j.]
In order to update variance (σ²), another set of update functions is used:

    \sigma_i^2 = \sigma_i^2 \cdot \left[ 1 - \frac{\sigma_i^2}{c^2} \cdot w\!\left( \frac{t}{c}, \frac{\epsilon}{c} \right) \right]    (7)
    \sigma_j^2 = \sigma_j^2 \cdot \left[ 1 - \frac{\sigma_j^2}{c^2} \cdot w\!\left( \frac{t}{c}, \frac{\epsilon}{c} \right) \right],    (8)

where w serve as multiplicative factors that affect the amount of change in σ²:

    w_{i \succ j}(t, \epsilon) = v_{i \succ j} \cdot (v_{i \succ j} + t - \epsilon)    (9)
    w_{i \equiv j}(t, \epsilon) = v_{i \equiv j}^2 + \frac{(\epsilon - t) \cdot \phi(\epsilon - t) + (\epsilon + t) \cdot \phi(\epsilon + t)}{\Phi(\epsilon - t) - \Phi(-\epsilon - t)}.    (10)

As shown in Figure 2 (c) and (d), the value of w is between 0 and 1. The underlying idea for the variance updates is that these updates always decrease the size of the variances σ², which means uncertainty of the instances (S_i, S_j) always decreases as we observe more pairwise comparisons. In other words, TrueSkill becomes more confident in the current estimate of µ_i and µ_j. Further details are provided by Herbrich et al. (2006).[8]

[8] The following material is also useful to understand the math behind TrueSkill (http://www.moserware.com/assets/computing-your-skill/The%20Math%20Behind%20TrueSkill.pdf).

Another important property of TrueSkill is "match quality (chance to draw)". The match quality helps selecting competitive players to make games more interesting. More broadly, the match quality enables us to choose similar instances to be compared to maximize the information gain from pairwise comparisons, as in the active learning literature (Settles et al., 2008). The match quality between two instances (players) is computed as follows:

    q(\gamma, S_i, S_j) := \sqrt{\frac{2\gamma^2}{c^2}} \exp\!\left( -\frac{(\mu_i - \mu_j)^2}{2c^2} \right)    (11)

Intuitively, the match quality is based on the difference µ_i − µ_j. As the difference becomes smaller, the match quality goes higher, and vice versa.

As mentioned, TrueSkill has been used for NLP tasks to infer continuous values for instances. However, it is important to note that the support of a Gaussian distribution is unbounded, namely R = (−∞, ∞). This does not satisfy the property of absolute zero of scalar annotation in the level of measurement (§2). It becomes problematic when it comes to annotating a scalar (continuous) value for extremes such as extremely positive or negative sentiments. We address this issue by proposing a novel variant of TrueSkill in the next section.
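Equations (3)–(11) can be exercised with a few lines of code. The sketch below implements a single win/loss update of two Gaussian-parameterized items together with the match-quality score; it is a simplification for illustration (ties and the full TrueSkill factor-graph machinery are omitted), not the reference TrueSkill implementation, and the function names are illustrative.

import math
from scipy.stats import norm

def trueskill_win_update(mu_i, var_i, mu_j, var_j, gamma, eps):
    """One online update after observing item i preferred over item j (Eqns. 3-9)."""
    c = math.sqrt(2 * gamma ** 2 + var_i + var_j)
    t = (mu_i - mu_j) / c
    e = eps / c
    v = norm.pdf(-e + t) / norm.cdf(-e + t)          # Eq. (5): surprisal factor for the means
    w = v * (v + t - e)                              # Eq. (9): surprisal factor for the variances
    mu_i_new = mu_i + (var_i / c) * v                # Eq. (3)
    mu_j_new = mu_j - (var_j / c) * v                # Eq. (4)
    var_i_new = var_i * (1 - (var_i / c ** 2) * w)   # Eq. (7)
    var_j_new = var_j * (1 - (var_j / c ** 2) * w)   # Eq. (8)
    return (mu_i_new, var_i_new), (mu_j_new, var_j_new)

def match_quality(mu_i, var_i, mu_j, var_j, gamma):
    """Eq. (11): higher for items whose current estimates are close; used to pick the next pair."""
    c2 = 2 * gamma ** 2 + var_i + var_j
    return math.sqrt(2 * gamma ** 2 / c2) * math.exp(-((mu_i - mu_j) ** 2) / (2 * c2))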
3.2 Bounded Variant

TrueSkill can induce a continuous spectrum of instances (such as skill level of game players) by assuming that each instance is represented as a Gaussian distribution. However, the Gaussian distribution has unbounded support, namely R = (−∞, ∞), which does not satisfy the property of absolute bounds for appropriate scalar annotation (i.e., ratio scale in the level of measurement). Thus, we propose a variant of TrueSkill by changing the latent distribution from a Gaussian to a beta, using a heuristic algorithm based on TrueSkill for inference.

The Beta distribution has natural [0, 1] upper and lower bounds and a simple parameterization: S_i ∼ B_i(α_i, β_i). We choose the scalar response as the mode M[S_i] of the distribution and the variance as uncertainty:[9]

    M_i = \frac{\alpha_i - 1}{\alpha_i + \beta_i - 2}    (12)
    \mathrm{Var}_i = \sigma_i^2 = \frac{\alpha_i \beta_i}{(\alpha_i + \beta_i)^2 (\alpha_i + \beta_i + 1)}    (13)

[9] We may have instead used the mean (E[S_i] = α_i / (α_i + β_i)) of the distribution, where in a beta (α, β > 1) the mean is always closer to 0.5 than the mode, whereas mean and mode are always the same in a Gaussian distribution. The mode was selected owing to better performance in development.

As in TrueSkill, we iteratively update the parameters of instances B(α, β) according to each observation and how surprising it is. Similarly to Eqns. (3) and (4), we choose the update functions as follows:[10] first, in case that an annotator judged that S_i is preferred to S_j (S_i ≻ S_j),

    \alpha_i = \alpha_i + \frac{\sigma_i^2}{c} \cdot (1 - p_{i \succ j})    (14)
    \beta_j = \beta_j + \frac{\sigma_j^2}{c} \cdot (1 - p_{j \prec i})    (15)

in case of ties with |D| > ϵ and M_i > M_j,

    \alpha_j = \alpha_j + \frac{\sigma_j^2}{c} \cdot (1 - p_{i \equiv j})    (16)
    \beta_i = \beta_i + \frac{\sigma_i^2}{c} \cdot (1 - p_{i \equiv j})    (17)

and in case of ties with |D| ⩽ ϵ, for both S_i, S_j,

    \alpha_{i,j} = \alpha_{i,j} + \frac{\sigma_{i,j}^2}{c} \cdot (1 - p_{i \equiv j})    (18)
    \beta_{i,j} = \beta_{i,j} + \frac{\sigma_{i,j}^2}{c} \cdot (1 - p_{i \equiv j}).    (19)

[10] There may be other potential update (and surprisal) functions such as −log p, instead of 1 − p. As in our use of the mode rather than mean as scalar response, we empirically developed our update functions with respect to annotation efficiency observed through experimentation (§5).

[Figure 3: Surprisal of the outcome for the bounded variant (ϵ = 0.5). Panels: (a) 1 − p(S_i ≻ S_j) and (b) 1 − p(S_i ≡ S_j), plotted against t = M_i − M_j.]

Regarding the probability of pairwise comparison between instances, we follow Bradley and Terry (1952) and Rao and Kupper (1967) to describe the chance of win, tie, or loss, as follows:

    p(S_i \succ S_j) = p(D > \epsilon) = \frac{\pi_i}{\pi_i + \theta \pi_j}    (20)
    p(S_i \prec S_j) = p(D < -\epsilon) = \frac{\pi_j}{\theta \pi_i + \pi_j}    (21)
    p(S_i \equiv S_j) = p(|D| \leq \epsilon) = \frac{(\theta^2 - 1)\pi_i \pi_j}{(\pi_i + \theta \pi_j)(\theta \pi_i + \pi_j)}    (22)

where D = M_i − M_j, ϵ ⩾ 0 is a parameter to specify the tie rate, θ = exp(ϵ), and π is an exponential score function of S: π_i = exp(M_i).

It is important to note that α and β never decrease (because 1 − p ≥ 0, as shown in Figure 3), which satisfies the property that variance (uncertainty) always decreases as we observe more judgments, as seen in TrueSkill (§3.1). In addition, we do not need individual update functions for µ and σ², since the mode and variance in a beta distribution depend on two shared parameters α, β (Eqns. 12 and 13).

Regarding match quality, we use the same formulation as TrueSkill (Eqn. 11), except that the bounded model uses M instead of µ:

    q(\gamma, S_i, S_j) = \sqrt{\frac{2\gamma^2}{c^2}} \exp\!\left( -\frac{(M_i - M_j)^2}{2c^2} \right)    (23)
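A corresponding sketch of the bounded variant's win update (Eqns. 14–15) with the Rao–Kupper probabilities (Eqns. 20–22); tie handling (Eqns. 16–19) follows the same pattern and is omitted here. Note that under (20)–(21) the surprisal 1 − p_{i≻j} equals 1 − p_{j≺i}, so a single quantity drives both updates. The function names are illustrative, not from a released implementation.

import math

def beta_mode(a, b):
    return (a - 1) / (a + b - 2)                    # Eq. (12); assumes a, b > 1

def beta_var(a, b):
    return a * b / ((a + b) ** 2 * (a + b + 1))     # Eq. (13)

def p_win(m_i, m_j, eps):
    """Eq. (20): Rao-Kupper probability that item i is preferred over item j."""
    theta = math.exp(eps)
    pi_i, pi_j = math.exp(m_i), math.exp(m_j)
    return pi_i / (pi_i + theta * pi_j)

def bounded_win_update(ab_i, ab_j, gamma, eps):
    """One update after observing i preferred over j (Eqns. 14-15)."""
    (a_i, b_i), (a_j, b_j) = ab_i, ab_j
    var_i, var_j = beta_var(a_i, b_i), beta_var(a_j, b_j)
    c = math.sqrt(2 * gamma ** 2 + var_i + var_j)
    # 1 - p(i beats j); by Eqns. (20)-(21) this equals 1 - p(j loses to i).
    surprise = 1 - p_win(beta_mode(a_i, b_i), beta_mode(a_j, b_j), eps)
    a_i_new = a_i + (var_i / c) * surprise          # Eq. (14): raises i's mode toward 1
    b_j_new = b_j + (var_j / c) * surprise          # Eq. (15): lowers j's mode toward 0
    return (a_i_new, b_i), (a_j, b_j_new)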
4 Efficient Annotation of Scalar Labels

In the previous section, we propose a bounded online ranking aggregation model for scalar annotation. However, the amount of update by a pairwise judgment depends only on the distance between instances, not on the distance from the bounds (i.e., 0 and 1). To integrate this property into the online ranking aggregation model, we propose EASL (Efficient Annotation of Scalar Labels), which combines benefits from both direct assessment (DA) and the bounded online ranking aggregation model (RA).[11]

Figure 4: Illustrative example of the EASL protocol (Select and Update steps over instances S_i, S_j with scores s_i, s_j on the [0, 1] scale). Each instance is represented as a beta distribution. Instances are chosen to annotate according to the variance and match quality, and the parameters are updated iteratively.

Similarly to RA, EASL parameterizes each instance by a beta distribution (Eqns. 12 and 13), and the parameters are inferred using a computationally efficient and easy-to-implement heuristic. The difference from RA is the type of annotation. While we ask for a discrete pairwise judgment (≻, ≺, ≡) between S_i and S_j in RA, here we directly ask for scalar values for them (denoted as s_i and s_j), as in DA. Thus, given an annotated score s_i which is normalized between [0, 1], we change the update functions as follows:

α_i = α_i + s_i  (24)
β_i = β_i + (1 − s_i)  (25)

This procedure may look similar to DA, where s_i is simply accumulated and averaged at the end. However, there are two differences. First, as illustrated in Figure 4, EASL parameterizes each instance as a probability distribution while DA does not.
Second, DA elicits annotations independently per element, whereas EASL elicits annotations on elements in the context of other elements selected jointly according to match quality. Further, DA generally uses a batch-style annotation scheme, where the number of annotations per instance is independent of the latent scalar values. EASL, on the other hand, uses online learning, which impacts the calculation of match quality. This allows us to choose instances to annotate by order of uncertainty for each instance, and, as in RA, the match quality (Eqn. 23) enables us to consider similar instances in the same context.

[11] Novikova et al. (2018) recently proposed a similar approach named RankME, which is a variant of DA that compares multiple instances at a time. It can also be regarded as a batch-learning variant of EASL without probabilistic parameterization.

Figure 5: Example of partial ranking with scalars (HITS).

5 Experiments

To compare different annotation methods, we conduct three experiments: (1) lexical frequency inference, (2) political spectrum inference, and (3) human evaluation of machine translation systems. In all experiments, data collection is conducted through Amazon Mechanical Turk (AMT). We ask annotators who meet the following minimum requirements:[12] living in the US, overall approval rate > 98%, and number of tasks approved > 500.

The experimental setting for DA is straightforward. We ask annotators to annotate a scalar value for each instance, one item at a time. We collect ten annotations for each instance to see the relation between the number of annotations and accuracy (i.e., correlation).

To set up the online update in RA and EASL, we use a partial ranking framework with scalars, where annotators are asked to rank and score n instances at one time, as illustrated in Figure 5. In all three experiments, we fix n = 5. The partial ranking yields (n choose 2) pairwise comparisons for RA and n scalar values for EASL.[13] It is important to note that we can simultaneously retrieve pairwise judgments (≻, ≺, ≡) as well as scalar values from this format.

[12] In all experiments, we set the reward for a single instance to $0.01 (i.e., $0.05 per HIT in RA and EASL). This is $8/hour, assuming that annotating one instance takes five seconds. Prior to annotation, we run a pilot to make sure that the participants understand the task correctly and the instructions are clear.
[13] The partial ranking can be regarded as mini-batching.

Algorithm 1: Online pairwise ranking aggregation with bounded support.
  Input: Instances {S_1, ..., S_N}
  Output: Updated instances {S_1, ..., S_N}
  1:  (α_i, β_i) ∈ S = (α_i^init, β_i^init)            // initialize parameters
  2:  foreach iteration do                              // update S over iterations
  3:      HITS = SampleByMatchQuality(S, N, n)
  4:      A = Annotate(HITS)
  5:      for obs ∈ A do                                // update S
  6:          i, j, d = parseObservation(obs)
  7:          α_{i,j}, β_{i,j} = update(i, j, d)
  8:  return S
  9:  Function SampleByMatchQuality(S, N, n)
  10:     k = N / n
  11:     descendingSort(S, key = Var[S])
  12:     S′ = top-k instances of S
  13:     HITS = []
  14:     foreach S_i ∈ S′ do
  15:         m = []
  16:         foreach S_j ∈ S \ S′ do
  17:             m.append([matchQuality(S_i, S_j), j])
  18:         p = normalize(m)
  19:         S̃ = sample n − 1 items according to p
  20:         HITS.append([S_i, S̃])
  21:     return HITS

In each iteration, n instances are selected by variance and match quality. We first select the top k (= N/n) instances according to variance, and for each selected instance we choose the other n − 1 instances to be compared based on match quality. A small runnable sketch of this loop follows.
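The sketch below is our own minimal Python rendering of Algorithm 1, specialized to the EASL update of Eqns. 24 and 25; the annotate argument is a stand-in for the crowdsourcing step, and all function names are ours rather than the authors'.

import numpy as np

def beta_var(a, b):
    # Variance of Beta(a, b) (Eqn. 13).
    return a * b / ((a + b) ** 2 * (a + b + 1.0))

def beta_mode(a, b):
    # Mode of Beta(a, b); falls back to 0.5 at the uniform initialization a = b = 1 (Eqn. 12).
    return 0.5 if a + b == 2.0 else (a - 1.0) / (a + b - 2.0)

def match_quality(p_i, p_j, gamma, c):
    # Match quality of the bounded model (Eqn. 23).
    d = beta_mode(*p_i) - beta_mode(*p_j)
    return np.sqrt(2.0 * gamma ** 2 / c ** 2) * np.exp(-d ** 2 / (2.0 * c ** 2))

def sample_by_match_quality(params, n, gamma, c, rng):
    # SampleByMatchQuality of Algorithm 1: anchor the top-k most uncertain instances
    # and pair each anchor with n-1 partners drawn in proportion to match quality.
    N = len(params)
    k = N // n
    by_var = sorted(range(N), key=lambda i: beta_var(*params[i]), reverse=True)
    anchors, rest = by_var[:k], by_var[k:]
    hits = []
    for i in anchors:
        q = np.array([match_quality(params[i], params[j], gamma, c) for j in rest])
        partners = rng.choice(rest, size=n - 1, replace=False, p=q / q.sum())
        hits.append([i] + [int(j) for j in partners])
    return hits

def easl_iteration(params, n, gamma, c, annotate, rng):
    # One iteration of Algorithm 1 with the EASL update (Eqns. 24 and 25).
    # `annotate` maps a HIT (list of indices) to (index, score) pairs, scores in [0, 1].
    for hit in sample_by_match_quality(params, n, gamma, c, rng):
        for i, s in annotate(hit):
            a, b = params[i]
            params[i] = (a + s, b + (1.0 - s))
    return params

For the RA variant, the inner loop would instead extract the (n choose 2) pairwise judgments from each HIT and apply the updates of Eqns. 14-19; this is exactly the difference localized at line 7 of Algorithm 1.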
This approach has been used in the NLP community to collect pairwise judgments efficiently, for example in assessing machine translation quality (Bojar et al., 2014; Sakaguchi et al., 2014; Bojar et al., 2015, 2016). The detailed procedure of iterative parameter updates in RA and EASL is described in Algorithm 1. As mentioned in Section 4, the main difference between RA and EASL is the update functions (line 7). Model hyper-parameters in RA and EASL are set as follows: each instance is initialized as α_i^init = 1.0, β_i^init = 1.0, and the skill chain parameter γ and the tie-rate parameter ϵ are both set to 0.1.[14]

[14] We explored the hyper-parameters γ, ϵ in a pilot task.

5.1 Lexical Frequency Inference

In the first experiment, we compare the three scalar annotation approaches on lexical frequency inference, in which we ask annotators to judge the frequency (from very rare to very frequent) of verbs that are randomly selected from the Corpus of Contemporary American English (COCA).[15] We include this task for evaluation owing to its non-subjective ground truth (relative corpus frequency), which can be used as an oracle response we would like to maximally correlate with.[16] We randomly select 150 verbs from COCA; the log frequency (log10) is regarded as the oracle. In DA, each instance is annotated by 10 different annotators.[17] In RA and EASL, annotators are asked to rank/score five verbs for each HIT (n = 5). Each iteration contains 20 HITS and we run 10 iterations, which means that the total number of annotations is the same in DA, RA, and EASL.[18]

[15] https://www.wordfrequency.info/
[16] Lexical frequency inference is an established experiment in (computational) psycholinguistics; e.g., human behavioral measures have been compared with predictability and bias in various corpora (Balota et al., 1999; Fine et al., 2014).
[17] The agreement rate in DA (10 annotators) is 0.37 in Spearman's ρ. Considering the difficulty of ranking 150 verbs, this rate is fair.
[18] Technically, the number of annotations per instance varies in RA and EASL, because they choose instances by match quality at each iteration.

Figure 6: Spearman's (top) and Pearson's (bottom) correlations of the three different methods on lexical frequency inference annotation: direct assessment (DA), online ranking aggregation (RA), and EASL. The shade for each line indicates 95% confidence intervals by bootstrap resampling (100 runs).

Figure 6 presents Spearman's and Pearson's correlations, indicating how accurately each annotation method obtains scalar values for each instance. Overall, in all three methods, the correlations increase as more annotations are made. The result also shows that the RA and EASL approaches achieve high correlation more efficiently than DA.

Figure 7: Histograms of scalar values on lexical frequency obtained by each annotation scheme (direct assessment (DA), online ranking aggregation (RA), and EASL), and the oracle. The scalar annotations are put into five bins to show the overall distribution. The scalar in the oracle is normalized as log10(frequency(S_i)) / max log10(frequency(S)).

Figure 8: Heatmaps of match quality distribution across the cross-product of instances ordered by the oracle (i.e., log10(frequency)), at iterations 0, 3, 6, and 9.
The gain in efficiency from DA to EASL is about 50%; two iterations of EASL achieve a Spearman's ρ close to that of three annotators in DA.

Figure 7 presents the final scalar values annotated by each method. The distribution of the histograms shows that, overall, the three methods successfully capture the latent distribution of scalar values in the data.

Figure 8 shows the dynamic change of match quality. In the beginning (iteration 0), all the instances are equally competitive because we have no information about them and initialize them with the same parameters. As iterations go on, the instances along the diagonal have higher match quality, indicating that competitive matches are more likely to be selected for the next iteration. In other words, match quality helps to choose informative pairs to compare at each iteration, which reduces the number of less informative annotations (e.g., a pairwise comparison between the highest and lowest instances).

Figure 9: Spearman's (top) and Pearson's (bottom) correlations of the three different methods on political spectrum annotation: direct assessment (DA), online ranking aggregation (RA), and EASL.

5.2 Political Spectrum Inference

In the second experiment, we compare the three scalar annotation methods on political spectrum inference. We use the Fine-Grained Political Statements dataset (Bamman and Smith, 2015), which consists of 766 propositions collected from political blog comments, paired with judgments about the political belief of the statement (or the person who would say it) on a five-point ordinal scale: very conservative (-2), slightly conservative (-1), neutral (0), slightly liberal (1), and very liberal (2). We normalize the ordinal scores between 0 and 1. The dataset contains mean scores obtained by aggregating 7 annotations for each proposition.[19] We randomly choose 150 political propositions from the dataset (see the oracle histogram in Figure 10).[20] The experimental setting (i.e., the number of annotations per instance, the number of iterations, and the number of HITS in each iteration) is the same as in the lexical frequency inference experiment (§5.1).

[19] We stress that the oracle here derives from subjective annotations: it does not necessarily reflect the true latent scalar values for each instance. However, in this experiment, we use them as a tentative oracle to compare the three scalar annotation methods objectively.
[20] The agreement rate in DA (among 10 annotators) is 0.67 in Spearman's ρ. This is notably high, considering the difficulty of ranking 150 instances in order.

Figure 10: Histograms of scalar values on political spectrum obtained by each annotation scheme (DA, RA, EASL) and the oracle. Scalars are put into five bins to show the overall distribution.

Figure 9 shows Spearman's and Pearson's correlations to the oracle for each method. Overall, all three methods achieve strong correlations above 0.9.
Proposition | Gold | DA | RA | EASL
the republicans are useless | 100 | 91.7 | 75.8 | 91.9
obama is right | 92.9 | 90.1 | 74.6 | 90.0
hillary will win | 78.6 | 86.3 | 72.9 | 86.4
aca is a success | 75.0 | 78.2 | 68.3 | 77.3
harry reid is a democrat | 53.6 | 55.5 | 55.8 | 55.9
ebola is a virus | 50.0 | 53.0 | 53.8 | 53.5
cruz is eligible | 32.2 | 31.0 | 44.0 | 31.4
global warming is a religion | 28.6 | 22.4 | 37.3 | 23.0
bush kept us safe | 10.7 | 9.6 | 31.5 | 9.6
democrats are corrupt | 0.0 | 7.1 | 29.9 | 7.4

Table 1: Example propositions and their scalar political spectrum values, ranging from 0 (very conservative) to 100 (very liberal), under each approach: direct assessment, online ranking aggregation, and EASL. The dashed lines indicate a split by the 5-ary ordinal scale.

We also find that RA and EASL reach high correlation more efficiently than DA, as in the lexical frequency inference experiment (§5.1). The gain in efficiency from DA to EASL is about 50%; 4-way redundant annotation in EASL achieves a Spearman's ρ close to that of 6-way redundancy in DA.

Figure 10 presents the annotated scalar values produced by each method. The distribution of the histograms shows that DA and EASL successfully fit the distribution of the oracle, whereas RA converges to a rather narrow range. This is because of the "lack of distance from bounds" in RA explained in §4. We note that renormalizing the distribution in RA would not address the issue: for instance, when the dataset has only liberal propositions, RA still fails to capture the latent distribution because it looks only at relative distances between instances, not at the distance from the bounds. Table 1 shows examples of scalar annotations by each method. Again, we see that the RA approach has a narrower range than the oracle, DA, and EASL.

5.3 Ranking Machine Translation Systems

In the third experiment, we apply the scalar annotation methods to evaluating machine translation systems. This differs from the two previous experiments because the main purpose is to rank the MT systems (S_1, ..., S_N) rather than to estimate the adequacy (q) of each MT output for a given source sentence (m); namely, we want to rank S_i by observing q_{i,m}. We use the WMT16 German-English translation dataset (Bojar et al., 2016), which consists of 2,999 test-set sentences and the translations from 10 different systems with DA annotation. Each sentence has an adequacy score annotation between 0 and 100, and the average adequacy scores are computed for each system for ranking. In this setting, annotators are asked to judge the adequacy of system output(s) with the reference given. The official scores (made by DA) and ranking in WMT16 are used as the oracle in this experiment.

In this experiment, we replicate DA and run EASL to compare their efficiency. We omit RA because it does not necessarily capture the distance from the bounds, as shown in the previous experiment (§5.2). In DA, 33,760 translation outputs (3,376 sentences per system on average) are randomly sampled without replacement, to make sure that DA reaches the same result as the oracle when the entire data are used. In EASL, we assume that the adequacy (q) of an MT output by system S_i for a given source sentence m is drawn from a beta distribution: q_{i,m} ∼ B(α_i, β_i).[21] Annotators are asked to judge the adequacy of system outputs by scoring between 0 and 100. Similarly to the previous experiments (§5.1 and §5.2), we use the partial ranking strategy, where we show n = 5 system outputs (for the same source sentence m) to annotate at a time.
The procedure of parameter updates is the same as in the previous experiments (Algorithm 1). We compare the correlation (Spearman's ρ) of the system ranking with respect to the number of annotations per system; the result is shown in Figure 11. As in the previous two experiments, EASL achieves a higher Spearman's correlation on ranking MT systems with a smaller number of annotations than the baseline method (DA), which means EASL is able to collect annotations more efficiently. The result shows that EASL can be applied to efficient system evaluation in addition to data curation.

[21] This is the same setting as WMT14, WMT15, and WMT16 (Bojar et al., 2014, 2015), although they used TrueSkill (Gaussian) instead of EASL to rank systems.

Figure 11: Spearman's correlation on ranking machine translation systems on WMT16 German-English data: direct assessment (DA) and EASL. The shade for each line indicates 95% confidence intervals by bootstrap resampling (100 runs).

6 Conclusions

We have presented an efficient, online model to elicit scalar annotations for computational linguistic datasets and system evaluations. The model combines two approaches to scalar annotation: direct assessment and online pairwise ranking aggregation. We conducted three illustrative experiments on lexical frequency inference, political spectrum inference, and ranking machine translation systems. We have shown that our approach, EASL (Efficient Annotation of Scalar Labels), outperforms direct assessment in terms of annotation efficiency and outperforms online ranking aggregation in terms of accurately capturing the latent distributions of scalar values. The significant gains demonstrated suggest EASL is a promising approach for future dataset curation and system evaluation in the community.

Acknowledgments

We are grateful to Rachel Rudinger, Adam Teichert, Chandler May, Tongfei Chen, Pushpendre Rastogi, and anonymous reviewers for their useful feedback. This work was supported in part by IARPA MATERIAL and DARPA LORELEI. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies of the U.S. Government.

References

David A. Balota, Michael J. Cortese, and Maura Pilotti. 1999. Item-level analyses of lexical decision performance: Results from a mega-study. In Abstracts of the 40th Annual Meeting of the Psychonomics Society, page 44, Los Angeles, California. Psychonomic Society.

David Bamman and Noah A. Smith. 2015. Open extraction of fine-grained political statements. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 76–85, Lisbon, Portugal. Association for Computational Linguistics.

Timo Baumann. 2017. Large-scale speaker ranking from crowdsourced pairwise listener ratings.

Ondřej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleš Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation.

Ondřej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 Workshop on Statistical Machine Translation.
In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 1–44, Sofia, Bulgaria. Association for Computational Linguistics.

Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 conference on machine translation (WMT17). In Proceedings of the Second Conference on Machine Translation, pages 169–214, Copenhagen, Denmark. Association for Computational Linguistics.

Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurelie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation, pages 131–198, Berlin, Germany. Association for Computational Linguistics.

Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 1–46, Lisbon, Portugal. Association for Computational Linguistics.

Ralph Allan Bradley and Milton E. Terry. 1952. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345.

Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 workshop on statistical machine translation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 10–51, Montréal, Canada. Association for Computational Linguistics.

Arpad E. Elo. 1978. The rating of chessplayers, past and present. Arco Pub.

Alex B. Fine, Austin F. Frank, T. Florian Jaeger, and Benjamin Van Durme. 2014. Biases in predicting the human language model. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 7–12, Baltimore, Maryland. Association for Computational Linguistics.

Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous measurement scales in human evaluation of machine translation. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 33–41, Sofia, Bulgaria. Association for Computational Linguistics.

Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2014. Is machine translation getting better over time? In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 443–451, Gothenburg, Sweden. Association for Computational Linguistics.

Ralf Herbrich, Tom Minka, and Thore Graepel. 2006. TrueSkill™: A Bayesian Skill Rating System. In Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, pages 569–576, Vancouver, British Columbia, Canada.

Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE.
In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120–1130, Atlanta, Georgia. Association for Computational Linguistics.

Kenton Lee, Yoav Artzi, Yejin Choi, and Luke Zettlemoyer. 2015. Event detection and factuality assessment with non-expert supervision. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1643–1648, Lisbon, Portugal. Association for Computational Linguistics.

Rensis Likert. 1932. A technique for the measurement of attitudes. Archives of Psychology.

Jing Liu, Quan Wang, Chin-Yew Lin, and Hsiao-Wuen Hon. 2013. Question difficulty estimation in community question answering services. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 85–90, Seattle, Washington, USA. Association for Computational Linguistics.

Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014). European Language Resources Association (ELRA).

J. Novikova, O. Dušek, and V. Rieser. 2018. RankME: Reliable human ratings for natural language generation. ArXiv e-prints.

Brendan O'Connor, Ramnath Balasubramanyan, Bryan R. Routledge, and Noah A. Smith. 2010. From tweets to polls: Linking text sentiment to public opinion time series. ICWSM, 11(122-129):1–2.

Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: Sentiment classification using machine learning techniques. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing - Volume 10, pages 79–86. Association for Computational Linguistics.

Ellie Pavlick, Travis Wolfe, Pushpendre Rastogi, Chris Callison-Burch, Mark Dredze, and Benjamin Van Durme. 2015. FrameNet+: Fast paraphrastic tripling of FrameNet. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 408–413, Beijing, China. Association for Computational Linguistics.

P. V. Rao and L. L. Kupper. 1967. Ties in paired-comparison experiments: A generalization of the Bradley-Terry model. Journal of the American Statistical Association, 62(317):194–204.

Keisuke Sakaguchi, Courtney Napoles, Matt Post, and Joel Tetreault. 2016. Reassessing the goals of grammatical error correction: Fluency instead of grammaticality. Transactions of the Association for Computational Linguistics, 4.

Keisuke Sakaguchi, Matt Post, and Benjamin Van Durme. 2014. Efficient elicitation of annotations for human evaluation of machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 1–11, Baltimore, Maryland, USA. Association for Computational Linguistics.

Burr Settles, Mark Craven, and Lewis Friedland. 2008. Active learning with real annotation costs. In Proceedings of the NIPS Workshop on Cost-Sensitive Learning, pages 1–10.

S. S. Stevens. 1946. On the theory of scales of measurement. Science, 103(2684):677–680.

Louis L. Thurstone. 1927. The method of paired comparisons for social values. The Journal of Abnormal and Social Psychology, 21(4):384.

Peter D. Turney. 2002. Thumbs up or thumbs down?: Semantic orientation applied to unsupervised classification of reviews.
In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 417–424. Association for Computational Linguistics.

Amos Tversky and Daniel Kahneman. 1974. Judgment under uncertainty: Heuristics and biases. Science, 185(4157):1124–1131.

Peter Welinder, Steve Branson, Pietro Perona, and Serge J. Belongie. 2010. The multidimensional wisdom of crowds. In Advances in Neural Information Processing Systems, pages 2424–2432.

Jacob Whitehill, Ting-fan Wu, Jacob Bergsma, Javier R. Movellan, and Paul L. Ruvolo. 2009. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Advances in Neural Information Processing Systems, pages 2035–2043.

Janyce M. Wiebe, Rebecca F. Bruce, and Thomas P. O'Hara. 1999. Development and use of a gold-standard data set for subjectivity classifications. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 246–253. Association for Computational Linguistics.

Sheng Zhang, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2017. Ordinal common-sense inference. Transactions of the Association for Computational Linguistics, 5:379–395.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2148–2159, Melbourne, Australia, July 15 - 20, 2018. ©2018 Association for Computational Linguistics

Interpretable and Compositional Relation Learning by Joint Training with an Autoencoder
Ryo Takahashi*1 and Ran Tian*1 and Kentaro Inui1,2 (* equal contribution)
1Tohoku University, 2RIKEN, Japan
{ryo.t, tianran, inui}@ecei.tohoku.ac.jp

Abstract

Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base. Intuitively, a relation can be modeled by a matrix mapping entity vectors. However, relations reside on low dimension sub-manifolds in the parameter space of arbitrary matrices – for one reason, composition of two relations M_1, M_2 may match a third M_3 (e.g. composition of relations currency of country and country of film usually matches currency of film budget), which imposes compositional constraints to be satisfied by the parameters (i.e. M_1 · M_2 ≈ M_3). In this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder, which is expected to better capture compositional constraints. We achieve state-of-the-art on Knowledge Base Completion tasks with strongly improved Mean Rank, and show that joint training with an autoencoder leads to interpretable sparse codings of relations, helps discovering compositional constraints and benefits from compositional training. Our source code is released at github.com/tianran/glimvec.

1 Introduction

Broad-coverage knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) and DBPedia (Auer et al., 2007) store a large amount of facts in the form of ⟨head entity, relation, tail entity⟩ triples (e.g. ⟨The Matrix, country of film, Australia⟩), which could support a wide range of reasoning and question answering applications. The Knowledge Base Completion (KBC) task aims to predict the missing part of an incomplete triple, such as ⟨Finding Nemo, country of film, ?⟩, by reasoning from known facts stored in the KB.

Figure 1: In joint training, relation parameters (e.g. M_1) receive updates from both a KB-learning objective, trying to predict entities in the KB; and a reconstruction objective from an autoencoder, trying to recover relations from low dimension codings.

As a most common approach (Wang et al., 2017), modeling entities and relations to operate in a low dimension vector space helps KBC, for three conceivable reasons. First, when dimension is low, entities modeled as vectors are forced to share parameters, so "similar" entities which participate in many relations in common get close to each other (e.g. Australia close to US). This could imply that an entity (e.g. US) "type matches" a relation such as country of film. Second, relations may share parameters as well, which could transfer facts from one relation to other similar relations, for example from ⟨x, award winner, y⟩ to ⟨x, award nominated, y⟩. Third, spatial positions might be used to implement composition of relations, as relations can be regarded as mappings from head to tail entities, and the composition of two maps can match a third (e.g. the composition of currency of country and country of film matches the relation currency of film budget), which could be captured by modeling composition in a space.
However, modeling relations as mappings naturally requires more parameters – a general linear map between d-dimension vectors is represented by a matrix of d^2 parameters – which are less likely to be shared, impeding transfers of facts between similar relations. Thus, it is desired to reduce dimensionality of relations; furthermore, the existence of a composition of two relations (assumed to be modeled by matrices M_1, M_2) matching a third (M_3) also justifies dimension reduction, because it implies a compositional constraint M_1 · M_2 ≈ M_3 that can be satisfied only by a lower dimension sub-manifold in the parameter space.[1]

Previous approaches reduce dimensionality of relations by imposing pre-designed hard constraints on the parameter space, such as constraining that relations are translations (Bordes et al., 2013) or diagonal matrices (Yang et al., 2015), or assuming they are linear combinations of a small number of prototypes (Xie et al., 2017). However, pre-designed hard constraints do not seem to cope well with compositional constraints, because it is difficult to know a priori which two relations compose to which third relation, hence difficult to choose a pre-design; and compositional constraints are not always exact (e.g. the composition of currency of country and headquarter location usually matches business operation currency but not always), so hard constraints are less suited.

In this paper, we investigate an alternative approach by training relation parameters jointly with an autoencoder (Figure 1). During training, the autoencoder tries to reconstruct relations from low dimension codings, with the reconstruction objective back-propagating to relation parameters as well. We show this novel technique promotes parameter sharing between different relations, and drives them toward low dimension manifolds (Sec.6.2). Besides, we expect the technique to cope better with compositional constraints, because it discovers low dimension manifolds posteriorly from data, and it does not impose any explicit hard constraints.

[1] It is noteworthy that similar compositional constraints apply to most modeling schemes of relations, not just matrices.

Yet, joint training with an autoencoder is not simple; one has to keep a subtle balance between gradients of the reconstruction and KB-learning objectives throughout the training process. We are not aware of any theoretical principles directly addressing this problem; but we found some important settings after extensive pre-experiments (Sec.4). We evaluate our system using standard KBC datasets, achieving state-of-the-art on several of them (Sec.6.1), with strongly improved Mean Rank. We discuss detailed settings that lead to the performance (Sec.4.1), and we show that joint training with an autoencoder indeed helps discovering compositional constraints (Sec.6.2) and benefits from compositional training (Sec.6.3).

2 Base Model

A knowledge base (KB) is a set T of triples of the form ⟨h, r, t⟩, where h, t ∈ E are entities and r ∈ R is a relation (e.g. ⟨The Matrix, country of film, Australia⟩). A relation r has its inverse r^{-1} ∈ R, so that for every ⟨h, r, t⟩ ∈ T we regard ⟨t, r^{-1}, h⟩ as also in the KB. Under this assumption and given T as training data, we consider the Knowledge Base Completion (KBC) task that predicts candidates for a missing tail entity in an incomplete ⟨h, r, ?⟩ triple. Most approaches tackle this problem by training a score function measuring the plausibility of triples being facts.
The model we implement in this work represents entities h, t as d-dimension vectors u_h, v_t respectively, and relation r as a d × d matrix M_r. If u_h, v_t are one-hot vectors with dimension d = |E| corresponding to each entity, one can take M_r as the adjacency matrix of entities joined by relation r, so the set of tail entities filling into ⟨h, r, ?⟩ is calculated by u_h^⊤ M_r (with each nonzero entry corresponding to an answer). Thus, we have u_h^⊤ M_r v_t > 0 if and only if ⟨h, r, t⟩ ∈ T. This motivates us to use u_h^⊤ M_r v_t as a natural parameter to model plausibility of ⟨h, r, t⟩, even in a low dimension space with d ≪ |E|. Thus, we define the score function as

s(h, r, t) := exp(u_h^⊤ M_r v_t)  (1)

for the basic model. This is similar to the bilinear model of Nickel et al. (2011), except that we distinguish u_h (the vector for head entities) from v_t (the vector for tail entities). It has also been proposed in Tian et al. (2016), but for modeling dependency trees rather than KBs.

More generally, we consider composition of relations r_1/.../r_l to model paths in a KB (Guu et al., 2015), as defined by r_1, ..., r_l participating in a sequence of facts such that the head entity of each fact coincides with the tail of its previous. For example, a sequence of two facts ⟨The Matrix, country of film, Australia⟩ and ⟨Australia, currency of country, Australian Dollar⟩ forms a path of composition country of film / currency of country, because the head of the second fact (i.e. Australia) coincides with the tail of the first. Using the previous d = |E| analogue, one can verify that composition of relations is represented by multiplication of adjacency matrices, so we accordingly define

s(h, r_1/.../r_l, t) := exp(u_h^⊤ M_{r_1} · · · M_{r_l} v_t)

to measure the plausibility of a path. It is explored in Guu et al. (2015) to learn a score function not only for single facts but also for paths. This compositional training scheme is shown to bring valuable information about the structure of the KB and may help KBC. In this work, we conduct experiments both with and without compositional training.

In order to learn the parameters u_h, v_t, M_r of the score function, we follow Tian et al. (2016) in using a Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012) objective. For each path (or triple) ⟨h, r_1/..., t⟩ taken from the KB, we generate negative samples by replacing the tail entity t with some random noise t*. Then, we maximize

L1 := Σ_path ln [ s(h, r_1/..., t) / (k + s(h, r_1/..., t)) ] + Σ_noise ln [ k / (k + s(h, r_1/..., t*)) ]

as our KB-learning objective. Here, k is the number of noises generated for each path. When the score function is regarded as probability, L1 represents the log-likelihood of "⟨h, r_1/..., t⟩ being an actual path and ⟨h, r_1/..., t*⟩ being noise". Maximizing L1 increases the scores of actual paths and decreases the scores of noises.
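To make the objective concrete, here is a small numpy sketch (our own, with hypothetical names; the sampling of negative tails and the gradient computation are outside its scope) of the path score and of the contribution of one path, with its k noise tails, to the KB-learning objective L1.

import numpy as np

def path_score(u_h, relation_matrices, v_t):
    # s(h, r1/.../rl, t) = exp(u_h^T M_{r1} ... M_{rl} v_t); a single triple has l = 1 (Eqn. 1).
    x = u_h
    for M in relation_matrices:
        x = x @ M
    return np.exp(x @ v_t)

def nce_term(u_h, relation_matrices, v_t, noise_tails, k):
    # Log-likelihood term of L1 for one path: the gold tail v_t against its k noise tails.
    pos = path_score(u_h, relation_matrices, v_t)
    obj = np.log(pos / (k + pos))
    for v_noise in noise_tails:
        neg = path_score(u_h, relation_matrices, v_noise)
        obj += np.log(k / (k + neg))
    return obj

Summing nce_term over sampled paths gives L1, which is then maximized with respect to u_h, v_t, and the M_r (by SGD in this paper).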
3 Joint Training with an Autoencoder

Autoencoders learn efficient codings of high-dimensional data while trying to reconstruct the original data from the coding. By jointly training relation matrices with an autoencoder, we also expect it to help reduce the dimensionality of the original data (i.e. the relation matrices). Formally, we define a vectorization m_r for each relation matrix M_r, and use it as input to the autoencoder. m_r is defined as a reshape of M_r flattened into a d^2-dimension vector, and normalized such that ∥m_r∥ = √d. We define

c_r := ReLU(A m_r)  (2)

as the coding. Here A is a c × d^2 matrix with c ≪ d^2, and ReLU is the Rectified Linear Unit function (Nair and Hinton, 2010). We reconstruct the input from c_r by multiplying a d^2 × c matrix B. We want B c_r to be more similar to m_r than to other relations. For this purpose, we define a similarity

g(r_1, r_2) := exp( (1/√(dc)) m_{r_1}^⊤ B c_{r_2} ),  (3)

which measures the length of B c_{r_2} projected onto the direction of m_{r_1}. In order to learn the parameters A, B, we adopt the Noise Contrastive Estimation scheme as in Sec.2, generate random noises r* for each relation r, and maximize

L2 := Σ_{r∈R} ln [ g(r, r) / (k + g(r, r)) ] + Σ_{r*∼R} ln [ k / (k + g(r, r*)) ]

as our reconstruction objective. Maximizing L2 increases m_r's similarity with B c_r, and decreases it with B c_{r*}. During joint training, both L1 and L2 are simultaneously maximized, and the gradient ∇L2 propagates to relation matrices as well. Since ∇L2 depends on A and B, and A, B interact with all relations, they promote indirect parameter sharing between different relation matrices. In Sec.6.2, we further show that joint training drives relations toward a low dimension manifold.

4 Optimization Tricks

Joint training with an autoencoder is not simple. Relation matrices receive updates from both ∇L1 and ∇L2, but if they update ∇L1 too much, the autoencoder has no effect; conversely, if they update ∇L2 too often, all relation matrices crush into one cluster. Furthermore, an autoencoder should learn from genuine patterns of relation matrices that emerge from fitting the KB, but not the reverse – in which the autoencoder imposes arbitrary patterns on relation matrices according to random initialization. Therefore, it is not surprising that a naive optimization of L1 + L2 does not work.

After extensive pre-experiments, we have found some crucial settings for successful training. The most important "magic" is the scaling factor 1/√(dc) in the definition of the similarity function (3), perhaps combined with other settings as we discuss below. We have tried the different factors 1, 1/√d, 1/√c and 1/(dc) instead, with various combinations of d and c; but the autoencoder failed to learn meaningful codings in other settings. When the scaling factor is too small (e.g. 1/(dc)), all relations get almost the same coding; conversely, if the factor is too large (e.g. 1), all codings get very close to 0.

The next important rule is to keep a balance between the updates coming from ∇L1 and ∇L2. We use Stochastic Gradient Descent (SGD) for optimization, and the common practice (Bottou, 2012) is to set the learning rate as

α(τ) := η / (1 + ηλτ).  (4)

Here, η, λ are hyper-parameters and τ is a counter of processed data points. In this work, in order to control the updates in detail and keep a balance, we modify (4) to use a step counter τ_r for each relation r, counting the "number of updates" instead of data points.[2] That is, whenever M_r gets a nonzero update from a gradient calculation, τ_r increases by 1. Furthermore, we use different hyper-parameters for different "types of updates", namely η_1, λ_1 for updates coming from ∇L1, and η_2, λ_2 for updates coming from ∇L2. Thus, letting ∆_1 be the partial gradient of ∇L1 and ∆_2 the partial gradient of ∇L2, we update M_r by α_1(τ_r)∆_1 + α_2(τ_r)∆_2 at each step, where

α_1(τ_r) := η_1 / (1 + η_1 λ_1 τ_r),  α_2(τ_r) := η_2 / (1 + η_2 λ_2 τ_r).
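As an illustration of this balancing scheme (a minimal sketch under our own naming; the partial gradients Δ1 and Δ2 are assumed to be computed elsewhere, e.g. from Eqns. 1 and 3), the per-relation step counter and the two learning-rate schedules can be written as follows.

class RelationUpdater:
    # Per-relation step counter tau_r with separate schedules for KB-learning (delta1)
    # and reconstruction (delta2) updates, as described above.
    def __init__(self, eta1, lam1, eta2, lam2):
        self.eta1, self.lam1 = eta1, lam1
        self.eta2, self.lam2 = eta2, lam2
        self.tau = {}                                        # one counter per relation r

    def apply(self, M_r, r, delta1, delta2):
        t = self.tau.get(r, 0)
        a1 = self.eta1 / (1.0 + self.eta1 * self.lam1 * t)   # alpha_1(tau_r)
        a2 = self.eta2 / (1.0 + self.eta2 * self.lam2 * t)   # alpha_2(tau_r)
        M_r += a1 * delta1 + a2 * delta2                     # ascent step: L1 and L2 are maximized
        self.tau[r] = t + 1
        return M_r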
The rule for setting η_1, λ_1 and η_2, λ_2 is that η_2 should be much smaller than η_1, because η_1, η_2 control the magnitude of the learning rates at the early stage of training, when the autoencoder is still largely random and ∆_2 does not make much sense; on the other hand, one has to choose λ_1 and λ_2 such that ∥∆_1∥/λ_1 and ∥∆_2∥/λ_2 are at the same scale, because the learning rates approach 1/(λ_1 τ_r) and 1/(λ_2 τ_r) respectively as the training proceeds. In this way, the autoencoder will not impose random patterns on relation matrices according to its initialization at the early stage, and a balance is kept between α_1(τ_r)∆_1 and α_2(τ_r)∆_2 later.

But how to estimate ∥∆_1∥ and ∥∆_2∥? It seems that we can approximately calculate their scales from initialization. In this work, we use i.i.d. Gaussians of variance 1/d to initialize parameters, so the initial Euclidean norms are ∥u_h∥ ≈ 1, ∥v_t∥ ≈ 1, ∥M_r∥ ≈ √d, and ∥B A m_r∥ ≈ √(dc). Thus, by calculating ∇L1 and ∇L2 using (1) and (3), we have approximately ∥∆_1∥ ≈ ∥u_h v_t^⊤∥ ≈ 1, and ∥∆_2∥ ≈ ∥(1/√(dc)) B c_r∥ ≈ (1/√(dc)) ∥B A m_r∥ ≈ 1. This suggests that, because of the scaling factor 1/√(dc) in (3), ∥∆_1∥ and ∥∆_2∥ are at the same scale, so we can set λ_1 = λ_2. This might not be a mere coincidence.

[2] Similarly, we set separate step counters for all head and tail entities, and for the autoencoder as well.

4.1 Training the Base Model

Besides the tricks for joint training, we also found settings that significantly improve the base model on KBC, as briefly discussed below. In Sec.6.3, we will show the performance gains from these settings using the FB15k-237 validation set.

Normalization: It is better to normalize relation matrices to ∥M_r∥ = √d during training. This might reduce fluctuations in entity vector updates.

Regularizer: It is better to minimize ∥M_r^⊤ M_r − (1/d) tr(M_r^⊤ M_r) I∥ during training. This regularizer drives M_r toward an orthogonal matrix (Tian et al., 2016) and might reduce fluctuations in entity vector updates. As a result, all relation matrices trained in this work are very close to orthogonal.

Initialization: Instead of a pure Gaussian, it is better to initialize matrices as (I + G)/2, where G is random. The identity matrix I helps passing information from head to tail (Tian et al., 2016).

Negative Sampling: Instead of a unigram distribution, it is better to use a uniform distribution for generating noises. This is somewhat counterintuitive compared to training word embeddings.

5 Related Works

KBs have a wide range of applications (Berant et al., 2013; Hixon et al., 2015; Nickel et al., 2016a) and KBC has inspired a huge amount of research (Bordes et al., 2013; Riedel et al., 2013; Socher et al., 2013; Wang et al., 2014b,a; Xiao et al., 2016; Nguyen et al., 2016; Toutanova et al., 2016; Das et al., 2017; Hayashi and Shimbo, 2017).

Among the previous works, TransE (Bordes et al., 2013) is the classic method which represents a relation as a translation of the entity vector space, and is partially inspired by Mikolov et al. (2013)'s vector arithmetic method of solving word analogy tasks. Although competitive in KBC, it is speculated that this method is well-suited for 1-to-1 relations but might be too simple to represent N-to-N relations accurately (Wang et al., 2017). Thus, extensions such as TransR (Lin et al., 2015b) and STransE (Nguyen et al., 2016) are proposed to map entities into a relation-specific vector space before translation.
The ITransF model (Xie et al., 2017) further enhances this approach by imposing a hard constraint that the relation-specific maps should be linear combinations of a small number of prototypical matrices. Our work inherits the same motivation as ITransF in terms of promoting parameter sharing among relations. On the other hand, the base model used in this work originates from RESCAL (Nickel et al., 2011), in which relations are naturally represented as analogues to the adjacency matrices (Sec.2). Further developments include HolE (Nickel et al., 2016b) and ConvE (Dettmers et al., 2018), which improve this approach in terms of parameter-efficiency by introducing low dimension factorizations of the matrices. We inherit the basic model of RESCAL but draw additional training techniques from Tian et al. (2016), and show that the base model can already achieve near state-of-the-art performance (Sec.6.1, 6.3). This sends a message similar to Kadlec et al. (2017), saying that training tricks might be as important as model designs. Nevertheless, we emphasize the novelty of this work in that the previous models mostly achieve dimension reduction by imposing some pre-designed hard constraints (Bordes et al., 2013; Yang et al., 2015; Trouillon et al., 2016; Nickel et al., 2016b; Xie et al., 2017; Dettmers et al., 2018), whereas the constraints themselves are not learned from data; in contrast, our approach of jointly training an autoencoder does not impose any explicit hard constraints, so it leads to more flexible modeling. Moreover, we additionally focus on leveraging composition in KBC. Although this idea has been frequently explored before (Guu et al., 2015; Neelakantan et al., 2015; Lin et al., 2015a), our discussion of the concept of compositional constraints and its connection to dimension reduction has not been addressed similarly in previous research. In experiments, we will show (Sec.6.2, 6.3) that joint training with an autoencoder indeed helps finding compositional constraints and benefits from compositional training.

Autoencoders have been used solo for learning distributed representations of syntactic trees (Socher et al., 2011), words and images (Silberer and Lapata, 2014), or semantic roles (Titov and Khoddam, 2015). They have also been used for pretraining other deep neural networks (Erhan et al., 2010). However, when combined with other models, the learning of autoencoders, or more generally sparse codings (Rubinstein et al., 2010), is usually conducted in an alternating manner, fixing one part of the model while optimizing the other, such as in Xie et al. (2017). To our knowledge, joint training with an autoencoder has not been widely used previously for reducing dimensionality.

Jointly training an autoencoder is not simple because it takes non-stationary inputs. In this work, we modified SGD so that it shares traits with some modern optimization algorithms such as Adagrad (Duchi et al., 2011), in that they both set different learning rates for different parameters. While Adagrad sets them adaptively by keeping track of gradients for all parameters, our modification of SGD is more efficient and allows us to grasp a rough intuition about which parameter gets how much update. We believe our techniques and findings in joint training with an autoencoder could be helpful for reducing dimensionality and improving interpretability in other neural network architectures as well.
6 Experiments

We evaluate on standard KBC datasets, including WN18 and FB15k (Bordes et al., 2013), WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova and Chen, 2015). The statistics of these datasets are shown in Table 1. WN18 collects word relations from WordNet (Miller, 1995), and FB15k is taken from Freebase (Bollacker et al., 2008); both have filtered out low-frequency entities. However, it is reported in Toutanova and Chen (2015) that both WN18 and FB15k have information leaks, because the inverses of some test triples appear in the training set. FB15k-237 and WN18RR fix this problem by deleting such triples from the training and test data. In this work, we do evaluate on WN18 and FB15k, but our models are mainly tuned on FB15k-237.

Dataset   | |E|    | |R|   | #Train  | #Valid | #Test
WN18      | 40,943 | 18    | 141,442 | 5,000  | 5,000
FB15k     | 14,951 | 1,345 | 483,142 | 50,000 | 59,071
WN18RR    | 40,943 | 11    | 86,835  | 3,034  | 3,134
FB15k-237 | 14,541 | 237   | 272,115 | 17,535 | 20,466

Table 1: Statistics of the KBC datasets. |E| and |R| denote the number of entities and relation types, respectively; #Train, #Valid, and #Test are the numbers of triples in the training, validation, and test sets, respectively.

For all datasets, we set the dimension d = 256 and c = 16, and the SGD hyper-parameters η1 = 1/64, η2 = 2^−14 and λ1 = λ2 = 2^−14. The training batch size is 32, and the triples in each batch share the same head entity. We compare the base model (BASE) to our joint training with an autoencoder model (JOINT), and the base model with compositional training (BASE+COMP) to our joint model with compositional training (JOINT+COMP). When compositional training is enabled (BASE+COMP, JOINT+COMP), we use random walks to sample paths of length 1 + X, where X is drawn from a Poisson distribution of mean λ = 1.0.

For any incomplete triple ⟨h, r, ?⟩ in the KBC test, we calculate a score s(h, r, e) from (1) for every entity e ∈ E such that ⟨h, r, e⟩ does not appear in any of the training, validation, or test sets (Bordes et al., 2013). Then, the calculated scores together with s(h, r, t) for the gold triple are converted to ranks, and the rank of the gold entity t is used for evaluation. Evaluation metrics include Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 10 (H10). Lower MR, higher MRR, and higher H10 indicate better performance. We consult MR and MRR on validation sets to determine training epochs; we stop training when both MR and MRR have stopped improving.

6.1 KBC Results

The results are shown in Table 2. We found that joint training with an autoencoder mostly improves performance, and the improvement becomes clearer when compositional training is enabled (i.e., JOINT ≥ BASE and JOINT+COMP > BASE+COMP). This is convincing because, generally, joint training contributes through its regularizing effects, and drastic improvements are less expected.3

[Figure 2: Examples of relation codings learned from FB15k-237. Each row shows a 16-dimension vector encoding a relation (dimensions 1-16); labeled rows include profession, nationality, currency_of_country, currency_of_film_budget, release_region_of_film, producer_of_film, and writer_of_film, among others. Vectors are normalized such that their entries sum to 1.]

3 The source code and trained models are publicly released at https://github.com/tianran/glimvec, where

When compositional training is enabled,
the system usually achieves better MR, though not always improves in other measures. The performance gains are more obvious on the WN18RR and FB15k-237 datasets, possibly because WN18 and FB15k contain a lot of easy instances that can be solved by a simple rule (Dettmers et al., 2018). Furthermore, the numbers demonstrated by our joint and base models are among the strongest in the literature. We have conducted re-experiments of several representative algorithms, and also compare with state-of-the-art published results. For re-experiments, we use Lin et al. (2015b)’s implementation4 of TransE (Bordes et al., 2013) and TransR, which represent relations as vector translations; and Nickel et al. (2016b)’s implementation5 of RESCAL (Nickel et al., 2011) and HolE, where RESCAL is most similar to the BASE model and HolE is a more parameter-efficient variant. We experimented with the default settings, and found that our models outperform most of them. Among the published results, STransE (Nguyen et al., 2016) and ITransF (Xie et al., 2017) are more complicated versions of TransR, achieving the previous highest MR on WN18 but are outperformed by our JOINT+COMP model. ITransF is most similar to our JOINT model in that they both learn sparse codings for relations. On WN18RR and FB15k237, Dettmers et al. (2018)’s report of ComplEx we also show the mean performance and deviations of multiple random initializations, to give a more complete picture. 4https://github.com/thunlp/KB2E 5https://github.com/mnick/ holographic-embeddings 2154 Model WN18 FB15k WN18RR FB15k-237 MR H10 MR H10 MR MRR H10 MR MRR H10 JOINT 277 95.8 53 82.5 4233 .461∗ 53.4 212 .336 52.3∗ BASE 286 95.8 53 82.5 4371 .459 52.9 215 .337∗ 52.3∗ JOINT+COMP 191∗ 94.8 53 69.7 2268∗ .343 54.8∗ 197∗ .331 51.6 BASE+COMP 195 94.8 54 69.4 2447 .310 54.1 203 .328 51.5 TransE (Bordes et al., 2013) 292 92.0 66 70.4 4311 .202 45.6 278 .236 41.6 TransR (Lin et al., 2015b) 281 93.6 76 74.4 4222 .210 47.1 320 .282 45.9 RESCAL (Nickel et al., 2011) 911 58.0 163 41.0 9689 .105 20.3 457 .178 31.9 HolE (Nickel et al., 2016b) 724 94.3 293 66.8 8096 .376 40.0 1172 .169 30.9 STransE (Nguyen et al., 2016) 206 93.4 69 79.9 ITransF (Xie et al., 2017) 205 94.2 65 81.0 ComplEx (Trouillon et al., 2016) 94.7 84.0 5261 .44 51 339 .247 42.8 Ensemble DistMult (Kadlec et al., 2017) 457 95.0 35.9 90.4 IRN (Shen et al., 2017) 249 95.3 38 92.7∗ ConvE (Dettmers et al., 2018) 504 95.5 64 87.3 5277 .46 48 246 .316 49.1 R-GCN+ (Schlichtkrull et al., 2017) 96.4∗ 84.2 .249 41.7 ProjE (Shi and Weninger, 2017) 34∗ 88.4 Table 2: KBC results on the WN18, FB15k, WN18RR, and FB15k-237 datasets. The first and second sectors compare our joint to the base models with and without compositional training, respectively; the third sector shows our re-experiments and the fourth shows previous published results. Bold numbers are the best in each sector, and (∗) indicates the best of all. (Trouillon et al., 2016) and ConvE were previously the best results. Our models mostly outperform them. Other results include Kadlec et al. (2017)’s simple but strong baseline and several recent models (Schlichtkrull et al., 2017; Shi and Weninger, 2017; Shen et al., 2017) which achieve best results on FB15k or WN18 in some measure. Our models have comparable results. 6.2 Intuition and Insight What does the autoencoder look like? How does joint training affect relation matrices? 
We address these questions by analyses showing that (i) the autoencoder learns sparse and interpretable codings of relations, (ii) the joint training drives relation matrices toward a low dimension manifold, and (iii) it helps discovering compositional constraints. Sparse Coding and Interpretability Due to the ReLU function in (2), our autoencoder learns sparse coding, with most relations having large code values at only two or three dimensions. This sparsity makes it easy to find patterns in the model that to some extent explain the semantics of relations. Figure 2 shows some examples. In the first group of Figure 2, we show a small number of relations that are almost always assigned a near one-hot coding, regardless of initialization. These are high frequency relations joining two large categories (e.g. film and language), which probably constitute the skeleton of a KB. In the second group, we found the 12th dimension strongly correlates with currency; and in the third group, we found the 4th dimension strongly correlates with film. As for the relation currency of film budget, it has large code values at both dimensions. This kind of relation clustering also seems independent of initialization. Intuitively, it shows that the autoencoder may discover similarities between relations and promote indirect parameter sharing among them. Yet, as the autoencoder only reconstructs approximations of relation matrices but never constrain them to be exactly equal to the original, relation matrices with very similar codings may still differ considerably. For example, producer of film and writer of film have codings of cosine similarity 0.973, but their relation matrices only have6 a cosine similarity 0.338. Low dimension manifold In order to visualize the relation matrices learned by our joint and base models, we use UMAP7 (McInnes and Healy, 2018) to embed Mr into a 2D plane8. We use relation matrices trained on 6Cosine similarity 0.338 is still high for matrices, due to the high dimensionality of their parameter space. 7https://github.com/lmcinnes/umap 8UMAP is a recently proposed manifold learning algorithm based on the fuzzy topological structure. We also tried 2155 (a) BASE (b) JOINT (c) BASE+COMP (d) JOINT+COMP Figure 3: By UMAP, relation matrices are embedded into a 2D plane. Colors show frequencies of relations; and lighter color means more frequent. FB15k-237, and compare models trained by the same number of epochs. The results are shown in Figure 3. We can see that Figure 3a and Figure 3c are mostly similar, with high frequency relations scattered randomly around a low frequency cluster, suggesting that they come from various directions of a high dimension space, with frequent relations probably being pulled further by the training updates. On the other hand, in Figure 3b and Figure 3d we found less frequent relations being clustered with frequent ones, and multiple traces of low dimension structures. It suggests that joint training with an autoencoder indeed drives relations toward a low dimension manifold. In addition, Figure 3d shows different structures against Figure 3b, which we conjecture could be related to compositional constraints discovered by compositional training. Compositional constraints In order to directly evaluate a model’s ability to find compositional constraints, we extracted from FB15k-237 a list of (r1/r2, r3) pairs such that r1/r2 matches r3. Formally, the list is constructed as below. 
For any relation r, we define a content set C(r) as the set of (h, t) pairs such that ⟨h, r, t⟩ is a fact in the KB. Similarly, we define C(r1/r2) t-SNE (van der Maaten and Hinton, 2008) but found UMAP more insightful. Model MR MRR JOINT+COMP 130±27 .0481±.0090 BASE+COMP 150±3 .0280±.0010 RANDOMM2 181±19 .0356±.0100 Table 3: Performance at discovering compositional constraints extracted from FB15k-237 as the set of (h, t) pairs such that ⟨h, r1/r2, t⟩is a path. We regard (r1/r2, r3) as a compositional constraint if their content sets are similar; that is, if |C(r1/r2) ∩C(r3)| ≥50 and the Jaccard similarity between C(r1/r2) and C(r3) is ≥0.4. Then, after filtering out degenerated cases such as r1 = r3 or r2 = r−1 1 , we obtained a list of 154 compositional constraints, e.g. (currency of country/country of film, currency of film budget). For each compositional constraint (r1/r2, r3) in the list, we take the matrices M1, M2 and M3 corresponding to r1, r2 and r3 respectively, and rank M3 according to its cosine similarity with M1M2, among all relation matrices. Then, we calculate MR and MRR for evaluation. We compare the JOINT+COMP model to BASE+COMP, as well as a randomized baseline where M2 is selected randomly from the relation matrices in JOINT+COMP instead (RANDOMM2). The results are shown in Table 3. We have evaluated 5 different random initializations for each model, trained by the same number of epochs, and we report the mean and standard deviation. We verify that JOINT+COMP performs better than BASE+COMP, indicating that joint training with an autoencoder indeed helps discovering compositional constraints. Furthermore, the random baseline RANDOMM2 tests a hypothesis that joint training might be just clustering M3 and M1 here, to the extent that M3 and M1 are so close that even a random M2 can give the correct answer; but as it turns out, JOINT+COMP largely outperforms RANDOMM2, excluding this possibility. Thus, joint training performs better not simply because it clusters relation matrices; it learns compositions indeed. 6.3 Losses and Gains In the KBC task, where are the losses and what are the gains of different settings? With additional evaluations, we show (i) some crucial settings for the base model, and (ii) joint training with an autoencoder benefits more from compositional training. 2156 Settings MR MRR H10 BASE 214 .338 52.5 no normalization 309 .326 49.9 no regularizer 400 .328 51.3 pure Gaussian 221 .336 52.1 unigram distribution 215 .324 50.6 Table 4: Ablation of the four settings of the base model as described in Sec.4.1 Crucial settings for the base model It is noteworthy that our base model already achieves strong results. This is due to several detailed but crucial settings as we discussed in Sec.4.1; Table 4 shows their gains on the FB15k237 validation data. The most dramatic improvement comes from the regularizer that drives matrices to orthogonal. Gains with compositional training One can force a model to focus more on (longer) compositions of relations, by sampling longer paths in compositional training. Since joint training with an autoencoder helps discovering compositional constraints, we expect it to be more helpful when the sampled paths are longer. In this work, path lengths are sampled from a Poisson distribution, we thus vary the mean λ of the Poisson to control the strength of compositional training. The results on FB15k-237 are shown in Table 5. We can see that, as λ gets larger, MR improves much but MRR slightly drops. 
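As an aside, the constraint-extraction procedure described in Sec. 6.2 above (content sets plus a Jaccard-similarity test) can be sketched as follows. The triple and relation containers are hypothetical, and reading r1/r2 as "apply r1 then r2" is an assumption of this sketch; the overlap and Jaccard thresholds are the ones stated in the text.

```python
from itertools import product

def content_set(triples, rel):
    """C(r): the set of (head, tail) pairs connected by relation rel in the KB."""
    return {(h, t) for (h, r, t) in triples if r == rel}

def path_content_set(triples, r1, r2):
    """C(r1/r2): (h, t) pairs connected by a length-2 path h --r1--> m --r2--> t."""
    heads_by_mid = {}
    for (h, r, m) in triples:
        if r == r1:
            heads_by_mid.setdefault(m, set()).add(h)
    pairs = set()
    for (m, r, t) in triples:
        if r == r2:
            for h in heads_by_mid.get(m, ()):
                pairs.add((h, t))
    return pairs

def compositional_constraints(triples, relations, min_overlap=50, min_jaccard=0.4):
    """Return (r1/r2, r3) pairs whose content sets overlap enough, as in Sec. 6.2."""
    constraints = []
    single = {r: content_set(triples, r) for r in relations}
    for r1, r2 in product(relations, repeat=2):
        c12 = path_content_set(triples, r1, r2)
        if not c12:
            continue
        for r3, c3 in single.items():
            if r3 == r1:  # degenerate case; the paper also filters r2 = inverse of r1
                continue
            inter = len(c12 & c3)
            union = len(c12 | c3)
            if inter >= min_overlap and union and inter / union >= min_jaccard:
                constraints.append(((r1, r2), r3))
    return constraints

# toy usage with abstract relation names and relaxed thresholds
toy = [("h1", "r1", "m1"), ("m1", "r2", "t1"), ("h1", "r3", "t1")]
print(compositional_constraints(toy, {"r1", "r2", "r3"}, min_overlap=1, min_jaccard=0.3))
```

Returning to Table 5: as λ grows, MR improves considerably while MRR slightly drops.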
It suggests that in FB15k-237, composition of relations might mainly help finding more appropriate candidates for a missing entity, rather than pinpointing a correct one. Yet, joint training improves base models even more as the paths get longer, especially in MR. It further supports our conjecture that joint training with an autoencoder may strongly interact with compositional training. 7 Conclusion We have investigated a dimension reduction technique which trains a KB embedding model jointly with an autoencoder. We have developed new training techniques and achieved state-of-the-art results on several KBC tasks with strong improvements in Mean Rank. Furthermore, we have shown that the autoencoder learns low dimension sparse codings that can be easily explained; the joint training technique drives high-dimensional data toward low Model λ Valid Test MR MRR H10 MR MRR H10 BASE 0 209 .341 52.9 215 .337 52.3 JOINT 0 +1 -.001 -.2 -3 -.001 0 BASE 0.5 204 .337 52.2 211 .332 51.7 JOINT 0.5 -3 +.002 +.1 +1 +.002 +.2 BASE 1.0 191 .334 52.0 203 .328 51.5 JOINT 1.0 -5 +.002 -.1 -6 +.003 +.1 Table 5: Evaluation of BASE and gains by JOINT, on FB15k-237 with different strengths of compositional training. Bold numbers are improvements. dimension manifolds; and the reduction of dimensionality may interact strongly with composition, help discovering compositional constraints and benefit from compositional training. We believe these findings provide insightful understandings of KB embedding models and might be applied to other neural networks beyond the KBC task. Acknowledgments This work was supported by JST CREST Grant Number JPMJCR1301, Japan. We thank Pontus Stenetorp, Makoto Miwa, and the anonymous reviewers for many helpful advices and comments. References S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. Dbpedia: A nucleus for a web of open data. In The Semantic Web, 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007., pages 722–735. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544, Seattle, Washington, USA. Association for Computational Linguistics. Kurt D. Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2008, Vancouver, BC, Canada, June 10-12, 2008, pages 1247–1250. Antoine Bordes, Nicolas Usunier, Alberto Garc´ıaDur´an, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information 2157 Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States., pages 2787– 2795. L´eon Bottou. 2012. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade, pages 421– 436. Springer. Rajarshi Das, Arvind Neelakantan, David Belanger, and Andrew McCallum. 2017. Chains of reasoning over entities, relations, and text using recurrent neural networks. 
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 132–141, Valencia, Spain. Association for Computational Linguistics. Tim Dettmers, Minervini Pasquale, Stenetorp Pontus, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Proceedings of the 32th AAAI Conference on Artificial Intelligence. John C. Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159. Dumitru Erhan, Yoshua Bengio, Aaron C. Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. 2010. Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, 11:625–660. Michael Gutmann and Aapo Hyv¨arinen. 2012. Noisecontrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13:307– 361. Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing knowledge graphs in vector space. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 318–327, Lisbon, Portugal. Association for Computational Linguistics. Katsuhiko Hayashi and Masashi Shimbo. 2017. On the equivalence of holographic and complex embeddings for link prediction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 554–559, Vancouver, Canada. Association for Computational Linguistics. Ben Hixon, Peter Clark, and Hannaneh Hajishirzi. 2015. Learning knowledge graphs for question answering through conversational dialog. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 851–861, Denver, Colorado. Association for Computational Linguistics. Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2017. Knowledge base completion: Baselines strike back. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 69–74, Vancouver, Canada. Association for Computational Linguistics. Yankai Lin, Zhiyuan Liu, Huanbo Luan, Maosong Sun, Siwei Rao, and Song Liu. 2015a. Modeling relation paths for representation learning of knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 705–714, Lisbon, Portugal. Association for Computational Linguistics. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015b. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA., pages 2181–2187. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605. L. McInnes and J. Healy. 2018. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. ArXiv e-prints. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751, Atlanta, Georgia. Association for Computational Linguistics. George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39–41. Vinod Nair and Geoffrey E. Hinton. 2010. 
Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, pages 807–814. Arvind Neelakantan, Benjamin Roth, and Andrew McCallum. 2015. Compositional vector space models for knowledge base completion. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 156–166, Beijing, China. Association for Computational Linguistics. Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson. 2016. Stranse: a novel embedding model of entities and relationships in knowledge bases. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 460–466, San Diego, California. Association for Computational Linguistics. 2158 Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2016a. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11–33. Maximilian Nickel, Lorenzo Rosasco, and Tomaso A. Poggio. 2016b. Holographic embeddings of knowledge graphs. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 1217, 2016, Phoenix, Arizona, USA., pages 1955– 1961. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML’11, pages 809–816, USA. Omnipress. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 74–84, Atlanta, Georgia. Association for Computational Linguistics. R. Rubinstein, A. M. Bruckstein, and M. Elad. 2010. Dictionaries for sparse representation modeling. Proceedings of the IEEE, 98(6):1045–1057. Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2017. Modeling relational data with graph convolutional networks. CoRR, abs/1703.06103. Yelong Shen, Po-Sen Huang, Ming-Wei Chang, and Jianfeng Gao. 2017. Modeling large-scale structured relationships with shared memory for knowledge base completion. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 57–68, Vancouver, Canada. Association for Computational Linguistics. Baoxu Shi and Tim Weninger. 2017. Proje: Embedding projection for knowledge graph completion. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 1236–1242. Carina Silberer and Mirella Lapata. 2014. Learning grounded meaning representations with autoencoders. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 721–732, Baltimore, Maryland. Association for Computational Linguistics. Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. 
Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States., pages 926–934. Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 151–161, Edinburgh, Scotland, UK. Association for Computational Linguistics. Ran Tian, Naoaki Okazaki, and Kentaro Inui. 2016. Learning semantically and additively compositional distributional representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1277–1287, Berlin, Germany. Association for Computational Linguistics. Ivan Titov and Ehsan Khoddam. 2015. Unsupervised induction of semantic roles within a reconstructionerror minimization framework. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1–10, Denver, Colorado. Association for Computational Linguistics. Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57–66, Beijing, China. Association for Computational Linguistics. Kristina Toutanova, Victoria Lin, Wen-tau Yih, Hoifung Poon, and Chris Quirk. 2016. Compositional learning of embeddings for relation paths in knowledge base and text. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1434– 1444, Berlin, Germany. Association for Computational Linguistics. Th´eo Trouillon, Johannes Welbl, Sebastian Riedel, ´Eric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 2071–2080. Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Trans. Knowl. Data Eng., 29(12):2724–2743. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014a. Knowledge graph and text jointly embedding. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1591–1601, Doha, Qatar. Association for Computational Linguistics. 2159 Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014b. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, July 27 -31, 2014, Qu´ebec City, Qu´ebec, Canada., pages 1112–1119. Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016. From one point to a manifold: Knowledge graph embedding for precise link prediction. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pages 1315–1321. Qizhe Xie, Xuezhe Ma, Zihang Dai, and Eduard Hovy. 2017. An interpretable knowledge transfer model for knowledge base completion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 950–962, Vancouver, Canada. Association for Computational Linguistics. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. 
In Proceedings of the 3rd International Conference on Learning Representations, pages 1–12.

A Out-of-vocabulary Entities in KBC

Occasionally, a KBC test set may contain entities that never appear in the training data. Such out-of-vocabulary (OOV) entities pose a challenge to KBC systems; while some systems address this issue by explicitly learning an OOV entity vector (Dettmers et al., 2018), our approach is described below. For an incomplete triple ⟨h, r, ?⟩ in the test, if h is OOV, we replace it with the most frequent entity that has ever appeared as a head of relation r in the training data. If the gold tail entity is OOV, we use the zero vector for computing the score and the rank of the gold entity. Usually, OOV entities are rare and negligible in evaluation, except for the WN18RR test data, which contains about 6.7% triples with OOV entities. Here, we also report adjusted scores on WN18RR in the setting where all triples with OOV entities are removed from the test. The results are shown in Table 6.

Model      | MR   | MRR  | H10
JOINT      | 3317 | .493 | 57.2
BASE       | 3435 | .492 | 56.7
JOINT+COMP | 1507 | .367 | 58.7
BASE+COMP  | 1629 | .332 | 58.0

Table 6: Adjusted scores on WN18RR.
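A minimal sketch of this OOV policy, assuming the training triples are available as (head, relation, tail) tuples and the entity vectors as a dictionary; the helper names and toy data are illustrative only.

```python
from collections import Counter

import numpy as np

def resolve_head(h, r, train_triples, vocab):
    """If the query head h is OOV, fall back to the most frequent entity that
    has appeared as a head of relation r in the training data."""
    if h in vocab:
        return h
    heads = Counter(th for (th, tr, tt) in train_triples if tr == r)
    return heads.most_common(1)[0][0] if heads else None

def tail_vector(t, entity_vectors, dim):
    """OOV gold tails are scored with the zero vector, as described above."""
    return entity_vectors.get(t, np.zeros(dim))

# hypothetical usage
train_triples = [("paris", "capital_of", "france"), ("lyon", "located_in", "france")]
entity_vectors = {"paris": np.ones(4), "lyon": np.ones(4), "france": np.ones(4)}
vocab = set(entity_vectors)

print(resolve_head("marseille", "capital_of", train_triples, vocab))  # -> "paris"
print(tail_vector("nice", entity_vectors, dim=4))                     # -> zero vector
```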
2018
200
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2160–2170 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2160 Zero-Shot Transfer Learning for Event Extraction Lifu Huang1, Heng Ji1, Kyunghyun Cho2, Ido Dagan3, Sebastian Riedel4, Clare R. Voss5 1 Rensselaer Polytechnic Institute, 2 New York University, 3 Bar-Ilan University, 4 University College London, 5 US Army Research Lab 1 {huangl7, jih}@rpi.edu, 2 [email protected] 3 [email protected], 4 [email protected] 5 [email protected] Abstract Most previous supervised event extraction methods have relied on features derived from manual annotations, and thus cannot be applied to new event types without extra annotation effort. We take a fresh look at event extraction and model it as a generic grounding problem: mapping each event mention to a specific type in a target event ontology. We design a transferable architecture of structural and compositional neural networks to jointly represent and map event mentions and types into a shared semantic space. Based on this new framework, we can select, for each event mention, the event type which is semantically closest in this space as its type. By leveraging manual annotations available for a small set of existing event types, our framework can be applied to new unseen event types without additional manual annotations. When tested on 23 unseen event types, this zeroshot framework, without manual annotations, achieves performance comparable to a supervised model trained from 3,000 sentences annotated with 500 event mentions.1 1 Introduction The goal of event extraction is to identify event triggers and their arguments in unstructured text data, and then to assign an event type to each trigger and a semantic role to each argument. An example is shown in Figure 1. Traditional supervised methods have typically modeled this task of event 1The programs are publicly available for research purpose at: https://github.com/wilburOne/ZeroShotEvent extraction as a classification problem, by assigning event triggers to event types from a pre-defined fixed set. These methods rely heavily on manual annotations and features specific to each event type, and thus are not easily adapted to new event types without extra annotation effort. Handling new event types may even entail starting over, without being able to re-use annotations from previous event types. To make event extraction effective as new realworld scenarios emerge, we take a look at this task from the perspective of zero-shot learning, ZSL (Frome et al., 2013; Norouzi et al., 2013; Socher et al., 2013a). ZSL, as a type of transfer learning, makes use of separate, pre-existing classifiers to build a semantic, cross-concept space that maps between their respective classes. The resulting shared semantic space then allows for building a novel “zero-shot” classifier, i,e,, requiring no (zero) additional training examples, to handle unseen cases. We observe that each event mention has a structure consisting of a candidate trigger and arguments, with corresponding predefined name labels for the event type and argument roles. We propose to enrich the semantic representations of each event mention and event type with rich structures, and determine the type based on the semantic similarity between an event mention and each event type defined in a target ontology. Let’s consider two example sentences: E1. 
The Government of China has ruled Tibet since 1951 after dispatching troops to the Himalayan region in 1950. E2. Iranian state television stated that the conflict between the Iranian police and the drug smugglers took place near the town of mirjaveh. In E1, as also diagrammed in Figure 1, dis2161 Figure 1: Event Mention Example: dispatching is the trigger of a Transport-Person event with four arguments: the solid lines show the event annotations for the sentence while the dotted lines show the Abstract Meaning Representation parsing output. patching is the trigger for the event mention of type Transport Person and in E2, conflict is the trigger for the event mention of type Attack. We make use of Abstract Meaning Representations (AMR) (Banarescu et al., 2013) to identify the candidate arguments and construct event mention structures as shown in Figure 2 (top). Figure 2 (bottom) also shows event type structures defined in the Automatic Content Extraction (ACE) guideline.2 We can see that a trigger and its event type name usually have some shared meaning. Furthermore, their structures also tend to be similar: a Transport Person event typically involves a Person as its patient role, while an Attack event involves a Person or Location as an Attacker. This observation matches the theory by Pustejovsky (1991): “the semantics of an event structure can be generalized and mapped to event mention structures in a systematic and predictable way”. Figure 2: Examples of Event Mention Structures and Type Structures from ACE. Inspired by this theory, for the first time, we model event extraction as a generic grounding problem, by mapping each mention to its semantically closest event type. Given an event ontology, 2https://en.wikipedia.org/wiki/Automatic content extraction where each event type structure is well-defined, we will refer to the event types for which we have annotated event mentions as seen types, while those without annotations as unseen types. Our goal is to learn a generic mapping function independent of event types, which can be trained from annotations for a limited number of seen event types and then used for any new unseen event types. We design a transferable neural architecture, which jointly learns and maps the structural representations of event mentions and types into a shared semantic space, by minimizing the distance between each event mention and its corresponding type. For event mentions with unseen types, their structures will be projected into the same semantic space using the same framework and assigned types with top-ranked similarity values. To summarize, to apply our new zero-shot transfer learning framework to any new unseen event types, we only need (1) a structured definition of the unseen event type (its type name along with role names for its arguments, from the event ontology); and (2) some annotations for one or a few seen event types. Without requiring any additional manual annotations for the new unseen types, our ZSL framework achieves performance comparable to supervised methods trained from a substantial amount of training data for the same types. 2 Approach Overview Briefly here, we overview the phases involved in building our framework’s shared semantic space that, in turn, is the basis for the ZSL framework. Given a sentence s, we start by identifying candidate triggers and arguments based on AMR parsing (Wang et al., 2015b). 
For the example shown in Figure 1, we identify dispatching as a trigger, and its candidate arguments: China, troops, Himalayan and 1950. The details will be described in Section 3.

Figure 3: Architecture Overview. The blue circles denote event types and event type representations. The dark grey diamonds and circles denote triggers and trigger representations from the training set. The light grey diamonds and circles denote triggers and trigger representations from the testing set.

After this identification phase, we use our new neural architecture, as depicted in Figure 3, to classify triggers into event types. (The classification of arguments into roles follows the same pipeline.) For each trigger t, e.g., dispatch-01, we determine its type by comparing its semantic representation with that of any event type in the event ontology. In order to incorporate the contexts into the semantic representation of t, we build a structure St using AMR as shown in Figure 3. Each structure is composed of a set of tuples, e.g., ⟨dispatch-01, :ARG0, China⟩. We use a matrix to represent each AMR relation, composing its semantics with the two concepts of each tuple (in Section 4), and feed all tuple representations into a CNN to generate a dense vector representation VSt for the event mention structure (in Section 5.1). Given a target event ontology, for each type y, e.g., Transport Person, we construct a type structure Sy consisting of its predefined roles, and use a tensor to denote the implicit relation between any type and argument role. We compose the semantics of type and argument role with the tensor for each tuple, e.g., ⟨Transport Person, Destination⟩ (in Section 4). Then we generate the event type structure representation VSy using the same CNN (in Section 5.1). By minimizing the semantic distance between dispatch-01 and Transport Person using their dense vectors, VSt and VSy respectively, we jointly map the representations of event mentions and event types into a shared semantic space, where each mention is closest to its annotated type. After the training that completes the construction of the semantic space, the compositional functions and CNNs are then used to project any new event mention (e.g., donate-01) into the semantic space and find its closest event type (e.g., Donation) (in Section 5.3). In the next sections we will elaborate each step in great detail.

3 Trigger and Argument Identification

Similar to Huang et al. (2016), we identify candidate triggers and arguments based on AMR Parsing (Wang et al., 2015b) and apply the same word sense disambiguation (WSD) tool (Zhong and Ng, 2010) to disambiguate word senses and link each sense to OntoNotes, as shown in Figure 1. Given a sentence, we consider all noun and verb concepts that can be mapped to OntoNotes senses by WSD as candidate event triggers. In addition, the concepts that can be matched with verbs or nominal lexical units in FrameNet (Baker et al., 1998) are also considered as candidate triggers. For each candidate trigger, we consider any concepts that are involved in a subset of AMR relations as candidate arguments.3 We manually select this subset of AMR relations, which are useful for capturing generic relations between event triggers and arguments, as shown in Table 1.

Categories     | Relations
Core roles     | ARG0, ARG1, ARG2, ARG3, ARG4
Non-core roles | mod, location, instrument, poss, manner, topic, medium, prep-X
Temporal       | year, duration, decade, weekday, time
Spatial        | destination, path, location

Table 1: Event-Related AMR Relations.
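A minimal sketch of this identification step, assuming the AMR graph is available as (source, relation, target) edges; the OntoNotes and FrameNet lookups are stand-in predicates, not the actual WSD tool or lexicon interfaces.

```python
# Relations from Table 1 that connect a trigger to its candidate arguments
# (prep-X stands for the family of prepositional relations).
EVENT_RELATIONS = {
    "ARG0", "ARG1", "ARG2", "ARG3", "ARG4",                       # core roles
    "mod", "location", "instrument", "poss", "manner",            # non-core roles
    "topic", "medium", "prep-X",
    "year", "duration", "decade", "weekday", "time",              # temporal
    "destination", "path",                                        # spatial
}

def candidate_arguments(amr_edges, trigger):
    """Given AMR edges as (source_concept, relation, target_concept) triples,
    return the concepts attached to the trigger via a Table 1 relation."""
    return [tgt for (src, rel, tgt) in amr_edges
            if src == trigger and rel.lstrip(":") in EVENT_RELATIONS]

def candidate_triggers(amr_concepts, has_ontonotes_sense, in_framenet):
    """Noun/verb concepts linkable to an OntoNotes sense by WSD, plus concepts
    matching verbal or nominal FrameNet lexical units, are candidate triggers."""
    return [c for c in amr_concepts if has_ontonotes_sense(c) or in_framenet(c)]

# toy usage on the Figure 1 example (simplified, hand-written AMR edges)
edges = [("dispatch-01", ":ARG0", "China"), ("dispatch-01", ":ARG1", "troops"),
         ("dispatch-01", ":destination", "Himalayan"), ("dispatch-01", ":time", "1950")]
print(candidate_arguments(edges, "dispatch-01"))
# ['China', 'troops', 'Himalayan', '1950']
print(candidate_triggers(["dispatch-01", "China"],
                         has_ontonotes_sense=lambda c: c.endswith("-01"),  # stand-in
                         in_framenet=lambda c: False))                     # stand-in
```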
4 Trigger and Type Structure Composition As Figure 3 shows, for each candidate trigger t, we construct its event mention structure St based on its candidate arguments and AMR parsing. For each type y in the target event ontology, we construct a structure Sy by including its pre-defined roles and using its type as the root. Each St or Sy is composed of a collection of tuples. For each event mention structure, a tuple consists of two AMR concepts and an AMR relation. For each event type structure, a tuple consists of a type name and an argument role name. Next we will describe how to compose semantic representations for event mention and event type respectively based on these structures. Event Mention Structure For each tuple u = ⟨w1, λ, w2⟩in an event mention structure, we use a matrix to represent each AMR relation λ, and compose the semantics of λ between two concepts w1 and w2 as: Vu = [V ′ w1; V ′ w2] = f([Vw1; Vw2] · Mλ) where Vw1, Vw2 ∈Rd are the vector representations of words w1 and w2. d is the dimension size of each word vector. [ ; ] denotes the concatenation of two vectors. Mλ ∈R2d×2d is the matrix representation for AMR relation λ. Vu is the composition representation of tuple u, which consists of two updated vector representations V ′ w1, V ′ w2 for w1 and w2 by incorporating the semantics of λ. Event Type Structure For each tuple u ′ = ⟨y, r⟩ in an event type structure, where y denotes the 3On the whole ACE2005 corpus, using the AMR parser (Wang et al., 2015b), the coverage for trigger identification is 89.4% and the coverage for argument candidate identification is 66.0%. event type and r denotes an argument role, following Socher et al. (2013b), we assume an implicit relation exists between any pair of type and argument, and use a single and powerful tensor to represent the implicit relation: Vu′ = [V ′ y; V ′ r ] = f([Vy; Vr]T · U [1:2d] · [Vy; Vr]) where Vy and Vr are vector representations for y and r. U [1:2d] ∈R2d×2d×2d is a 3-order tensor. V ′ u is the composition representation of tuple u ′, which consists of two updated vector representations V ′ y, V ′ r for y and r by incorporating the semantics of their implicit relation U [1:2d]. 5 Trigger and Argument Classification 5.1 Trigger Classification for Seen Types Both event mention and event type structures are relatively simple and can be represented with a set of tuples. CNNs have been demonstrated effective at capturing sentence level information by aggregating compositional n-gram representations. In order to generate structure-level representations, we use CNN to learn to aggregate all edge and tuple representations. Input layer is a sequence of tuples, where the order of tuples is from top to bottom in the structure. Each tuple is represented by a d × 2 dimensional vector, thus each mention structure and each type structure are represented as a feature map of dimensionality d × 2h∗and d × 2p∗respectively, where h∗and p∗are the maximal number of tuples for event mention and type structures. We use zero-padding to the right to make the volume of all input structures consistent. Convolution layer Take St with h∗tuples: u1, u2, ..., uh∗as an example. The input matrix of St is a feature map of dimensionality d × 2h∗. We make ci as the concatenated embeddings of n continuous columns from the feature map, where n is the filter width and 0 < i < 2h∗+ n. 
A convolution operation involves a filter W ∈Rnd, which is applied to each sliding window ci: c ′ i = tanh(W · ci + b) where c ′ i is the new feature representation, and b ∈Rd is a biased vector. We set filter width as 2 and stride as 2 to make the convolution function operate on each tuple with two input columns. 2164 Max-Pooling: All tuple representations c ′ i are used to generate the representation of the input sequence by max-pooling. Learning: For each event mention t, we name the correct type as positive and all the other types in the target event ontology as negative. To train the composition functions and CNN, we first consider the following hinge ranking loss: L1(t, y) = X j∈Y, j̸=y max{0, m −Ct,y + Ct,j} Ct,y = cos([Vt; VSt], [Vy; VSy]) where y is the positive event type for t. Y is the type set of the target event ontology. [Vt; VSt] denotes the concatenation of representations of t and St. j is a negative event type for t from Y . m is a margin. Ct,y denotes the cosine similarity between t and y. The hinge loss is commonly used in zero-shot visual object classification task. However, it tends to overfit the seen types in our experiments. While clever data augmentation can help alleviate overfitting, we design two strategies: (1) we add “negative” event mentions into the training process. Here a “negative” event mention means that the mention has no positive event type among all seen types, namely it belongs to Other. (2) we design a new loss function as follows: Ld 1(t, y) = ( max j∈Y,j̸=y max{0, m −Ct,y + Ct,j}, y ̸= Other max j∈Y ′ ,j̸=y′ max{0, m −Ct,y′ + Ct,j}, y = Other where Y is the type set of the event ontology. Y ′ is the seen type set. y is the annotated type. y ′ is the type which ranks the highest among all event types for event mention t, while t belongs to Other. By minimizing Ld 1, we can learn the optimized model which can compose structure representations and map both event mention and types into a shared semantic space, where the positive type ranks the highest for each mention. 5.2 Argument Classification for Seen Types For each mention, we map each candidate argument to a specific role based on the semantic similarity of the argument path. Take E1 as an example. China is matched to Agent based on the semantic similarity between dispatch-01→ :ARG0→China and Transport-Person→Agent. Given a trigger t and a candidate argument a, we first extract a path Sa = (u1, u2, ..., up), which connects t and a and consists of p tuples. Each predefined role r is also represented as a structure by incorporating the event type, Sr = ⟨y, r⟩. We apply the same framework to take the sequence of tuples contained in Sa and Sr into a weightsharing CNN to rank all possible roles for a. Ld 2(a, r) =    max j∈Ry,j̸=r max{0, m −Ca,r + Ca,j} r ̸= Other max j∈RY ′ ,j̸=r′ max{0, m −Ca,r′ + Ca,j} r|y = Other where Ry and RY ′ are the set of argument roles which are predefined for trigger type y and all seen types Y ′. r is the annotated role and r ′ is the argument role which ranks the highest for a when a or y is annotated as Other. In our experiments, we sample various size of “negative” training data for trigger and argument labeling respectively. In the following section, we describe how the negative training instances are generated. 
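The composition functions and the ranking loss of this section can be sketched as follows. The sketch simplifies in a few labeled places: f is taken to be tanh, the CNN aggregation (filter width 2, stride 2, then max-pooling) is replaced by an element-wise max over the composed tuple vectors, similarity is computed between structure representations only rather than the concatenations [Vt; VSt] and [Vy; VSy], and the margin value is illustrative.

```python
import numpy as np

def compose_mention_tuple(v_w1, v_w2, M_rel):
    """Event mention tuple: f([V_w1; V_w2] . M_rel), with M_rel a 2d x 2d matrix
    representing the AMR relation; f is tanh here (an assumption)."""
    return np.tanh(np.concatenate([v_w1, v_w2]) @ M_rel)

def compose_type_tuple(v_y, v_r, U):
    """Event type tuple: f([V_y; V_r]^T . U . [V_y; V_r]), where U is a 3rd-order
    tensor of shape (2d, 2d, 2d) encoding the implicit type-role relation."""
    x = np.concatenate([v_y, v_r])
    return np.tanh(np.einsum("i,ijk,k->j", x, U, x))

def structure_repr(tuple_vecs):
    """Aggregate tuple representations; the paper uses a CNN followed by
    max-pooling, simplified here to an element-wise max over tuples."""
    return np.max(np.stack(tuple_vecs), axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def hinge_ranking_loss(sim_pos, sims_neg, margin=0.1):
    """max over negative types j of max(0, m - C_{t,y} + C_{t,j}), as in Sec. 5.1."""
    return max(0.0, max(margin - sim_pos + s for s in sims_neg))

# toy usage with random parameters (d = 4, so concatenated vectors have length 8)
rng = np.random.default_rng(0)
d = 4
v = lambda: rng.normal(size=d)
M_rel = rng.normal(size=(2 * d, 2 * d))
U = rng.normal(size=(2 * d, 2 * d, 2 * d))

mention = structure_repr([compose_mention_tuple(v(), v(), M_rel) for _ in range(3)])
type_a = structure_repr([compose_type_tuple(v(), v(), U) for _ in range(2)])
type_b = structure_repr([compose_type_tuple(v(), v(), U) for _ in range(2)])

# zero-shot style ranking: the type with the highest cosine similarity wins
print(cosine(mention, type_a), cosine(mention, type_b))
print(hinge_ranking_loss(cosine(mention, type_a), [cosine(mention, type_b)]))
```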
5.3 Zero-Shot Classification for Unseen Types During test, given a new event mention t ′, we compute its mention structure representation for St′ and all event type structure representations for SY = {Sy1, Sy2, ..., Syn} using the same parameters trained from seen types. Then we rank all event types based on their similarity scores with mention t ′. The top ranked prediction for t ′ from the event type set, denoted as by(t ′, 1), is given by: by(t ′, 1) = arg max y∈Y cos([Vt′; VSt′ ], [Vy; VSy]) Moreover, by(t ′, k) denotes the kth most probable event type predicted for t ′. We will investigate the event extraction performance based on the topk predicted event types. After determining the type y ′ for mention t ′, for each candidate argument, we adopt the same ranking function to find the most appropriate role from the role set defined for y ′. 6 Experiments 6.1 Hyper-Parameters We used the English Wikipedia dump to learn trigger sense and argument embeddings based on 2165 the Continuous Skip-gram model (Mikolov et al., 2013). Table 2 shows the hyper-parameters we used to train models. Parameter Name Value Word Sense Embedding Size 200 Initial Learning Rate 0.1 # of Filters in Convolution Layer 500 Maximal # of Tuples for Mention Structure 10 Maximal # of Tuples for Argument Path 5 Maximal # of Tuples for Event Type Structure 5 Maximal # of Tuples for Argument Role Path 1 Table 2: Hyper-parameters. 6.2 ACE Event Classification Setting N Seen Types for Training/Dev A 1 Attack B 3 Attack, Transport, Die C 5 Attack, Transport, Die, Meet, Arrest-Jail D 10 Attack, Transport, Die, Meet, Sentence, Arrest-Jail, Transfer-Money, Elect, Transfer-Ownership, End-Position Table 3: Seen Types in Each Experiment Setting. We first used the ACE event schema 4 as our target event ontology and assumed the boundaries of triggers and arguments as given. Of the 33 ACE event types, we selected the top-N most popular event types from ACE05 data as “seen” types, and used 90% event annotations of these for training and 10% for development. We set N as 1, 3, 5, 10 respectively. We tested the zero-shot classification performance on the annotations for the remaining 23 unseen types. Table 3 shows the types that we selected for training in each experiment setting. The negative event mentions and arguments that belong to Other were sampled from the output of the system developed by Huang et al. (2016) based on ACE05 training sentences, which groups all candidate triggers and arguments into clusters based on semantic representations and assigns a type/role name to each cluster. We sampled the negative event mentions from the clusters (e.g., Build, Threaten) which do not map to ACE event types. We sampled the negative arguments from the arguments of negative event mentions. Table 4 shows the statistics of the training, development and testing data sets. To show the effectiveness of structural similarity in our approach, we designed a baseline, WSD4ACE event schema specification is at: https://www.ldc.upenn.edu/sites/www.ldc.upenn.edu/files/englishevents-guidelines-v5.4.3.pdf Embedding, which directly maps event mentions and arguments to their candidate types and roles using our pre-trained word sense embeddings. Table 5 makes the contrast clear: structural similarity (our approach) is much more effective than lexical similarity (baseline) for both trigger and argument classification. Also, as the number of seen types in training increases, the performance of the transfer model improves. 
We further evaluated the performance of our transfer approach on similar and distinct unseen types. The 33 subtypes defined in ACE fall within 8 coarse-grained main types, such as Life and Justice. Each subtype belongs to one main type. Subtypes that belong to the same main type tend to have similar structures. For example, TrialHearing and Charge-Indict have the same set of argument roles. For training our transfer model, we selected 4 subtypes of Justice: Arrest-Jail, Convict, Charge-Indict, Execute. For testing, we selected 3 other subtypes of Justice: Sentence, Appeal, Release-Parole. Additionally, we selected one subtype from each of the other seven main types for comparison. Table 6 shows that, when testing on a new unseen type, the more similar it is to the seen types, the better performance is achieved. 6.3 ACE Event Identification & Classification The ACE2005 corpus includes the richest event annotations currently available for 33 types. However, in real-world scenarios, there may be thousands of event types of interest. To enrich the target event ontology and assess our transferable neural architecture on a large number of unseen types, when trained on limited annotations of seen types, we manually constructed a new event ontology which combined 33 ACE event types and argument roles, and 1,161 frames from FrameNet, except for the most generic frames such as Entity and Locale. Some ACE event types were easily aligned to frames, e.g., Die aligned to Death. Some frames were instead more accurately treated as inheritors of ACE types, such as Suicide-Attack, which inherits from Attack. We manually mapped the selected frames to ACE types. We then compared our approach with the following state-of-the-art supervised methods: • LSTM: A long short-term memory neural network (Hochreiter and Schmidhuber, 1997) based on distributed semantic features, similar 2166 Setting Training Development Test # of Types, Roles # of Events # of Arguments # of Events # of Arguments # of Types/Roles # of Events # of Arguments A 1, 5 953/900 894/1,097 105/105 86/130 23/59 753 879 B 3, 14 1,803/1,500 2,035/1,791 200/200 191/237 C 5, 18 2,033/1,300 2,281/1,503 225/225 233/241 D 10, 37 2537/700 2,816/879 281/281 322/365 Table 4: Statistics for Positive/Negative Instances in Training, Dev, and Test Sets for Each Experiment. Setting Method Hit@k Trigger Classification (%) Hit@k Argument Classification (%) k=1 k=3 k=5 k=1 k=3 k=5 WSD-Embedding 1.7 13.0 22.8 2.4 2.8 2.8 A Our Approach 4.0 23.8 32.5 1.3 3.4 3.6 B 7.0 12.5 36.8 3.5 6.0 6.3 C 20.1 34.7 46.5 9.6 14.7 15.7 D 33.5 51.4 68.3 14.7 26.5 27.7 Table 5: Comparison between Structural Representation (Our Approach) and Word Sense Embedding based Approaches on Hit@K Accuracy (%) for Trigger and Argument Classification. Type Subtype Hit@k Trigger Classification 1 3 5 Justice Sentence 68.3 68.3 69.5 Justice Appeal 67.5 97.5 97.5 Justice Release-Parole 73.9 73.9 73.9 Conflict Attack 26.5 44.5 46.7 TransactionTransfer-Money 48.4 68.9 79.5 Business Start-Org 0 33.3 66.7 Movement Transport 2.6 3.7 7.8 Personnel End-Position 9.1 50.4 53.7 Contact Phone-Write 60.8 88.2 90.2 Life Injure 87.6 91.0 91.0 Table 6: Performance on Various Types Using Justice Subtypes for Training to (Feng et al., 2016). • Joint: A structured perceptron model based on symbolic semantic features (Li et al., 2013). 
For our approach, we followed the experiment setting D in the previous section, using the same training and development data sets for the 10 seen types, but targeted all 1,194 event types in our new event ontology, instead of just the 33 ACE event types. For evaluation, we sampled 150 sentences from the remaining ACE05 data, including 129 annotated event mentions for the 23 unseen types. For both LSTM and Joint approaches, we used the entire ACE05 annotated data for 33 ACE event types for training except for the held-out 150 evaluation sentences. We first identified the candidate triggers and arguments, then mapped each of these to the target event ontology. We evaluated our model on their extracting of event mentions which were classified into 23 testing ACE types. Table 7 shows the performance. To further demonstrate the effectiveness of zero-shot learning in our framework and its impact in saving human annotation effort, we used the supervised LSTM approach for comparison. The training data of LSTM contained 3,464 sentences with 905 annotated event mentions for the 23 unseen event types. We divided these event annotations into 10 subsets and successively added one subset at a time (10% of annotations) into the training data of LSTM. Figure 4 shows the LSTM learning curve. By contrast, without any annotated mentions on the 23 unseen test event types in its training set, our transfer learning approach achieved performance comparable to that of the LSTM, which was trained on 3,000 sentences5 with 500 annotated event mentions. Figure 4: Comparison between Our Approach and Supervised LSTM model on 23 Unseen Event Types. 5The 3,000 sentences included all the sentences which even have not any event annotations. 2167 Method Trigger Identification Trigger Identification + Classification Arg Identification Arg Identification + Classification P R F P R F P R F P R F Supervised LSTM 94.7 41.8 58.0 89.4 39.5 54.8 47.8 22.6 30.6 28.9 13.7 18.6 Supervised Joint 55.8 67.4 61.1 50.6 61.2 55.4 36.4 28.1 31.7 33.3 25.7 29.0 Transfer 85.7 41.2 55.6 75.5 36.3 49.1 28.2 27.3 27.8 16.1 15.6 15.8 Table 7: Event Trigger and Argument Extraction Performance (%) on Unseen ACE Types. 6.4 Impact of AMR Recall that we used AMR parsing output to identify triggers and arguments in constructing event structures. To assess the impact of the AMR parser (Wang et al., 2015a) on event extraction, we chose a subset of the ERE (Entity, Relation, Event) corpus (Song et al., 2015) which has ground-truth AMR annotations. This subset contains 304 documents with 1,022 annotated event mentions of 40 types. We selected the top-6 most popular event types (Arrest-Jail, Execute, Die, Meet, Sentence, Charge-Indict) with manual annotations of 548 event mentions as seen types. We sampled 500 negative event mentions from distinct types of clusters generated from the system (Huang et al., 2016) based on ERE training sentences. We combined the annotated events for seen types and the negative event mentions, and used 90% for training and 10% for development. For evaluation, we selected 200 sentences from the remaining ERE subset, which contains 128 Attack event mentions and 40 Convict event mentions. Table 8 shows the event extraction performances based on groundtruth AMR and system AMR respectively. We also compared AMR analyses with Semantic Role Labeling (SRL) output (Palmer et al., 2010) by keeping only the core roles (e.g., :ARG0, :ARG1) from AMR annotations. 
As Table 8 shows, comparing the full AMR (top row) to this SRL proxy (middle row), the fine-grained AMR semantic relations such as :location, :instrument appear to be more informative for inferring event argument role labeling. Method Trigger Labeling Argument Labeling P R F1 P R F1 Perfect AMR 79.1 47.1 59.1 25.4 21.4 23.2 Perfect AMR with Core Roles only (SRL) 77.1 47.0 58.4 19.7 16.9 18.2 System AMR 85.7 32.0 46.7 22.6 15.8 18.6 Table 8: Impact of AMR and Semantic Roles on Trigger and Argument Extraction (%). 7 Related Work Most previous event extraction methods have been based on supervised learning, using either symbolic features (Ji and Grishman, 2008; Miwa et al., 2009; Liao and Grishman, 2010; Liu et al., 2010; Hong et al., 2011; McClosky et al., 2011; Riedel and McCallum, 2011; Li et al., 2013; Liu et al., 2016) or distributional features (Chen et al., 2015; Nguyen and Grishman, 2015; Feng et al., 2016; Nguyen et al., 2016) derived from a large amount of training data, and treating event types and argument role labels as symbols. These approaches can achieve high quality for known event types, but cannot be applied to new types without additional annotation effort. In contrast, we provide a new angle on event extraction, modeling it as a generic grounding task by taking advantage of rich semantics of event types. Some other IE paradigms such as Open IE (Etzioni et al., 2005; Banko et al., 2007, 2008; Etzioni et al., 2011; Ritter et al., 2012), Preemptive IE (Shinyama and Sekine, 2006), Ondemand IE (Sekine, 2006), Liberal IE (Huang et al., 2016, 2017), and semantic frame-based event discovery (Kim et al., 2013) can discover many events without pre-defined event schema. These paradigms however rely on information redundancy, and so they are not effective when the input data only consists of a few sentences. Our work can discover events from any size of input corpus and can also be complementary with these paradigms. Our event extraction paradigm is similar to the task of entity linking (Ji and Grishman, 2011) in semantic mapping. However, entity linking aims to map entity mentions to the same concept, while our framework maps each event mention to a specific category. In addition, Bronstein et al. (2015) and Peng et al. (2016) employ an eventindependent similarity-based function for event trigger detection, which follows few-shot learning setting and requires some trigger examples as seeds. Lu and Roth (2012) design a structure pref2168 erence modeling framework, which can automatically predict argument roles without any annotated data, but it relies on manually constructed patterns. Zero-Shot learning has been widely applied in visual object classification (Frome et al., 2013; Norouzi et al., 2013; Socher et al., 2013a; Chen et al., 2017; Li et al., 2017; Xian et al., 2017; Changpinyo et al., 2017), fine-grained name tagging (Ma et al., 2016; Qu et al., 2016), relation extraction (Verga et al., 2016; Levy et al., 2017), semantic parsing (Bapna et al., 2017) and domain adaptation (Romera-Paredes and Torr, 2015; Kodirov et al., 2015; Peng et al., 2017). In contrast to these tasks, for our case, the number of seen types in event extraction with manual annotations is quite limited. The most popular event schemas, such as ACE, define 33 event types while most visual object training sets contain more than 1,000 types. Therefore, methods proposed for zero-shot visual-object classification cannot be directly applied to event extraction due to overfitting. 
In this work, we designed a new loss function by creating “negative” training instances to avoid overfitting. 8 Conclusions and Future Work In this work, we take a fresh look at the event extraction task and model it as a generic grounding problem. We propose a transferable neural architecture, which leverages existing humanconstructed event schemas and manual annotations for a small set of seen types, and transfers the knowledge from the existing types to the extraction of unseen types, to improve the scalability of event extraction as well as to save human effort. To the best of our knowledge, this work is the first time that zero-shot learning has been applied to event extraction. Without any annotation, our approach can achieve performance comparable to state-of-the-art supervised models trained on a large amount of labeled data. In the future, we will extend this framework to other Information Extraction problems. Acknowledgments This material is based upon work supported by United States Air Force under Contract No. FA8650-17-C-7715 and ARL NS-CTA No. W911NF-09-2-0053. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force. or the United States Government. The United States Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. References Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. In Proc. COLING1998. L. Banarescu, C. Bonial, S. Cai, M. Georgescu, K. Griffitt, U. Hermjakob, K. Knight, P. Koehn, M. Palmer, and N. Schneider. 2013. Abstract meaning representation for sembanking. In Proc. ACL2013 Workshop on Linguistic Annotation and Interoperability with Discourse. M. Banko, M. Cafarella, S. Soderland, M. Broadhead, and O. Etzioni. 2007. Open information extraction for the web. In Proc. IJCAI2007. M. Banko, O. Etzioni, and T. Center. 2008. The tradeoffs between open and traditional relation extraction. In Proc. ACL-HLT2008. Ankur Bapna, Gokhan Tur, Dilek Hakkani-Tur, and Larry Heck. 2017. Towards zero-shot frame semantic parsing for domain scaling. arXiv preprint arXiv:1707.02363 . Ofer Bronstein, Ido Dagan, Qi Li, Heng Ji, and Anette Frank. 2015. Seed-based event trigger labeling: How far can event descriptions get us? In Proc. ACL2015. Soravit Changpinyo, Wei-Lun Chao, and Fei Sha. 2017. Predicting visual exemplars of unseen classes for zero-shot learning. In Proc. ICCV2017. Long Chen, Hanwang Zhang, Jun Xiao, Wei Liu, and Shih-Fu Chang. 2017. Zero-shot visual recognition using semantics-preserving adversarial embedding network. arXiv preprint arXiv:1712.01928 . Y. Chen, L. Xu, K. Liu, D. Zeng, and J. Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proc. ACL2015. O. Etzioni, M. Cafarella, D. Downey, A. Popescu, T. Shaked, S. Soderland, D. Weld, and A. Yates. 2005. Unsupervised named-entity extraction from the web: An experimental study. Artificial Intelligence . O. Etzioni, A. Fader, J. Christensen, S. Soderland, and M. Mausam. 2011. Open information extraction: The second generation. In Proc. IJCAI2011. X. Feng, L. Huang, D. Tang, B. Qin, H. Ji, and T. Liu. 2016. A language-independent neural network for event detection. In Proc. ACL2016. 2169 A. Frome, G. Corrado, J. Shlens, S. Bengio, J. Dean, and T. Mikolov. 2013. Devise: A deep visualsemantic embedding model. In Proc. NIPS2013. S. 
Hochreiter and J. Schmidhuber. 1997. Long shortterm memory. Neural computation . Y. Hong, J. Zhang, B. Ma, J. Yao, G. Zhou, and Q. Zhu. 2011. Using cross-entity inference to improve event extraction. In Proc. ACL2011. L. Huang, T. Cassidy, X. Feng, H. Ji, C. Voss, J. Han, and A. Sil. 2016. Liberal event extraction and event schema induction. In Proc. ACL2016. L. Huang, J. May, X. Pan, H. Ji, X. Ren, J. Han, L. Zhao, and J. Hendler. 2017. Liberal entity extraction: Rapid construction of fine-grained entity typing systems. Big Data . H. Ji and R. Grishman. 2008. Refining event extraction through cross-document inference. In Proc. ACL2008. Heng Ji and Ralph Grishman. 2011. Knowledge base population: Successful approaches and challenges. In Proc. ACL-HLT2011. H. Kim, X. Ren, Y. Sun, C. Wang, and J. Han. 2013. Semantic frame-based document representation for comparable corpora. In Proc. ICDM2013. Elyor Kodirov, Tao Xiang, Zhenyong Fu, and Shaogang Gong. 2015. Unsupervised domain adaptation for zero-shot learning. In Proc. ICCV2015. Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. arXiv preprint arXiv:1706.04115 . Q. Li, H. Ji, and L. Huang. 2013. Joint event extraction via structured prediction with global features. In Proc. ACL2013. Yanan Li, Donghui Wang, Huanhang Hu, Yuetan Lin, and Yueting Zhuang. 2017. Zero-shot recognition using dual visual-semantic mapping paths. arXiv preprint arXiv:1703.05002 . S. Liao and R. Grishman. 2010. Using document level cross-event inference to improve event extraction. In Proc. ACL2010. B. Liu, L. Qian, H. Wang, and G. Zhou. 2010. Dependency-driven feature-based learning for extracting protein-protein interactions from biomedical text. In Proc. COLING2010. S. Liu, Y. Chen, S. He, K. Liu, and J. Zhao. 2016. Leveraging framenet to improve automatic event detection. In Proc. ACL2016. Wei Lu and Dan Roth. 2012. Automatic event extraction with structured preference modeling. In Proc. ACL2012. Y. Ma, E. Cambria, and S. Gao. 2016. Label embedding for zero-shot fine-grained named entity typing. In Proc. COLING2016. D. McClosky, M. Surdeanu, and C. D. Manning. 2011. Event extraction as dependency parsing. In Proc. ACL2011. T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013. Efficient estimation of word representations in vector space. CoRR abs/1301.3781. M. Miwa, R. Stre, Y. Miyao, and J. Tsujii. 2009. A rich feature vector for protein-protein interaction extraction from multiple corpora. In Proc. EMNLP2009. T. Nguyen, K. Cho, and R. Grishman. 2016. Joint event extraction via recurrent neural networks. In Proc. NAACL-HLT2016. T. Nguyen and R. Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proc. ACL2015. M. Norouzi, T. Mikolov, S. Bengio, Y. Singer, J. Shlens, A. Frome, G. Corrado, and J. Dean. 2013. Zero-shot learning by convex combination of semantic embeddings. arXiv preprint arXiv:1312.5650 . M. Palmer, D. Gildea, and N. Xue. 2010. Semantic role labeling. Synthesis Lectures on Human Language Technologies . Haoruo Peng, Yangqiu Song, and Dan Roth. 2016. Event detection and co-reference with minimal supervision. In Proc. EMNLP2016. Kuan-Chuan Peng, Ziyan Wu, and Jan Ernst. 2017. Zero-shot deep domain adaptation. arXiv preprint arXiv:1707.01922 . J. Pustejovsky. 1991. The syntax of event structure. Cognition . L. Qu, G. Ferraro, L. Zhou, W. Hou, and T. Baldwin. 2016. Named entity recognition for novel types by transfer learning. In Proc. ACL2016. S. 
Riedel and A. McCallum. 2011. Fast and robust joint models for biomedical event extraction. In Proc. EMNLP2011. A. Ritter, O. Etzioni, and S. Clark. 2012. Open domain event extraction from twitter. In Proc. SIGKDD2012. Bernardino Romera-Paredes and Philip Torr. 2015. An embarrassingly simple approach to zero-shot learning. In Proc. ICML2015. S. Sekine. 2006. On-demand information extraction. In Proc. COLING-ACL2006. Y. Shinyama and S. Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In Proc. HLT-NAACL2006. 2170 R. Socher, M. Ganjoo, C. Manning, and A. Ng. 2013a. Zero-shot learning through cross-modal transfer. In Proc. NIPS2013. R. Socher, A. Perelygin, J. Wu, J. Chuang, C. Manning, A. Ng, and C. Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. EMNLP2013. Zhiyi Song, Ann Bies, Stephanie Strassel, Tom Riese, Justin Mott, Joe Ellis, Jonathan Wright, Seth Kulick, Neville Ryant, and Xiaoyi Ma. 2015. From light to rich ere: Annotation of entities, relations, and events. In Proc. NAACL-HLT2015 Workshop on on EVENTS. P. Verga, D. Belanger, E. Strubell, B. Roth, and A. McCallum. 2016. Multilingual relation extraction using compositional universal schema. In Proc. NAACL2016. C. Wang, N. Xue, and S. Pradhan. 2015a. Boosting transition-based amr parsing with refined actions and auxiliary analyzers. In Proc. ACL2015. Chuan Wang, Nianwen Xue, Sameer Pradhan, and Sameer Pradhan. 2015b. A transition-based algorithm for amr parsing. In HLT-NAACL. Yongqin Xian, Bernt Schiele, and Zeynep Akata. 2017. Zero-shot learning-the good, the bad and the ugly. arXiv preprint arXiv:1703.04394 . Z. Zhong and H. T. Ng. 2010. It makes sense: A widecoverage word sense disambiguation system for free text. In Proc. ACL2010.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2171–2181 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2171 Recursive Neural Structural Correspondence Network for Cross-domain Aspect and Opinion Co-Extraction Wenya Wang†‡ and Sinno Jialin Pan† †Nanyang Technological University, Singapore ‡SAP Innovation Center Singapore {wa0001ya, sinnopan}@ntu.edu.sg Abstract Fine-grained opinion analysis aims to extract aspect and opinion terms from each sentence for opinion summarization. Supervised learning methods have proven to be effective for this task. However, in many domains, the lack of labeled data hinders the learning of a precise extraction model. In this case, unsupervised domain adaptation methods are desired to transfer knowledge from the source domain to any unlabeled target domain. In this paper, we develop a novel recursive neural network that could reduce domain shift effectively in word level through syntactic relations. We treat these relations as invariant “pivot information” across domains to build structural correspondences and generate an auxiliary task to predict the relation between any two adjacent words in the dependency tree. In the end, we demonstrate state-ofthe-art results on three benchmark datasets. 1 Introduction The problem of fine-grained opinion analysis involves extraction of opinion targets (or aspect terms) and opinion expressions (or opinion terms) from each review sentence. For example, in the sentence: “They offer good appetizers”, the aspect and opinion terms are appetizers and good correspondingly. Many supervised deep models have been proposed for this problem (Liu et al., 2015; Yin et al., 2016; Wang et al., 2017), and obtained promising results. However, these methods fail to adapt well across domains, because the aspect terms from two different domains are usually disjoint, e.g., laptop v.s. restaurant, leading to large domain shift in the feature vector space. Though unsupervised methods (Hu and Liu, 2004; Qiu et al., 2011) can deal with data with few labels, their performance is unsatisfactory compared with supervised ones. There have been a number of domain adaptation methods for coarse-grained sentiment classification problems across domains, where an overall sentiment polarity of a sentence or document is being predicted. Nevertheless, very few approaches exist for cross-domain fine-grained opinion analysis due to the difficulties in fine-grained adaptation, which is more challenging than coarse-grained problems. Li et al. (2012) proposed a bootstrap method based on the TrAdaBoost algorithm (Dai et al., 2007) to iteratively expand opinion and aspect lexicons in the target domain by exploiting source-domain labeled data and cross-domain common relations between aspect terms and opinion terms. However, their model requires a seed opinion lexicon in the target domain and pre-mined syntactic patterns as a bridge. Ding et al. (2017) proposed to use rules to generate auxiliary supervision on top of a recurrent neural network to learn domain-invariant hidden representation for each word. The performance highly depends on the quality of the manually defined rules and the prior knowledge of a sentiment lexicon. In addition, the recurrent structure fails to capture the syntactic interactions among words intrinsically for opinion extraction. The requirement for rules makes the above methods non-flexible. 
In this paper, we propose a novel cross-domain Recursive Neural Network (RNN)1 for aspect and opinion terms co-extraction across domains. Our motivations are twofold: 1) The dependency relations capture the interactions among different words. These relations are especially important for identifying aspect terms and opinion terms (Qiu et al., 2011; Wang et al., 2016), which are also domain-invariant within the same language. Therefore, they can be used as “pivot” information to 1Here, we use RNN to denote recursive neural networks, rather than recurrent neural networks. 2172 bridge the gap between different domains. 2) Inspired by the idea of structural learning (Ando and Zhang, 2005), the success of target task depends on the ability of finding good predictive structures learned from other related tasks, e.g., structural correspondence learning (SCL) (Blitzer et al., 2006) for coarse-grained cross-domain sentiment classification. Here, we aim to generate an auxiliary task on dependency relation classification. Different from previous approaches, our auxiliary task and the target extraction task are of heterogeneous label spaces. We aim to integrate this auxiliary task with distributed relation representation learning into a recursive neural network. Specifically, we generate a dependency tree for each sentence from the dependency parser and construct a unified RNN that integrates an auxiliary task into the computation of each node. The auxiliary task is to classify the dependency relation for each direct edge in the dependency tree by learning a relation feature vector. To reduce label noise brought by inaccurate parsing trees, we further propose to incorporate an autoencoder into the auxiliary task to group the relations into different clusters. Finally, to model the sequential context interaction, we develop a joint architecture that combines RNN with a sequential labeling model for aspect and opinion terms extraction. Extensive experiments are conducted to demonstrate the advantage of our proposed model. 2 Related Work Existing works for single-domain aspect/opinion terms extraction include unsupervised methods based on association rule mining (Hu and Liu, 2004), syntactic rule propagation (Qiu et al., 2011) or topic modeling (Titov and McDonald, 2008; Lu et al., 2009; Zhang et al., 2010), as well as supervised methods based on extensive feature engineering with graphical models (Jin and Ho, 2009; Li et al., 2010) or deep learning (Liu et al., 2015; Zhang et al., 2015; Wang et al., 2017; Yin et al., 2016). Among exiting deep models, improved results are obtained using dependency relations (Yin et al., 2016; Wang et al., 2016), which indicates the significance of syntactic word interactions for target term extraction. In cross-domain setting, there are very few works for aspect/opinion terms extraction including a pipelined approach (Li et al., 2012) and a recurrent neural network (Ding et al., 2017). Both of the methods require manual construction of common and pivot syntactic patterns or rules, which are indicative of aspect or opinion words. There have been a number of domain adaptation approaches proposed for coarse-grained sentiment classification. Among existing methods, one active line focuses on projecting original feature spaces of two domains into the same low-dimensional space to reduce domain shift using pivot features as a bridge (Blitzer et al., 2007; Pan et al., 2010; Bollegala et al., 2015; Yu and Jiang, 2016). 
Another line learns domain-invariant features via autoencoders (Glorot et al., 2011; Chen et al., 2012; Zhou et al., 2016). Our work is more related to the first line by utilizing pivot information to transfer knowledge across domains, but we integrate the idea into a unified deep structure that can fully utilize syntactic structure for domain adaptation in fine-grained sentiment analysis. 3 Problem Definition & Motivation Our task is to extract opinion and aspect terms within each review sentence. We denote a sentence by a sequence of tokens x= (w1, w2, ..., wn). The output is a sequence of token-level labels y=(y1, y2, ..., yn), with yi ∈{BA, IA, BO, IO, N} that represents beginning of an aspect (BA), inside of an aspect (IA), beginning of an opinion (BO), inside of an opinion (IO) or none of the above (N). A subsequence of labels started with “BA” and followed by “IA” indicates a multi-word aspect term. In unsupervised domain adaptation, we are given a set of labeled review sentences from a source domain DS ={(xSi, ySi)}nS i=1, and a set of unlabeled sentences from a target domain DT = {xTj}nT j=1. Our goal is to predict token-level labels on DT . Existing works for cross-domain aspect and/or opinion terms extraction require hand-coded rules and a sentiment lexicon in order to transfer knowledge across domains. For example in Figure 1, given a review sentence “They offer good appetizers” in the source domain and “The laptop has a nice screen” in the target domain. If nice has been extracted as a common sentiment word, and “OPINION-amod-ASPECT” has been identified as a common syntactic pattern from the source domain, screen could be deduced as an aspect term using the identified syntactic pattern (Li et al., 2012). Similarly, Ding et al. (2017) used a set of predefined rules based on syntactic relations and a sentiment lexicon to generate auxiliary labels to learn high-level feature representations through a 2173 They oer good appetizers nsubj dobj amod The laptop has a screen nice det det nsubj dobj amod RESTAURANT LAPTOP Figure 1: An example of two reviews with similar syntactic patterns. recurrent neural network. On one hand, these previous attempts have verified that syntactic information between words, which can be used as a bridge between domains, is crucial for domain adaptation. On the other hand, dependency-tree-based RNN (Socher et al., 2010) has proven to be effective to learn high-level feature representation of each word by encoding syntactic relations between aspect terms and opinion terms (Wang et al., 2016). With the above findings, we propose a novel RNN named Recursive Neural Structural Correspondence Network (RNSCN) to learn high-level representation for each word across different domains. Our model is built upon dependency trees generated from a dependency parser. Different from previous approaches, we do not require any hand-coded rules or pre-selected pivot features to construct correspondences, but rather focus on the automatically generated dependency relations as the pivots. The model associates each direct edge in the tree with a relation feature vector, which is used to predict the corresponding dependency relation as an auxiliary task. Note that the relation vector is the key in the model: it associates with the two interacting words and is used to construct structural correspondences between two different domains. Hence, the auxiliary task guides the learning of relation vectors, which in turn affects their correspondingly interactive words. 
Specifically in Figure 1, the relation vector for “amod” is computed from the features of its child and parent words, and is also used to produce the hidden representation of its parent. For this relation path in both sentences, the auxiliary task enforces close proximity between these two relation vectors. This pushes the hidden representations of their parent nodes, appetizers and screen, closer to each other, provided that good and nice have similar representations. In short, the auxiliary task bridges the gap between two different domains by drawing words with similar syntactic properties closer to each other. However, the relation vectors may be sensitive to the accuracy of the dependency parser. Noise in certain relations might harm the learning process, especially for informal texts. This problem of noisy labels has been addressed using perceptual consistency (Reed et al., 2015). Inspired by the taxonomy of dependency relations (de Marneffe and Manning, 2008), relations with similar functionalities could be grouped together, e.g., dobj, iobj and pobj all indicate objects. We propose to use an auto-encoder to automatically group these relations in an unsupervised manner. The reconstruction loss serves as a consistency objective that reduces label noise by aligning relation features with their intrinsic relation group. 4 Proposed Methodology Our model consists of two components. The first component is a Recursive Neural Structural Correspondence Network (RNSCN), and the second is a sequence labeling classifier. In this paper, we focus on the Gated Recurrent Unit (GRU) as the implementation of the sequence labeling classifier. We choose GRU because it is able to handle long-term dependencies, unlike a simple recurrent neural network, and requires fewer parameters, making it easier to train than an LSTM. The resulting deep learning model is denoted by RNSCN-GRU. We also implement a Conditional Random Field (CRF) as the sequence labeling classifier, and denote the model by RNSCN-CRF accordingly. The overall architecture of RNSCN-GRU, without the auto-encoder for relation denoising, is shown in Figure 2. The left and right parts are two example sentences from the source and the target domain, respectively. In the first component, RNSCN, an auxiliary task to predict the dependency relation of each direct edge is integrated into a dependency-tree-based RNN. We generate a relation vector for each direct edge from its child node to its parent node, and use it both to predict the relation and to produce the hidden representation of the parent node in the dependency tree. To address the issue of noisy relation labels, we further incorporate an auto-encoder into RNSCN, as will be shown in Figure 3. While RNSCN mainly captures syntactic interactions among words, the second component, GRU, computes linear-context interactions. GRU takes the hidden representation of each word computed by RNSCN as input and produces the final representation of each word by taking linear contexts into consideration. We describe each component in detail in the following sections. Figure 2: The architecture of RNSCN-GRU.
4.1 Recursive Neural Structural Correspondence Network RNSCN is built on the dependency tree of each sentence, which is pre-generated from a dependency parser. Specifically, each node in the tree is associated with a word wn, an input word embedding xn ∈Rd and a transformed hidden representation hn ∈Rd. Each direct edge in the dependency tree associates with a relation feature vector rnm ∈Rd and a true relation label vector yR nm ∈RK, where K is the total number of dependency relations, n and m denote the indices of the parent and child word of the dependency edge, respectively. Based on the dependency tree, the hidden representations are generated in a recursive manner from leaf nodes until reaching the root node. Consider the sourcedomain sentence shown in Figure 2 as an illustrative example, we first compute hidden representations for leaf nodes they and good: h1=tanh(Wxx1 + b), h3=tanh(Wxx3 + b), where Wx ∈Rd×d transforms word embeddings to hidden space. For non-leaf node appetizer, we first generate the relation vector r43 for the dependency edge x4 (appetizers) amod −−−−→x3 (good) by r43 = tanh(Whh3 + Wxx4), where Wh ∈Rd×d transforms the hidden representation to the relation vector space. We then compute the hidden representation for appetizer: h4 = tanh(Wamodr43 + Wxx4 + b). Moreover, the relation vector r43 is used for the auxiliary task on relation prediction: ˆyR 43 = softmax(WRr43 + bR), where WR ∈RK×d is the relation classification matrix. The supervised relation classifier enforces close proximity of similar {rnm}’s in the distributed relation vector space. The relation features bridge the gap of word representations in different domains by incorporating them into the forward computations. In general, the hidden representation hn for a non-leaf node is produced through hn =tanh( X m∈Mn WRnmrnm + Wxxn + b), (1) where rnm =tanh(Wh·hm+Wx·xn), Mn is the set of child nodes of wn, and WRnm is the relation transformation matrix tied with each relation Rnm. The predicted label vector ˆyR nm for rnm is ˆyR nm = softmax(WR · rnm + bR). (2) Here we adopt the the cross-entropy loss for relation classification between the predicted label vector ˆyR nm and the ground-truth yR nm to encode relation side information into feature learning: ℓR = K X k=1 −yR nm[k] log ˆyR nm[k]. (3) Through the auxiliary task, similar relations enforce participating words close to each other so 2175 that words with similar syntactic functionalities are clustered across domains. On the other hand, the pre-trained word embeddings group semanticallysimilar words. By taking them as input to RNN, together with the auxiliary task, our model encodes both semantic and syntactic information. 4.2 Reduce Label Noise with Auto-encoders As discussed in Section 3, it might be hard to learn an accurate relation classifier when each class is a unique relation, because the dependency parser may generate incorrect relations as noisy labels. To address it, we propose to integrate an autoencoder into RNSCN. Suppose there is a set of latent groups of relations: G = {1, 2, ..., |G|}, where each relation belongs to only one group. For each relation vector, rnm, an autoencoder is performed before feeding it into the auxiliary classifier (2). The goal is to encode the relation vector to a probability distribution of assigning this relation to any group. 
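Before the auto-encoder is described in detail, here is a minimal sketch of the recursive computation in Eqs. (1)–(3), written in plain NumPy. The tree dictionary format, the small set of relation matrices, the random initialization, and the omission of parameter updates are all simplifying assumptions for illustration, not the authors' implementation.

```python
import numpy as np

d, K = 100, 43                          # embedding dim. and number of dependency relations
rng = np.random.default_rng(0)
Wx = rng.normal(0, 0.1, (d, d))         # input-to-hidden transform
Wh = rng.normal(0, 0.1, (d, d))         # hidden-to-relation transform
b = np.zeros(d)
W_rel = {r: rng.normal(0, 0.1, (d, d))  # one matrix per relation type; only a few
         for r in ["amod", "nsubj", "dobj", "det"]}   # shown here for brevity
WR, bR = rng.normal(0, 0.1, (K, d)), np.zeros(K)      # auxiliary relation classifier

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnscn_forward(node, x):
    """node = {'word': int, 'children': [(relation, child_node), ...]},
    x = word-embedding matrix. Returns the node's hidden vector and the
    predicted relation distributions collected from its subtree."""
    xn = x[node["word"]]
    if not node["children"]:                    # leaf: h = tanh(Wx x + b)
        return np.tanh(Wx @ xn + b), []
    acc, preds = np.zeros(d), []
    for rel, child in node["children"]:
        hm, child_preds = rnscn_forward(child, x)
        preds += child_preds
        r_nm = np.tanh(Wh @ hm + Wx @ xn)       # relation vector
        preds.append((rel, softmax(WR @ r_nm + bR)))   # Eq. (2): auxiliary prediction
        acc += W_rel[rel] @ r_nm                # sum_m W_{R_nm} r_nm
    return np.tanh(acc + Wx @ xn + b), preds    # Eq. (1)
```

During training, each predicted relation distribution would be paired with the parser-produced label to form the cross-entropy term in Eq. (3), which is later combined with the extraction loss as described in Section 4.4.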
As can be seen Figure 3, each relation vector rnm is first passed through the autoencoder as follows, p(Gnm = i|rnm) = exp(r⊤ nmWencgi) P j∈G exp(r⊤ nmWencgj), (4) where Gnm denotes the inherent relation group for rnm, gi ∈Rd represents the feature embedding for group i, and Wenc∈Rd×d is the encoding matrix that computes bilinear interactions between relation vector rnm and relation group embedding gi. Thus, p(Gnm = i|rnm) represents the probability of rnm being mapped to group i. An accumulated relation group embedding is computed as: gnm = |G| X i=1 p(Gnm = i|rnm)gi. (5) For decoding, the decoder takes gnm as input and tries to reconstruct the relation feature input rnm. Moreover, gnm is also used as the higher-level feature vector for rnm for predicting the relation label. Therefore, the objective for the auxiliary task in (3) becomes: ℓR = ℓR1 + αℓR2 + βℓR3, (6) where ℓR1 = ∥rnm −Wdecgnm∥2 2 , (7) ℓR2 = K X k=1 −yR nm[k] log ˆyR nm[k], (8) ℓR3 = I −¯G⊤¯G 2 F . (9) y nm autoencoder rnm g g2 gjj Wenc gnm encode r nm decode Wdec autoencoder group emedding rnm hm xn hn y nm Figure 3: An autoencoder for relation grouping. Here ℓR1 is the reconstruction loss with Wdec being the decoding matrix, ℓR2 follows (3) with ˆyR nm = softmax(WRgnm + bR) and ℓR3 is the regularization term on the correlations among latent groups with I being the identity matrix and ¯G being a normalized group embedding matrix that consists of normalized gi’s as column vectors. This regularization term enforces orthogonality between gi and gj for i ̸= j. α and β are used to control the trade-off among different losses. With the auto-encoder, the auxiliary task of relation classification is conditioned on group assignment. The reconstruction loss further ensures the consistency between relation features and groupings, which is supposed to dominate classification loss when the observed labels are inaccurate. We denote RNSCN with auto-encoder by RNSCN+. 4.3 Joint Models for Sequence Labeling RNSCN or RNSCN+ focuses on capturing and representing syntactic relations to build a bridge between domains and learn more powerful representations for tokens. However, it ignores the linearchain correlations among tokens within a sentence, which is important for aspect and opinion terms extraction. Therefore, we propose a joint model, denoted by RNSCN-GRU (RNSCN+-GRU), which integrates a GRU-based recurrent neural network on top of RNSCN (RNSCN+), i.e., the input for GRU is the hidden representations hn learned by RNSCN or RNSCN+ for the n-th token in the sentence. For simplicity in presentation, we denote the computation of GRU by using the notation fGRU. To be specific, by taking hn as input, the final feature representation h′ n for each word is obtained through h′ n = fGRU(h′ n−1, hn; Θ), (10) where Θ is the collection of the GRU parameters. The final token-level prediction is made through ˆyn = softmax(Wl · h′ n + bl), (11) where Wl ∈R5×d′ transforms a d′-dimensional feature vector to class probabilities (note that we 2176 have 5 different classes as defined in Section 3). The second joint model, namely RNSCN-CRF, combines a linear-chain CRF with RNSCN to learn the discriminative mapping from high-level features to labels. The advantage of CRF is to learn sequential interactions between each pair of adjacent words as well as labels and provide structural outputs. Formally, the joint model aims to output a sequence of labels with maximum conditional probability given its input. 
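As a concrete illustration of the relation auto-encoder in Eqs. (4)–(9) (the formal CRF treatment continues below), the following NumPy sketch computes the combined auxiliary loss for a single relation instance. The shapes follow the paper's setup of 43 relations and 20 latent groups, but the random initialization and the function interface are assumptions for this sketch.

```python
import numpy as np

d, K, n_groups = 100, 43, 20               # feature dim., #relations, #latent groups
rng = np.random.default_rng(1)
W_enc = rng.normal(0, 0.1, (d, d))         # encoder (bilinear) matrix
W_dec = rng.normal(0, 0.1, (d, d))         # decoder matrix
group_emb = rng.normal(0, 0.1, (n_groups, d))   # g_1 ... g_|G|, one row per group
WR, bR = rng.normal(0, 0.1, (K, d)), np.zeros(K)
alpha, beta = 1.0, 0.001                   # trade-off weights in Eq. (6)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def relation_autoencoder_loss(r, y_true):
    """r: relation feature vector (d,); y_true: one-hot relation label (K,)."""
    scores = group_emb @ (W_enc.T @ r)             # bilinear score r^T W_enc g_i
    p = softmax(scores)                            # Eq. (4): p(G = i | r)
    g_bar = p @ group_emb                          # Eq. (5): soft group embedding
    l_rec = np.sum((r - W_dec @ g_bar) ** 2)       # Eq. (7): reconstruction
    y_hat = softmax(WR @ g_bar + bR)
    l_cls = -np.sum(y_true * np.log(y_hat + 1e-12))             # Eq. (8): classification
    G_hat = group_emb / np.linalg.norm(group_emb, axis=1, keepdims=True)
    l_orth = np.sum((np.eye(n_groups) - G_hat @ G_hat.T) ** 2)  # Eq. (9): orthogonality
    return l_rec + alpha * l_cls + beta * l_orth   # Eq. (6)
```

Because the classifier consumes the soft group embedding rather than the raw relation vector, noisy parser labels are filtered through the group assignment, which is the denoising effect motivated above.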
Denote by y a sequence of labels for a sentence and by H the embedding matrix for each sentence (each column denotes a hidden feature vector of a word in the sentence learned by RNSCN), the inference is computed as: ˆy= arg max y p(y|H) = arg max y 1 Z(H) Y c∈C exp⟨Wc, g(H, yc)⟩(12) where C indicates the set of different cliques (unary and pairwise cliques in the context of linear-chain). Wc is tied for each different yc, which indicates the labels for clique c. The operator ⟨·, ·⟩is the element-wise multiplication, and g(·) produces the concatenation of {hn}’s in a context window of each word. The above two models both consider the sequential interaction of the words within each sentence, but the formalization and training are totally different. We will report the results for both joint models in the experiment section. 4.4 Training Recall that in our cross-domain setting, the labels for terms extraction are only available in the source domain, but the auxiliary relation labels can be automatically produced for both domains via the dependency parser. Besides the source domain labeled data DS = {(xSi, ySi)}nS i=1, we denote by DR ={(rj, yR j )}nR j=1 the combined source and target domain data with auxiliary relation labels. For training, the total loss consists of token-prediction loss ℓS and relation-prediction loss ℓR: L = X DS ℓS(ySi, ˆySi) + γ X DR ℓR(rj, yR j ), (13) where γ is the trade-off parameter, ℓS is the crossentropy loss between the predicted extraction label in (11) and the ground-truth, and ℓR is defined in (6) for RNSCN+ or (3) for RNSCN. For RNSCNCRF, the loss becomes the negative log probability of the true label given the corresponding input: ℓS(ySi, ˆySi) = −log(ySi|hSi). (14) Dataset Description # Sentences Training Testing R Restaurant 5,841 4,381 1,460 L Laptop 3,845 2,884 961 D Device 3,836 2,877 959 Table 1: Data statistics with number of sentences. The parameters for token-level predictions and relation-level predictions are updated jointly such that the information from the auxiliary task could be propagated to the target task to obtain better performance. This idea is in accordance with structural learning proposed by Ando and Zhang (2005), which shows that multiple related tasks are useful for finding the optimal hypothesis space. In our case, the set of multiple tasks includes the target terms extraction task and the auxiliary relation prediction task, which are closely related. The parameters are all shared across domains. The joint model is trained using back-propagation from the top layer of GRU or CRF to RNSCN until reaching to the input word embeddings in the bottom. 5 Experiments 5.1 Data & Experimental Setup The data is taken from the benchmark customer reviews in three different domains, namely restaurant, laptop and digital devices. The restaurant domain contains a combination of restaurant reviews from SemEval 2014 task 4 subtask 1 (Pontiki et al., 2014) and SemEval 2015 task 12 subtask 1 (Pontiki et al., 2015). The laptop domain consists of laptop reviews from SemEval 2014 task 4 subtask 1. For digital device, we take reviews from (Hu and Liu, 2004) containing sentences from 5 digital devices. The statistics for each domain are shown in Table 1. In our experiments, we randomly split the data in each domain into training set and testing set with the proportion being 3:1. To obtain more rigorous result, we make three random splits for each domain and test the learned model on each split. 
The number of sentences for training and testing after each split is also shown in Table 1. Each sentence is labeled with aspect terms and opinion terms. For each cross-domain task, we conduct both inductive and transductive experiments. Specifically, we train our model only on the training sets from both (labeled) source and (unlabeled) target domains. For testing, the inductive results are obtained using the test data from the target domain, and the transductive results are obtained using the (unlabeled) training data from the target domain. 2177 The evaluation metric we used is F1 score. Following the setting from existing work, only exact match could be counted as correct. For experimental setup, we use Stanford Dependency Parser (Klein and Manning, 2003) to generate dependency trees. There are in total 43 different dependency relations, i.e. 43 classes for the auxiliary task. We set the number of latent relation groups as 20. The input word features for RNSCN are pre-trained word embeddings using word2vec (Mikolov et al., 2013) which is trained on 3M reviews from the Yelp dataset2 and electronics dataset in Amazon reviews3 (McAuley et al., 2015). The dimension of word embeddings is 100. Because of the relatively small size of the training data compared with the number of parameters, we firstly pre-train RNSCN for 5 epochs with minibatch size 30 and rmsprop initialized at 0.01. The joint model of RNSCN+-GRU is then trained with rmsprop initialized at 0.001 and mini-batch size 30. The trade-off parameter α, β and γ are set to be 1, 0.001 and 0.1, respectively. The hidden-layer dimension for GRU is 50, and the context window size is 3 for input feature vectors of GRU. For the joint model of RNSCN-CRF, we implement SGD with a decaying learning rate initialized at 0.02. The context window size is also 3 in this case. Both joint models are trained for 10 epochs. 5.2 Comparison & Results We compared our proposed model with several baselines and variants of the proposed model: • RNCRF: A joint model of recursive neural network and CRF proposed by (Wang et al., 2016) for single-domain aspect and opinion terms extraction. We make all the parameters shared across domains for target prediction. • RNGRU: A joint model of RNN and GRU. The hidden layer of RNN is taken as input for GRU. We share all the parameters across domains, similar to RNCRF. • CrossCRF: A linear-chain CRF with handengineered features that are useful for crossdomain settings (Jakob and Gurevych, 2010), e.g., POS tags, dependency relations. • RAP: The Relational Adaptive bootstraPping method proposed by (Li et al., 2012) that uses TrAdaBoost to expand lexicons. 2http://www.yelp.com/dataset challenge 3http://jmcauley.ucsd.edu/data/amazon/links.html • Hier-Joint: A recent deep model proposed by Ding et al. (2017) that achieves state-ofthe-art performance on aspect terms extraction across domains. • RNSCN-GRU: Our proposed joint model integrating auxiliary relation prediction task into RNN that is further combined with GRU. • RNSCN-CRF: The second proposed model similar to RNSCN-GRU, which replace GRU with CRF. • RNSCN+-GRU: Our final joint model with auto-encoders to reduce auxiliary label noise. Note that we do not implement other recent deep adaptation models for comparison (Chen et al., 2012; Yang and Hospedales, 2015), because HierJoint (Ding et al., 2017) has already demonstrated better performances than these models. 
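Before turning to the results, the snippet below sketches the exact-match F1 evaluation mentioned above: spans are first decoded from the {BA, IA, BO, IO, N} tag sequences, and a predicted term counts as correct only if both of its boundaries match a gold span. The decoding convention (e.g., ignoring a stray inside tag) is our simplification, not necessarily the authors' scorer.

```python
def decode_spans(tags, begin, inside):
    """Decode (start, end) spans from one tag sequence,
    e.g. begin='BA', inside='IA' for aspect terms."""
    spans, start = [], None
    for i, t in enumerate(list(tags) + ["N"]):   # sentinel closes an open span
        if t == begin:
            if start is not None:
                spans.append((start, i - 1))
            start = i
        elif t != inside and start is not None:
            spans.append((start, i - 1))
            start = None
    return set(spans)

def exact_match_f1(gold_tags, pred_tags, begin="BA", inside="IA"):
    """Corpus-level F1 where a predicted span is correct only if both of its
    boundaries match a gold span exactly."""
    tp = fp = fn = 0
    for g, p in zip(gold_tags, pred_tags):       # iterate over sentences
        gold, pred = decode_spans(g, begin, inside), decode_spans(p, begin, inside)
        tp += len(gold & pred)
        fp += len(pred - gold)
        fn += len(gold - pred)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

The same routine, called with begin="BO" and inside="IO", gives the opinion-term scores reported alongside the aspect-term scores in the tables that follow.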
The overall comparison results with the baselines are shown in Table 2 with average F1 scores and standard deviations over three random splits. Clearly, the results for aspect terms (AS) transfer are much lower than opinion terms (OP) transfer, which indicate that the aspect terms are usually quite different across domains, whereas the opinion terms could be more common and similar. Hence the ability to adapt the aspect extraction from the source domain to the target domain becomes more crucial. On this behalf, our proposed model shows clear advantage over other baselines for this more difficult transfer problem. Specifically, we achieve 6.77%, 5.88%, 10.55% improvement over the bestperforming baselines for aspect extraction in R→L, L→D and D→L, respectively. By comparing with RNCRF and RNGRU, we show that the structural correspondence network is indeed effective when integrated into RNN. To show the effect of the integration of the autoencoder, we conduct experiments over different variants of the proposed model in Table 3. RNSCNGRU represents the model without autoencoder, which achieves much better F1 scores on most experiments compared with the baselines in Table 2. RNSCN+-GRU outperforms RNSCN-GRU in almost all experiments. This indicates the autoencoder automatically learns data-dependent groupings, which is able to reduce unnecessary label noise. To further verify that the autoencoder indeed reduces label noise when the parser is inaccurate, we generate new noisy parse trees by replacing some relations within each sentence with a random 2178 Models R→L R→D L→R L→D D→R D→L AS OP AS OP AS OP AS OP AS OP AS OP CrossCRF 19.72 59.20 21.07 52.05 28.19 65.52 29.96 56.17 6.59 39.38 24.22 46.67 (1.82) (1.34) (0.44) (1.67) (0.58) (0.89) (1.69) (1.49) (0.49) (3.06) (2.54) (2.43) RAP 25.92 62.72 22.63 54.44 46.90 67.98 34.54 54.25 45.44 60.67 28.22 59.79 (2.75) (0.49) (0.52) (2.20) (1.64) (1.05) (0.64) (1.65) (1.61) (2.15) (2.42) (4.18) Hier-Joint 33.66 33.20 48.10 31.25 47.97 34.74 (1.47) (0.52) (1.45) (0.49) (0.46) (2.27) RNCRF 24.26 60.86 24.31 51.28 40.88 66.50 31.52 55.85 34.59 63.89 40.59 60.17 (3.97) (3.35) (2.57) (1.78) (2.09) (1.48) (1.40) (1.09) (1.34) (1.59) (0.80) (1.20) RNGRU 24.23 60.65 20.49 52.28 39.78 62.99 32.51 52.24 38.15 64.21 39.44 60.85 (2.41) (1.04) (2.68) (2.69) (0.61) (0.95) (1.12) (2.37) (2.82) (1.11) (2.79) (1.25) RNSCN-CRF 35.26 61.67 32.00 52.81 53.38 67.60 34.63 56.22 48.13 65.06 46.71 61.88 (1.31) (1.35) (1.48) (1.29) (1.49) (0.99) (1.38) (1.10) (0.71) (0.66) (1.16) (1.52) RNSCN-GRU 37.77 62.35 33.02 57.54 53.18 71.44 35.65 60.02 49.62 69.42 45.92 63.85 (0.45) (1.85) (0.58) (1.27) (0.75) (0.97) (0.77) (0.80) (0.34) (2.27) (1.14) (1.97) RNSCN+-GRU 40.43 65.85 35.10 60.17 52.91 72.51 40.42 61.15 48.36 73.75 51.14 71.18 (0.96) (1.50) (0.62) (0.75) (1.82) (1.03) (0.70) (0.60) (1.14) (1.76) (1.68) (1.58) Table 2: Comparisons with different baselines. Models R→L R→D L→R L→D D→R D→L AS OP AS OP AS OP AS OP AS OP AS OP RNSCN-GRU 37.77 62.35 33.02 57.54 53.18 71.44 35.65 60.02 49.62 69.42 45.92 63.85 RNSCN-GRU (r) 32.97 50.18 26.21 53.58 35.88 65.73 32.87 57.57 40.03 67.34 40.06 59.18 RNSCN+-GRU 40.43 65.85 35.10 60.17 52.91 72.51 40.42 61.15 48.36 73.75 51.14 71.18 RNSCN+-GRU (r) 39.27 59.41 33.42 57.24 45.79 69.96 38.21 59.12 45.36 72.84 50.45 68.05 Table 3: Comparisons with different variants of the proposed model. 
R→L R→D L→R L→D D→R D→L AS OP AS OP AS OP AS OP AS OP AS OP OUT Hier-Joint 33.66 33.20 48.10 31.25 47.97 34.74 RNSCN+-GRU* 39.06 34.07 47.98 38.51 47.49 48.49 RNSCN+ 31.60 65.89 24.37 60.01 39.58 71.03 34.40 60.47 41.02 71.23 45.54 69.00 RNSCN+-GRU 40.43 65.85 35.10 60.17 52.91 72.51 40.42 61.15 48.36 73.75 51.14 71.18 IN Hier-Joint 32.41 29.79 47.04 31.26 47.41 33.80 RNSCN+-GRU* 40.34 30.75 48.69 37.40 46.49 48.50 RNSCN+ 30.76 63.65 22.48 59.24 39.54 70.25 35.32 60.00 37.75 70.64 43.72 68.27 RNSCN+-GRU 41.27 65.44 33.58 60.28 52.48 72.10 39.73 60.18 47.10 72.19 50.23 70.21 Table 4: Comparisons with different transfer setting. relation. Specifically, in each source domain, for each relation that connects to any aspect or opinion word, it has 0.5 probability of being replaced by any other relation. In Table 3, We denote the model with noisy relations with (r). Obviously, the performance of RNSCN-GRU without an autoencoder significantly deteriorates when the auxiliary labels are very noisy. On the contrary, RNSCN+GRU (r) achieves acceptable results compared to RNSCN+-GRU. This proves that the autoencoder makes the model more robust to label noise and helps to adapt the information more accurately to the target data. Note that a large drop for L →R in aspect extraction might be caused by a large portion of noisy replacements for this particular data which makes it too hard to train a good classifier. This may not greatly influence opinion extraction, as shown, because the two domains usually share many common opinion terms. However, the significant difference in aspect terms makes the learning more dependent on common relations. The above comparisons are made using the test data from target domains which are not available during training (i.e., the inductive setting). For more complete comparison, we also conduct experiments in the transductive setting. We pick our best model RNSCN+-GRU, and show the effect of different components. To do that, we first remove the sequential structure on top, resulting in RNSCN+. Moreover, we create another variant by removing opinion term labels to show the effect of the double propogation between aspect terms and opinion terms. The resulting model is named RNSCN+GRU*. As shown in Table 4, we denote by OUT and IN the inductive and transductive setting, respectively. The results shown are the average F1 scores among three splits4. In general, RNSCN+GRU shows similar performances for both inductive and transductive settings. This indicates the 4We omit standard deviation here due to the limit of space. 2179 G Word 1 this, the, their, my, here, it, I, our, not 2 quality, jukebox, maitre-d, sauces, portions, volume, friend, noodles, calamari 3 in, slightly, often, overall, regularly, since, back, much, ago 4 handy, tastier, white, salty, right, vibrant, first, ok 5 get, went, impressed, had, try, said, recommended, call, love 6 is, are, feels, believes, seems, like, will, would Table 5: Case studies on word clustering robustness and the ability to learn well when test data is not presented during training. Without opinion labels, RNSCN+-GRU* still achieves better results than Hier-Joint most of the time. Its lower performance compared to RNSCN+-GRU also indicates that in the cross-domain setting, the dual information between aspects and opinions is beneficial to find appropriate and discriminative relation feature space. 
Finally, the results for RNSCN+ by removing GRU are lower than the joint model, which proves the importance of combining syntactic tree structure with sequential modeling. To qualitatively show the effect of the auxiliary task with auto-encoders for clustering syntactically similar words across domains, we provide some case studies on the predicted groups of some words in Table 5. Specifically, for each relation in the dependency tree, we use (4) to obtain the most probable group to assign the word in the child node. The left column shows the predicted group index with the right column showing the corresponding words. Clearly, the words in the same group have similar syntactic functionalities, whereas the word types vary across groups. In the end, we verify the robustness and capability of the model by conducting sensitivity studies and experiments with varying number of unlabeled target data for training, respectively. Figure 4 shows the sensitivity test for L→D, which indicates that changing of the trade-off parameter γ or the number of groups |G| does not affect the model’s performance greatly, i.e., less than 1% for aspect extraction and 2% for opinion extraction. This proves that our model is robust and stable against small variations. Figure 5 compares the results of RNSCN+-GRU with Hier-Joint when increasing the proportion of unlabeled target training data from 0 to 1. Obviously, our model shows steady improvement with the increasing number of unlabeled target data. This pattern proves our 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 0.58 0.59 0.60 0.61 0.62 0.63 0.64 f1-opinion 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 trade-off parameter (γ) 0.36 0.37 0.38 0.39 0.40 0.41 0.42 f1-aspect (a) On trade-off parameter. 5 10 15 20 25 30 35 40 0.58 0.59 0.60 0.61 0.62 0.63 0.64 f1-opinion 5 10 15 20 25 30 35 40 number of groups (|G|) 0.36 0.37 0.38 0.39 0.40 0.41 0.42 f1-aspect (b) On number of groups. Figure 4: Sensitivity studies for L→D. 0/7 1/7 2/7 3/7 4/7 5/7 6/7 7/7 proportion of unlabeled target data 0.28 0.29 0.30 0.31 0.32 0.33 0.34 f1 (Hier-Joint) 0/7 1/7 2/7 3/7 4/7 5/7 6/7 7/7 0.35 0.36 0.37 0.38 0.39 0.40 0.41 f1 (ours) (a) F1-aspect on R→L 0/7 1/7 2/7 3/7 4/7 5/7 6/7 7/7 proportion of unlabeled target data 0.30 0.31 0.32 0.33 0.34 0.35 0.36 f1 (Hier-Joint) 0/7 1/7 2/7 3/7 4/7 5/7 6/7 7/7 0.46 0.47 0.48 0.49 0.50 0.51 0.52 f1 (ours) (b) F1-aspect on D→L Figure 5: F1 vs proportion of unlabeled target data. model’s capability of learning from target domain for adaptation. 6 Conclusion We propose a novel dependency-tree-based RNN, namely RNSCN (or RNSCN+), for domain adaptation. The model integrates an auxiliary task into representation learning of nodes in the dependency tree. The adaptation takes place in a common relation feature space, which builds the structural correspondences using syntactic relations among the words in each sentence. We further develop a joint model to combine RNSCN/RNSCN+ with a sequential labeling model for terms extraction. Acknowledgements This work is supported by NTU Singapore Nanyang Assistant Professorship (NAP) grant M4081532.020, MOE AcRF Tier-1 grant 2016T1-001-159, and Fuji Xerox Corporation through joint research on Multilingual Semantic Analysis. References Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. JMLR 6:1817–1853. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. 
Biographies, bollywood, boomboxes and blenders: 2180 Domain adaptation for sentiment classification. In ACL. pages 187–205. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In EMNLP. pages 120–128. Danushka Bollegala, Takanori Maehara, and Ken ichi Kawarabayashi. 2015. Unsupervised cross-domain word representation learning. In ACL. pages 730– 740. Minmin Chen, Zhixiang Xu, Kilian Q. Weinberger, and Fei Sha. 2012. Marginalized denoising autoencoders for domain adaptation. In ICML. pages 1627– 1634. Wenyuan Dai, Qiang Yang, Gui-Rong Xue, and Yong Yu. 2007. Boosting for transfer learning. In ICML. pages 193–200. Marie C. de Marneffe and Christopher D. Manning. 2008. The stanford typed dependencies representation. In CrossParser. pages 1–8. Ying Ding, Jianfei Yu, and Jing Jiang. 2017. Recurrent neural networks with auxiliary labels for crossdomain opinion target extraction. In AAAI. pages 3436–3442. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML. pages 97–110. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In KDD. pages 168–177. Niklas Jakob and Iryna Gurevych. 2010. Extracting opinion targets in a single- and cross-domain setting with conditional random fields. In EMNLP. pages 1035–1045. Wei Jin and Hung Hay Ho. 2009. A novel lexicalized hmm-based learning framework for web opinion mining. In ICML. pages 465–472. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In ACL. pages 423–430. Fangtao Li, Chao Han, Minlie Huang, Xiaoyan Zhu, Ying-Ju Xia, Shu Zhang, and Hao Yu. 2010. Structure-aware review mining and summarization. In COLING. pages 653–661. Fangtao Li, Sinno Jialin Pan, Ou Jin, Qiang Yang, and Xiaoyan Zhu. 2012. Cross-domain co-extraction of sentiment and topic lexicons. In ACL. pages 410– 419. Pengfei Liu, Shafiq Joty, and Helen Meng. 2015. Finegrained opinion mining with recurrent neural networks and word embeddings. In EMNLP. pages 1433–1443. Yue Lu, ChengXiang Zhai, and Neel Sundaresan. 2009. Rated aspect summarization of short comments. In WWW. pages 131–140. Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. 2015. Image-based recommendations on styles and substitutes. In SIGIR. pages 43–52. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR abs/1301.3781. Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In WWW. pages 751–760. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. SemEval-2015 task 12: Aspect based sentiment analysis. In SemEval 2015. pages 486–495. Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In SemEval. pages 27–35. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Comput. Linguist. 37(1):9–27. Scott E. Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. 2015. Training deep neural networks on noisy labels with bootstrapping. In ICLR 2015. Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2010. 
Learning Continuous Phrase Representations and Syntactic Parsing with Recursive Neural Networks. In NIPS Workshop. pages 1–9. Ivan Titov and Ryan McDonald. 2008. Modeling online reviews with multi-grain topic models. In WWW. pages 111–120. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016. Recursive neural conditional random fields for aspect-based sentiment analysis. In EMNLP. pages 616–626. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2017. Coupled multi-layer tensor network for co-extraction of aspect and opinion terms. In AAAI. pages 3316–3322. Yongxin Yang and Timothy M. Hospedales. 2015. A unified perspective on multi-domain and multi-task learning. In ICLR. Yichun Yin, Furu Wei, Li Dong, Kaimeng Xu, Ming Zhang, and Ming Zhou. 2016. Unsupervised word and dependency path embeddings for aspect term extraction. In IJCAI. pages 2979–2985. 2181 Jianfei Yu and Jing Jiang. 2016. Learning sentence embeddings with auxiliary tasks for cross-domain sentiment classification. In EMNLP. pages 236–246. Lei Zhang, Bing Liu, Suk Hwan Lim, and Eamonn O’Brien-Strain. 2010. Extracting and ranking product features in opinion documents. In COLING. pages 1462–1470. Meishan Zhang, Yue Zhang, and Duy Tin Vo. 2015. Neural networks for open domain targeted sentiment. In EMNLP. Guangyou Zhou, Zhiwen Xie, Jimmy Xiangji Huang, and Tingting He. 2016. Bi-transferring deep neural networks for domain adaptation. In ACL. pages 322– 332.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2182–2192 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2182 Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning Baolin Peng⋆ Xiujun Li† Jianfeng Gao† Jingjing Liu† Kam-Fai Wong⋆‡ †Microsoft Research, Redmond, WA, USA ⋆The Chinese University of Hong Kong, Hong Kong ‡MoE Key Lab of High Confidence Software Technologies, China {blpeng, kfwong}@se.cuhk.edu.hk {xiul,jfgao,jingjl}@microsoft.com Abstract Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to degrade the agent. To address these issues, we present Deep Dyna-Q, which to our knowledge is the first deep RL framework that integrates planning for task-completion dialogue policy learning. We incorporate into the dialogue agent a model of the environment, referred to as the world model, to mimic real user response and generate simulated experience. During dialogue policy learning, the world model is constantly updated with real user experience to approach real user behavior, and in turn, the dialogue agent is optimized using both real experience and simulated experience. The effectiveness of our approach is demonstrated on a movie-ticket booking task in both simulated and human-in-theloop settings1. 1 Introduction Learning policies for task-completion dialogue is often formulated as a reinforcement learning (RL) problem (Young et al., 2013; Levin et al., 1997). However, applying RL to real-world dialogue systems can be challenging, due to the constraint that an RL learner needs an environment to operate in. In the dialogue setting, this requires a dialogue agent to interact with real users and adjust 1The source code of this work is available at https:// github.com/MiuLab/DDQ its policy in an online fashion, as illustrated in Figure 1(a). Unlike simulation-based games such as Atari games (Mnih et al., 2015) and AlphaGo (Silver et al., 2016a, 2017) where RL has made its greatest strides, task-completion dialogue systems may incur significant real-world cost in case of failure. Thus, except for very simple tasks (Singh et al., 2002; Gaˇsi´c et al., 2010, 2011; Pietquin et al., 2011; Li et al., 2016a; Su et al., 2016b), RL is too expensive to be applied to real users to train dialogue agents from scratch. One strategy is to convert human-interacting dialogue to a simulation problem (similar to Atari games), by building a user simulator using human conversational data (Schatzmann et al., 2007; Li et al., 2016b). In this way, the dialogue agent can learn its policy by interacting with the simulator instead of real users (Figure 1(b)). The simulator, in theory, does not incur any real-world cost and can provide unlimited simulated experience for reinforcement learning. The dialogue agent trained with such a user simulator can then be deployed to real users and further enhanced by only a small number of human interactions. Most of recent studies in this area have adopted this strategy (Su et al., 2016a; Lipton et al., 2016; Zhao and Eskenazi, 2016; Williams et al., 2017; Dhingra et al., 2017; Li et al., 2017; Liu and Lane, 2017; Peng et al., 2017b; Budzianowski et al., 2017; Peng et al., 2017a). 
However, user simulators usually lack the conversational complexity of human interlocutors, and the trained agent is inevitably affected by biases in the design of the simulator. Dhingra et al. (2017) demonstrated a significant discrepancy in a simulator-trained dialogue agent when evaluated with simulators and with real users. Even more challenging is the fact that there is no universally accepted metric to evaluate a user simulator (Pietquin and Hastie, 2013). Thus, it remains 2183 User Human Conversational Data Policy Model Direct Reinforcement Learning Acting Imitation Learning (a) Learning with real users Simulator Human Conversational Data Policy Model Direct Reinforcement Learning Acting Imitation Learning Supervised Learning (b) Learning with user simulators Policy Model User World Model Real Experience Direct Reinforcement Learning World model learning Planning Acting Human Conversational Data Imitation Learning Supervised Learning (c) Learning with real users via DDQ Figure 1: Three strategies of learning task-completion dialogue policies via RL. controversial whether training task-completion dialogue agent via simulated users is a valid approach. We propose a new strategy of learning dialogue policy by interacting with real users. Compared to previous works (Singh et al., 2002; Li et al., 2016a; Su et al., 2016b; Papangelis, 2012), our dialogue agent learns in a much more efficient way, using only a small number of real user interactions, which amounts to an affordable cost in many nontrivial dialogue tasks. Our approach is based on the Dyna-Q framework (Sutton, 1990) where planning is integrated into policy learning for task-completion dialogue. Specifically, we incorporate a model of the environment, referred to as the world model, into the dialogue agent, which simulates the environment and generates simulated user experience. During the dialogue policy learning, real user experience plays two pivotal roles: first, it can be used to improve the world model and make it behave more like real users, via supervised learning; second, it can also be used to directly improve the dialogue policy via RL. The former is referred to as world model learning, and the latter direct reinforcement learning. Dialogue policy can be improved either using real experience directly (i.e., direct reinforcement learning) or via the world model indirectly (referred to as planning or indirect reinforcement learning). The interaction between world model learning, direct reinforcement learning and planning is illustrated in Figure 1(c), following the Dyna-Q framework (Sutton, 1990). The original papers on Dyna-Q and most its early extensions used tabular methods for both planning and learning (Singh, 1992; Peng and Williams, 1993; Moore and Atkeson, 1993; Kuvayev and Sutton, 1996). This table-lookup representation limits its application to small problems only. Sutton et al. (2012) extends the Dyna architecture to linear function approximation, making it applicable to larger problems. In the dialogue setting, we are dealing with a much larger action-state space. Inspired by Mnih et al. (2015), we propose Deep Dyna-Q (DDQ) by combining Dyna-Q with deep learning approaches to representing the state-action space by neural networks (NN). By employing the world model for planning, the DDQ method can be viewed as a model-based RL approach, which has drawn growing interest in the research community. 
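For readers unfamiliar with the tabular Dyna-Q loop that DDQ generalizes, the sketch below shows the classic algorithm (Sutton, 1990) on a generic discrete environment. The env interface, the hyperparameter values, and the uniform replay over stored transitions are illustrative assumptions; the deep, dialogue-specific version with neural function approximation is what the remainder of this paper develops.

```python
import random
from collections import defaultdict

def tabular_dyna_q(env, episodes=200, K=5, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Dyna-Q (Sutton, 1990): each real step is followed by K planning
    steps that replay transitions stored in a learned world model.
    `env` is assumed to expose reset() -> state, step(a) -> (next_state,
    reward, done), and a list of discrete actions env.actions."""
    Q = defaultdict(float)            # Q[(state, action)]
    model = {}                        # model[(state, action)] = (reward, next_state, done)

    def q_max(s):
        return max(Q[(s, a)] for a in env.actions)

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = (random.choice(env.actions) if random.random() < eps
                 else max(env.actions, key=lambda a_: Q[(s, a_)]))
            s2, r, done = env.step(a)                       # real experience
            target = r + (0.0 if done else gamma * q_max(s2))
            Q[(s, a)] += alpha * (target - Q[(s, a)])       # direct RL (Q-learning)
            model[(s, a)] = (r, s2, done)                   # world-model learning
            for _ in range(K):                              # planning
                ps, pa = random.choice(list(model))
                pr, ps2, pdone = model[(ps, pa)]
                ptarget = pr + (0.0 if pdone else gamma * q_max(ps2))
                Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])
            s = s2
    return Q
```

DDQ replaces the Q table with a neural network Q(s, a; θ_Q), the (s, a) → (r, s′) lookup with a learned world model trained on real user experience, and the uniform replay with the planning steps of Algorithm 1 described below.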
However, most model-based RL methods (Tamar et al., 2016; Silver et al., 2016b; Gu et al., 2016; Racani`ere et al., 2017) are developed for simulation-based, synthetic problems (e.g., games), but not for human-in-the-loop, real-world problems. To these ends, our main contributions in this work are two-fold: • We present Deep Dyna-Q, which to the best of our knowledge is the first deep RL framework that incorporates planning for taskcompletion dialogue policy learning. • We demonstrate that a task-completion dialogue agent can efficiently adapt its policy on the fly, by interacting with real users via RL. This results in a significant improvement in success rate on a nontrivial task. 2 Dialogue Policy Learning via Deep Dyna-Q (DDQ) Our DDQ dialogue agent is illustrated in Figure 2, consisting of five modules: (1) an LSTMbased natural language understanding (NLU) module (Hakkani-T¨ur et al., 2016) for identifying user intents and extracting associated slots; (2) a state tracker (Mrkˇsi´c et al., 2016) for tracking the dialogue states; (3) a dialogue policy which selects 2184 NLG NLU 𝑜1 𝑜2 Dialogue State Tracker 𝑜𝑡 Dialogue Policy Dialogue Manager System Action (Policy) 𝑠𝑡 𝑠1 𝑠2 𝑠𝑛 𝑎1 𝑎2 𝑎𝑘 …… … Semantic Frame State Representation 𝑎∗= max 𝑎 𝜋𝑎|𝑠 World Model User Goal Figure 2: Illustration of the task-completion DDQ dialogue agent. the next action2 based on the current state; (4) a model-based natural language generation (NLG) module for converting dialogue actions to natural language response (Wen et al.); and (5) a world model for generating simulated user actions and simulated rewards. As illustrated in Figure 1(c), starting with an initial dialogue policy and an initial world model (both trained with pre-collected human conversational data), the training of the DDQ agent consists of three processes: (1) direct reinforcement learning, where the agent interacts with a real user, collects real experience and improves the dialogue policy; (2) world model learning, where the world model is learned and refined using real experience; and (3) planning, where the agent improves the dialogue policy using simulated experience. Although these three processes conceptually can occur simultaneously in the DDQ agent, we implement an iterative training procedure, as shown in Algorithm 1, where we specify the order in which they occur within each iteration. In what follows, we will describe these processes in details. 2.1 Direct Reinforcement Learning In this process (lines 5-18 in Algorithm 1) we use the DQN method (Mnih et al., 2015) to improve the dialogue policy based on real experience. We consider task-completion dialogue as a Markov Decision Process (MDP), where the agent inter2In the dialogue scenario, actions are dialogue-acts, consisting of a single act and a (possibly empty) collection of (slot = value) pairs (Schatzmann et al., 2007). acts with a user in a sequence of actions to accomplish a user goal. In each step, the agent observes the dialogue state s, and chooses the action a to execute, using an ϵ-greedy policy that selects a random action with probability ϵ or otherwise follows the greedy policy a = argmaxa′Q(s, a′; θQ). Q(s, a; θQ) which is the approximated value function, implemented as a Multi-Layer Perceptron (MLP) parameterized by θQ. The agent then receives reward3 r, observes next user response au, and updates the state to s′. Finally, we store the experience (s, a, r, au, s′) in the replay buffer Du. The cycle continues until the dialogue terminates. 
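A minimal sketch of this interaction step is given below, assuming numpy and treating the dialogue state and user action as opaque placeholders; the Q-values and the stored tuple are toy values for illustration only.

```python
import random
from collections import deque

import numpy as np

def epsilon_greedy(q_values: np.ndarray, epsilon: float) -> int:
    """Select a random dialogue act with probability epsilon,
    otherwise the act with the largest estimated Q-value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return int(np.argmax(q_values))

# Replay buffer D_u holding tuples (s, a, r, a_u, s'); 5000 matches the
# buffer size reported later in the implementation details.
replay_buffer_u = deque(maxlen=5000)

# Toy step: three candidate dialogue acts with made-up Q-values.
q = np.array([0.2, 1.3, -0.4])
a = epsilon_greedy(q, epsilon=0.1)
replay_buffer_u.append(("s", a, -1, "a_u", "s_next"))  # placeholder experience
```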
We improve the value function Q(s, a; θQ) by adjusting θQ to minimize the mean-squared loss function, defined as follows: L(θQ) = E(s,a,r,s′)∼Du[(yi −Q(s, a; θQ))2] yi = r + γ max a′ Q′(s′, a′; θQ′) (1) where γ ∈[0, 1] is a discount factor, and Q′(.) is the target value function that is only periodically updated (line 42 in Algorithm 1). By differentiating the loss function with respect to θQ, we arrive at the following gradient: ∇θQL(θQ) = E(s,a,r,s′)∼Du[(r+ γ max a′ Q′(s′, a′; θQ′) −Q(s, a; θQ)) ∇θQQ(s, a; θQ)] (2) As shown in lines 16-17 in Algorithm 1, in each iteration, we improve Q(.) using minibatch Deep Q-learning. 2.2 Planning In the planning process (lines 23-41 in Algorithm 1), the world model is employed to generate simulated experience that can be used to improve dialogue policy. K in line 24 is the number of planning steps that the agent performs per step of direct reinforcement learning. If the world model is able to accurately simulate the environment, a big K can be used to speed up the policy learning. In DDQ, we use two replay buffers, Du for storing real experience and Ds for simulated experience. Learning and planning are accomplished 3In the dialogue scenario, reward is defined to measure the degree of success of a dialogue. In our experiment, for example, success corresponds to a reward of 80, failure to a reward of −40, and the agent receives a reward of −1 at each turn so as to encourage shorter dialogues. 2185 Algorithm 1 Deep Dyna-Q for Dialogue Policy Learning Require: N, ϵ, K, L, C, Z Ensure: Q(s, a; θQ), M(s, a; θM) 1: initialize Q(s, a; θQ) and M(s, a; θM) via pre-training on human conversational data 2: initialize Q′(s, a; θQ′) with θQ′ = θQ 3: initialize real experience replay buffer Du using Reply Buffer Spiking (RBS), and simulated experience replay buffer Ds as empty 4: for n=1:N do 5: # Direct Reinforcement Learning starts 6: user starts a dialogue with user action au 7: generate an initial dialogue state s 8: while s is not a terminal state do 9: with probability ϵ select a random action a 10: otherwise select a = argmaxa′Q(s, a′; θQ) 11: execute a, and observe user response au and reward r 12: update dialogue state to s′ 13: store (s, a, r, au, s′) to Du 14: s = s′ 15: end while 16: sample random minibatches of (s, a, r, s′) from Du 17: update θQ via Z-step minibatch Q-learning according to Equation (2) 18: # Direct Reinforcement Learning ends 19: # World Model Learning starts 20: sample random minibatches of training samples (s, a, r, au, s′) from Du 21: update θM via Z-step minibatch SGD of multi-task learning 22: # World Model Learning ends 23: # Planning starts 24: for k=1:K do 25: t = FALSE, l = 0 26: sample a user goal G 27: sample user action au from G 28: generate an initial dialogue state s 29: while t is FALSE ∧l ≤L do 30: with probability ϵ select a random action a 31: otherwise select a = argmaxa′Q(s, a′; θQ) 32: execute a 33: world model responds with au, r and t 34: update dialogue state to s′ 35: store (s, a, r, s′) to Ds 36: l = l + 1, s = s′ 37: end while 38: sample random minibatches of (s, a, r, s′) from Ds 39: update θQ via Z-step minibatch Q-learning according to Equation (2) 40: end for 41: # Planning ends 42: every C steps reset θQ′ = θQ 43: end for by the same DQN algorithm, operating on real experience in Du for learning and on simulated experience in Ds for planning. Thus, here we only describe the way the simulated experience is generated. Similar to Schatzmann et al. 
(2007), at the beginning of each dialogue, we uniformly draw a user goal G = (C, R), where C is a set of constraints and R is a set of requests (line 26 in Algorithm 1). For movie-ticket booking dialogues, constraints are typically the name and the date of the movie, the number of tickets to buy, etc. Requests can contain these slots as well as the location of the theater, its start time, etc. Table 3 presents some sampled user goals and dialogues generated by simulated and real users, respectively. The first user action au (line 27) can be either a request or an inform dialogueact. A request, such as request(theater; moviename=batman), consists of a request slot and multiple (⩾ 1) constraint slots, uniformly sampled from R and C, respectively. An inform contains constraint slots only. The user action can also be converted to natural language via NLG, e.g., "which theater will show batman?" In each dialogue turn, the world model takes as input the current dialogue state s and the last agent action a (represented as an one-hot vector), and generates user response au, reward r, and a binary variable t, which indicates whether the dialogue terminates (line 33). The generation is accomplished using the world model M(s, a; θM), a MLP shown in Figure 3, as follows: h = tanh(Wh(s, a) + bh) r = Wrh + br au = softmax(Wah + ba) t = sigmoid(Wth + bt) where (s, a) is the concatenation of s and a, and W and b are parameter matrices and vectors, respectively. Task-Specific Representation s: state a: agent action au r t Shared layers Figure 3: The world model architecture. 2186 2.3 World Model Learning In this process (lines 19-22 in Algorithm 1), M(s, a; θM) is refined via minibatch SGD using real experience in the replay buffer Du. As shown in Figure 3, M(s, a; θM) is a multi-task neural network (Liu et al., 2015) that combines two classification tasks of simulating au and t, respectively, and one regression task of simulating r. The lower layers are shared across all tasks, while the top layers are task-specific. 3 Experiments and Results We evaluate the DDQ method on a movie-ticket booking task in both simulation and human-in-theloop settings. 3.1 Dataset Raw conversational data in the movie-ticket booking scenario was collected via Amazon Mechanical Turk. The dataset has been manually labeled based on a schema defined by domain experts, as shown in Table 4, which consists of 11 dialogue acts and 16 slots. In total, the dataset contains 280 annotated dialogues, the average length of which is approximately 11 turns. 3.2 Dialogue Agents for Comparison To benchmark the performance of DDQ, we have developed different versions of task-completion dialogue agents, using variations of Algorithm 1. • A DQN agent is learned by standard DQN, implemented with direct reinforcement learning only (lines 5-18 in Algorithm 1) in each epoch. • The DDQ(K) agents are learned by DDQ of Algorithm 1, with an initial world model pretrained on human conversational data, as described in Section 3.1. K is the number of planning steps. We trained different versions of DDQ(K) with different K’s. • The DDQ(K, rand-init θM) agents are learned by the DDQ method with a randomly initialized world model. • The DDQ(K, fixed θM) agents are learned by DDQ with an initial world model pretrained on human conversational data. But the world model is not updated afterwards. That is, the world model learning part in Algorithm 1 (lines 19-22) is removed. The DDQ(K, fixed θM) agents are evaluated in the simulation setting only. 
• The DQN(K) agents are learned by DQN, but with K times more real experiences than the DQN agent. DQN(K) is evaluated in the simulation setting only. Its performance can be viewed as the upper bound of its DDQ(K) counterpart, assuming that the world model in DDQ(K) perfectly matches real users. Implementation Details All the models in these agents (Q(s, a; θQ), M(s, a; θM)) are MLPs with tanh activations. Each policy network Q(.) has one hidden layer with 80 hidden nodes. As shown in Figure 3, the world model M(.) contains two shared hidden layers and three task-specific hidden layers, with 80 nodes in each. All the agents are trained by Algorithm 1 with the same set of hyper-parameters. ϵ-greedy is always applied for exploration. We set the discount factor γ = 0.95. The buffer sizes of both Du and Ds are set to 5000. The target value function is updated at the end of each epoch. In each epoch, Q(.) and M(.) are refined using one-step (Z = 1) 16-tupleminibatch update. 4 In planning, the maximum length of a simulated dialogue is 40 (L = 40). In addition, to make the dialogue training efficient, we also applied a variant of imitation learning, called Reply Buffer Spiking (RBS) (Lipton et al., 2016). We built a naive but occasionally successful rule-based agent based on human conversational dataset (line 1 in Algorithm 1), and prefilled the real experience replay buffer Du with 100 dialogues of experience (line 2) before training for all the variants of agents. 3.3 Simulated User Evaluation In this setting the dialogue agents are optimized by interacting with user simulators, instead of real users. Thus, the world model is learned to mimic user simulators. Although the simulator-trained agents are sub-optimal when applied to real users due to the discrepancy between simulators and real users, the simulation setting allows us to perform a detailed analysis of DDQ without much cost and to reproduce the experimental results easily. 4We found in our experiments that setting Z > 1 improves the performance of all agents, but does not change the conclusion of this study: DDQ consistently outperforms DQN by a statistically significant margin. Conceptually, the optimal value of Z used in planning is different from that in direct reinforcement learning, and should vary according to the quality of the world model. The better the world model is, the more aggressive update (thus bigger Z) is being used in planning. We leave it to future work to investigate how to optimize Z for planning in DDQ. 2187 Agent Epoch = 100 Epoch = 200 Epoch = 300 Success Reward Turns Success Reward Turns Success Reward Turns DQN .4260 -3.84 31.93 .5308 10.78 22.72 .6480 27.66 22.21 DDQ(5) .6056 20.35 26.65 .7128 36.76 19.55 .7372 39.97 18.99 DDQ(5, rand-init θM) .5904 18.75 26.21 .6888 33.47 20.36 .7032 36.06 18.64 DDQ(5, fixed θM) .5540 14.54 25.89 .6660 29.72 22.39 .6860 33.58 19.49 DQN(5) .6560 29.38 21.76 .7344 41.09 16.07 .7576 43.97 15.88 DDQ(10) .6624 28.18 24.62 .7664 42.46 21.01 .7840 45.11 19.94 DDQ(10, rand-init θM) .6132 21.50 26.16 .6864 32.43 21.86 .7628 42.37 20.32 DDQ(10, fixed θM) .5884 18.41 26.41 .6196 24.17 22.36 .6412 26.70 22.49 DQN(10) .7944 48.61 15.43 .8296 54.00 13.09 .8356 54.89 12.77 Table 1: Results of different agents at training epoch = {100, 200, 300}. Each number is averaged over 5 runs, each run tested on 2000 dialogues. 
Excluding DQN(5) and DQN(10) which serve as the upper bounds, any two groups of success rate (except three groups: at epoch 100, DDQ(5, rand-init θM) and DDQ(10, fixed θM), at epoch 200, DDQ(5, rand-init θM) and DDQ(10, rand-init θM), at epoch 300, DQN and DDQ(10, fixed θM)) evaluated at the same epoch is statistically significant in mean with p < 0.01. (Success: success rate) 0 50 100 150 200 250 300 350 400 Epoch 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Success rate DQN DDQ(2) DDQ(5) DDQ(10) DDQ(20) Figure 4: Learning curves of the DDQ(K) agents with K = 2, 5, 10, 20. The DQN agent is identical to a DDQ(K) agent with K = 0. User Simulator We adapted a publicly available user simulator (Li et al., 2016b) to the taskcompletion dialogue setting. During training, the simulator provides the agent with a simulated user response in each dialogue turn and a reward signal at the end of the dialogue. A dialogue is considered successful only when a movie ticket is booked successfully and when the information provided by the agent satisfies all the user’s constraints. At the end of each dialogue, the agent receives a positive reward of 2 ∗L for success, or a negative reward of −L for failure, where L is the maximum number of turns in each dialogue, and is set to 40 in our experiments. Furthermore, in each turn, the agent receives a reward of −1, so that shorter dialogues are encouraged. Readers can refer to Appendix B for details on the user simulator. 0 50 100 150 200 250 300 350 400 Epoch 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Success rate DQN DDQ(10) DDQ(10, rand-init) DDQ(10, fixed) DQN(10) Figure 5: Learning curves of DQN, DDQ(10), DDQ(10, rand-init θM), DDQ(10, fixed θM), and DQN(10). Results The main simulation results are reported in Table 1 and Figures 4 and 5. For each agent, we report its results in terms of success rate, average reward, and average number of turns (averaged over 5 repetitions of the experiments). Results show that the DDQ agents consistently outperform DQN with a statistically significant margin. Figure 4 shows the learning curves of different DDQ agents trained using different planning steps. Since the training of all RL agents started with RBS using the same rule-based agent, their performance in the first few epochs is very close. After that, performance improved for all values of K, but much more rapidly for larger values. Recall that the DDQ(K) agent with K=0 is identical to the DQN agent, which does no planning but relies on direct reinforcement learning only. Without planning, the DQN agent took about 180 epochs (real dialogues) to reach the success rate of 50%, 2188 Agent Epoch = 100 Epoch = 150 Epoch = 200 Success Reward Turns Success Reward Turns Success Reward Turns DQN .0000 -58.69 39.38 .4080 -5.730 30.38 .4545 0.350 30.38 DDQ(5) .4620 00.78 31.33 .5637 15.05 26.17 .6000 19.84 26.32 DDQ(5, rand-init θM) .3600 -11.67 31.74 .5500 13.71 26.58 .5752 16.84 26.37 DDQ(10) .5555 14.69 25.92 .6416 25.85 24.28 .7332 38.88 20.21 DDQ(10, rand-init θM) .5010 6.27 29.70 .6055 22.11 23.11 .7023 36.90 21.20 Table 2: The performance of different agents at training epoch = {100, 150, 200} in the human-in-theloop experiments. The difference between the results of all agent pairs evaluated at the same epoch is statistically significant (p < 0.01). (Success: success rate) and DDQ(10) took only 50 epochs. 
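For reference, the reward signal driving these success-rate curves (described with the user simulator above: 2*L for success, -L for failure, -1 per turn, with L = 40) can be summarized compactly as follows. This is a simplification that reports the cumulative return of a finished dialogue, rather than the per-turn signal the agent actually receives.

```python
def cumulative_reward(success: bool, num_turns: int, max_turns: int = 40) -> int:
    """Total reward a dialogue collects under the simulator's scheme:
    -1 per turn to encourage short dialogues, plus 2*L on success or
    -L on failure at the end, with L = max_turns (40 in these experiments)."""
    terminal = 2 * max_turns if success else -max_turns
    return terminal - num_turns

print(cumulative_reward(True, 10))    # a 10-turn success: 80 - 10 = 70
print(cumulative_reward(False, 40))   # a failed, maxed-out dialogue: -40 - 40 = -80
```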
Intuitively, the optimal value of K needs to be determined by seeking the best trade-off between the quality of the world model and the amount of simulated experience that is useful for improving the dialogue agent. This is a non-trivial optimization problem because both the dialogue agent and the world model are updated constantly during training and the optimal K needs to be adjusted accordingly. For example, we find in our experiments that at the early stages of training, it is fine to perform planning aggressively by using large amounts of simulated experience even though they are of low quality, but in the late stages of training where the dialogue agent has been significantly improved, low-quality simulated experience is likely to hurt the performance. Thus, in our implementation of Algorithm 1, we use a heuristic5 to reduce the value of K in the late stages of training (e.g., after 150 epochs in Figure 4) to mitigate the negative impact of low-qualify simulated experience. We leave it to future work how to optimize the planning step size during DDQ training in a principled way. Figure 5 shows that the quality of the world model has a significant impact on the agent’s performance. The learning curve of DQN(10) indicates the best performance we can expect with a perfect world model. With a pre-trained world model, the performance of the DDQ agent improves more rapidly, although eventually, the DDQ and DDQ(rand-init θM) agents reach the same success rate after many epochs. The world model learning process is crucial to both the efficiency of dialogue policy learning and the final performance of the agent. For example, in the early stages (before 60 epochs), the performances of DDQ and DDQ(fixed θM) remain very close to each other, but DDQ reaches a success rate almost 5The heuristic is not presented in Algorithm 1. Readers can refer to the released source code for details. 0 50 100 150 200 Epoch 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Success rate DQN DDQ(5) DDQ(5, rand-init) DDQ(10) DDQ(10, rand-init) Figure 6: Human-in-the-loop dialogue policy learning curves in four different agents. 10% better than DDQ(fixed θM) after 400 epochs. 3.4 Human-in-the-Loop Evaluation In this setting, five dialogue agents (i.e., DQN, DDQ(10), DDQ(10, rand-init θM), DDQ(5), and DDQ(5, rand-init θM)) are trained via RL by interacting with real human users. In each dialogue session, one of the agents was randomly picked to converse with a user. The user was presented with a user goal sampled from the corpus, and was instructed to converse with the agent to complete the task. The user had the choice of abandoning the task and ending the dialogue at any time, if she or he believed that the dialogue was unlikely to succeed or simply because the dialogue dragged on for too many turns. In such cases, the dialogue session is considered failed. At the end of each session, the user was asked to give explicit feedback whether the dialogue succeeded (i.e., whether the movie tickets were booked with all the user constraints satisfied). Each learning curve is trained with two runs, with each run generating 150 dialogues (and K∗150 additional simulated dialogues when planning is applied). In total, we collected 1500 dialogue sessions for training all five agents. 
The main results are presented in Table 2 and 2189 Simulation Sample Real User Sample movie-ticket booking user goal: { “request slots”: { “constraint slots”: { “ticket”: “?” “numberofpeople”:“2” “theater”: “?” “moviename”: “deadpool” “starttime”: “?” “city”: “seattle” “date”: “?” } } } movie-ticket booking user goal: { “request slots”: { “constraint slots”: { “ticket”: “?” “date”:“this weekend” “theater”: “?” “numberofpeople”: “1” “starttime”: “?” “moviename”: “batman” } } } DQN agent 1 usr: Hi, what date can I book 2 tickets for deadpool? usr: Hi, which theater will show batman? I want 1 ticket. 2 agt: Which city would you like? agt: Which city would you like? 3 usr: I want to watch at seattle. usr: Boston. 4 agt: What time would you like to see it? agt: What time would you like to see it? 5 usr: What start time is available? usr: What start time is available? 6 agt: Which theater would you like? agt: What date would you like to see it? 7 usr: Which theater is available? usr: This weekend. 8 agt: What time would you like to see it? agt:Which theater would you like? 9 usr: What start time is available? usr: Which theater is available? 10 agt: Which theater would you like? agt: Which theater would you like? 11 usr: Which theater is available? usr: Which theater is available? 12 agt: How many tickets do you need? agt: What time would you like to see it? 13 usr: I want 2 tickets please! usr: What start time is available? 14 agt: Which theater would you like? agt: Which theater would you like? 15 usr: Which theater is available? usr: Thanks, I do not need the ticket. 16 agt: What time would you like to see it? agt: Thank you. ... 39 agt: Which theater would you like? 40 usr: Which theater is available? Failure: Agent does not book the right ticket within the max turns. Failure: User terminates the dialogue before reaching the max turns because the agent repeatly asks the same question. DDQ(10) agent 1 usr: Hi, when is deadpool playing at seattle? usr: Hi, which theater will show batman? I want 1 ticket. 2 agt: Tonight. agt: Century Eastport 16. 3 usr: What is the start time? usr: What is the start time? 4 agt: Which theater would you like? agt: 9PM. 5 usr: I want to watch it at amc pacific place 11. usr: Can I buy a ticket here? 6 agt: 10pm is available. agt: This weekend? 7 usr: I want 2 tickets please! usr: Yes, one ticket please! 8 agt: Great, here are 2 deadpool tickets for you at amc pacific place 11 in seattle tonight 10 pm. agt: Sure, here it is - batman at Century Eastport 16, 9PM this weekend. 9 usr: Thank you. usr: Thank you. 10 agt: Thank you. agt: Thank you. Success Success Table 3: Two sample dialogue sessions by DQN and DDQ(10) agents trained at epoch 100: Left: simulated user experiments; Right: human-in-the-loop experiments. (agt: agent, usr: user) Figure 6, with each agent averaged over two independent runs. The results confirm what we observed in the simulation experiments. The conclusions are summarized as below: • The DDQ agent significantly outperforms DQN, as demonstrated by the comparison between DDQ(10) and DQN. Table 3 presents four example dialogues produced by two dialogue agents interacting with simulated and human users, respectively. The DQN agent, after being trained with 100 dialogues, still behaved like a naive rule-based agent that requested information bit by bit in a fixed order. When the user did not answer the request explicitly (e.g., usr: which theater is available?), the agent failed to respond properly. 
On the other hand, with planning, the DDQ agent trained with 100 real dialogues is much more robust and can complete 50% of user tasks successfully. • A larger K leads to more aggressive planning and better results, as shown by DDQ(10) vs. DDQ(5). • Pre-training world model with human con2190 versational data improves the learning efficiency and the agent’s performance, as shown by DDQ(5) vs. DDQ(5, rand-init θM), and DDQ(10) vs. DDQ(10, rand-init θM). 4 Conclusion We propose a new strategy for a task-completion dialogue agent to learn its policy by interacting with real users. Compared to previous work, our agent learns in a much more efficient way, using only a small number of real user interactions, which amounts to an affordable cost in many nontrivial domains. Our strategy is based on the Deep Dyna-Q (DDQ) framework where planning is integrated into dialogue policy learning. The effectiveness of DDQ is validated by human-in-theloop experiments, demonstrating that a dialogue agent can efficiently adapt its policy on the fly by interacting with real users via deep RL. One interesting topic for future research is exploration in planning. We need to deal with the challenge of adapting the world model in a changing environment, as exemplified by the domain extension problem (Lipton et al., 2016). As pointed out by Sutton and Barto (1998), the general problem here is a particular manifestation of the conflict between exploration and exploitation. In a planning context, exploration means trying actions that may improve the world model, whereas exploitation means trying to behave in the optimal way given the current model. To this end, we want the agent to explore in the environment, but not so much that the performance would be greatly degraded. Additional Authors Shang-Yu Su (National Taiwan University, Room 524, CSIE Bldg., No. 1, Sec. 4, Roosevelt Rd., Taipei 10617, Taiwan. email: [email protected]) Acknowledgments We would like to thank Chris Brockett, Yun-Nung Chen, Michel Galley and Lihong Li for their insightful comments on the paper. We would like to acknowledge the volunteers from Microsoft Research for helping us with the human-in-the-loop experiments. This work was done when Baolin Peng and Shang-Yu Su were visiting Microsoft. Baolin Peng is in part supported by Innovation and Technology Fund (6904333), and General Research Fund of Hong Kong (12183516). References Pawel Budzianowski, Stefan Ultes, Pei-Hao Su, Nikola Mrksic, Tsung-Hsien Wen, Inigo Casanueva, Lina Rojas-Barahona, and Milica Gasic. 2017. Subdomain modelling for dialogue management with hierarchical reinforcement learning. arXiv preprint arXiv:1706.06210 . Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2017. Towards end-to-end reinforcement learning of dialogue agents for information access. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 484–495. Milica Gaˇsi´c, Filip Jurˇc´ıˇcek, Simon Keizer, Franc¸ois Mairesse, Blaise Thomson, Kai Yu, and Steve Young. 2010. Gaussian processes for fast policy optimisation of pomdp-based dialogue managers. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Linguistics, pages 201– 204. Milica Gaˇsi´c, Filip Jurˇc´ıˇcek, Blaise Thomson, Kai Yu, and Steve Young. 2011. On-line policy optimisation of spoken dialogue systems via live interaction with human subjects. 
In Automatic Speech Recognition and Understanding (ASRU), 2011 IEEE Workshop on. IEEE, pages 312–317. Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, and Sergey Levine. 2016. Continuous deep q-learning with model-based acceleration. In International Conference on Machine Learning. pages 2829– 2838. Dilek Hakkani-T¨ur, Gokhan Tur, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and YeYi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional RNN-LSTM. In Proceedings of The 17th Annual Meeting of the International Speech Communication Association. Leonid Kuvayev and Richard S Sutton. 1996. Modelbased reinforcement learning with an approximate, learned model. In in Proceedings of the Ninth Yale Workshop on Adaptive and Learning Systems. Citeseer. Esther Levin, Roberto Pieraccini, and Wieland Eckert. 1997. Learning dialogue strategies within the markov decision process framework. In Automatic Speech Recognition and Understanding, 1997. Proceedings., 1997 IEEE Workshop on. IEEE, pages 72–79. Jiwei Li, Alexander H Miller, Sumit Chopra, Marc’Aurelio Ranzato, and Jason Weston. 2016a. 2191 Dialogue learning with human-in-the-loop. arXiv preprint arXiv:1611.09823 . Xiujun Li, Zachary C Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao, and Yun-Nung Chen. 2016b. A user simulator for task-completion dialogues. arXiv preprint arXiv:1612.05688 . Xuijun Li, Yun-Nung Chen, Lihong Li, Jianfeng Gao, and Asli Celikyilmaz. 2017. End-to-end taskcompletion neural dialogue systems. In Proceedings of the The 8th International Joint Conference on Natural Language Processing. pages 733–743. Zachary C Lipton, Jianfeng Gao, Lihong Li, Xiujun Li, Faisal Ahmed, and Li Deng. 2016. Efficient exploration for dialogue policy learning with bbq networks & replay buffer spiking. arXiv preprint arXiv:1608.05081 . Bing Liu and Ian Lane. 2017. Iterative policy learning in end-to-end trainable task-oriented neural dialog models. In Proceedings of 2017 IEEE Workshop on Automatic Speech Recognition and Understanding. Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. 2015. Representation learning using multi-task deep neural networks for semantic classification and information retrieval . Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature 518(7540):529–533. Andrew W Moore and Christopher G Atkeson. 1993. Prioritized sweeping: Reinforcement learning with less data and less time. Machine learning 13(1):103–130. Nikola Mrkˇsi´c, Diarmuid O S´eaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2016. Neural belief tracker: Data-driven dialogue state tracking. arXiv preprint arXiv:1606.03777 . Alexandros Papangelis. 2012. A comparative study of reinforcement learning techniques on dialogue management. In Proceedings of the Student Research Workshop at the 13th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 22–31. Baolin Peng, Xiujun Li, Jianfeng Gao, Jingjing Liu, Yun-Nung Chen, and Kam-Fai Wong. 2017a. Adversarial advantage actor-critic model for taskcompletion dialogue policy learning. arXiv preprint arXiv:1710.11277 . Baolin Peng, Xiujun Li, Lihong Li, Jianfeng Gao, Asli Celikyilmaz, Sungjin Lee, and Kam-Fai Wong. 2017b. 
Composite task-completion dialogue policy learning via hierarchical deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 2221–2230. Jing Peng and Ronald J Williams. 1993. Efficient learning and planning within the dyna framework. Adaptive Behavior 1(4):437–454. Olivier Pietquin, Matthieu Geist, Senthilkumar Chandramohan, et al. 2011. Sample efficient online learning of optimal dialogue policies with kalman temporal differences. In IJCAI ProceedingsInternational Joint Conference on Artificial Intelligence. volume 22, page 1878. Olivier Pietquin and Helen Hastie. 2013. A survey on metrics for the evaluation of user simulations. The knowledge engineering review . S´ebastien Racani`ere, Th´eophane Weber, David Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adri`a Puigdom`enech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, et al. 2017. Imagination-augmented agents for deep reinforcement learning. In Advances in Neural Information Processing Systems. pages 5694–5705. Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young. 2007. Agenda-based user simulation for bootstrapping a pomdp dialogue system. In NAACL 2007; Companion Volume, Short Papers. Association for Computational Linguistics, pages 149–152. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016a. Mastering the game of go with deep neural networks and tree search. Nature 529(7587):484–489. David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. 2017. Mastering the game of go without human knowledge. Nature 550(7676):354. David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel DulacArnold, David Reichert, Neil Rabinowitz, Andre Barreto, et al. 2016b. The predictron: Endto-end learning and planning. arXiv preprint arXiv:1612.08810 . Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. 2002. Optimizing dialogue management with reinforcement learning: Experiments with the njfun system. Journal of Artificial Intelligence Research 16:105–133. Satinder P Singh. 1992. Reinforcement learning with a hierarchy of abstract models. In Proceedings of the National Conference on Artificial Intelligence. JOHN WILEY & SONS LTD, 10, page 202. 2192 Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina RojasBarahona, Stefan Ultes, David Vandyke, TsungHsien Wen, and Steve Young. 2016a. Continuously learning neural dialogue management. arXiv preprint arXiv:1606.02689 . Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina RojasBarahona, Stefan Ultes, David Vandyke, TsungHsien Wen, and Steve Young. 2016b. On-line active reward learning for policy optimisation in spoken dialogue systems. arXiv preprint arXiv:1605.07669 . Richard S Sutton. 1990. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the seventh international conference on machine learning. pages 216–224. Richard S Sutton and Andrew G Barto. 1998. Introduction to reinforcement learning, volume 135. MIT press Cambridge. Richard S Sutton, Csaba Szepesv´ari, Alborz Geramifard, and Michael P Bowling. 2012. Dyna-style planning with linear function approximation and prioritized sweeping. arXiv preprint arXiv:1206.3285 . 
Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. 2016. Value iteration networks. In Advances in Neural Information Processing Systems. pages 2154–2162. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Peihao Su, David Vandyke, and Steve J. Young. ???? Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In EMNLP 2015, Lisbon, Portugal, September 17-21, 2015. pages 1711–1721. Jason D Williams, Kavosh Asadi, and Geoffrey Zweig. 2017. Hybrid code networks: Practical and efficient end-to-end dialog control with supervised and reinforcement learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Steve Young, Milica Gaˇsi´c, Blaise Thomson, and Jason D Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE 101(5):1160–1179. Tiancheng Zhao and Maxine Eskenazi. 2016. Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning. arXiv preprint arXiv:1606.02560 . A Dataset Annotation Schema Table 4 lists all annotated dialogue acts and slots in details. Annotations request, inform, deny, confirm question, Intent confirm answer, greeting, closing, not sure, multiple choice, thanks, welcome Slot city, closing, date, distanceconstraints, greeting, moviename, numberofpeople, price, starttime, state, taskcomplete, theater, theater chain, ticket, video format, zip Table 4: The data annotation schema B User Simulator In the task-completion dialogue setting, the entire conversation is around a user goal implicitly, but the agent knows nothing about the user goal explicitly and its objective is to help the user to accomplish this goal. Generally, the definition of user goal contains two parts: • inform slots contain a number of slot-value pairs which serve as constraints from the user. • request slots contain a set of slots that user has no information about the values, but wants to get the values from the agent during the conversation. ticket is a default slot which always appears in the request slots part of user goal. To make the user goal more realistic, we add some constraints in the user goal: slots are split into two groups. Some of slots must appear in the user goal, we called these elements as Required slots. In the movie-booking scenario, it includes moviename, theater, starttime, date, numberofpeople; the rest slots are Optional slots, for example, theater chain, video format etc. We generated the user goals from the labeled dataset mentioned in Section 3.1, using two mechanisms. One mechanism is to extract all the slots (known and unknown) from the first user turns (excluding the greeting user turn) in the data, since usually the first turn contains some or all the required information from user. The other mechanism is to extract all the slots (known and unknown) that first appear in all the user turns, and then aggregate them into one user goal. We dump these user goals into a file as the user-goal database. Every time when running a dialogue, we randomly sample one user goal from this user goal database.
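As a small illustration of the sampling step described above (line 26 of Algorithm 1), the sketch below draws a user goal uniformly from a goal database. The two goals and their slot values are invented for illustration; in the experiments the goals are extracted from the labeled corpus as described in this appendix.

```python
import random

# A tiny, made-up user-goal database; real goals are extracted from the
# labeled movie-ticket booking corpus.
USER_GOAL_DB = [
    {"request_slots": ["ticket", "theater", "starttime"],
     "inform_slots": {"moviename": "deadpool", "date": "tomorrow",
                      "numberofpeople": "2", "city": "seattle"}},
    {"request_slots": ["ticket", "starttime"],
     "inform_slots": {"moviename": "batman", "date": "this weekend",
                      "numberofpeople": "1", "theater": "century eastport 16"}},
]

def sample_user_goal(db=USER_GOAL_DB):
    """Uniformly draw a user goal G = (C, R): inform_slots are the
    constraints C, request_slots are the requests R, which always
    include the default 'ticket' slot."""
    return random.choice(db)

goal = sample_user_goal()
print(goal["request_slots"], goal["inform_slots"])
```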
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2193–2203 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2193 Learning to Ask Questions in Open-domain Conversational Systems with Typed Decoders Yansen Wang1,∗, Chenyi Liu1,∗, Minlie Huang1,†, Liqiang Nie2 1Conversational AI group, AI Lab., Department of Computer Science, Tsinghua University 1Beijing National Research Center for Information Science and Technology, China 2Shandong University [email protected];[email protected]; [email protected];[email protected] Abstract Asking good questions in large-scale, open-domain conversational systems is quite significant yet rather untouched. This task, substantially different from traditional question generation, requires to question not only with various patterns but also on diverse and relevant topics. We observe that a good question is a natural composition of interrogatives, topic words, and ordinary words. Interrogatives lexicalize the pattern of questioning, topic words address the key information for topic transition in dialogue, and ordinary words play syntactical and grammatical roles in making a natural sentence. We devise two typed decoders (soft typed decoder and hard typed decoder) in which a type distribution over the three types is estimated and used to modulate the final generation distribution. Extensive experiments show that the typed decoders outperform state-of-the-art baselines and can generate more meaningful questions. 1 Introduction Learning to ask questions (or, question generation) aims to generate a question to a given input. Deciding what to ask and how is an indicator of machine understanding (Mostafazadeh et al., 2016), as demonstrated in machine comprehension (Du et al., 2017; Zhou et al., 2017b; Yuan et al., 2017) and question answering (Tang et al., 2017; Wang et al., 2017). Raising good questions is essential to conversational systems because a good system can well interact with users by asking and responding (Li et al., 2016). Furthermore, asking ∗Authors contributed equally to this work. †Corresponding author: Minlie Huang. questions is one of the important proactive behaviors that can drive dialogues to go deeper and further (Yu et al., 2016). Question generation (QG) in open-domain conversational systems differs substantially from the traditional QG tasks. The ultimate goal of this task is to enhance the interactiveness and persistence of human-machine interactions, while for traditional QG tasks, seeking information through a generated question is the major purpose. The response to a generated question will be supplied in the following conversations, which may be novel but not necessarily occur in the input as that in traditional QG (Du et al., 2017; Yuan et al., 2017; Tang et al., 2017; Wang et al., 2017; Mostafazadeh et al., 2016). Thus, the purpose of this task is to spark novel yet related information to drive the interactions to continue. Due to the different purposes, this task is unique in two aspects: it requires to question not only in various patterns but also about diverse yet relevant topics. First, there are various questioning patterns for the same input, such as Yes-no questions and Wh-questions with different interrogatives. Diversified questioning patterns make dialogue interactions richer and more flexible. 
Instead, traditional QG tasks can be roughly addressed by syntactic transformation (Andrenucci and Sneiders, 2005; Popowich and Winne, 2013), or implicitly modeled by neural models (Du et al., 2017). In such tasks, the information questioned on is pre-specified and usually determines the pattern of questioning. For instance, asking Whoquestion for a given person, or Where-question for a given location. Second, this task requires to address much more transitional topics of a given input, which is the nature of conversational systems. For instance, for the input “I went to dinner with my friends”, we may question about topics such as friend, cuisine, 2194 price, place and taste. Thus, this task generally requires scene understanding to imagine and comprehend a scenario (e.g., dining at a restaurant) that can be interpreted by topics related to the input. However, in traditional QG tasks, the core information to be questioned on is pre-specified and rather static, and paraphrasing is more required. Figure 1: Good questions in conversational systems are a natural composition of interrogatives, topic words, and ordinary words. Undoubtedly, asking good questions in conversational systems needs to address the above issues (questioning with diversified patterns, and addressing transitional topics naturally in a generated question). As shown in Figure 1, a good question is a natural composition of interrogatives, topic words, and ordinary words. Interrogatives indicate the pattern of questioning, topic words address the key information of topic transition, and ordinary words play syntactical and grammatical roles in making a natural sentence. We thus classify the words in a question into three types: interrogative, topic word, and ordinary word automatically. We then devise two decoders, Soft Typed Decoder (STD) and Hard Typed Decoder (HTD), for question generation in conversational systems1. STD deals with word types in a latent and implicit manner, while HTD in a more explicit way. At each decoding position, we firstly estimate a type distribution over word types. STD applies a mixture of type-specific generation distributions where type probabilities are the coefficients. By contrast, HTD reshapes the type distribution by Gumbel-softmax and modulates the generation distribution by type probabilities. Our contributions are as follows: • To the best of our knowledge, this is the first study on question generation in the setting of 1To simplify the task, as a preliminary research, we consider the one-round conversational system. conversational systems. We analyze the key differences between this new task and other traditional question generation tasks. • We devise soft and hard typed decoders to ask good questions by capturing different roles of different word types. Such typed decoders may be applicable to other generation tasks if word semantic types can be identified. 2 Related Work Traditional question generation can be seen in task-oriented dialogue system (Curto et al., 2012), sentence transformation (Vanderwende, 2008), machine comprehension (Du et al., 2017; Zhou et al., 2017b; Yuan et al., 2017; Subramanian et al., 2017), question answering (Qin, 2015; Tang et al., 2017; Wang et al., 2017; Song et al., 2017), and visual question answering (Mostafazadeh et al., 2016). In such tasks, the answer is known and is part of the input to the generated question. Meanwhile, the generation tasks are not required to predict additional topics since all the information has been provided in the input. 
They are applicable in scenarios such as designing questions for reading comprehension (Du et al., 2017; Zhou et al., 2017a; Yuan et al., 2017), and justifying the visual understanding by generating questions to a given image (video) (Mostafazadeh et al., 2016). In general, traditional QG tasks can be addressed by the heuristic rule-based reordering methods (Andrenucci and Sneiders, 2005; Ali et al., 2010; Heilman and Smith, 2010), slotfilling with question templates (Popowich and Winne, 2013; Chali and Golestanirad, 2016; Labutov et al., 2015), or implicitly modeled by recent neural models(Du et al., 2017; Zhou et al., 2017b; Yuan et al., 2017; Song et al., 2017; Subramanian et al., 2017). These tasks generally do not require to generate a question with various patterns: for a given answer and a supporting text, the question type is usually decided by the input. Question generation in large-scale, opendomain dialogue systems is relatively unexplored. Li et al. (2016) showed that asking questions in task-oriented dialogues can offer useful feedback to facilitate learning through interactions. Several questioning mechanisms were devised with handcrafted templates, but unfortunately not applicable to open-domain conversational systems. Similar to our goal, a visual QG task is proposed to generate a question to interact with other people, given 2195 an image as input (Mostafazadeh et al., 2016). 3 Methodology 3.1 Overview The task of question generation in conversational systems can be formalized as follows: given a user post X = x1x2 · · · xm, the system should generate a natural and meaningful question Y = y1y2 · · · yn to interact with the user, formally as Y ∗= argmax Y P(Y |X). As aforementioned, asking good questions in conversational systems requires to question with diversified patterns and address transitional topics naturally in a question. To this end, we classify the words in a sentence into three types: interrogative, topic word, and ordinary word, as shown in Figure 1. During training, the type of each word in a question is decided automatically2. We manually collected about 20 interrogatives. The verbs and nouns in a question are treated as topic words, and all the other words as ordinary words. During test, we resort to PMI (Church and Hanks, 1990) to predict a few topic words for a given post. On top of an encoder-decoder framework, we propose two decoders to effectively use word types in question generation. The first model is soft typed decoder (STD). It estimates a type distribution over word types and three type-specific generation distributions over the vocabulary, and then obtains a mixture of type-specific distributions for word generation. The second one is a hard form of STD, hard typed decoder (HTD), in which we can control the decoding process more explicitly by approximating the operation of argmax with Gumbel-softmax (Jang et al., 2016). In both decoders, the final generation probability of a word is modulated by its word type. 3.2 Encoder-Decoder Framework Our model is based on the general encoderdecoder framework (Cho et al., 2014; Sutskever et al., 2014). Formally, the model encodes an input sequence X = x1x2 · · · xm into a sequence of hidden states hi, as follows, ht = GRU(ht−1, e(xt)), 2Though there may be errors in word type classification, we found it works well in response generation. where GRU denotes gated recurrent units (Cho et al., 2014), and e(x) is the word vector of word x. 
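The GRU is used here as a standard building block. Purely for completeness, one common form of its update h_t = GRU(h_{t-1}, e(x_t)) is sketched below with toy dimensions and randomly initialized, illustrative parameters; this is not the trained encoder and biases are omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One step of a standard GRU update; x is the word vector e(x_t)."""
    z = sigmoid(Wz @ x + Uz @ h_prev)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde

# Toy encoder pass over a 5-word post: 16-dim embeddings, 32-dim states.
rng = np.random.default_rng(0)
d_e, d_h = 16, 32
params = [rng.normal(scale=0.1, size=(d_h, d_e)) if i % 2 == 0
          else rng.normal(scale=0.1, size=(d_h, d_h)) for i in range(6)]
h = np.zeros(d_h)
hidden_states = []
for _ in range(5):
    h = gru_step(h, rng.normal(size=d_e), *params)
    hidden_states.append(h)
```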
The decoder generates a word sequence by sampling from the probability P(yt|y<t, X) (y<t = y1y2 · · · yt−1, the generated subsequence) which can be computed via P(yt|y<t, X) = MLP(st, e(yt−1), ct), st = GRU(st−1, e(yt−1), ct), where st is the state of the decoder at the time step t, and this GRU has different parameters with the one of the encoder. The context vector ct is an attentive read of the hidden states of the encoder as ct = PT i=1 αt,ihi, where the weight αt,i is scored by another MLP(st−1, hi) network. 3.3 Soft Typed Decoder (STD) In a general encoder-decoder model, the decoder tends to generate universal, meaningless questions like “What’s up?” and “So what?”. In order to generate more meaningful questions, we propose a soft typed decoder. It assumes that each word has a latent type among the set {interrogative, topic word, ordinary word}. The soft typed decoder firstly estimates a word type distribution over latent types in the given context, and then computes type-specific generation distributions over the entire vocabulary for different word types. The final probability of generating a word is a mixture of type-specific generation distributions where the coefficients are type probabilities. The final generation distribution P(yt|y<t, X) from which a word can be sampled, is given by P(yt|y<t, X) = k X i=1 P(yt|tyt = ci, y<t, X) · P(tyt = ci|y<t, X), (1) where tyt denotes the word type at time step t and ci is a word type. Apparently, this formulation states that the final generation probability is a mixture of the type-specific generation probabilities P(yt|tyt = ci, y<t, X), weighted by the probability of the type distribution P(tyt = ci|y<t, X). We name this decoder as soft typed decoder. In this model, word type is latent because we do not need to specify the type of a word explicitly. In other words, each word can belong to any of the three types, but with different probabilities given the current context. The probability distribution over word types C = {c1, c2, · · · , ck} (k = 3 in this paper) (termed 2196 Figure 2: Illustration of STD and HTD. STD applies a mixture of type-specific generation distributions where type probabilities are the coefficients. In HTD, the type probability distribution is reshaped by Gumbel-softmax and then used to modulate the generation distribution. In STD, the generation distribution is over the same vocabulary whereas dynamic vocabularies are applied in HTD. as type distribution) is given by P(tyt|y<t, X) = softmax(W0st + b0), (2) where st is the hidden state of the decoder at time step t, W0 ∈Rk×d, and d is the dimension of the hidden state. The type-specific generation distribution is given by P(yt|tyt = ci, y<t, X) = softmax(Wcist + bci), where Wci ∈R|V |×d and |V | is the size of the entire vocabulary. Note that the type-specific generation distribution is parameterized by Wci, indicating that the distribution for each word type has its own parameters. Instead of using a single distribution P(yt|y<t, X) as in a general Seq2Seq decoder, our soft typed decoder enriches the model by applying multiple type-specific generation distributions. This enables the model to express more information about the next word to be generated. Also note that the generation distribution is over the same vocabulary, and therefore there is no need to specify word types explicitly. 3.4 Hard Typed Decoder (HTD) In the soft typed decoder, we assume that each word is a distribution over the word types. In this sense, the type of a word is implicit. 
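Before turning to the hard variant in detail, a small numeric sketch may help make the soft mixture of Eqs. (1)-(2) concrete. All dimensions and parameter values below are toy values, not the trained model; the point is only that the mixture of type-specific distributions remains a valid distribution over the vocabulary.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def std_distribution(s_t, W0, b0, Wc, bc):
    """Soft typed decoder, Eqs. (1)-(2): the final word distribution is a
    mixture of k type-specific vocabulary distributions, weighted by the
    type distribution. Illustrative shapes: s_t (d,), W0 (k, d), Wc (k, |V|, d)."""
    type_probs = softmax(W0 @ s_t + b0)           # P(ty_t = c_i | y_<t, X)
    p = np.zeros(Wc.shape[1])
    for i, w in enumerate(type_probs):
        p += w * softmax(Wc[i] @ s_t + bc[i])     # P(y_t | ty_t = c_i, y_<t, X)
    return p

# Toy sizes: d = 8, k = 3 types, |V| = 10 words.
rng = np.random.default_rng(0)
d, k, V = 8, 3, 10
p = std_distribution(rng.normal(size=d),
                     rng.normal(size=(k, d)), np.zeros(k),
                     rng.normal(size=(k, V, d)), np.zeros((k, V)))
print(round(p.sum(), 6))   # the mixture still sums to 1
```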
We do not need to specify the type of each word explicitly. In the hard typed decoder, words in the entire vocabulary are dynamically classified into three types for each post, and the decoder first estimates a type distribution at each position and then generates a word with the highest type probability. This process can be formulated as follows: c∗= arg max ci P(tyt = ci|y<t, X), (3) P(yt|y<t, X) = P(yt|tyt = c∗, y<t, X). (4) This is essentially the hard form of Eq. 1, which just selects the type with the maximal probability. However, this argmax process may cause two problems. First, such a cascaded decision process (firstly selecting the most probable word type and secondly choosing a word from that type) may lead to severe grammatical errors if the first selection is wrong. Second, argmax is discrete and nondifferentiable, and it breaks the back-propagation path during training. To make best use of word types in hard typed decoder, we address the above issues by applying Gumbel-Softmax (Jang et al., 2016) to approximate the operation of argmax. There are several steps in the decoder (see Figure 2): First, the type of each word (interrogative, topic, or ordinary) in a question is decided automatically during training, as aforementioned. Second, the generation probability distribution is estimated as usual, P(yt|y<t, X) = softmax(W0st + b0). (5) Further, the type probability distribution at each decoding position is estimated as follows, P(tyt|y<t, X) = softmax(W1st + b1). (6) Third, the generation probability for each word is modulated by its corresponding type probabil2197 ity: P′(yt|y<t, X) = P(yt|y<t, X)·m(yt), m(yt) = ( 1 , c(yt) = c∗ 0 , c(yt) ̸= c∗ (7) where c(yt) looks up the word type of word yt, and c∗is the type with the highest probability as defined in Eq. 3. This formulation has exactly the effect of argmax, where the decoder will only generate words of type with the highest probability. To make P∗(yt|y<t, X) a distribution, we normalize these values by a normalization factor Z: Z = 1 P yt∈V P′(yt|y<t, X) where V is the decoding vocabulary. Then, the final probability can be denoted by P∗(yt|y<t, X) = Z · P′(yt|y<t, X). (8) As mentioned, in order to have an effect of argmax but still maintain the differentiability, we resort to Gumbel-Softmax (Jang et al., 2016), which is a differentiable surrogate to the argmax function. The type probability distribution is then adjusted to the following form: m(yt) = GS(P(tyt = c(yt)|y<t, X)), GS(πi) = e(log(πi)+gi)/τ Pk j=1 e(log(πj)+gj)/τ , (9) where π1, π2, · · · , πk represents the probabilities of the original categorical distribution, gj are i.i.d samples drawn from Gumbel(0,1)3 and τ is a constant that controls the smoothness of the distribution. When τ →0, Gumbel-Softmax performs like argmax, while if τ →∞, Gumbel-Softmax performs like a uniform distribution. In our experiments, we set τ a constant between 0 and 1, making Gumbel-Softmax smoother than argmax, but sharper than normal softmax. Note that in HTD, we apply dynamic vocabularies for different responses during training. The words in a response are classified into the three types dynamically. A specific type probability will only affect the words of that type. During test, for each post, topic words are predicted with PMI, interrogatives are picked from a small dictionary, and the rest of words in the vocabulary are treated as ordinary words. 3If u ∼Uniform(0, 1), then g = −log(−log(u)) ∼ Gumbel(0, 1). 
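The sketch below illustrates, with numpy and toy values, how the Gumbel-softmax of Eq. (9) reshapes the type distribution and how the resulting (soft) type weights modulate and renormalize the generation distribution as in Eqs. (7)-(8). The vocabulary, its type labels, and the probabilities are invented for illustration.

```python
import numpy as np

def gumbel_softmax(probs, tau, rng):
    """Differentiable surrogate for argmax over the type distribution (Eq. 9):
    add Gumbel(0,1) noise to the log-probabilities and apply a softmax with
    temperature tau (small tau -> nearly one-hot)."""
    g = -np.log(-np.log(rng.uniform(size=probs.shape)))
    z = (np.log(probs) + g) / tau
    z = np.exp(z - z.max())
    return z / z.sum()

def htd_distribution(gen_probs, word_types, type_probs, tau, rng):
    """Hard typed decoder sketch (Eqs. 7-9): each word's generation
    probability is scaled by the Gumbel-softmax weight of its type and the
    result is renormalized. word_types[j] is the type index of vocabulary
    word j (0 = interrogative, 1 = topic, 2 = ordinary)."""
    m = gumbel_softmax(type_probs, tau, rng)[word_types]
    p = gen_probs * m
    return p / p.sum()

rng = np.random.default_rng(1)
gen = np.full(6, 1 / 6)                # toy, uniform P(y_t | y_<t, X)
types = np.array([0, 1, 1, 2, 2, 2])
print(htd_distribution(gen, types, np.array([0.1, 0.7, 0.2]), tau=0.8, rng=rng))
```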
3.5 Loss Function We adopt negative data likelihood (equivalent to cross entropy) as the loss function, and additionally, we apply supervision on the mixture weights of word types, formally as follows: Φ1 = X t −log P(yt = ˜yt|y<t, X), (10) Φ2 = X t −log P(tyt = etyt|y<t, X), (11) Φ = Φ1 + λΦ2, (12) where etyt represents the reference word type and ˜yt represents the reference word at time t. λ is a factor to balance the two loss terms, and we set λ=0.8 in our experiments. Note that for HTD, we substitute P∗(yt = wj|y<t, X) (as defined by Eq. 8) into Eq. 10. 3.6 Topic Word Prediction The only difference between training and inference is the means of choosing topic words. During training, we identify the nouns and verbs in a response as topic words; whereas during inference, we adopt PMI (Church and Hanks, 1990) and Rel(ki, X) to predict a set of topic words ki for an input post X, as defined below: PMI(wx, wy) = log p(wx, wy) p1(wx) ∗p2(wy), Rel(ki, X) = X wx∈X ePMI(wx,ki), where p1(w)/p2(w) represent the probability of word w occurring in a post/response, respectively, and p(wx, wy) is the probability of word wx occurring in a post and wy in a response. During inference, we predict at most 20 topic words for an input post. Too few words will affect the grammaticality since the predicted set contains infrequent topic words, while too many words introduce more common topics leading to more general responses. 4 Experiment 4.1 Dataset To estimate the probabilities in PMI, we collected about 9 million post-response pairs from Weibo. To train our question generation models, we distilled the pairs whereby the responses are in question form with the help of around 20 hand-crafted 2198 templates. The templates contain a list of interrogatives and other implicit questioning patterns. Such patterns detect sentences led by words like what, how many, how about or sentences ended with a question mark. After that, we removed the pairs whose responses are universal questions that can be used to reply many different posts. This is a simple yet effective way to avoid situations where the type probability distribution is dominated by interrogatives and ordinary words. Ultimately, we obtained the dataset comprising about 491,000 post-response pairs. We randomly selected 5,000 pairs for testing and another 5,000 for validation. The average number of words in post/response is 8.3/9.3 respectively. The dataset contains 66,547 different words, and 18,717 words appear more than 10 times. The dataset is available at: http://coai.cs.tsinghua.edu. cn/hml/dataset/. 4.2 Baselines We compared the proposed decoders with four state-of-the-art baselines. Seq2Seq: A simple encoder-decoder with attention mechanisms (Luong et al., 2015). MA: The mechanism-aware (MA) model applies multiple responding mechanisms represented by real-valued vectors (Zhou et al., 2017a). The number of mechanisms is set to 4 and we randomly picked one response from the generated responses for evaluation to avoid selection bias. TA: The topic-aware (TA) model generates informative responses by incorporating topic words predicted from the input post (Xing et al., 2017). ERM: Elastic responding machine (ERM) adaptively selects a subset of responding mechanisms using reinforcement learning (Zhou et al., 2018a). The settings are the same as the original paper. 4.3 Experiment Settings Parameters were set as follows: we set the vocabulary size to 20, 000 and the dimension of word vectors as 100. 
The word vectors were pretrained with around 9 million post-response pairs from Weibo and were being updated during the training of the decoders. We applied the 4-layer GRU units (hidden states have 512 dimensions). These settings were also applied to all the baselines. λ in Eq. 12 is 0.8. We set different values of τ in Gumbel-softmax at different stages of training. At the early stage, we set τ to a small value (0.6) to obtain a sharper reformed distribution (more like argmax). After several steps, we set τ to a larger value (0.8) to apply a more smoothing distribution. Our codes are available at: https://github.com/victorywys/ Learning2Ask_TypedDecoder. 4.4 Automatic Evaluation We conducted automatic evaluation over the 5, 000 test posts. For each post, we obtained responses from the six models, and there are 30, 000 post-response pairs in total. 4.4.1 Evaluation Metrics We adopted perplexity to quantify how well a model fits the data. Smaller values indicate better performance. To evaluate the diversity of the responses, we employed distinct-1 and distinct-2 (Li et al., 2015). These two metrics calculates the proportion of the total number of distinct unigrams or bigrams to the total number of generated tokens in all the generated responses. Further, we calculated the proportion of the responses containing at least one topic word in the list predicted by PMI. This is to evaluate the ability of addressing topic words in response. We term this metric as topical response ratio (TRR). We predicted 20 topic words with PMI for each post. 4.4.2 Results Comparative results are presented in Table 1. STD and HTD perform fairly well with lower perplexities, higher distinct-1 and distinct-2 scores, and remarkably better topical response ratio (TRR). Note that MA has the lowest perplexity because the model tends to generate more universal responses. Model Perplexity Distinct-1 Distinct-2 TRR Seq2Seq 63.71 0.0573 0.0836 6.6% MA 54.26 0.0576 0.0644 4.5% TA 58.89 0.1292 0.1781 8.7% ERM 67.62 0.0355 0.0710 4.5% STD 56.77 0.1325 0.2509 12.1% HTD 56.10 0.1875 0.3576 43.6% Table 1: Results of automatic evaluation. Our decoders have better distinct-1 and distinct2 scores than baselines do, and HTD performs much better than the strongest baseline TA. Noticeably, the means of using topic information in our models differs substantially from that in TA. Our decoders predict whether a topic word should be decoded at each position, whereas TA takes as 2199 Models Appropriateness Richness Willingness Win (%) Lose (%) Tie (%) Win (%) Lose (%) Tie (%) Win (%) Lose (%) Tie (%) STD vs. Seq2Seq 42.0 38.6 19.4 37.2∗∗ 15.2 47.6 45.4∗ 38.6 16.0 STD vs. MA 39.6∗ 31.2 29.2 32.6∗∗ 16.8 50.6 49.4∗∗ 27.0 23.6 STD vs. TA 42.2 40.0 17.8 49.0∗∗ 5.4 45.6 47.6∗ 40.2 12.2 STD vs. ERM 43.4∗ 34.4 22.2 60.6∗∗ 13.2 26.2 43.2∗ 36.8 20.0 HTD vs. Seq2Seq 50.6∗∗ 30.6 18.8 46.0∗∗ 10.2 43.8 58.4∗∗ 33.2 8.4 HTD vs. MA 54.8∗∗ 24.4 20.8 45.0∗∗ 17.0 38.0 67.0∗∗ 18.0 15.0 HTD vs. TA 52.0∗∗ 38.2 9.8 55.0∗∗ 5.4 39.6 62.6∗∗ 31.0 6.4 HTD vs. ERM 64.8∗∗ 23.2 12.0 72.2∗∗ 8.4 19.4 56.6∗∗ 36.6 6.8 HTD vs. STD 52.0∗∗ 33.0 15.0 38.0∗∗ 26.2 35.8 61.8∗∗ 30.6 7.6 Table 2: Annotation results. Win for “A vs. B” means A is better than B. Significance tests with Z-test were conducted. Values marked with ∗means p-value < 0.05, and ∗∗for p-value < 0.01. input topic word embeddings at all decoding positions. Our decoders have remarkably better topic response ratios (TRR), indicating that they are more likely to include topic words in generation. 
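For concreteness, the topical response ratio above, and the PMI-based topic prediction it relies on (Section 3.6), can be sketched as follows. This is a schematic re-implementation under our own assumptions (count-based probability tables, a small smoothing constant), not the paper's code.

    import math

    def pmi(wx, wy, p_joint, p_post, p_resp, eps=1e-12):
        # PMI(wx, wy) = log p(wx, wy) / (p1(wx) * p2(wy)), estimated from post-response pairs
        return math.log(p_joint.get((wx, wy), eps) /
                        (p_post.get(wx, eps) * p_resp.get(wy, eps)))

    def predict_topic_words(post_tokens, vocab, p_joint, p_post, p_resp, k=20):
        # Rel(ki, X) = sum over words wx in the post X of exp(PMI(wx, ki)); keep the top-k words
        rel = {w: sum(math.exp(pmi(wx, w, p_joint, p_post, p_resp)) for wx in post_tokens)
               for w in vocab}
        return sorted(rel, key=rel.get, reverse=True)[:k]

    def topical_response_ratio(responses, topic_lists):
        # TRR: proportion of generated responses containing at least one predicted topic word
        hits = sum(any(w in set(resp) for w in topics)
                   for resp, topics in zip(responses, topic_lists))
        return hits / len(responses)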
4.5 Manual Evaluation We resorted to a crowdsourcing service for manual annotation. 500 posts were sampled for manual annotation4. We conducted pair-wise comparison between two responses generated by two models for the same post. In total, there are 4,500 pairs to be compared. For each response pair, five judges were hired to give a preference between the two responses, in terms of the following three metrics. Tie was allowed, and system identifiers were masked during annotation. 4.5.1 Evaluation Metrics Each of the following metrics is evaluated independently on each pair-wise comparison: Appropriateness: measures whether a question is reasonable in logic and content, and whether it is questioning on the key information. Inappropriate questions are either irrelevant to the post, or have grammatical errors, or universal questions. Richness: measures whether a response contains topic words that are relevant to a given post. Willingness to respond: measures whether a user will respond to a generated question. This metric is to justify how likely the generated questions can elicit further interactions. If people are willing to respond, the interactions can go further. 4During the sampling process, we removed those posts that are only interpretable with other context or background. 4.5.2 Results The label of each pair-wise comparison is decided by majority voting from five annotators. Results shown in Table 2 indicate that STD and HTD outperform all the baselines in terms of all the metrics. This demonstrates that our decoders produce more appropriate questions, with richer topics. Particularly, our decoders have substantially better willingness scores, indicating that questions generated by our models are more likely to elicit further interactions. Noticeably, HTD outperforms STD significantly, indicating that it is beneficial to specify word types explicitly and apply dynamic vocabularies in generation. We also observed that STD outperforms Seq2Seq and TA, but the differences are not significant in appropriateness. This is because STD generated about 7% non-question responses which were judged as inappropriate, while Seq2Seq and TA generated universal questions (inappropriate too but beat STD in annotation) to these posts. 4.5.3 Annotation Statistics The proportion of the pair-wise annotations in which at least three of five annotators assign the same label to a record is 90.57%/93.11%/96.62% for appropriateness/ richness/willingness, respectively. The values show that we have fairly good agreements with majority voting. 4.6 Questioning Pattern Distribution To analyze whether the model can question with various patterns, we manually annotated the questioning patterns of the responses to 100 sampled posts. The patterns are classified into 11 types including Yes-No, How-, Why-, What-, When-, and Who- questions. We then calculated the KL diver2200 gence between the pattern type distribution by a model and that by human (i.e., gold responses). Results in Table 3 show that the pattern distribution by our model is closer to that in humanwritten responses, indicating that our decoders can better learn questioning patterns from human language. Further investigation reveals that the baselines tend to generate simple questions like What?(什么?) or Really?(真的吗), and constantly focus on using one or two question patterns whereas our decoders use more diversified patterns as appeared in the human language. 
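For reference, the divergence reported in Table 3 (which follows below) is the KL divergence between a model's question-pattern distribution and the human one; here is a minimal sketch, where the direction KL(model || human) and the smoothing constant are our assumptions:

    import numpy as np

    def pattern_kl(model_counts, human_counts, eps=1e-12):
        # Counts over the 11 annotated question-pattern types
        # (Yes-No, How-, Why-, What-, When-, Who-, ...), aligned by type.
        p = np.asarray(model_counts, dtype=float) + eps
        q = np.asarray(human_counts, dtype=float) + eps
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q)))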
Model Seq2Seq TA MA ERM STD HTD KLD 0.431 0.416 0.332 0.276 0.072 0.140 Table 3: KL divergence between the questioning pattern distribution by a model and that by human. 4.7 Examples of the Generated Questions Table 4 presents some generated questions by our decoders, which are more appropriate. On the contrary, Seq2Seq, MA and ERM tend to generate more universal questions. These examples also clearly show that asking questions in open-domain conversational systems requires scene understanding, which is verified by this scene example of singing at karaoke(在卡拉ok唱歌). Post: 我们昨天去唱歌了 We went to karaoke yesterday. Seq2Seq: 什 什 什么 么 么时候? When? MA: 你怎 怎 怎么 么 么知道我是在问 How did you know I am questioning you? TA: 什 什 什么 么 么? What? ERM: 什 什 什么 么 么情况? What happened ? STD: 去哪 哪 哪里 里 里唱歌? Where did you sing karaoke? HTD: 你们几 几 几个人唱歌? How many people were singing with you? Table 4: Generated question examples. Interrogative words are bolded and topic words underlined. To further understand how to ask good questions in conversational systems, we presented more generated question examples by human and our typed decoders. The first example (Post-1) in Table 5 shows that a good question can be composed of only interrogatives and ordinary words. This shows the complexity of this task and also the necessity of modeling interrogatives. Post-2 indicates a typical questioning pattern which is quesPost-1: 不好不好。哼!讨厌你啦。 No, no. Humph, I hate you! Human: 我怎 怎 怎么 么 么了? What is wrong with me? STD: 我怎 怎 怎么 么 么了? What is wrong with me? HTD: 我怎 怎 怎么 么 么了? What is wrong with me? Post-2: 刚用了杀虫剂,现在已经有效果了。 I just sprayed the insecticide, and now the effect appears. Human: 这么快!效果怎 怎 怎么 么 么样? So quick! What is the effect? STD: 效果如 如 如何 何 何? What is the effect? HTD: 效果如 如 如何 何 何? What is the effect? Post-3: 我在体育中心上班。 I work at the sports center. Human: 你每天从家过去要多 多 多久 久 久? How long does it take you to go there from home? STD: 你是什 什 什么 么 么部门的啊? Which department are you from? HTD: 那你是体育学院的学生吗 吗 吗? Are you a student from the College of Physical Education? Post-4: 今天的寿司真好吃。 The sushi today is so delicious! Human: 我记得你不喜欢吃日式料理的不是吗 吗 吗? I remember you didn’t like eating Japanese food, did you? STD: 你什 什 什么 么 么时候来无锡,我请你。 When will you come to Wuxi? It is my treat. HTD: 是吗 吗 吗?改天尝尝! Really? I will try sometime! Table 5: Examples for typical questioning patterns. Interrogative words in response are bolded and topic words are underlined. tioning on a particular topic word (效果-effect) of the input. While for Post-3, the questions are asking about transitional topics of the input (上 班-work →部门-department; 体育中心-sports center →体育学院-college of Physical Education), indicating a typical case of topic transition in our task (also seen in Post-4, 寿司-sushi →日式 料理-Japanese food). This example also demonstrates that for the same input, there are various questioning patterns: a How-question asked by human, a Which-question by STD, and a Yes-No question by HTD. As for Post-4, the gold question requires a background that is only shared between the poster and responder, while STD and HTD tend to raise more general questions due to the lack of such shared knowledge. 4.8 Visualization of Type Distribution To gain more insights into how a word type influence the generation process, we visualized the type probability at each decoding position in HTD. This example (Figure 3) shows that the model can capture word types well at different positions. 
For instance, at the first and second positions, ordinary words have the highest probabilities for generating 你-you and 喜欢-like, and at the third position, a 2201 topic word 兔子-rabbit is predicted while the last two positions are for interrogatives (a particle and a question mark). Figure 3: Type distribution examples from HTD. The generated question is “你喜欢兔子吗?do you like rabbit?”. EOS means end of sentence. 4.9 Error Analysis We presented error type distribution by manually analyzing 100 bad responses sampled from STD and HTD respectively, where bad means the response by our model is worse than that by some baseline during the pair-wise annotation. There are 4 typical error types: no topic words (NoT) in a response (mainly universal questions), wrong topics (WrT) where topic words are irrelevant, type generation error (TGE) where a wrong word type is predicted (See Eq. 2) and it causes grammatical errors, and other errors. Error Type NoT WrT TGE Others STD 34% 34% 29% 3% HTD 29% 39% 29% 3% Table 6: Error type distribution. The error distribution is shown in Table 6. For STD, most of the errors are attributed to no topic or wrong topics, while for HTD, the majority of errors fall into wrong topics. NoT      Post-1: 今天好开心啊! I am so happy today! STD: 你怎 怎 怎么 么 么知道? How do you know ? WrT              Post-2: 海报非常棒,期待若曦与我们男人的首 度合作。 The poster is great and we look forward to our first cooperation with Ruoxi. HTD: 你海报怎 怎 怎么 么 么样啊? How about your poster ? TGE                  Post-3: 又生病啦?吃点药就好了。 Got sick again? Just take some medicine and you’ll be fine soon. STD: 我也不知道怎 怎 怎么 么 么回事。 I don’t know what happened. HTD: 肠胃痛了,上火吗 吗 吗? Stomach aching, ulcer ? Table 7: Cases for the error types with interrogative words bolded and topic words underlined. There are typical cases for these error types: (1) Posts such as “I am so happy today!” contains no topic words or rare topic words. In this case, our method is unable to predict the topic words so that the models tend to generate universal questions. This happens more frequently in STD because the topic words are not specified explicitly. (2) Posts contains multiple topic words, but the model sometimes focuses on an inappropriate one. For instance, for Post-2 in Table 7, HTD focused on 海报-poster but 合作-cooperation is a proper one to be focused on. (3) For complex posts, the models failed to predict the correct word type in response. For Post-3, STD generated a declarative sentence and HTD generated a question which, however, is not adequate within the context. These cases show that controlling the questioning patterns and the informativeness of the content faces with the compatibility issue, which is challenging in language generation. These errors are also partially due to the imperfect ability of topic word prediction by PMI, which is challenging itself in open-domain conversational systems. 5 Conclusion and Future Work We present two typed decoders to generate questions in open-domain conversational systems. The decoders firstly estimate a type distribution over word types, and then use the type distribution to modulate the final word generation distribution. Through modeling the word types in language generation, the proposed decoders are able to question with various patterns and address novel yet related transitional topics in a generated question. Results show that our models can generate more appropriate questions, with richer topics, thereby more likely to elicit further interactions. 
The work can be extended to multi-turn conversation generation by including an additional detector predicting when to ask a question. The detector can be implemented by a classifier or some heuristics. Furthermore, the typed decoders are applicable to the settings where word types can be easily obtained, such as in emotional text generation (Ghosh et al., 2017; Zhou et al., 2018b). Acknowledgements This work was partly supported by the National Science Foundation of China under grant No.61272227/61332007 and the National Basic Research Program (973 Program) under grant No. 2013CB329403. We would like to thank Prof. Xiaoyan Zhu for her generous support. 2202 References Husam Ali, Yllias Chali, and Sadid A Hasan. 2010. Automation of question generation from sentences. In Proceedings of QG2010: The Third Workshop on Question Generation. pages 58–67. A. Andrenucci and E. Sneiders. 2005. Automated question answering: review of the main approaches. In ICITA. pages 514–519. Yllias Chali and Sina Golestanirad. 2016. Ranking automatically generated questions using common human queries. In INLG. pages 217–221. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP. pages 1724–1734. Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational linguistics 16(1):22–29. S´ergio Curto, Ana Cristina Mendes, and Lu´ısa Coheur. 2012. Question generation based on lexico-syntactic patterns learned from the web. Dialogue Discourse . Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In ACL. pages 1342–1352. Sayan Ghosh, Mathieu Chollet, Eugene Laksana, Louis-Philippe Morency, and Stefan Scherer. 2017. Affect-lm: A neural language model for customizable affective text generation. In ACL. pages 634– 642. Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question generation. In NAACL HLT. pages 609–617. Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144 . Igor Labutov, Sumit Basu, and Lucy Vanderwende. 2015. Deep questions without deep understanding. In ACL (1). pages 889–898. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. In NAACL-HLT. pages 110–119. Jiwei Li, Alexander H Miller, Sumit Chopra, Marc’Aurelio Ranzato, and Jason Weston. 2016. Learning through dialogue interactions by asking questions. arXiv preprint arXiv:1612.04936 . Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. In EMNLP. pages 1412–1421. Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Vanderwende. 2016. Generating natural questions about an image. In ACL. pages 1802–1813. David Lindberg Fred Popowich and John Nesbit Phil Winne. 2013. Generating natural language questions to support learning on-line. ENLG pages 105– 114. Haocheng Qin. 2015. Question Paraphrase Generation for Question Answering System. Master’s thesis, University of Waterloo. Linfeng Song, Zhiguo Wang, and Wael Hamza. 2017. A unified query-based generative model for question generation and question answering. arXiv preprint arXiv:1709.01058 . 
Sandeep Subramanian, Tong Wang, Xingdi Yuan, Saizheng Zhang, Adam Trischler, and Yoshua Bengio. 2017. Neural models for key phrase detection and question generation. arXiv preprint arXiv:1706.04560 . Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS. pages 3104–3112. Duyu Tang, Nan Duan, Tao Qin, and Ming Zhou. 2017. Question answering and question generation as dual tasks. arXiv preprint arXiv:1706.02027 . Lucy Vanderwende. 2008. The importance of being important: Question generation. In Proceedings of the 1st Workshop on the Question Generation Shared Task Evaluation Challenge. Tong Wang, Xingdi Yuan, and Adam Trischler. 2017. A joint model for question answering and question generation. arXiv preprint arXiv:1706.01450 . Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In AAAI. pages 3351– 3357. Zhou Yu, Ziyu Xu, Alan W Black, and Alex I. Rudnicky. 2016. Strategy and policy learning for nontask-oriented conversational systems. In SIGDIAL. pages 404–412. Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessandro Sordoni, Philip Bachman, Sandeep Subramanian, Saizheng Zhang, and Adam Trischler. 2017. Machine comprehension by text-to-text neural question generation. In The Workshop on Representation Learning for NLP. pages 15–25. Ganbin Zhou, Ping Luo, Rongyu Cao, Fen Lin, Bo Chen, and Qing He. 2017a. Mechanism-aware neural machine for dialogue response generation. In AAAI. pages 3400–3407. 2203 Ganbin Zhou, Ping Luo, Yijun Xiao, Fen Lin, Bo Chen, and Qing He. 2018a. Elastic responding machine for dialog generation with dynamically mechanism selecting. In AAAI Conference on Artificial Intelligence, AAAI. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018b. Emotional chatting machine: Emotional conversation generation with internal and external memory. AAAI . Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017b. Neural question generation from text: A preliminary study. arXiv preprint arXiv:1704.01792 .
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2204–2213 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2204 Personalizing Dialogue Agents: I have a dog, do you have pets too? Saizheng Zhang†,1, Emily Dinan‡, Jack Urbanek‡, Arthur Szlam‡, Douwe Kiela‡, Jason Weston‡ † Montreal Institute for Learning Algorithms, MILA ‡ Facebook AI Research [email protected], {edinan,jju,aszlam,dkiela,jase}@fb.com Abstract Chit-chat models are known to have several problems: they lack specificity, do not display a consistent personality and are often not very captivating. In this work we present the task of making chit-chat more engaging by conditioning on profile information. We collect data and train models to (i) condition on their given profile information; and (ii) information about the person they are talking to, resulting in improved dialogues, as measured by next utterance prediction. Since (ii) is initially unknown, our model is trained to engage its partner with personal topics, and we show the resulting dialogue can be used to predict profile information about the interlocutors. 1 Introduction Despite much recent success in natural language processing and dialogue research, communication between a human and a machine is still in its infancy. It is only recently that neural models have had sufficient capacity and access to sufficiently large datasets that they appear to generate meaningful responses in a chit-chat setting. Still, conversing with such generic chit-chat models for even a short amount of time quickly exposes their weaknesses (Serban et al., 2016; Vinyals and Le, 2015). Common issues with chit-chat models include: (i) the lack of a consistent personality (Li et al., 2016a) as they are typically trained over many dialogs each with different speakers, (ii) the lack of an explicit long-term memory as they are typically trained to produce an utterance given only the recent dialogue history (Vinyals and Le, 2015); 1Work done while at Facebook AI Research. and (iii) a tendency to produce non-specific answers like “I don’t know” (Li et al., 2015). Those three problems combine to produce an unsatisfying overall experience for a human to engage with. We believe some of those problems are due to there being no good publicly available dataset for general chit-chat. Because of the low quality of current conversational models, and because of the difficulty in evaluating these models, chit-chat is often ignored as an end-application. Instead, the research community has focused on task-oriented communication, such as airline or restaurant booking (Bordes and Weston, 2016), or else single-turn information seeking, i.e. question answering (Rajpurkar et al., 2016). Despite the success of the latter, simpler, domain, it is well-known that a large quantity of human dialogue centers on socialization, personal interests and chit-chat (Dunbar et al., 1997). For example, less than 5% of posts on Twitter are questions, whereas around 80% are about personal emotional state, thoughts or activities, authored by so called “Meformers” (Naaman et al., 2010). In this work we make a step towards more engaging chit-chat dialogue agents by endowing them with a configurable, but persistent persona, encoded by multiple sentences of textual description, termed a profile. 
This profile can be stored in a memory-augmented neural network and then used to produce more personal, specific, consistent and engaging responses than a persona-free model, thus alleviating some of the common issues in chit-chat models. Using the same mechanism, any existing information about the persona of the dialogue partner can also be used in the same way. Our models are thus trained to both ask and answer questions about personal topics, and the resulting dialogue can be used to build a model of the persona of the speaking partner. To support the training of such models, we 2205 present the PERSONA-CHAT dataset, a new dialogue dataset consisting of 164,356 utterances between crowdworkers who were randomly paired and each asked to act the part of a given provided persona (randomly assigned, and created by another set of crowdworkers). The paired workers were asked to chat naturally and to get to know each other during the conversation. This produces interesting and engaging conversations that our agents can try to learn to mimic. Studying the next utterance prediction task during dialogue, we compare a range of models: both generative and ranking models, including Seq2Seq models and Memory Networks (Sukhbaatar et al., 2015) as well as other standard retrieval baselines. We show experimentally that in either the generative or ranking case conditioning the agent with persona information gives improved prediction of the next dialogue utterance. The PERSONA-CHAT dataset is designed to facilitate research into alleviating some of the issues that traditional chitchat models face, and with the aim of making such models more consistent and engaging, by endowing them with a persona. By comparing against chit-chat models built using the OpenSubtitles and Twitter datasets, human evaluations show that our dataset provides more engaging models, that are simultaneously capable of being fluent and consistent via conditioning on a persistent, recognizable profile. 2 Related Work Traditional dialogue systems consist of building blocks, such as dialogue state tracking components and response generators, and have typically been applied to tasks with labeled internal dialogue state and precisely defined user intent (i.e., goal-oriented dialogue), see e.g. (Young, 2000). The most successful goal-oriented dialogue systems model conversation as partially observable Markov decision processes (POMDPs) (Young et al., 2013). All those methods typically do not consider the chit-chat setting and are more concerned with achieving functional goals (e.g. booking an airline flight) than displaying a personality. In particular, many of the tasks and datasets available are constrained to narrow domains (Serban et al., 2015). Non-goal driven dialogue systems go back to Weizenbaum’s famous program ELIZA (Weizenbaum, 1966), and hand-coded systems have continued to be used in applications to this day. For example, modern solutions that build an openended dialogue system to the Alexa challenge combine hand-coded and machine-learned elements (Serban et al., 2017a). Amongst the simplest of statistical systems that can be used in this domain, that are based on data rather than handcoding, are information retrieval models (Sordoni et al., 2015), which retrieve and rank responses based on their matching score with the recent dialogue history. We use IR systems as a baseline in this work. End-to-end neural approaches are a class of models which have seen growing recent interest. 
A popular class of methods are generative recurrent systems like seq2seq applied to dialogue (Sutskever et al., 2014; Vinyals and Le, 2015; Sordoni et al., 2015; Li et al., 2016b; Serban et al., 2017b). Rooted in language modeling, they are able to produce syntactically coherent novel responses, but their memory-free approach means they lack long-term coherence and a persistent personality, as discussed before. A promising direction, that is still in its infancy, to fix this issue is to use a memory-augmented network instead (Sukhbaatar et al., 2015; Dodge et al., 2015) by providing or learning appropriate memories. Serban et al. (2015) list available corpora for training dialogue systems. Perhaps the most relevant to learning chit-chat models are ones based on movie scripts such as OpenSubtitles and Cornell Movie-Dialogue Corpus, and dialogue from web platforms such as Reddit and Twitter, all of which have been used for training neural approaches (Vinyals and Le, 2015; Dodge et al., 2015; Li et al., 2016b; Serban et al., 2017b). Naively training on these datasets leads to models with the lack of a consistent personality as they will learn a model averaged over many different speakers. Moreover, the data does little to encourage the model to engage in understanding and maintaining knowledge of the dialogue partner’s personality and topic interests. According to Serban et al. (2015)’s survey, personalization of dialogue systems is “an important task, which so far has not received much attention”. In the case of goal-oriented dialogue some work has focused on the agent being aware of the human’s profile and adjusting the dialogue accordingly, but without a personality to the agent itself (Lucas et al., 2009; Joshi et al., 2017). For 2206 the chit-chat setting, the most relevant work is (Li et al., 2016a). For each user in the Twitter corpus, personas were captured via distributed embeddings (one per speaker) to encapsulate individual characteristics such as background information and speaking style, and they then showed using those vectors improved the output of their seq2seq model for the same speaker. Their work does not focus on attempting to engage the other speaker by getting to know them, as we do here. For that reason, our focus is on explicit profile information, not hard-to-interpret latent variables. 3 The PERSONA-CHAT Dataset The aim of this work is to facilitate more engaging and more personal chit-chat dialogue. The PERSONA-CHAT dataset is a crowd-sourced dataset, collected via Amazon Mechanical Turk, where each of the pair of speakers condition their dialogue on a given profile, which is provided. The data collection consists of three stages: (i) Personas: we crowdsource a set of 1155 possible personas, each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation, and 100 for test. (ii) Revised personas: to avoid modeling that takes advantage of trivial word overlap, we crowdsource additional rewritten sets of the same 1155 personas, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. (iii) Persona chat: we pair two Turkers and assign them each a random (original) persona from the pool, and ask them to chat. This resulted in a dataset of 164,356 utterances over 10,981 dialogs, 15,705 utterances (968 dialogs) of which are set aside for validation, and 15,119 utterances (1000 dialogs) for test. 
The final dataset and its corresponding data collection source code, as well as models trained on the data, are all available open source in ParlAI2. In the following, we describe each data collection stage and the resulting tasks in more detail. 3.1 Personas We asked the crowdsourced workers to create a character (persona) description using 5 sentences, providing them only a single example: 2https://github.com/facebookresearch/ ParlAI/tree/master/projects/personachat “I am a vegetarian. I like swimming. My father used to work for Ford. My favorite band is Maroon5. I got a new job last month, which is about advertising design.” Our aim was to create profiles that are natural and descriptive, and contain typical topics of human interest that the speaker can bring up in conversation. Because the personas are not the real profiles of the Turkers, the dataset does not contain personal information (and they are told specifically not to use any). We asked the workers to make each sentence short, with a maximum of 15 words per sentence. This is advantageous both for humans and machines: if they are too long, crowdsourced workers are likely to lose interest, and for machines the task could become more difficult. Some examples of the personas collected are given in Table 1 (left). 3.2 Revised Personas A difficulty when constructing dialogue datasets, or text datasets in general, is that in order to encourage research progress, the task must be carefully constructed so that is neither too easy nor too difficult for the current technology (Voorhees et al., 1999). One issue with conditioning on textual personas is that there is a danger that humans will, even if asked not to, unwittingly repeat profile information either verbatim or with significant word overlap. This may make any subsequent machine learning tasks less challenging, and the solutions will not generalize to more difficult tasks. This has been a problem in some recent datasets: for example, the dataset curation technique used for the well-known SQuAD dataset suffers from this word overlap problem to a certain extent (Chen et al., 2017). To alleviate this problem, we presented the original personas we collected to a new set of crowdworkers and asked them to rewrite the sentences so that a new sentence is about “a related characteristic that the same person may have”, hence the revisions could be rephrases, generalizations or specializations. For example “I like basketball” can be revised as “I am a big fan of Michael Jordan” not because they mean the same thing but because the same persona could contain both. In the revision task, workers are instructed not to trivially rephrase the sentence by copying the original words. However, during the entry stage if a non-stop word is copied we issue a warning, 2207 Original Persona Revised Persona I love the beach. To me, there is nothing like a day at the seashore. My dad has a car dealership My father sales vehicles for a living. I just got my nails done I love to pamper myself on a regular basis. I am on a diet now I need to lose weight. Horses are my favorite animal. I am into equestrian sports. I play a lot of fantasy videogames. RPGs are my favorite genre. I have a computer science degree. I also went to school to work with technology. My mother is a medical doctor The woman who gave birth to me is a physician. I am very shy. I am not a social person. I like to build model spaceships. I enjoy working with my hands. 
Table 1: Example Personas (left) and their revised versions (right) from the PERSONA-CHAT dataset. The revised versions are designed to be characteristics that the same persona might have, which could be rephrases, generalizations or specializations. Persona 1 Persona 2 I like to ski I am an artist My wife does not like me anymore I have four children I have went to Mexico 4 times this year I recently got a cat I hate Mexican food I enjoy walking for exercise I like to eat cheetos I love watching Game of Thrones [PERSON 1:] Hi [PERSON 2:] Hello ! How are you today ? [PERSON 1:] I am good thank you , how are you. [PERSON 2:] Great, thanks ! My children and I were just about to watch Game of Thrones. [PERSON 1:] Nice ! How old are your children? [PERSON 2:] I have four that range in age from 10 to 21. You? [PERSON 1:] I do not have children at the moment. [PERSON 2:] That just means you get to keep all the popcorn for yourself. [PERSON 1:] And Cheetos at the moment! [PERSON 2:] Good choice. Do you watch Game of Thrones? [PERSON 1:] No, I do not have much time for TV. [PERSON 2:] I usually spend my time painting: but, I love the show. Table 2: Example dialog from the PERSONA-CHAT dataset. Person 1 is given their own persona (top left) at the beginning of the chat, but does not know the persona of Person 2, and vice-versa. They have to get to know each other during the conversation. and ask them to rephrase, guaranteeing that the instructions are followed. For example, “My father worked for Ford.” can be revised to “My dad worked in the car industry”, but not “My dad was employed by Ford.” due to word overlap. Some examples of the revised personas collected are given in Table 1 (right). 3.3 Persona Chat After collecting personas, we then collected the dialogues themselves, conditioned on the personas. For each dialogue, we paired two random crowdworkers, and gave them the instruction that they will chit-chat with another worker, while playing the part of a given character. We then provide them with a randomly chosen persona from our pool, different to their partners. The instructions are on purpose quite terse and simply ask them to “chat with the other person naturally and try to get to know each other”. In an early study we noticed the crowdworkers tending to talk about themselves (their own persona) too much, so we also added the instructions “both ask questions and answer questions of your chat partner” which seemed to help. We also gave a bonus for high quality dialogs. The dialog is turn-based, with a maximum of 15 words per message. We again gave instructions to not trivially copy the character descriptions into the messages, but also wrote explicit code sending them an error if they tried to do so, using simple string matching. We define a minimum dialogue length which is randomly between 6 and 8 turns each for each dialogue. An example dialogue from the dataset is given in Table 2. 2208 3.4 Evaluation We focus on the standard dialogue task of predicting the next utterance given the dialogue history, but consider this task both with and without the profile information being given to the learning agent. Our goal is to enable interesting directions for future research, where chatbots can for instance have personalities, or imputed personas could be used to make dialogue more engaging to the user. We consider this in four possible scenarios: conditioning on no persona, your own persona, their persona, or both. 
These scenarios can be tried using either the original personas, or the revised ones. We then evaluate the task using three metrics: (i) the log likelihood of the correct sequence, measured via perplexity, (ii) F1 score, and (iii) next utterance classification loss, following Lowe et al. (2015). The latter consists of choosing N random distractor responses from other dialogues (in our setting, N=19) and the model selecting the best response among them, resulting in a score of one if the model chooses the correct response, and zero otherwise (called hits@1 in the experiments). 4 Models We consider two classes of model for next utterance prediction: ranking models and generative models. Ranking models produce a next utterance by considering any utterance in the training set as a possible candidate reply. Generative models generate novel sentences by conditioning on the dialogue history (and possibly, the persona), and then generating the response word-by-word. Note one can still evaluate the latter as ranking models by computing the probability of generating a given candidate, and ranking candidates by those scores. 4.1 Baseline ranking models We first consider two baseline models, an IR baseline (Sordoni et al., 2015) and a supervised embedding model, Starspace (Wu et al., 2017)3. While there are many IR variants, we adopt the simplest one: find the most similar message in the (training) dataset and output the response from that exchange. Similarity is measured by the tfidf weighted cosine similarity between the bags of words. Starspace is a recent model that also performs information retrieval but by learning the 3github.com/facebookresearch/StarSpace similarity between the dialog and the next utterance by optimizing the embeddings directly for that task using the margin ranking loss and k-negative sampling. The similarity function sim(q, c′) is the cosine similarity of the sum of word embeddings of the query q and candidate c′. Denoting the dictionary of D word embeddings as W which is a D × d matrix, where Wi indexes the ith word (row), yielding its d-dimensional embedding, it embeds the sequences q and c′. In both methods, IR and StarSpace, to incorporate the profile we simply concatenate it to the query vector bag of words. 4.2 Ranking Profile Memory Network Both the previous models use the profile information by combining it with the dialogue history, which means those models cannot differentiate between the two when deciding on the next utterance. In this model we instead use a memory network with the dialogue history as input, which then performs attention over the profile to find relevant lines from the profile to combine with the input, and then finally predicts the next utterance. We use the same representation and loss as in the Starspace model, so without the profile, the two models are identical. When the profile is available attention is performed by computing the similarity of the input q with the profile sentences pi, computing the softmax, and taking the weighted sum: q+ = q + X sipi, si = Softmax(sim(q, pi)) where Softmax(zi) = ezi/ P j ezj. One can then rank the candidates c′ using sim(q+, c′). One can also perform multiple “hops” of attention over the profile rather than one, as shown here, although that did not bring significant gains in our parameter sweeps. 
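A minimal sketch of this attention step, assuming the dialogue history and each profile sentence have already been embedded as sums of word vectors (the helper names are ours):

    import numpy as np

    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    def attend_over_profile(q, profile_vecs):
        # q: embedding of the dialogue history, shape (d,)
        # profile_vecs: embedded profile sentences p_i, shape (n, d)
        sims = np.array([cosine(q, p) for p in profile_vecs])
        s = np.exp(sims - sims.max())
        s /= s.sum()                        # s_i = Softmax(sim(q, p_i))
        return q + s @ profile_vecs         # q+ = q + sum_i s_i * p_i

    # Candidate replies c' are then ranked by cosine(attend_over_profile(q, P), embed(c')).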
4.3 Key-Value Profile Memory Network
The key-value (KV) memory network (Miller et al., 2016) was proposed as an improvement to the memory network: it performs attention over keys and outputs the values (instead of the same keys as in the original), which can outperform memory networks depending on the task and the definition of the key-value pairs. Here, we apply this model to dialogue, and consider the keys as dialog histories (from the training set), and the values as the next dialogue utterances, i.e., the replies from the speaking partner. This allows the model to have a memory of past dialogues that it can directly use to help influence its prediction for the current conversation. The model we choose is identical to the profile memory network just described in the first hop over profiles, while in the second hop, q+ is used to attend over the keys and output a weighted sum of values as before, producing q++. This is then used to rank the candidates c′ using sim(q++, c′) as before. As the set of (key-value) pairs is large, this would make training very slow. In our experiments we simply trained the profile memory network, reused its weights, and applied this architecture at test time only. Training the model directly would presumably give better results; however, this heuristic already proved beneficial compared to the original network.
4.4 Seq2Seq
The input sequence x is encoded by applying $h^e_t = \mathrm{LSTM}_{enc}(x_t \mid h^e_{t-1})$. We use GloVe (Pennington et al., 2014) for our word embeddings. The final hidden state, $h^e_t$, is fed into the decoder $\mathrm{LSTM}_{dec}$ as the initial state $h^d_0$. For each time step t, the decoder then produces the probability of a word j occurring in that place via the softmax, i.e.,
$$p(y_{t,j} = 1 \mid y_{t-1}, \ldots, y_1) = \frac{\exp(w_j h^d_t)}{\sum_{j'=1}^{K} \exp(w_{j'} h^d_t)}.$$
The model is trained via negative log likelihood. The basic model can be extended to include persona information, in which case we simply prepend it to the input sequence x, i.e., $x = \forall p \in P \,\|\, x$, where $\|$ denotes concatenation. For the OpenSubtitles and Twitter datasets trained in Section 5.2 we found training a language model (LM), essentially just the decoder part of this model, worked better, and we report that instead.
4.5 Generative Profile Memory Network
Finally, we introduce a generative model that encodes each of the profile entries as individual memory representations in a memory network. As before, the dialogue history is encoded via $\mathrm{LSTM}_{enc}$, the final state of which is used as the initial hidden state of the decoder. Each entry $p_i = \langle p_{i,1}, \ldots, p_{i,n} \rangle \in P$ is then encoded via $f(p_i) = \sum_{j}^{|p_i|} \alpha_i p_{i,j}$. That is, we weight words by their inverse term frequency: $\alpha_i = 1/(1 + \log(1 + \mathrm{tf}))$, where tf is computed from the GloVe index via Zipf's law (see footnote 4). Let F be the set of encoded memories. The decoder now attends over the encoded profile entries, i.e., we compute the mask $a_t$, context $c_t$ and next input $\hat{x}_t$ as:
$$a_t = \mathrm{softmax}(F W_a h^d_t), \quad c_t = a_t^{\top} F, \quad \hat{x}_t = \tanh(W_c[c_{t-1}, x_t]).$$
If the model has no profile information, and hence no memory, it becomes equivalent to the Seq2Seq model.
5 Experiments
We first report results using automated evaluation metrics, and subsequently perform an extrinsic evaluation where crowdsourced workers perform a human evaluation of our models.
5.1 Automated metrics
The main results are reported in Table 3.
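As a reminder, the hits@1 numbers in Table 3 come from the next-utterance classification setup of Section 3.4: the gold response is ranked against N=19 randomly drawn distractor responses from other dialogues. A minimal sketch of that scoring loop (score_fn stands for whichever ranking or generative model is being evaluated):

    import random

    def hits_at_1(score_fn, gold_response, distractor_pool, n_distractors=19, rng=random):
        # score_fn(candidate) -> model score of a candidate reply given the dialogue history
        candidates = rng.sample(distractor_pool, n_distractors) + [gold_response]
        best = max(candidates, key=score_fn)
        return 1.0 if best == gold_response else 0.0

    # Averaging hits_at_1 over the test set gives the hits@1 columns reported in Table 3.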
Overall, the results show the following key points: Persona Conditioning Most models improve significantly when conditioning prediction on their own persona at least for the original (non-revised) versions, which is an easier task than the revised ones which have no word overlap. For example, the Profile Memory generation model has improved perplexity and hits@1 compared to Seq2Seq, and all the ranking algorithms (IR baseline, Starspace and Profile Memory Networks) obtain improved hits@1. Ranking vs. Generative. Ranking models are far better than generative models at ranking. This is perhaps obvious as that is the metric they are optimizing, but still the performance difference is quite stark. It may be that the word-based probability which generative models use works well, but is not calibrated well enough to give a sentencebased probability which ranking requires. Human evaluation is also used to compare these methods, which we perform in Sec. 5.2. Ranking Models. For the ranking models, the IR baseline is outperformed by Starspace due to its learnt similarity metric, which in turn is outperformed by Profile Memory networks due to the attention mechanism over the profiles (as all other parts of the models are the same). Finally KV Profile Memory networks outperform Profile Memory Networks in the no persona case due to the ability to consider neighboring dialogue history and next 4tf = 1e6 ∗1/(idx1.07) 2210 Method No Persona Original Persona Revised Persona ppl hits@1 ppl hits@1 ppl hits@1 Generative Models Seq2Seq 38.08 0.092 40.53 0.084 40.65 0.082 Profile Memory 38.08 0.092 34.54 0.125 38.21 0.108 Ranking Models IR baseline 0.214 0.410 0.207 Starspace 0.318 0.491 0.322 Profile Memory 0.318 0.509 0.354 KV Profile Memory 0.349 0.511 0.351 Table 3: Evaluation of dialog utterance prediction with various models in three settings: without conditioning on a persona, conditioned on the speakers given persona (“Original Persona”), or a revised persona that does not have word overlap. Method Persona Model Profile Fluency Engagingness Consistency Detection Human Self 4.31(1.07) 4.25(1.06) 4.36(0.92) 0.95(0.22) Generative PersonaChat Models Seq2Seq None 3.17(1.10) 3.18(1.41) 2.98(1.45) 0.51(0.50) Profile Memory Self 3.08(1.40) 3.13(1.39) 3.14(1.26) 0.72(0.45) Ranking PersonaChat Models KV Memory None 3.81(1.14) 3.88(0.98) 3.36(1.37) 0.59(0.49) KV Profile Memory Self 3.97(0.94) 3.50(1.17) 3.44(1.30) 0.81(0.39) Twitter LM None 3.21(1.54) 1.75(1.04) 1.95(1.22) 0.57(0.50) OpenSubtitles 2018 LM None 2.85(1.46) 2.13(1.07) 2.15(1.08) 0.35(0.48) OpenSubtitles 2009 LM None 2.25(1.37) 2.12(1.33) 1.96(1.22) 0.38(0.49) OpenSubtitles 2009 KV Memory None 2.14(1.20) 2.22(1.22) 2.06(1.29) 0.42(0.49) Table 4: Human Evaluation of various PERSONA-CHAT models, along with a comparison to human performance, and Twitter and OpenSubtitles based models (last 4 rows), standard deviation in parenthesis. utterance pairs in the training set that are similar to the current dialogue, however when using persona information the performance is similar. Revised Personas. Revised personas are much harder to use. We do however still see some gain for the Profile Memory networks compared to none (0.354 vs. 0.318 hits@1). We also tried two variants of training: with the original personas in the training set or the revised ones, a comparison of which is shown in Table 6 of the Appendix. 
Training on revised personas helps, both for test examples that are in original form or revised form, likely due to the model be forced to learn more than simple word overlap, forcing the model to generalize more (i.e., learn semantic similarity of differing phrases). Their Persona. We can also condition a model on the other speaker’s persona, or both personas at once, the results of which are in Tables 5 and 6 in the Appendix. Using “Their persona” has less impact on this dataset. We believe this is because most speakers tend to focus on themselves when it comes to their interests. It would be interesting how often this is the case in other datasets. Certainly this is skewed by the particular instructions one could give to the crowdworkers. For example if we gave the instructions “try not to talk about yourself, but about the other’s interests’ likely these metrics would change. 2211 5.2 Human Evaluation As automated metrics are notoriously poor for evaluating dialogue (Liu et al., 2016) we also perform human evaluation using crowdsourced workers. The procedure is as follows. We perform almost exactly the same setup as in the dataset collection process itself as in Section 3.3. In that setup, we paired two Turkers and assigned them each a random (original) persona from the collected pool, and asked them to chat. Here, from the Turker’s point of view everything looks the same except instead of being paired with a Turker they are paired with one of our models instead (they do not know this). In this setting, for both the Turker and the model, the personas come from the test set pool. After the dialogue, we then ask the Turker some additional questions in order to evaluate the quality of the model. We ask them to evaluate fluency, engagingness and consistency (scored between 15). Finally, we measure the ability to detect the other speaker’s profile by displaying two possible profiles, and ask which is more likely to be the profile of the person the Turker just spoke to. More details of these measures are given in the Appendix. The results are reported in Table 4 for the best performing generative and ranking models, in both the No Persona and Self Persona categories, 100 dialogues each. We also evaluate the scores of human performance by replacing the chatbot with a human (another Turker). This effectively gives us upper bound scores which we can aim for with our models. Finally, and importantly, we compare our models trained on PERSONA-CHAT with chit-chat models trained with the Twitter and OpenSubtitles datasets (2009 and 2018 versions) instead, following Vinyals and Le (2015). Example chats from a few of the models are shown in the Appendix in Tables 7, 8, 9, 10, 11 and 12. Firstly, we see a difference in fluency, engagingness and consistency between all PERSONACHAT models and the models trained on OpenSubtitles and Twitter. PERSONA-CHAT is a resource that is particularly strong at providing training data for the beginning of conversations, when the two speakers do not know each other, focusing on asking and answering questions, in contrast to other resources. We also see suggestions of more subtle differences between the models, although these differences are obscured by the high variance of the human raters’ evaluations. 
For example, in both the generative and ranking model cases, models endowed with a persona can be detected by the human conversation partner, as evidenced by the persona detection accuracies, whilst maintaining fluency and consistency compared to their nonpersona driven counterparts. Finding the balance between fluency, engagement, consistency, and a persistent persona remains a strong challenge for future research. 5.3 Profile Prediction Two tasks could naturally be considered using PERSONACHAT: (1) next utterance prediction during dialogue, and (2) profile prediction given dialogue history. The main study of this work has been Task 1, where we have shown the use of profile information. Task 2, however, can be used to extract such information. While a full study is beyond the scope of this paper, we conducted some preliminary experiments, the details of which are in Appendix D. They show (i) human speaker’s profiles can be predicted from their dialogue with high accuracy (94.3%, similar to human performance in Table 4) or even from the model’s dialogue (23% with KV Profile Memory) showing the model is paying attention to the human’s interests. Further, the accuracies clearly improve with further dialogue, as shown in Table 14. Combining Task 1 and Task 2 into a full system is an exciting area of future research. 6 Conclusion & Discussion In this work we have introduced the PERSONACHAT dataset, which consists of crowd-sourced dialogues where each participant plays the part of an assigned persona; and each (crowd-sourced) persona has a word-distinct paraphrase. We test various baseline models on this dataset, and show that models that have access to their own personas in addition to the state of the dialogue are scored as more consistent by annotators, although not more engaging. On the other hand, we show that models trained on PERSONA-CHAT (with or without personas) are more engaging than models trained on dialogue from other resources (movies, Twitter). We believe PERSONA-CHAT will be a useful resource for training components of future dialogue systems. Because we have paired human generated profiles and conversations, the data aids the construction of agents that have consistent per2212 sonalities and viewpoints. Furthermore, predicting the profiles from a conversation moves chitchat tasks in the direction of goal-directed dialogue, which has metrics for success. Because we collect paraphrases of the profiles, they cannot be trivially matched; indeed, we believe the original and rephrased profiles are interesting as a semantic similarity dataset in their own right. We hope that the data will aid training agents that can ask questions about users’ profiles, remember the answers, and use them naturally in conversation. References Antoine Bordes and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051. Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. 2015. Evaluating prerequisite qualities for learning end-to-end dialog systems. arXiv preprint arXiv:1511.06931. Robin IM Dunbar, Anna Marriott, and Neil DC Duncan. 1997. Human conversational behavior. Human nature, 8(3):231–246. Chaitanya K Joshi, Fei Mi, and Boi Faltings. 2017. Personalization in goal-oriented dialog. arXiv preprint arXiv:1706.07503. 
2018
205
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2214–2224 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2214 Efficient Large-Scale Neural Domain Classification with Personalized Attention Young-Bum Kim Dongchan Kim Anjishnu Kumar Ruhi Sarikaya Amazon Alexa {youngbum,dongchan,anjikum,rsarikaya}@amazon.com Abstract In this paper, we explore the task of mapping spoken language utterances to one of thousands of natural language understanding domains in intelligent personal digital assistants (IPDAs). This scenario is observed in mainstream IPDAs in industry that allow third parties to develop thousands of new domains to augment builtin first party domains to rapidly increase domain coverage and overall IPDA capabilities. We propose a scalable neural model architecture with a shared encoder, a novel attention mechanism that incorporates personalization information and domain-specific classifiers that solves the problem efficiently. Our architecture is designed to efficiently accommodate incremental domain additions achieving two orders of magnitude speed up compared to full model retraining. We consider the practical constraints of real-time production systems, and design to minimize memory footprint and runtime latency. We demonstrate that incorporating personalization significantly improves domain classification accuracy in a setting with thousands of overlapping domains. 1 Introduction Intelligent personal digital assistants (IPDAs) are one of the most advanced and successful artificial intelligence applications that have spoken language understanding (SLU). Many IPDAs have recently emerged in industry including Amazon Alexa, Google Assistant, Apple Siri, and Microsoft Cortana (Sarikaya, 2017). IPDAs have traditionally supported only dozens of well-separated domains, each defined in terms of a specific application or functionality such as calendar and local search (Tur and de Mori, 2011; Sarikaya et al., 2016). To rapidly increase domain coverage and extend capabilities, some IPDAs have released Software Development Toolkits (SDKs) to allow third-party developers to quickly build and integrate new domains, which we refer to as skills henceforth. Amazon’s Alexa Skills Kit (Kumar et al., 2017a), Google’s Actions and Microsoft’s Cortana Skills Kit are all examples of such SDKs. Alexa Skills Kit is the largest of these services with over 40,000 skills. For IPDAs, finding the most relevant skill to handle an utterance is an open problem for three reasons. First, the sheer number of skills makes the task difficult. Unlike traditional systems that have on the order of 10-20 built-in domains, largescale IPDAs can have up to 40,000 skills. Second, the number of skills is rapidly expanding with 100+ new skills added per week. Largescale IPDAs should be able to accommodate new skills efficiently without compromising performance. Third, unlike traditional built-in domains that are carefully designed to be disjoint by a central team, skills are built independently by different developers and can cover overlapping functionalities. For instance, there are over 50 recipe skills in Alexa that can handle recipe-related utterances. One simple solution to this problem has been to require the user to explicitly mention a skill name and follow a strict invocation pattern as in ”Ask {Uber} to {get me a ride}.” However, this significantly limits the natural interaction with IPDAs. 
Users have to remember skill names and invocation patterns, and it places a cognitive burden on users, who tend to forget both. Skill discovery is also difficult with a pure voice user interface: it is hard for users to know the capabilities of thousands of skills a priori, which may lead to limited user engagement with skills and potentially with IPDAs. In this paper, we propose a solution that addresses all three practical challenges without requiring skill names or invocation patterns. Our approach is based on a scalable neural model architecture with a shared encoder, a skill attention mechanism, and skill-specific classification networks that can efficiently perform large-scale skill classification in IPDAs using a weakly supervised training dataset. We demonstrate that our model achieves high accuracy on a manually transcribed test set after being trained with weak supervision. Moreover, our architecture is designed to efficiently integrate new skills that appear in between full model retraining cycles. Besides accuracy, we also keep practical constraints in mind and focus on minimizing memory footprint and runtime latency, while ensuring the architecture is scalable to thousands of skills, all of which are important for real-time production systems. Furthermore, we investigate two different ways of incorporating user personalization information into the model: a naive baseline that adds the information as a 1-bit flag in the feature space of the skill-specific networks, and a personalized attention technique that computes a convex combination of skill embeddings for the user's enabled skills and significantly outperforms the naive personalization baseline. We show the effectiveness of our approach with extensive experiments using 1,500 skills from a deployed IPDA system. 2 Related Work Traditional multi-domain SLU/NLU systems are designed hierarchically, starting with domain classification to classify an incoming utterance into one of many possible domains, followed by further semantic analysis with domain-specific intent classification and slot tagging (Tur and de Mori, 2011). Traditional systems have typically been limited to a small number of domains, designed by specialists to be well separable. Therefore, domain classification has been considered a less complex task compared to other semantic analyses such as intent and slot prediction. Traditional domain classifiers are built using simple linear models such as Multinomial Logistic Regression or Support Vector Machines in a one-versus-all setting for multi-class prediction. The models typically use word n-gram features and also those based on static lexicon match, and there have been several recent studies applying deep learning techniques (Xu and Sarikaya, 2014). There is also a line of prior work on enhancing sequential text classification or tagging. Hierarchical character-to-word level LSTM (Hochreiter and Schmidhuber, 1997) architectures similar to our models have been explored for the Named Entity Recognition task by Lample et al. (2016). Character-informed sequence models have also been explored for simple text classification with small sets of classes by Xiao and Cho (2016). Joulin et al. (2016) explored highly scalable text classification using a shared hierarchical encoder, but their hierarchical softmax-based output formulation is unsuitable for incremental model updates. Work on zero-shot domain classifier expansion by Kumar et al. (2017b) struggled to rank incoming domains higher than training domains.
The attention-based approach of Kim et al. (2017d) does not require retraining from scratch, but it requires keeping all models stored in memory, which is computationally expensive. Multi-task learning was used in the context of SLU by Tur (2006) and has been further explored using neural networks for phoneme recognition (Seltzer and Droppo, 2013) and semantic parsing (Fan et al., 2017; Bapna et al., 2017). There have been many other pieces of prior work on improving NLU systems with pre-training (Kim et al., 2015b; Celikyilmaz et al., 2016; Kim et al., 2017e), multi-task learning (Zhang and Wang, 2016; Liu and Lane, 2016; Kim et al., 2017b), transfer learning (El-Kahky et al., 2014; Kim et al., 2015a,c; Chen et al., 2016a; Yang et al., 2017), domain adaptation (Kim et al., 2016; Jaech et al., 2016; Liu and Lane, 2017; Kim et al., 2017d,c) and contextual signals (Bhargava et al., 2013; Chen et al., 2016b; Hori et al., 2016; Kim et al., 2017a). 3 Weakly Supervised Training Data Generation Our model addresses the domain classification task in SLU systems. In traditional IPDA systems, these domains are hand-crafted by experts to be well separable and can easily be annotated by humans because they are small in number. The emergence of self-service SLU results in a large number of potentially mutually overlapping SLU domains. This means that eliciting large volumes of high-quality human annotations to train our model is no longer feasible, and we cannot assume that domains are designed to be well separable.
Figure 1: The overall architecture of the personalized dynamic domain classifier.
Instead, we can generate training data by adopting the weak supervision paradigm introduced by Hoffmann et al. (2011), which proposes using heuristic labeling functions to generate large numbers of noisy data samples. Clean data generation with weak supervision is a challenging problem, so we address it by decomposing it into two simpler problems, candidate generation and noise suppression; however, it remains important for our model to be noise robust. 3.1 Data Programming The key insight of the Data Programming approach is that O(1) simple labeling functions can be used to approximate O(n) human-annotated data points with much less effort. We adopt the formalism of Ratner et al. (2016) to treat each instance data-generation rule as a rich generative model, defined by a labeling function λ, and describe different families of labeling functions. Our data programming pipeline is analogous to the noisy channel model proposed for spelling correction by Kernighan et al. (1990), and consists of a set of candidate generation and noise detection functions:
arg max_µ P(µ | s_i) = arg max_µ P(s_i | µ) · P(µ)
where µ and s_i represent utterances and the i-th skill, respectively. P(s_i | µ), the probability of a skill being valid for an utterance, is approximated by simple functions that act as candidate data generators λ_g ∈ Λ_g based on recognitions produced by a family of query patterns λ_q ∈ Λ_q. P(µ) is represented by a family of simple functions that act as noise detectors λ_n ∈ Λ_n, which mark utterances as likely being noise.
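For illustration, here is a minimal sketch of one candidate generator λ_g driven by a single query pattern λ_q of the form "Ask {skill} to {command}"; the regular expression, function names, and return format are assumptions made for the example rather than the production labeling functions.

```python
import re
from typing import Optional, Tuple

# One illustrative query pattern lambda_q: "ask {skill} to {command}".
# The deployed system uses a whole family of such recognition patterns.
ASK_TO_PATTERN = re.compile(r"^ask (?P<skill>.+?) to (?P<command>.+)$")

def generate_candidate(utterance: str) -> Optional[Tuple[str, str]]:
    """Candidate generator lambda_g: map an utterance that matches the
    invocation pattern to a (latent command mu_z, target skill s_t) pair,
    or return None when the pattern does not apply."""
    match = ASK_TO_PATTERN.match(utterance.lower().strip())
    if match is None:
        return None
    return match.group("command"), match.group("skill")

# "Ask Uber to get me a car" -> ("get me a car", "uber")
print(generate_candidate("Ask Uber to get me a car"))
```

Noise detectors λ_n of the kind described next would then filter these pairs, for instance dropping latent commands shorter than three tokens.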
We apply the technique to the query logs of a popular IPDA, which has support for personalized third-party domains. Looking at the structure of utterances that match a query pattern λ_q, each utterance of the form "Ask {Uber} to {get me a car}" can be considered as being parametrized by the underlying latent command µ_z, in this case "Get me a car"; a target domain corresponding to the service s_t, in this case Uber; and the query recognition pattern λ_q, in this case "Ask {s_t} to {µ_z}". Next, we assume that the distribution of latent commands over domains is independent of the query pattern:
P(µ_z, s_t) ≈ P(µ, s_t, λ_q)
Making this simple distributional approximation allows us to generate a large number of noisy training samples. The family of generator functions λ_g ∈ Λ_g is thus defined such that u_z = λ_g^i(µ, λ_q^i). 3.2 Noise Reduction The distribution defined above contains a large number of noisy positive samples. Related to P(µ) in the noisy channel of the spelling-correction context, we defined a small family of heuristic noise detection functions λ_n ∈ Λ_n that discard training data instances that are not likely to be well formed. For instance:
• λ_n^1 requires the utterance to contain a minimum threshold of information, removing those whose µ_z has a token length of fewer than 3. Utterances shorter than this mostly consist of non-actionable commands.
• λ_n^2 discards all data samples below a certain threshold of occurrences in live traffic, since utterances that are rarely observed are more likely to be ASR errors or unnatural.
• λ_n^3 discards the data samples for a domain if they come from an overly broad pattern with catch-all behavior.
• λ_n^4 discards utterances that belong to shared intents provided by the SLU SDK.
The end result of this stage is to retain utterances such as 'call me a cab' from 'Ask Uber to call me a cab' but discard 'Boston' from 'Ask Accuweather for Boston'. One can easily imagine extending this framework with other high-recall noise detectors, for example, using language models to discard candidates that are unlikely to be spoken sentences. 4 Model Architecture Our model consists of a shared, orthography-sensitive hierarchical LSTM encoder that feeds into a set of domain-specific classification layers trained to make a binary decision for each output label. Our main novel contribution is the extension of this architecture with a personalized attention mechanism, which uses the attention mechanism of Bahdanau et al. (2014) to attend to memory locations corresponding to the specific domains enabled by a user (we assume that users can customize their IPDA settings to enable certain domains), and allows the system to learn semantic representations of each domain via domain embeddings. As we will show, incorporating personalization features is key to disambiguating between multiple overlapping domains, and the personalized attention mechanism outperforms more naive forms of personalization. The personalized attention mechanism first computes an attention weight for each of the enabled domains, performs a convex combination to compute a context vector, and then concatenates this vector to the encoded utterance before the final domain classification. Figure 1 depicts the model in detail.
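As a rough, non-authoritative illustration of the architecture in Figure 1, the sketch below wires a character-to-word BiLSTM encoder to dot-product attention over a user's enabled-domain embeddings and a small two-way (in-domain vs. out-of-domain) head per domain. The layer sizes follow those reported in Section 4.1; the vocabulary sizes, padding handling, and naming are simplifying assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PersonalizedDomainClassifier(nn.Module):
    """Sketch of Figure 1: a character-level BiLSTM feeds a word-level BiLSTM
    whose states are summed into an utterance vector h_bar; h_bar attends over
    the user's enabled-domain embeddings, and the attended context is
    concatenated to h_bar before a per-domain IND/OOD linear head."""

    def __init__(self, n_chars, n_words, n_domains,
                 char_dim=25, word_dim=50, word_hidden=50):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.word_emb = nn.Embedding(n_words, word_dim, padding_idx=0)
        self.char_lstm = nn.LSTM(char_dim, char_dim, bidirectional=True,
                                 batch_first=True)
        # word input = [char fwd state; char bwd state; word embedding]
        self.word_lstm = nn.LSTM(2 * char_dim + word_dim, word_hidden,
                                 bidirectional=True, batch_first=True)
        utt_dim = 2 * word_hidden
        self.domain_emb = nn.Embedding(n_domains, utt_dim)
        # one lightweight 2-way (IND vs. OOD) head per domain
        self.heads = nn.ModuleList(
            [nn.Linear(2 * utt_dim, 2) for _ in range(n_domains)])

    def encode(self, char_ids, word_ids):
        # char_ids: (n_words, max_word_len); word_ids: (n_words,)
        char_out, _ = self.char_lstm(self.char_emb(char_ids))
        half = char_out.size(-1) // 2
        word_vecs = torch.cat([char_out[:, -1, :half],   # forward final state
                               char_out[:, 0, half:],    # backward final state
                               self.word_emb(word_ids)], dim=-1)
        word_states, _ = self.word_lstm(word_vecs.unsqueeze(0))
        return word_states.sum(dim=1).squeeze(0)          # utterance vector h_bar

    def forward(self, char_ids, word_ids, enabled_ids):
        h_bar = self.encode(char_ids, word_ids)
        enabled = self.domain_emb(enabled_ids)             # (k, utt_dim)
        weights = F.softmax(enabled @ h_bar, dim=0)        # personalized attention
        context = weights @ enabled                        # convex combination
        z = torch.cat([h_bar, context])                    # shared representation
        # per-domain IND/OOD scores with SeLU-normalized activations
        return torch.stack([F.selu(head(z)) for head in self.heads])
```

Note that adding a skill only introduces one embedding row and one small head, which is what makes the incremental updates discussed next comparatively cheap.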
Our model can efficiently accommodate new domains not seen during initial training by keeping the shared encoder frozen, bootstrapping a domain embedding based on existing ones, then optimizing a small number of network parameters corresponding to domain-specific classifier, which is orders of magnitude faster and more data efficient than retraining the full classifier. We make design decisions to ensure that our model has a low memory and latency footprint. We avoid expensive large vocabulary matrix multiplications on both the input and output stages, and instead use a combination of character embeddings and word embeddings in the input stage.2 The output matrix is lightweight because each domain-specific classifier is a matrix of only 201×2 parameters. The inference task can be trivially parallelized across cores since there’s no requirement to compute a partition function across a high-dimensional softmax layer, which is the slowest component of large label multiclass neural networks. Instead, we achieve comparability between the probability scores generated by individual models by using a customized loss formulation.3 4.1 Shared Encoder First we describe our shared hierarchical utterance encoder, which is marked by the almond colored box in Figure 1. Our hierarchical character to word to utterance design is motivated by the need to make the model operate on an open vocabulary in terms of words and to make it robust to small changes in orthography resulting from fluctuations in the upstream ASR system, all while avoiding expensive large matrix multiplications associated with one-hot word encoding in large vocabulary systems. We denote an LSTM simply as a mapping φ : Rd × Rd′ →Rd′ that takes a d dimensional input vector x and a d′ dimensional state vector h to output a new d′ dimensional state vector h′ = 2Using a one-hot representation of word vocabulary size 60,000 and hidden dimension 100 would require learning a matrix of size 60000 x 100 - using 100-dim word embeddings requires only a O(1) lookup followed by a 100 x 100 matrix, thus allowing our model to be significantly smaller and faster despite having what is effectively an open vocabulary 3Current inference consumes 50MB memory and the p99 latency is 15ms. 2218 φ(x, h). Let C denote the set of characters and W the set of words in a given utterance. Let ⊕denote the vector concatenation operation. We encode an utterance using BiLSTMs, and the model parameters Θ associated with this BiLSTM layer are • Char embeddings ec ∈R25 for each c ∈C • Char LSTMs φC f, φC b : R25 × R25 →R25 • Word embeddings ew ∈R50 for each w ∈W • Word LSTMs φW f , φW b : R100 × R50 →R50 Let w1 . . . wn ∈W denote a word sequence where word wi has character wi(j) ∈C at position j. First, the model computes a character-sensitive word representation vi ∈R100 as fC j = φC f ewi(j), fC j−1  ∀j = 1 . . . |wi| bC j = φC b ewi(j), bC j+1  ∀j = |wi| . . . 1 vi = fC |wi| ⊕bC 1 ⊕ewi for each i = 1 . . . n.4 These word representation vectors are encoded by forward and backward LSTMs for word φW f , φW b as fW i = φW f vi, fW i−1  ∀i = 1 . . . n bW i = φW b vi, bW i+1  ∀i = n . . . 1 and induces a character and context-sensitive word representation hi ∈R100 as hi = fW i ⊕bW i for each i = 1 . . . n. For convenience, we write the entire operation as a mapping BiLSTMΘ: (h1 . . . hn) ←BiLSTMΘ(w1 . . . 
wn) ¯h = n X i=1 hi (1) 4.2 Domain Classification Our Multitask domain classification formulation is motivated by a desire to avoid computing the full partition function during test time, which tends to be the slowest component of a multiclass neural network classifer, as has been documented before by (Jozefowicz et al., 2016) and (Mikolov et al., 2013), amongst others. 4For simplicity, we assume some random initial state vectors such as f C 0 and bC |wi|+1 when we describe LSTMs. However, we also want access to reliable probability estimates instead of raw scores - we accomplish this by constructing a custom loss function. During training, each domain classifier receives in-domain (IND) and out-of-domain (OOD) utterances, and we adapt the one-sided selection mechanism of (Kubat et al., 1997) to prevent OOD utterances from overpowering IND utterances, thus an utterance in a domain d ∈D is considered as an IND utterance in the viewpoint of domain d and OOD for all other domains. We first use the shared encoder to compute the utterance representation ¯h as previously described. Then we define the probability of domain ˜d for the utterance by mapping ¯h to a 2-dimensional output vector with a linear transformation for each domain ˜d as z ˜d = σ(W ˜d · ¯h + b ˜d) p( ˜d|¯h) ∝    exp  [z ˜d]IND  , if ˜d = d exp  [z ˜d]OOD  , otherwise where σ is scaled exponential linear unit (SeLU) for normalized activation outputs (Klambauer et al., 2017) and [z ˜d]IND and [z ˜d]OOD denote the values in the IND and OOD position of vector z ˜d. We define the joint domain classification loss LD as the summation of positive (LP ) and negative (LN) class loss functions 5: LP  Θ, Θ ˜d = −log p  ˜d|¯h  LN  Θ, Θ ˜d = − 1 k −1   X ¯d∈D, ¯d̸= ˜d log p ¯d|¯h    LD  Θ, Θ ˜d = LP  Θ, Θ ˜d + LN  Θ, Θ ˜d where k is the total number of domains. We divide the second term by k −1 so that LP and LN are balanced in terms of the ratio of the training examples for a domain to those for other domains. While a softmax over the entire domains tends to highlight only the ground-truth domain while suppressing all the rest, the our joint domain classification with a softmax over two classes is designed to produce a more balanced confidence score per domain independent of other domains. 5Θ ˜ d denotes the additional parameters in the classification layer for domain ˜d. 2219 4.3 Personalized Attention We explore encoding a user’s domain preferences in two ways. Our baseline method is a 1-bit flag that is appended to the input features of each domain-specific classifier. Our novel personalized attention method induces domain embeddings by supervising an attention mechanism to attend to a user’s enabled domains with different weights depending on their relevance. The domain embedding matrix in Figure 1 represents the embeddings of a user’s enabled domains. We hypothesize that attention enables the network learn richer representations of user preferences and domain co-occurrence features. Let eD( ˜d) ∈R100 and ¯h ∈R100 denote the domain embeddings for domain ˜d and the utterance representation calculated by Eq. (1), respectively. The domain attention weights for a given user u who has a preferred domain list d(u) =  ˜d(u) 1 , . . . , ˜d(u) k  are calculated by the dot-product operation, ai = ¯h · eD  ˜d(u) i  ∀i = 1 . . . k The final, normalized attention weights ¯a are obtained after normalization via a softmax layer, ¯ai = exp(ai) Pk j=1 exp(aj) ∀i = 1 . . . 
k The weighted combination of domain embeddings is ¯Sattended = k X i=1  ¯ai · eD  ˜d(u) i  Finally the two representations of enabled domains, namely the attention model and 1-bit flag are then concatenated with the utterance representation and used to make per-domain predictions via domain-specific affine transformations: ¯zatt = ¯h ⊕¯Sattended ¯z1bit = ¯h ⊕I( ˜d ∈enabled) Here I( ¯d ∈enabled) is a 1-bit indicator for whether the domain is enabled by the user or not. ¯zatt and ¯z1bit represent the encoded hidden state of the Attention and 1-Bit Flag configurations of the model from the experiment section. In our experiments we will compare these two ways of encoding personalization information, as well as evaluate a variant that combines the two. In this way we can ascertain whether the two personalization signals are complementary via an ablation study on the full model. 4.4 Domain Bootstrapping Our model separates the responsibilities for utterance representation and domain classification between the shared encoder and the domain-specific classifiers. That is, the shared encoder needs to be retrained only if it cannot encode an utterance well (e.g., a new domain introduces completely new words) and the existing domain classifiers need to be retrained only when the shared encoder changes. For adding new domains efficiently without full retraining, the only two components in the architecture need to be updated for each new domain ˜dnew, are the domain embeddings for the new domain and its domain-specific classifier.6 We keep the weights of the encoder network frozen and use the hidden state vector ¯h, calculated by Eq. 1, as a feature vector to feed into the downstream classifiers. To initialize the m-dimensional domain embeddings e ˜dnew, we use the column-wise average of all utterance vectors in the training data ¯havg, and project it to the domain embedding space using a matrix U ∈Rm×m. Thus, e ˜dnew = U ∗· ¯havg The parameters of U ∗are learned using the column-wise average utterance vectors ¯havg j and learned domain vectors for all existing domains dj U ∗= arg min U ||U · ¯havg j −edj|| ∀dj ∈D This is a write-to-memory operation that creates a new domain representation after attending to all existing domain representations. We then train the parameters of the domain-specific classifier with the new domain’s data while keeping the encoder fixed. This mechanism allows us to efficiently support new domains that appear in-between full model deployment cycles without compromising performance on existing domains. A full model refresh would require us to fully retrain with the domains that have appeared in the intermediate period. 6We have assumed that the shared encoder covers most of the vocabulary of new domains; otherwise, the entire network may need to be retrained. Based on our observation of live usage data, this assumption is reasonable since the shared encoder after initial training is still shown to cover 95% of the vocabulary of new domains added in the subsequent week. 2220 WEAK Mturk Top-1 Top-3 Top-5 Top-1 Top-3 Top-5 Binary 78.29 87.90 88.92 73.79 85.35 86.45 MultiClass 78.58 87.12 88.11 73.78 84.54 85.55 MultiTask 80.46 89.27 90.16 75.66 86.48 87.66 1-Bit Flag 91.97 95.89 96.68 86.50 92.47 93.09 Attention* 94.83 97.11 98.35 89.64 95.39 96.70 1-Bit + Att 95.19 97.32 98.64 89.65 95.79 96.98 Table 1: The performance of different variants of our neural model in terms of top-N accuracy. Binary trains a separate binary classifier for each skill. 
MultiClass has a shared encoder followed by a softmax. MultiTask replaces the softmax with per-skill classifiers. 1-Bit Flag adds a single bit for personalization to each skill classifier in MultiTask. Attention extends MultiTask with personalized attention. The last 3 models are personalized. *Best single encoding. 5 Experiments In this section we aim to demonstrate the effectiveness of our model architecture in two settings. First, we will demonstrate that attention based personalization significantly outperforms the baseline approach. Secondly, we will show that our model new domain bootstrapping procedure results in accuracies comparable to full retraining while requiring less than 1% of the orignal training time. 5.1 Experimental Setup Weak: This is a weakly supervised dataset was generated by preprocessing utterances with strict invocation patterns according to the setup mentioned in Section 3. The dataset consists of 5.34M utterances from 637,975 users across 1,500 different skills. Since we are interested in capturing the temporal effects of the dataset as well as personalization effects, we partitioned the data based both on user and time. Our core training data for the experiments in this paper was drawn from one month of live usage, the validation data came from the next 15 days of usage, and the test data came from the subsequent 15 days. The training, validation and test sets are user-independent, and each user belongs to only one of the three sets to ensure no leakage of information. MTurk: Since the Weak dataset is generated by weak supervision, we verified the performance of our approach with human generated utterances. A random sample of 12,428 utterances from the test partition of users were presented to 300 human judges, who were asked to produce two natural ways to issue the same command. This dataset is treated as a representative clean held out test set on which we can observe the generalization of our weakly supervised training and validation data to natural language. New Skills: In order to simulate the scenario in which new skills appear within a week between model updates, we selected 250 new skills which do not overlap with the skills in the Weak dataset. The vocabulary size of 1,500 skills is 200K words, and on average, 5% of the vocabulary for new skills is not covered. We randomly sampled 4,000 unique utterances for each skill using the same weak supervision method, and split them into 3,000 utterances for training and 1,000 for testing. 5.2 Results and Discussion Generalization from Weakly Supervised to Natural Utterances We first show the progression of model performance as we add more components to show their individual contribution. Secondly, we show that training our models on a weakly supervised dataset can generalize to natural speech by showing their test performance on the human-annotated test data. Finally, we compare two personalization strategies. The full results are summarized in Table 1, which shows the top-N test results separately for the Weak dataset (weakly supervised) and MTurk dataset (human-annotated). We report top-N accuracy to show the potential for further re-ranking or disambiguation downstream. For top-1 results on the Weak dataset, using a separate binary classifier for each domain (Binary) shows a prediction accuracy at 78.29% and using a softmax layer on top of the shared encoder (MultiClass) shows a comparable accuracy at 78.58%. 
The performance shows a slight improvement when using the Multitask neural loss structure, but adding personalization signals to the Multitask structure showed a significant boost in performance. We noted the large difference between the 1-bit and attention architecture. At 94.83% accuracy, attention resulted in 35.6% relative error reduction over the 1-bit baseline 91.97% on the Weak validation set and 23.25% relative on the MTurk test set. We hypothesize that this may be because the attention mechanism allows the model to focus on complementary features in case of overlapping domains as well as learning domain co-occurrence statistics, both of which are not possible with the simple 1-bit flag. When both personalization representations were combined, the performance peaked at 95.19% for the Weak dataset and a more modest 2221 Time Accuracy Binary 34.81 78.13 Expand 30.34 94.03 Refresh 5300.18 94.58 Table 2: Comparison of per-epoch training time (seconds) and top-1 accuracy (%) on an NVIDIA Tesla M40 GPU. 89.65% for the MTurk dataset. The improvement trend is extremely consistent across all top-N results for both of the Weak and MTurk datasets across all experiments. The disambiguation task is complex due to similar and overlapping skills, but the results suggest that incorporating personalization signals equip the models with much better discriminative power. The results also suggest that the two mechanisms for encoding personalization provide a small amount of complementary information since combining them together is better than using them individually. Although the performance on the Weak dataset tends to be more optimistic, the best performance on the humanannotated test data is still close to 90% for top-1 accuracy, which suggests that training our model with the samples derived from the invocation patterns can generalize well to natural utterances. Rapid Bootstrapping of New Skills We show the results when new domains are added to an IPDA and the model needs to efficiently accommodate them with a limited number of data samples. We show the classification performance on the skills in the New Skills dataset while assuming we have access to the WEAK dataset to pre-train our model for transfer learning. In the Binary setting, a domain-specific binary classifier is trained for each domain. Expand describes the case in which we incrementally train on top of an existing model. Refresh is the setting in which the model is fully retrained from scratch with the new data - which would be ideal in case there were no time constraints. We record the average training time for each epoch and the performance is measured with top-1 classification accuracy over new skills. The experiment results can be found in Table 2. Adapting a new skill is two orders of magnitude faster (30.34 seconds) than retraining the model (5300.18 seconds) while achieving 94.03% accuracy which is comparable to 94.58% accuracy of full retraining. The first two techniques can also be easily parallelized unlike the Refresh configuration. Top-1 Top-3 Top-5 Full 6.17 14.30 20.41 Enabled 85.62 96.15 98.06 Table 3: Top-N prediction accuracy (%) on the full skill set (Full) and only enabled skills (Enabled). Behavior of Attention Mechanism Our expectation is that the model is able to learn to attend the relevant skills during the inference process. To study the behavior of the attention layer, we compute the top-N prediction accuracy based on the most relevant skills defined by the attention weights. 
In this experiment, we considered the subset of users who had enabled more than 20 domains to exclude trivial cases7. The results are shown in Table 3. When the model attends to the entire set of 1500 (Full), the top-5 prediction accuracy is 20.41%, which indicates that a large number of skills can process the utterance, and thus it is highly likely to miss the correct one in the top-5 predictions. This ambiguity issue can be significantly improved by users’ enabled domain lists as proved by the accuracies (Enabled): 85.62% for top-1, 96.15% for top-3, and 98.06% for top-5.8 Thus the attention mechanism can thus be viewed as an initial soft selection which is then followed by a fine-grained selection at the classification stage. End-to-End User Evaluation All intermediate metrics on this task are proxies to a human customer’s eventual evaluation. In order to assess the user experience, we need to measure its end-toend performance. For a brief end-to-end system evaluation, 983 utterances from 283 domains were randomly sampled from the test set in the largescale IPDA setting. 15 human judges (male=12, female=3) rated the system responses, 1 judge per utterance, on a 5-point Likert scale with 1 being Terrible and 5 being Perfect. The judgment score of 3 or above was taken as SUCCESS and 2 or below as DEFECT. The end-to-end SUCCESS rate, 7Thus, the random prediction accuracy on enabled domains is less than 5% and across the Full domain list is 0.066% 8Visual inspection of the embeddings confirms that meaningful clusters are learned. We see clusters related to home automation, commerce, cooking, trivia etc, we show some examples in Figure 2, 3 and 4. However there are still other clusters where the the relationships cannot be established as easily. An example of these is show in Figure 5. The personalized attention mechanism is learned using the semantic content as well as personalization signals, so we hypothesize clusters like this may be capturing user tendencies to enable these domains in a correlated manner. 2222 Figure 2: Embeddings of different domain categories visualized in 2D using TSNE (van der Maaten and Hinton, 2008). Different colors represent different categories, for e.g. the large blue cluster on the left is Home Automation. thus user satisfaction, was shown to be 95.52%. The discrepancy between this score and the score produced on MTurk dataset indicates that even in cases in which the model makes classification mistakes, some of these interpretations remain perceptually meaningful to humans. Figure 3: A large cluster of home automation domains. Figure 4: A cluster of domains related to cooking. Figure 5: A mixed cluster with several different domain categories represented. 6 Conclusions We have described a neural model architecture to address large-scale skill classification in an IPDA used by tens of millions of users every day. We have described how personalization features and an attention mechanism can be used for handling ambiguity between domains. We have also shown that the model can be extended efficiently and incrementally for new domains, saving multiple orders of magnitude in terms of training time. The model also addresses practical constraints of having a low memory footprint, low latency and being easily parallelized, all of which are important characteristics for real-time production systems. In future work, we plan to incorporate various types of context (e.g. anaphora, device-specific capabilities) and dialogue history into a large-scale NLU system. 
2223 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Ankur Bapna, Gokhan Tur, Dilek Hakkani-Tur, and Larry Heck. 2017. Towards zero shot frame semantic parsing for domain scaling. In Interspeech 2017. A. Bhargava, Asli Celikyilmaz, Dilek Z. HakkaniTur, and Ruhi Sarikaya. 2013. Easy contextual intent prediction and slot detection. IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8337–8341. Asli Celikyilmaz, Ruhi Sarikaya, Dilek Hakkani-T¨ur, Xiaohu Liu, Nikhil Ramesh, and G¨okhan T¨ur. 2016. A new pre-training method for training deep learning models with application to spoken language understanding. In Interspeech, pages 3255–3259. Yun-Nung Chen, Dilek Hakkani-T¨ur, and Xiaodong He. 2016a. Zero-shot learning of intent embeddings for expansion by convolutional deep structured semantic models. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pages 6045–6049. Yun-Nung Chen, Dilek Hakkani-T¨ur, Gokhan Tur, Jianfeng Gao, and Li Deng. 2016b. End-toend memory networks with knowledge carryover for multi-turn spoken language understanding. In Interspeech. Ali El-Kahky, Xiaohu Liu, Ruhi Sarikaya, Gokhan Tur, Dilek Hakkani-Tur, and Larry Heck. 2014. Extending domain coverage of language understanding systems via intent transfer between domains using knowledge graphs and search query click logs. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4067–4071. IEEE. Xing Fan, Emilio Monti, Lambert Mathias, and Markus Dreyer. 2017. Transfer learning for neural semantic parsing. CoRR, abs/1706.04326. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 541–550. Association for Computational Linguistics. Chiori Hori, Takaaki Hori, Shinji Watanabe, and John R Hershey. 2016. Context-sensitive and roledependent spoken language understanding using bidirectional and attention lstms. Interspeech, pages 3236–3240. Aaron Jaech, Larry Heck, and Mari Ostendorf. 2016. Domain adaptation of recurrent neural networks for natural language understanding. In Interspeech. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410. Mark D Kernighan, Kenneth W Church, and William A Gale. 1990. A spelling correction program based on a noisy channel model. In Proceedings of the 13th conference on Computational linguistics-Volume 2, pages 205–210. Association for Computational Linguistics. Young-Bum Kim, Minwoo Jeong, Karl Stratos, and Ruhi Sarikaya. 2015a. Weakly supervised slot tagging with partially labeled sequences from web search click logs. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 84–92. Young-Bum Kim, Sungjin Lee, and Ruhi Sarikaya. 2017a. 
Speaker-sensitive dual memory networks for multi-turn slot tagging. In Automatic Speech Recognition and Understanding Workshop (ASRU), 2017 IEEE, pages 547–553. IEEE. Young-Bum Kim, Sungjin Lee, and Karl Stratos. 2017b. Onenet: Joint domain, intent, slot prediction for spoken language understanding. In Automatic Speech Recognition and Understanding Workshop (ASRU), 2017 IEEE, pages 547–553. IEEE. Young-Bum Kim, Karl Stratos, and Dongchan Kim. 2017c. Adversarial adaptation of synthetic or stale data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1297–1307. Association for Computational Linguistics. Young-Bum Kim, Karl Stratos, and Dongchan Kim. 2017d. Domain attention with an ensemble of experts. In Annual Meeting of the Association for Computational Linguistics. Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2015b. Pre-training of hidden-unit crfs. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 2, pages 192–198. Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2016. Frustratingly easy neural domain adaptation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 387–396. 2224 Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2017e. A framework for pre-training hidden-unit conditional random fields and its extension to long short term memory networks. Computer Speech & Language, 46:311–326. Young-Bum Kim, Karl Stratos, Ruhi Sarikaya, and Minwoo Jeong. 2015c. New transfer learning techniques for disparate label sets. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 1, pages 473–482. Gunter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. 2017. Self-normalizing neural networks. CoRR, abs/1706.02515. Miroslav Kubat, Stan Matwin, et al. 1997. Addressing the curse of imbalanced training sets: one-sided selection. In ICML, volume 97, pages 179–186. Nashville, USA. Anjishnu Kumar, Arpit Gupta, Julian Chan, Sam Tucker, Bjorn Hoffmeister, and Markus Dreyer. 2017a. Just ask: Building an architecture for extensible self-service spoken language understanding. arXiv preprint arXiv:1711.00549. Anjishnu Kumar, Pavankumar Reddy Muddireddy, Markus Dreyer, and Bj¨orn Hoffmeister. 2017b. Zero-shot learning across heterogeneous overlapping domains. Proc. Interspeech 2017, pages 2914– 2918. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT, pages 260–270. Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. In Interspeech, pages 685–689. Bing Liu and Ian Lane. 2017. Multi-domain adversarial learning for slot filling in spoken language understanding. In NIPS Workshop on Conversational AI. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing high-dimensional data using t-sne. Journal of Machine Learning Research, 9:2579– 2605. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Alexander J Ratner, Christopher M De Sa, Sen Wu, Daniel Selsam, and Christopher R´e. 2016. Data programming: Creating large training sets, quickly. 
In Advances in Neural Information Processing Systems, pages 3567–3575. Ruhi Sarikaya. 2017. The technology behind personal digital assistants: An overview of the system architecture and key components. IEEE Signal Processing Magazine, 34(1):67–81. Ruhi Sarikaya, Paul A Crook, Alex Marin, Minwoo Jeong, Jean-Philippe Robichaud, Asli Celikyilmaz, Young-Bum Kim, Alexandre Rochette, Omar Zia Khan, Xiaohu Liu, et al. 2016. An overview of end-to-end language understanding and dialog management for personal digital assistants. In Spoken Language Technology Workshop (SLT), 2016 IEEE, pages 391–397. IEEE. Michael L Seltzer and Jasha Droppo. 2013. Multitask learning in deep neural networks for improved phoneme recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 6965–6969. IEEE. Gokhan Tur. 2006. Multitask learning for spoken language understanding. In Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on, volume 1, pages I–I. IEEE. Gokhan Tur and Renato de Mori. 2011. Spoken Language Understanding: Systems for Extracting Semantic Information from Speech. New York, NY: John Wiley and Sons. Yijun Xiao and Kyunghyun Cho. 2016. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367. Puyang Xu and Ruhi Sarikaya. 2014. Contextual domain classification in spoken language understanding systems using recurrent neural network. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pages 136– 140. IEEE. Zhilin Yang, Ruslan Salakhutdinov, and William W Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. International Conference on Learning Representation (ICLR). Xiaodong Zhang and Houfeng Wang. 2016. A joint model of intent determination and slot filling for spoken language understanding. In International Joint Conference on Artificial Intelligence (IJCAI), pages 2993–2999.
2018
206
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2225–2235 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2225 Multimodal Affective Analysis Using Hierarchical Attention Strategy with Word-Level Alignment Yue Gu, Kangning Yang∗, Shiyu Fu∗, Shuhong Chen, Xinyu Li and Ivan Marsic Multimedia Image Processing Lab Electrical and Computer Engineering Department Rutgers University, Piscataway, NJ, USA {yue.guapp, ky189, sf568, sc1624, Xinyu.li1118, marsic}@rutgers.edu Abstract Multimodal affective computing, learning to recognize and interpret human affect and subjective information from multiple data sources, is still challenging because: (i) it is hard to extract informative features to represent human affects from heterogeneous inputs; (ii) current fusion strategies only fuse different modalities at abstract levels, ignoring time-dependent interactions between modalities. Addressing such issues, we introduce a hierarchical multimodal architecture with attention and word-level fusion to classify utterancelevel sentiment and emotion from text and audio data. Our introduced model outperforms state-of-the-art approaches on published datasets, and we demonstrate that our model’s synchronized attention over modalities offers visual interpretability. 1 Introduction With the recent rapid advancements in social media technology, affective computing is now a popular task in human-computer interaction. Sentiment analysis and emotion recognition, both of which require applying subjective human concepts for detection, can be treated as two affective computing subtasks on different levels (Poria et al., 2017a). A variety of data sources, including voice, facial expression, gesture, and linguistic content have been employed in sentiment analysis and emotion recognition. In this paper, we focus on a multimodal structure to leverage the advantages of each data source. Specifically, given an utterance, we consider the linguistic content and acoustic characteristics together to recognize the opinion or emotion. Our work is important and useful ∗Equally Contribution because speech is the most basic and commonly used form of human expression. A basic challenge in sentiment analysis and emotion recognition is filling the gap between extracted features and the actual affective states (Zhang et al., 2017). The lack of high-level feature associations is a limitation of traditional approaches using low-level handcrafted features as representations (Seppi et al., 2008; Rozgic et al., 2012). Recently, deep learning structures such as CNNs and LSTMs have been used to extract high-level features from text and audio (Eyben et al., 2010a; Poria et al., 2015). However, not all parts of the text and vocal signals contribute equally to the predictions. A specific word may change the entire sentimental state of text; a different vocal delivery may indicate inverse emotions despite having the same linguistic content. Recent approaches introduce attention mechanisms to focus the models on informative words (Yang et al., 2016) and attentive audio frames (Mirsamadi et al., 2017) for each individual modality. However, to our knowledge, there is no common multimodal structure with attention for utterancelevel sentiment and emotion classification. To address such issue, we design a deep hierarchical multimodal architecture with an attention mechanism to classify utterance-level sentiments and emotions. 
It extracts high-level informative textual and acoustic features through individual bidirectional gated recurrent units (GRU) and uses a multi-level attention mechanism to select the informative features in both the text and audio module. Another challenge is the fusion of cues from heterogeneous data. Most previous works focused on combining multimodal information at a holistic level, such as integrating independent predictions of each modality via algebraic rules (W¨ollmer et al., 2013) or fusing the extracted modality-specific features from entire utterances 2226 (Poria et al., 2016). They extract word-level features in a text branch, but process audio at the frame-level or utterance-level. These methods fail to properly learn the time-dependent interactions across modalities and restrict feature integration at timestamps due to the different time scales and formats of features of diverse modalities (Poria et al., 2017a). However, to determine human meaning, it is critical to consider both the linguistic content of the word and how it is uttered. A loud pitch on different words may convey inverse emotions, such as the emphasis on “hell” for anger but indicating happy on “great”. Synchronized attentive information across text and audio would then intuitively help recognize the sentiments and emotions. Therefore, we compute a forced alignment between text and audio for each word and propose three fusion approaches (horizontal, vertical, and fine-tuning attention fusion) to integrate both the feature representations and attention at the word-level. We evaluated our model on four published sentiment and emotion datasets. Experimental results show that the proposed architecture outperforms state-of-the-art approaches. Our methods also allow for attention visualization, which can be used for interpreting the internal attention distribution for both single- and multi-modal systems. The contributions of this paper are: (i) a hierarchical multimodal structure with attention mechanism to learn informative features and high-level associations from both text and audio; (ii) three wordlevel fusion strategies to combine features and learn correlations in a common time scale across different modalities; (iii) word-level attention visualization to help human interpretation. The paper is organized as follows: We list related work in section 2. Section 3 describes the proposed structure in detail. We present the experiments in section 4 and provide the result analysis in section 5. We discuss the limitations in section 6 and conclude with section 7. 2 Related Work Despite the large body of research on audio-visual affective analysis, there is relatively little work on combining text data. Early work combined human transcribed lexical features and low-level handcrafted acoustic features using feature-level fusion (Forbes-Riley and Litman, 2004; Litman and Forbes-Riley, 2004). Others used SVMs fed bag of words (BoW) and part of speech (POS) features in addition to low-level acoustic features (Seppi et al., 2008; Rozgic et al., 2012; Savran et al., 2012; Rosas et al., 2013; Jin et al., 2015). All of the above extracted low-level features from each modality separately. More recently, deep learning was used to extract higher-level multimodal features. Bidirectional LSTMs were used to learn long-range dependencies from low-level acoustic descriptors and derivations (LLDs) and visual features (Eyben et al., 2010a; W¨ollmer et al., 2013). 
CNNs can extract both textual (Poria et al., 2015) and visual features (Poria et al., 2016) for multiple kernel learning of feature-fusion. Later, hierarchical LSTMs were used (Poria et al., 2017b). A deep neural network was used for feature-level fusion in (Gu et al., 2018) and (Zadeh et al., 2017) introduced a tensor fusion network to further improve the performance. A very recent work using word-level fusion was provided by (Chen et al., 2017). The key differences between this work and the proposed architecture are: (i) we design a fine-tunable hierarchical attention structure to extract word-level features for each individual modality, rather than simply using the initialized textual embedding and extracted LLDs from COVAREP (Degottex et al., 2014); (ii) we propose diverse representation fusion strategies to combine both the word-level representations and attention weights, instead of using only word-level fusion; (iii) our model allows visualizing the attention distribution at both the individual modality and at fusion to help model interpretability. Our architecture is inspired by the document classification hierarchical attention structure that works at both the sentence and word level (Yang et al., 2016). For audio, an attention-based BLSTM and CNN were applied to discovering emotion from frames (Huang and Narayanan, 2016; Neumann and Vu, 2017). Frame-level weighted-pooling with local attention was shown to outperform frame-wise, final-frame, and framelevel mean-pooling for speech emotion recognition (Mirsamadi et al., 2017). 3 Method We introduce a multimodal hierarchical attention structure with word-level alignment for sentiment analysis and emotion recognition (Figure 1). The model consists of three major parts: text attention module, audio attention module, and word2227 level fusion module. We first make a forced alignment between the text and audio during preprocessing. Then, the text attention module and audio attention module extract the features from the corresponding inputs (shown in Algorithm 1). The word-level fusion module fuses the extracted feature vectors and makes the final prediction via a shared representation (shown in Algorithm 2). 3.1 Forced Alignment and Preprocessing The forced alignment between the audio and text on the word-level prepares the different data for feature extraction. We align the data at the wordlevel because words are the basic unit in English for human speech comprehension. We used aeneas1 to determine the time interval for each word in the audio file based on the Sakoe-Chiba Band Dynamic Time Warping (DTW) algorithm (Sakoe and Chiba, 1978). For the text input, we first embedded the words into 300-dimensional vectors by word2vec (Mikolov et al., 2013), which gives us the best result compared to GloVe and LexVec. Unknown words were randomly initialized. Given a sentence S with N words, let wi represent the ith word. We embed the words through the word2vec embedding matrix We by: Ti = Wewi, i ∈[1, N] (1) where Ti is the embedded word vector. For the audio input, we extracted Melfrequency spectral coefficients (MFSCs) from raw audio signals as acoustic inputs for two reasons. Firstly, MFSCs maintain the locality of the data by preventing new bases of spectral energies resulting from discrete cosine transform in MFCCs extraction (Abdel-Hamid et al., 2014). Secondly, it has more dimensions in the frequency domain that aid learning in deep models (Gu et al., 2017). 
We used 64 filter banks to extract the MFSCs for each audio frame to form the MFSCs map. To facilitate training, we only used static coefficients. Each word’s MFSCs can be represented as a matrix with 64×n dimensions, where n is the interval for the given word in frames. We zero-pad all intervals to the same length L, the maximum frame numbers of the word in the dataset. We did extract LLD features using OpenSmile (Eyben et al., 2010b) software and combined them with the MFSCs during our training stage. However, we did not find an 1https://www.readbeyond.it/aeneas/ Figure 1: Overall Architecture obvious performance improvement, especially for the sentiment analysis. Considering the training cost of the proposed hierarchical acoustic architecture, we decided the extra features were not worth the tradeoff. The output is a 3D MFSCs map with dimensions [N, 64, L]. 3.2 Text Attention Module To extract features from embedded text input at the word level, we first used bidirectional GRUs, which are able to capture the contextual information between words. It can be represented as: t h→ i , t h← i = bi GRU(Ti), i ∈[1, N] (2) where bi GRU is the bidirectional GRU, t h→ i and t h← i denote respectively the forward and backward contextual state of the input text. We combined t h→ i and t h← i as t hi to represent the feature vector for the ith word. We choose GRUs instead of LSTMs because our experiments show that LSTMs lead to similar performance (0.07% higher accuracy) with around 25% more trainable parameters. To create an informative word representation, we adopted a word-level attention strategy that generates a one-dimensional vector denoting the importance for each word in a sequence (Yang et al., 2016). As defined by (Bahdanau et al., 2228 Algorithm 1 FEATURE EXTRACTION 1: procedure FORCED ALIGNMENT 2: Determine time interval of each word 3: find wi ←→[Aij], j ∈[1, L], i ∈[1, N] 4: end procedure 5: procedure TEXT BRANCH 6: Text Attention Module 7: for i ∈[1, N] do 8: Ti ←getEmbedded(wi) 9: t hi ←bi GRU(Ti) 10: t ei ←getEnergies(t hi) 11: t αi ←getDistribution(t ei) 12: end for 13: return t hi, t αi 14: end procedure 15: procedure AUDIO BRANCH 16: for i ∈[1, N] do 17: Frame-Level Attention Module 18: for j ∈[1, L] do 19: f hij ←bi GRU(Aij) 20: f eij ←getEnergies(f hij) 21: f αij ←getDistribution(f eij) 22: end for 23: f Vi ←weightedSum(f αij, f hij) 24: Word-Level Attention Module 25: w hi ←bi GRU(f Vi) 26: w ei ←getEnergies(w hi) 27: w αi ←getDistribution(w ei) 28: end for 29: return w hi, w αi 30: end procedure 2014), we compute the textual attentive energies t ei and textual attention distribution t αi by: t ei = tanh(Wtt hi + bt), i ∈[1, N] (3) t αi = exp(t ei⊤vt) PN k=1exp(t ek⊤vt) (4) where Wt and bt are the trainable parameters and vt is a randomly-initialized word-level weight vector in the text branch. To learn the word-level interactions across modalities, we directly use the textual attention distribution t αi and textual bidirectional contextual state t hi as the output to aid word-level fusion, which allows further computations between text and audio branch on both the contextual states and attention distributions. 3.3 Audio Attention Module We designed a hierarchical attention model with frame-level acoustic attention and word-level attention for acoustic feature extraction. Frame-level Attention captures the important MFSC frames from the given word to generate the word-level acoustic vector. 
Similar to the text attention module, we used a bidirectional GRU:

$\overrightarrow{h^f_{ij}}, \overleftarrow{h^f_{ij}} = \mathrm{biGRU}(A_{ij}), \quad j \in [1, L]$ (5)

where $\overrightarrow{h^f_{ij}}$ and $\overleftarrow{h^f_{ij}}$ denote the forward and backward contextual states of the acoustic frames, and $A_{ij}$ denotes the MFSCs of the $j$th frame of the $i$th word, $i \in [1, N]$. $h^f_{ij}$ represents the hidden state of the $j$th frame of the $i$th word, which consists of $\overrightarrow{h^f_{ij}}$ and $\overleftarrow{h^f_{ij}}$. We apply the same attention mechanism used for the textual attention module to extract the informative frames, using Equations 3 and 4. As shown in Figure 1, the input of Equation 3 is $h^f_{ij}$ and the output is the frame-level acoustic attentive energies $e^f_{ij}$. We calculate the frame-level attention distribution $\alpha^f_{ij}$ by using $e^f_{ij}$ as the input to Equation 4. We form the word-level acoustic vector $V^f_i$ by taking a weighted sum of the bidirectional contextual states $h^f_{ij}$ of the frames and the corresponding frame-level attention distribution $\alpha^f_{ij}$. Specifically,

$V^f_i = \sum_j \alpha^f_{ij} h^f_{ij}$ (6)

Word-level Attention aims to capture the word-level acoustic attention distribution $\alpha^w_i$ based on the formed word vector $V^f_i$. We first use Equation 2 to generate the word-level acoustic contextual states $h^w_i$, where the input is $V^f_i$ and $h^w_i = (\overrightarrow{h^w_i}, \overleftarrow{h^w_i})$. Then, we compute the word-level acoustic attentive energies $e^w_i$ via Equation 3 as the input to Equation 4. The final outputs are the acoustic attention distribution $\alpha^w_i$ from Equation 4 and the acoustic bidirectional contextual state $h^w_i$.

3.4 Word-level Fusion Module

Fusion is critical to leveraging multimodal features for decision-making. Simple feature concatenation without considering the time scales ignores the associations across modalities. We introduce word-level fusion capable of associating the text and audio at each word. We propose three fusion strategies (Figure 2 and Algorithm 2): horizontal fusion, vertical fusion, and fine-tuning attention fusion. These methods allow easy synchronization between modalities, take advantage of the attentive associations across text and audio, and create a shared high-level representation.

Figure 2: Fusion strategies. $h^t_i$: word-level textual bidirectional state. $\alpha^t_i$: word-level textual attention distribution. $h^w_i$: word-level acoustic bidirectional state. $\alpha^w_i$: word-level acoustic attention distribution. $\alpha^s_i$: shared attention distribution. $\alpha^u_i$: fine-tuning attention distribution. $V_i$: shared word-level representation.

Algorithm 2 FUSION
1: procedure FUSION BRANCH
2:   Horizontal Fusion (HF)
3:   for $i \in [1, N]$ do
4:     $V^t_i \leftarrow$ weighted($\alpha^t_i$, $h^t_i$)
5:     $V^w_i \leftarrow$ weighted($\alpha^w_i$, $h^w_i$)
6:     $V_i \leftarrow$ dense([$V^t_i$, $V^w_i$])
7:   end for
8:   Vertical Fusion (VF)
9:   for $i \in [1, N]$ do
10:    $h_i \leftarrow$ dense([$h^t_i$, $h^w_i$])
11:    $\alpha^s_i \leftarrow$ average([$\alpha^t_i$, $\alpha^w_i$])
12:    $V_i \leftarrow$ weighted($h_i$, $\alpha^s_i$)
13:  end for
14:  Fine-tuning Attention Fusion (FAF)
15:  for $i \in [1, N]$ do
16:    $e^u_i \leftarrow$ getEnergies($h_i$)
17:    $\alpha^u_i \leftarrow$ getDistribution($e^u_i$, $\alpha^s_i$)
18:    $V_i \leftarrow$ weighted($h_i$, $\alpha^u_i$)
19:  end for
20:  Decision Making
21:  $E \leftarrow$ convNet($V_1, V_2, ..., V_N$)
22:  return $E$
23: end procedure

Horizontal Fusion (HF) provides a shared representation that contains both the textual and acoustic information for a given word (Figure 2 (a)). The HF has two steps: (i) combining the bidirectional contextual states ($h^t_i$ and $h^w_i$ in Figure 1) and the attention distributions of each branch ($\alpha^t_i$ and $\alpha^w_i$ in Figure 1) independently to form the word-level textual and acoustic representations.
As shown in Figure 2, given the inputs ($\alpha^t_i$, $h^t_i$) and ($\alpha^w_i$, $h^w_i$), we first weight each input branch by:

$V^t_i = \alpha^t_i h^t_i$ (7)

$V^w_i = \alpha^w_i h^w_i$ (8)

where $V^t_i$ and $V^w_i$ are the word-level representations for the text and audio branches, respectively; and (ii) concatenating them into a single space and further applying a dense layer to create the shared context vector $V_i$, where $V_i = (V^t_i, V^w_i)$. The HF combines the unimodal contextual states and attention weights; there is no attention interaction between the text modality and the audio modality. The shared vectors retain the most significant characteristics of the respective branches and encourage the decision making to focus on local informative features.

Vertical Fusion (VF) combines textual attentions and acoustic attentions at the word level, using a shared attention distribution over both modalities instead of focusing on local informative representations (Figure 2 (b)). The VF is computed in three steps: (i) using a dense layer after the concatenation of the word-level textual ($h^t_i$) and acoustic ($h^w_i$) bidirectional contextual states to form the shared contextual state $h_i$; (ii) averaging the textual ($\alpha^t_i$) and acoustic ($\alpha^w_i$) attentions for each word to obtain the shared attention distribution $\alpha^s_i$; (iii) weighting $h_i$ by $\alpha^s_i$ to form the final shared context vector $V_i$, where $V_i = h_i \alpha^s_i$. Because the shared attention distribution ($\alpha^s_i$) is based on averages of the unimodal attentions, it is a joint attention over both textual and acoustic attentive information.

Fine-tuning Attention Fusion (FAF) preserves the original unimodal attentions and provides a fine-tuning attention for the final prediction (Figure 2 (c)). The averaging of attention weights in vertical fusion potentially limits the representational power. To address this issue, we propose a trainable attention layer to tune the shared attention in three steps: (i) computing the shared attention distribution $\alpha^s_i$ and the shared bidirectional contextual states $h_i$ separately, using the same approach as in vertical fusion; (ii) applying attention fine-tuning:

$e^u_i = \tanh(W_u h_i + b_u)$ (9)

$\alpha^u_i = \frac{\exp({e^u_i}^{\top} v_u)}{\sum_{k=1}^{N} \exp({e^u_k}^{\top} v_u)} + \alpha^s_i$ (10)

where $W_u$, $b_u$, and $v_u$ are additional trainable parameters. $\alpha^u_i$ can be understood as the sum of the fine-tuning score and the original shared attention distribution $\alpha^s_i$; (iii) weighting $h_i$ by $\alpha^u_i$ to form the final shared context vector $V_i$ (a short code sketch of all three fusion strategies is given below).

3.5 Decision Making

The output of the fusion layer, $V_i$, is the $i$th shared word-level vector. To further make use of the combined features for classification, we applied a CNN structure with one convolutional layer and one max-pooling layer to extract the final representation from the shared word-level vectors (Poria et al., 2016; Wang et al., 2016). We set up various widths for the convolutional filters (Kim, 2014) and generated a feature map $c_k$ by:

$f_i = \tanh(W_c V_{i:i+k-1} + b_c)$ (11)

$c_k = \max\{f_1, f_2, ..., f_N\}$ (12)

where $k$ is the width of the convolutional filter and $f_i$ represents the features from window $i$ to $i+k-1$. $W_c$ and $b_c$ are the trainable weights and biases. We get the final representation $c$ by concatenating all the feature maps. A softmax function is used for the final classification.

4 Experiments

4.1 Datasets

We evaluated our model on four published datasets: two multimodal sentiment datasets (MOSI and YouTube) and two multimodal emotion recognition datasets (IEMOCAP and EmotiW).
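The three word-level fusion strategies referenced above (Section 3.4, Equations (7)-(10)) reduce to a few array operations per word. The NumPy sketch below illustrates HF, VF, and FAF for a single sentence; the dense-layer and fine-tuning parameters are random stand-ins for learned weights, and the tanh activation of the dense layer is an assumption, so this is an illustration of the equations, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dense(x, W, b):
    # Stand-in for a trainable dense layer (tanh activation assumed).
    return np.tanh(W @ x + b)

def horizontal_fusion(t_h, t_a, w_h, w_a, Wd, bd):
    """HF, Eqs. (7)-(8): weight each branch by its own attention,
    then concatenate and project to the shared vector V_i per word."""
    t_V = t_a[:, None] * t_h
    w_V = w_a[:, None] * w_h
    return np.stack([dense(np.concatenate([t, w]), Wd, bd)
                     for t, w in zip(t_V, w_V)])

def vertical_fusion(t_h, t_a, w_h, w_a, Wd, bd):
    """VF: shared state from concatenated states, shared attention = average."""
    h = np.stack([dense(np.concatenate([t, w]), Wd, bd)
                  for t, w in zip(t_h, w_h)])
    s_a = 0.5 * (t_a + w_a)
    return s_a[:, None] * h, h, s_a

def faf(t_h, t_a, w_h, w_a, Wd, bd, Wu, bu, vu):
    """FAF, Eqs. (9)-(10): add a trainable fine-tuning score to the shared attention."""
    _, h, s_a = vertical_fusion(t_h, t_a, w_h, w_a, Wd, bd)
    E = np.tanh(h @ Wu.T + bu)
    u_a = softmax(E @ vu) + s_a        # Eq. (10)
    return u_a[:, None] * h

# toy shapes: N=5 words, 200-dim biGRU states, 128-dim shared vectors
rng = np.random.default_rng(1)
N, d, ds = 5, 200, 128
t_h, w_h = rng.normal(size=(N, d)), rng.normal(size=(N, d))
t_a, w_a = softmax(rng.normal(size=N)), softmax(rng.normal(size=N))
Wd, bd = rng.normal(size=(ds, 2 * d)) * 0.01, np.zeros(ds)
Wu, bu, vu = rng.normal(size=(64, ds)) * 0.1, np.zeros(64), rng.normal(size=64)
V_hf = horizontal_fusion(t_h, t_a, w_h, w_a, Wd, bd)
V_faf = faf(t_h, t_a, w_h, w_a, Wd, bd, Wu, bu, vu)
print(V_hf.shape, V_faf.shape)  # (5, 128) (5, 128)
```

Note how FAF keeps the averaged shared attention and only adds a learned correction to it, which is what later makes the shared distribution easy to visualize.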
The MOSI dataset is a multimodal sentiment intensity and subjectivity dataset consisting of 93 reviews with 2199 utterance segments (Zadeh et al., 2016). Each segment was labeled by five individual annotators from -3 (strong negative) to +3 (strong positive). We used binary labels based on the sign of the average of the annotations.

The YouTube dataset is an English multimodal dataset that contains 262 positive, 212 negative, and 133 neutral utterance-level clips provided by (Morency et al., 2011). We only consider the positive and negative labels in our experiments.

IEMOCAP is a multimodal emotion dataset including visual, audio, and text data (Busso et al., 2008). For each sentence, we used the label agreed on by the majority (at least two of the three annotators). In this study, we evaluate both the 4-category (happy+excited, sad, anger, and neutral) and 5-category (happy+excited, sad, anger, neutral, and frustration) emotion classification problems. The final dataset consists of 586 happy, 1005 excited, 1054 sad, 1076 anger, 1677 neutral, and 1806 frustration samples.

EmotiW (https://cs.anu.edu.au/few/ChallengeDetails.html) is an audio-visual multimodal utterance-level emotion recognition dataset consisting of video clips. To keep consistency with the IEMOCAP dataset, we used four emotion categories as the final dataset, including 150 happy, 117 sad, 133 anger, and 144 neutral samples. We used the IBM Watson speech-to-text software (https://www.ibm.com/watson/developercloud/speechto-text/api/v1/) to transcribe the audio data into text.

4.2 Baselines

We compared the proposed architecture to published models. Because our model focuses on extracting sentiment and emotions from human speech, we only considered the audio and text branches applied in the previous studies.

4.2.1 Sentiment Analysis Baselines

BL-SVM extracts a bag-of-words as textual features and low-level descriptors as acoustic features. An SVM structure is used to classify the sentiments (Rosas et al., 2013).

LSTM-SVM uses LLDs as acoustic features and bag-of-n-grams (BoNGs) as textual features. The final estimate is based on decision-level fusion of the text and audio predictions (Wöllmer et al., 2013).

Table 1: Comparison of models. WA = weighted accuracy. UA = unweighted accuracy. * denotes that we duplicated the method from the cited research with the corresponding dataset in our experiment.

Sentiment Analysis (MOSI)
Approach | Category | WA(%) | UA(%) | Weighted-F1
BL-SVM* | 2-class | 70.4 | 70.6 | 0.668
LSTM-SVM* | 2-class | 72.1 | 72.1 | 0.674
C-MKL1 | 2-class | 73.6 | – | 0.752
TFN | 2-class | 75.2 | – | 0.760
LSTM(A) | 2-class | 73.5 | – | 0.703
UL-Fusion* | 2-class | 72.5 | 72.5 | 0.730
DL-Fusion* | 2-class | 71.8 | 71.8 | 0.720
Ours-HF | 2-class | 74.1 | 74.4 | 0.744
Ours-VF | 2-class | 75.3 | 75.3 | 0.755
Ours-FAF | 2-class | 76.4 | 76.5 | 0.768

Emotion Recognition (IEMOCAP)
Approach | Category | WA(%) | UA(%) | Weighted-F1
SVM Trees | 4-class | 67.4 | 67.4 | –
GSV-eVector | 4-class | 63.2 | 62.3 | –
C-MKL2 | 4-class | 65.5 | 65.0 | –
H-DMS | 5-class | 60.4 | 60.2 | 0.594
UL-Fusion* | 4-class | 66.5 | 66.8 | 0.663
DL-Fusion* | 4-class | 65.8 | 65.7 | 0.665
Ours-HF | 4-class | 70.0 | 69.7 | 0.695
Ours-VF | 4-class | 71.8 | 71.8 | 0.713
Ours-FAF | 4-class | 72.7 | 72.7 | 0.726
Ours-FAF | 5-class | 64.6 | 63.4 | 0.644

C-MKL1 uses a CNN structure to capture the textual features and fuses them via multiple kernel learning for sentiment analysis (Poria et al., 2015).

TFN uses a tensor fusion network to extract interactions between different modality-specific features (Zadeh et al., 2017).

LSTM(A) introduces a word-level LSTM with a temporal attention structure to predict sentiments on the MOSI dataset (Chen et al., 2017).
4.2.2 Emotion Recognition Baselines

SVM Trees extracts LLDs and handcrafted bag-of-words as features. The model automatically generates an ensemble of SVM trees for emotion classification (Rozgic et al., 2012).

GSV-eVector generates new acoustic representations from selected LLDs using Gaussian Supervectors and extracts a set of weighted handcrafted textual features as an eVector. A linear-kernel SVM is used as the final classifier (Jin et al., 2015).

C-MKL2 extracts textual features using a CNN and uses openSMILE to extract 6373 acoustic features. Multiple kernel learning is used as the final classifier (Poria et al., 2016).

H-DMS uses a hybrid deep multimodal structure to extract both the text and audio emotional features. A deep neural network is used for feature-level fusion (Gu et al., 2018).

4.2.3 Fusion Baselines

Utterance-level Fusion (UL-Fusion) focuses on fusing text and audio features from an entire utterance (Gu et al., 2017). We simply concatenate the textual and acoustic representations into a joint feature representation. A softmax function is used for sentiment and emotion classification.

Decision-level Fusion (DL-Fusion): inspired by (Wöllmer et al., 2013), we extract textual and acoustic sentence representations individually and infer the results via two softmax classifiers, respectively. As suggested by Wöllmer, we calculate a weighted sum of the text result (weight 1.2) and the audio result (weight 0.8) as the final prediction.

4.3 Model Training

We implemented the model in Keras with TensorFlow as the backend. We set the dimension of each GRU to 100, meaning the bidirectional GRU dimension is 200. For the decision making, we selected 2, 3, 4, and 5 as the filter widths and applied 300 filters for each width. We used the rectified linear unit (ReLU) activation function and set the dropout rate to 0.5. We also applied batch normalization between layers to overcome internal covariate shift (Ioffe and Szegedy, 2015). We first trained the text attention module and the audio attention module individually. Then, we tuned the fusion network based on the word-level representation outputs from each fine-tuned module. For all training procedures, we set the learning rate to 0.001 and used Adam optimization and the categorical cross-entropy loss. For all datasets, we considered the speakers independent and used an 80-20 training-testing split. We further separated 20% from the training dataset for validation. We trained the model with 5-fold cross-validation and used a mini-batch size of 8. We used the same number of samples from each class to balance the training dataset during each iteration.

5 Result Analysis

5.1 Comparison with Baselines

The experimental results on the different datasets show that our proposed architecture achieves state-of-the-art performance in both sentiment analysis and emotion recognition (Table 1). We re-implemented some published methods (Rosas et al., 2013; Wöllmer et al., 2013) on MOSI to obtain baselines. For sentiment analysis, the proposed architecture with the FAF strategy achieves 76.4% weighted accuracy, which outperforms all five baselines (Table 1). The result demonstrates that the proposed hierarchical attention architecture and word-level fusion strategies indeed help improve the performance.
There are several findings worth mentioning: (i) our model outperforms the baselines without using the low-level handcrafted acoustic features, indicating the sufficiency of MFSCs; (ii) the proposed approach achieves performance comparable to the model using text, audio, and visual data together (Zadeh et al., 2017), which demonstrates that the visual features do not contribute as much during fusion and prediction on MOSI; (iii) we notice that (Poria et al., 2017b) reports better accuracy (79.3%) on MOSI, but their model uses a set of utterances instead of a single utterance as input.

For emotion recognition, our model with FAF achieves 72.7% accuracy, outperforming all the baselines. The result shows that the proposed model brings a significant accuracy gain to emotion recognition, demonstrating the benefit of the fine-tuning attention structure. It also shows that word-level attention indeed helps extract emotional features. Compared to C-MKL2 and SVM Trees, which require feature selection before fusion and prediction, our model does not need an additional architecture to select features. We further evaluated our model on 5 emotion categories, including frustration. Our model shows a 4.2% performance improvement over H-DMS and achieves a 0.644 weighted-F1. As H-DMS only achieves 0.594 F1 and also uses low-level handcrafted features, our model is more robust and efficient.

From Table 1, all three proposed fusion strategies outperform UL-Fusion and DL-Fusion on both MOSI and IEMOCAP. Unlike utterance-level fusion, which ignores the time-scale-sensitive associations across modalities, word-level fusion combines the modality-specific features for each word by aligning text and audio, allowing associative learning between the two modalities, similar to what humans do in natural conversation. The results indicate that the proposed methods improve the model performance by around 6% accuracy. We also notice that the structure with FAF outperforms HF and VF on both the MOSI and IEMOCAP datasets, which demonstrates the effectiveness and importance of the FAF strategy.

Table 2: Accuracy (%) and F1 score on text only (T), audio only (A), and multi-modality using FAF (T+A).
Modality | MOSI WA | MOSI F1 | IEMOCAP WA | IEMOCAP F1
T | 75.0 | 0.748 | 61.8 | 0.620
A | 60.2 | 0.604 | 62.5 | 0.614
T+A | 76.4 | 0.768 | 72.7 | 0.726

Table 3: Accuracy (%) and F1 score for generalization testing.
Approach | MOSI→YouTube WA | MOSI→YouTube F1 | IEMOCAP→EmotiW WA | IEMOCAP→EmotiW F1
Ours-HF | 62.9 | 0.627 | 59.3 | 0.584
Ours-VF | 64.7 | 0.643 | 60.8 | 0.591
Ours-FAF | 66.2 | 0.665 | 61.4 | 0.608

5.2 Modality and Generalization Analysis

From Table 2, we see that textual information dominates the sentiment prediction on MOSI, and there is only a 1.4% accuracy improvement from fusing text and audio. However, on IEMOCAP, audio-only outperforms text-only, and, as expected, there is a significant performance improvement from combining text and audio. The difference in modality performance might be because vocal delivery plays a more significant role in emotional expression than in sentimental expression.

We further tested the generalizability of the proposed model. For sentiment generalization testing, we trained the model on MOSI and tested on the YouTube dataset (Table 3), achieving 66.2% accuracy and a 0.665 F1 score. For emotion recognition generalization testing, we tested the model (trained on IEMOCAP) on EmotiW and achieved 61.4% accuracy.
Potential reasons that may influence the generalization are: (i) the labeling bias across datasets (five annotators for MOSI vs. one annotator for YouTube); (ii) incomplete utterances in the YouTube dataset (such as "about", "he", etc.); (iii) insufficient speech information (EmotiW is a wild audio-visual dataset that focuses on facial expression).

Figure 3: Attention visualization.

5.3 Visualizing Attentions

Our model allows us to easily visualize the attention weights of text, audio, and fusion to better understand how the attention mechanism works. We show the emotional distribution visualizations for the word-level acoustic attention ($\alpha^w_i$), word-level textual attention ($\alpha^t_i$), shared attention ($\alpha^s_i$), and fine-tuning attention based on the FAF structure ($\alpha^u_i$) for two example sentences (Figure 3). The color gradation represents the importance of the corresponding source data at the word level.

Based on our visualization, the textual attention distribution ($\alpha^t_i$) highlights the words that carry the most emotional significance, such as "hell" for anger (Figure 3a). The textual attention shows that "don't", "like", and "west-sider" have similar weights in the happy example (Figure 3b). It is hard to label this sentence as happy given only the text attention. However, the acoustic attention focuses on "you're" and "west-sider", removing emphasis from "don't" and "like". The shared attention ($\alpha^s_i$) and fine-tuning attention ($\alpha^u_i$) successfully combine the textual and acoustic attentions and assign joint attention to the correct words, which demonstrates that the proposed method can capture emphasis from both modalities at the word level.

6 Discussion

There are several limitations and potential solutions worth mentioning: (i) The proposed architecture uses both the audio and text data to analyze sentiments and emotions. However, not all data sources contain or provide textual information; many audio-visual emotion clips only have acoustic and visual information. The proposed architecture is therefore more suited to spoken language analysis, where transcripts are available, than to predicting sentiments or emotions from the speech signal alone. Automatic speech recognition provides a potential solution for generating the textual information from vocal signals. (ii) The word alignment can easily be applied to human speech. However, it is difficult to align the visual information with text, especially if the text only describes the video or audio. Incorporating visual information into an aligning model like ours would be an interesting research topic. (iii) The limited amount of multimodal sentiment analysis and emotion recognition data is a key issue for current research, especially for deep models that require a large number of samples. Compared to large unimodal sentiment analysis and emotion recognition datasets, the MOSI dataset only consists of 2199 sentence-level samples. In our experiments, the EmotiW and MOUD datasets could only be used for generalization analysis due to their small size. Larger and more general datasets are necessary for multimodal sentiment analysis and emotion recognition in the future.

7 Conclusion

In this paper, we proposed a deep multimodal architecture with hierarchical attention for sentiment and emotion classification. Our model aligned the text and audio at the word level and applied attention distributions to textual word vectors, acoustic frame vectors, and acoustic word vectors. We introduced three fusion strategies with a CNN structure to combine word-level features and classify emotions.
Our model outperforms the state-ofthe-art methods and provides effective visualization of modality-specific features and fusion feature interpretation. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments and feedback. We thank the useful suggestions from Kaixiang Huang. This research was funded by the National Institutes of Health under Award Number R01LM011834. 2234 References Ossama Abdel-Hamid, Abdel-rahman Mohamed, Hui Jiang, Li Deng, Gerald Penn, and Dong Yu. 2014. Convolutional neural networks for speech recognition. IEEE/ACM Transactions on audio, speech, and language processing, 22(10):1533–1545. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan. 2008. Iemocap: Interactive emotional dyadic motion capture database. Language resources and evaluation, 42(4):335. Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltruˇsaitis, Amir Zadeh, and Louis-Philippe Morency. 2017. Multimodal sentiment analysis with wordlevel fusion and reinforcement learning. In Proceedings of the 19th ACM International Conference on Multimodal Interaction, pages 163–171. ACM. Gilles Degottex, John Kane, Thomas Drugman, Tuomo Raitio, and Stefan Scherer. 2014. Covarepa collaborative voice analysis repository for speech technologies. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pages 960–964. IEEE. Florian Eyben, Martin W¨ollmer, Alex Graves, Bj¨orn Schuller, Ellen Douglas-Cowie, and Roddy Cowie. 2010a. On-line emotion recognition in a 3-d activation-valence-time continuum using acoustic and linguistic cues. Journal on Multimodal User Interfaces, 3(1-2):7–19. Florian Eyben, Martin W¨ollmer, and Bj¨orn Schuller. 2010b. Opensmile: the munich versatile and fast open-source audio feature extractor. In Proceedings of the 18th ACM international conference on Multimedia, pages 1459–1462. ACM. Kate Forbes-Riley and Diane Litman. 2004. Predicting emotion in spoken dialogue from multiple knowledge sources. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004. Yue Gu, Shuhong Chen, and Ivan Marsic. 2018. Deep multimodal learning for emotion recognition in spoken language. arXiv preprint arXiv:1802.08332. Yue Gu, Xinyu Li, Shuhong Chen, Jianyu Zhang, and Ivan Marsic. 2017. Speech intention classification with multimodal deep learning. In Canadian Conference on Artificial Intelligence, pages 260–271. Springer. Che-Wei Huang and Shrikanth S Narayanan. 2016. Attention assisted discovery of sub-utterance structure in speech emotion recognition. In INTERSPEECH, pages 1387–1391. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, pages 448–456. Qin Jin, Chengxin Li, Shizhe Chen, and Huimin Wu. 2015. Speech emotion recognition with acoustic and lexical features. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 4749–4753. IEEE. Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. Diane J Litman and Kate Forbes-Riley. 2004. 
Predicting student emotions in computer-human tutoring dialogues. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 351. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Seyedmahdad Mirsamadi, Emad Barsoum, and Cha Zhang. 2017. Automatic speech emotion recognition using recurrent neural networks with local attention. In Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, pages 2227–2231. IEEE. Louis-Philippe Morency, Rada Mihalcea, and Payal Doshi. 2011. Towards multimodal sentiment analysis: Harvesting opinions from the web. In Proceedings of the 13th international conference on multimodal interfaces, pages 169–176. ACM. Michael Neumann and Ngoc Thang Vu. 2017. Attentive convolutional neural network based speech emotion recognition: A study on the impact of input features, signal length, and acted speech. arXiv preprint arXiv:1706.00612. Soujanya Poria, Erik Cambria, Rajiv Bajpai, and Amir Hussain. 2017a. A review of affective computing: From unimodal analysis to multimodal fusion. Information Fusion, 37:98–125. Soujanya Poria, Erik Cambria, and Alexander Gelbukh. 2015. Deep convolutional neural network textual features and multiple kernel learning for utterance-level multimodal sentiment analysis. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 2539–2544. Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, and Louis-Philippe Morency. 2017b. Context-dependent sentiment 2235 analysis in user-generated videos. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 873–883. Soujanya Poria, Iti Chaturvedi, Erik Cambria, and Amir Hussain. 2016. Convolutional mkl based multimodal emotion recognition and sentiment analysis. In Data Mining (ICDM), 2016 IEEE 16th International Conference on, pages 439–448. IEEE. Ver´onica P´erez Rosas, Rada Mihalcea, and LouisPhilippe Morency. 2013. Multimodal sentiment analysis of spanish online videos. IEEE Intelligent Systems, 28(3):38–45. Viktor Rozgic, Sankaranarayanan Ananthakrishnan, Shirin Saleem, Rohit Kumar, and Rohit Prasad. 2012. Ensemble of svm trees for multimodal emotion recognition. In Signal & Information Processing Association Annual Summit and Conference (APSIPA ASC), 2012 Asia-Pacific, pages 1–4. IEEE. Hiroaki Sakoe and Seibi Chiba. 1978. Dynamic programming algorithm optimization for spoken word recognition. IEEE transactions on acoustics, speech, and signal processing, 26(1):43–49. Arman Savran, Houwei Cao, Miraj Shah, Ani Nenkova, and Ragini Verma. 2012. Combining video, audio and lexical indicators of affect in spontaneous conversation via particle filtering. In Proceedings of the 14th ACM international conference on Multimodal interaction, pages 485–492. ACM. Dino Seppi, Anton Batliner, Bj¨orn Schuller, Stefan Steidl, Thurid Vogt, Johannes Wagner, Laurence Devillers, Laurence Vidrascu, Noam Amir, and Vered Aharonson. 2008. Patterns, prototypes, performance: classifying emotional user states. In Ninth Annual Conference of the International Speech Communication Association. Haohan Wang, Aaksha Meghawat, Louis-Philippe Morency, and Eric P Xing. 2016. 
Select-additive learning: Improving cross-individual generalization in multimodal sentiment analysis. arXiv preprint arXiv:1609.05244. Martin W¨ollmer, Felix Weninger, Tobias Knaup, Bj¨orn Schuller, Congkai Sun, Kenji Sagae, and LouisPhilippe Morency. 2013. Youtube movie reviews: Sentiment analysis in an audio-visual context. IEEE Intelligent Systems, 28(3):46–53. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489. Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2017. Tensor fusion network for multimodal sentiment analysis. arXiv preprint arXiv:1707.07250. Amir Zadeh, Rowan Zellers, Eli Pincus, and LouisPhilippe Morency. 2016. Mosi: multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos. arXiv preprint arXiv:1606.06259. Shiqing Zhang, Shiliang Zhang, Tiejun Huang, Wen Gao, and Qi Tian. 2017. Learning affective features with a hybrid deep model for audio-visual emotion recognition. IEEE Transactions on Circuits and Systems for Video Technology.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2236–2246 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2236 Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph Amir Zadeh1, Paul Pu Liang2, Jonathan Vanbriesen1, Soujanya Poria3, Edmund Tong1, Erik Cambria4, Minghai Chen1, Louis-Philippe Morency1 {1- Language Technologies Institute, 2- Machine Learning Department}, CMU, USA {3- A*STAR, 4- Nanyang Technological University}, Singapore {abagherz,pliang,jvanbrie}@cs.cmu.edu, [email protected] [email protected], [email protected], [email protected] Abstract Analyzing human multimodal language is an emerging area of research in NLP. Intrinsically human communication is multimodal (heterogeneous), temporal and asynchronous; it consists of the language (words), visual (expressions), and acoustic (paralinguistic) modalities all in the form of asynchronous coordinated sequences. From a resource perspective, there is a genuine need for large scale datasets that allow for in-depth studies of multimodal language. In this paper we introduce CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI), the largest dataset of sentiment analysis and emotion recognition to date. Using data from CMU-MOSEI and a novel multimodal fusion technique called the Dynamic Fusion Graph (DFG), we conduct experimentation to investigate how modalities interact with each other in human multimodal language. Unlike previously proposed fusion techniques, DFG is highly interpretable and achieves competitive performance compared to the current state of the art. 1 Introduction Theories of language origin identify the combination of language and nonverbal behaviors (vision and acoustic modality) as the prime form of communication utilized by humans throughout evolution (M¨uller, 1866). In natural language processing, this form of language is regarded as human multimodal language. Modeling multimodal language has recently become a centric research direction in both NLP and multimodal machine learning (Hazarika et al., 2018; Zadeh et al., 2018a; Poria et al., 2017a; Baltruˇsaitis et al., 2017; Chen et al., 2017). Studies strive to model the dual dynamics of multimodal language: intra-modal dynamics (dynamics within each modality) and cross-modal dynamics (dynamics across different modalities). However, from a resource perspective, previous multimodal language datasets have severe shortcomings in the following aspects: Diversity in the training samples: The diversity in training samples is crucial for comprehensive multimodal language studies due to the complexity of the underlying distribution. This complexity is rooted in variability of intra-modal and crossmodal dynamics for language, vision and acoustic modalities (Rajagopalan et al., 2016). Previously proposed datasets for multimodal language are generally small in size due to difficulties associated with data acquisition and costs of annotations. Variety in the topics: Variety in topics opens the door to generalizable studies across different domains. Models trained on only few topics generalize poorly as language and nonverbal behaviors tend to change based on the impression of the topic on speakers’ internal mental state. Diversity of speakers: Much like writing styles, speaking styles are highly idiosyncratic. 
Training models on only few speakers can lead to degenerate solutions where models learn the identity of speakers as opposed to a generalizable model of multimodal language (Wang et al., 2016). Variety in annotations Having multiple labels to predict allows for studying the relations between labels. Another positive aspect of having variety of labels is allowing for multi-task learning which has shown excellent performance in past research. Our first contribution in this paper is to introduce the largest dataset of multimodal sentiment and emotion recognition called CMU Multimodal Opinion Sentiment and Emotion Intensity (CMUMOSEI). CMU-MOSEI contains 23,453 annotated video segments from 1,000 distinct speakers and 2237 250 topics. Each video segment contains manual transcription aligned with audio to phoneme level. All the videos are gathered from online video sharing websites 1. The dataset is currently a part of the CMU Multimodal Data SDK and is freely available to the scientific community through Github 2. Our second contribution is an interpretable fusion model called Dynamic Fusion Graph (DFG) to study the nature of cross-modal dynamics in multimodal language. DFG contains built-in efficacies that are directly related to how modalities interact. These efficacies are visualized and studied in detail in our experiments. Aside interpretability, DFG achieves superior performance compared to previously proposed models for multimodal sentiment and emotion recognition on CMU-MOSEI. 2 Background In this section we compare the CMU-MOSEI dataset to previously proposed datasets for modeling multimodal language. We then describe the baselines and recent models for sentiment analysis and emotion recognition. 2.1 Comparison to other Datasets We compare CMU-MOSEI to an extensive pool of datasets for sentiment analysis and emotion recognition. The following datasets include a combination of language, visual and acoustic modalities as their input data. 2.1.1 Multimodal Datasets CMU-MOSI (Zadeh et al., 2016b) is a collection of 2199 opinion video clips each annotated with sentiment in the range [-3,3]. CMU-MOSEI is the next generation of CMU-MOSI. The ICT-MMMO (W¨ollmer et al., 2013) consists of online social review videos annotated at the video level for sentiment. YouTube (Morency et al., 2011) contains videos from the social media web site YouTube that span a wide range of product reviews and opinion videos. MOUD (Perez-Rosas et al., 2013) consists of product review videos in Spanish. Each video consists of multiple segments labeled to display positive, negative or neutral sentiment. IEMOCAP (Busso et al., 2008) consists of 151 videos of recorded dialogues, with 2 speakers per session for a total of 302 videos across the dataset. 
Each 1following creative commons license allows for personal unrestricted use and redistribution of the videos 2https://github.com/A2Zadeh/CMUMultimodalDataSDK Dataset # S # Sp Mod Sent Emo TL (hh:mm:ss) CMU-MOSEI 23,453 1,000 {l, v, a}   65:53:36 CMU-MOSI 2,199 98 {l, v, a}   02:36:17 ICT-MMMO 340 200 {l, v, a}   13:58:29 YouTube 300 50 {l, v, a}   00:29:41 MOUD 400 101 {l, v, a}   00:59:00 SST 11,855 – {l}   – Cornell 2,000 – {l}   – Large Movie 25,000 – {l}   – STS 5,513 – {l}   – IEMOCAP 10,000 10 {l, v, a}   11:28:12 SAL 23 4 {v, a}   11:00:00 VAM 499 20 {v, a}   12:00:00 VAM-faces 1,867 20 {v}   – HUMAINE 50 4 {v, a}   04:11:00 RECOLA 46 46 {v, a}   03:50:00 SEWA 538 408 {v, a}   04:39:00 SEMAINE 80 20 {v, a}   06:30:00 AFEW 1,645 330 {v, a}   02:28:03 AM-FED 242 242 {v}   03:20:25 Mimicry 48 48 {v, a}   11:00:00 AFEW-VA 600 240 {v, a}   00:40:00 Table 1: Comparison of the CMU-MOSEI dataset with previous sentiment analysis and emotion recognition datasets. #S denotes the number of annotated data points. #Sp is the number of distinct speakers. Mod indicates the subset of modalities present from {(l)anguage,(v)ision,(a)udio}. Sent and Emo columns indicate presence of sentiment and emotion labels. TL denotes the total number of video hours. segment is annotated for the presence of 9 emotions (angry, excited, fear, sad, surprised, frustrated, happy, disappointed and neutral) as well as valence, arousal and dominance. 2.1.2 Language Datasets Stanford Sentiment Treebank (SST) (Socher et al., 2013) includes fine grained sentiment labels for phrases in the parse trees of sentences collected from movie review data. While SST has larger pool of annotations, we only consider the root level annotations for comparison. Cornell Movie Review (Pang et al., 2002) is a collection of 2000 moviereview documents and sentences labeled with respect to their overall sentiment polarity or subjective rating. Large Movie Review dataset (Maas et al., 2011) contains text from highly polar movie reviews. Sanders Tweets Sentiment (STS) consists of 5513 hand-classified tweets each classified with respect to one of four topics of Microsoft, Apple, Twitter, and Google. 2.1.3 Visual and Acoustic Datasets The Vera am Mittag (VAM) corpus consists of 12 hours of recordings of the German TV talk2238 show “Vera am Mittag” (Grimm et al., 2008). This audio-visual data is labeled for continuous-valued scale for three emotion primitives: valence, activation and dominance. VAM-Audio and VAMFaces are subsets that contain on acoustic and visual inputs respectively. RECOLA (Ringeval et al., 2013) consists of 9.5 hours of audio, visual, and physiological (electrocardiogram, and electrodermal activity) recordings of online dyadic interactions. Mimicry (Bilakhia et al., 2015) consists of audiovisual recordings of human interactions in two situations: while discussing a political topic and while playing a role-playing game. AFEW (Dhall et al., 2012, 2015) is a dynamic temporal facial expressions data corpus consisting of close to real world environment extracted from movies. Detailed comparison of CMU-MOSEI to the datasets in this section is presented in Table 1. CMU-MOSEI has longer total duration as well as larger number of data point in total. Furthermore, CMU-MOSEI has a larger variety in number of speakers and topics. It has all three modalities provided, as well as annotations for both sentiment and emotions. 2.2 Baseline Models Modeling multimodal language has been the subject of studies in NLP and multimodal machine learning. 
Notable approaches are listed as follows and indicated with a symbol for reference in the Experiments and Discussion section (Section 5). # MFN: (Memory Fusion Network) (Zadeh et al., 2018a) synchronizes multimodal sequences using a multi-view gated memory that stores intraview and cross-view interactions through time. ∎MARN: (Multi-attention Recurrent Network) (Zadeh et al., 2018b) models intra-modal and multiple cross-modal interactions by assigning multiple attention coefficients. Intra-modal and cross-modal interactions are stored in a hybrid LSTM memory component. ∗TFN (Tensor Fusion Network) (Zadeh et al., 2017) models inter and intra modal interactions by creating a multi-dimensional tensor that captures unimodal, bimodal and trimodal interactions. ◇MV-LSTM (Multi-View LSTM) (Rajagopalan et al., 2016) is a recurrent model that designates regions inside a LSTM to different views of the data. § EF-LSTM (Early Fusion LSTM) concatenates the inputs from different modalities at each time-step and uses that as the input to a single LSTM (Hochreiter and Schmidhuber, 1997; Graves et al., 2013; Schuster and Paliwal, 1997). In case of unimodal models EF-LSTM refers to a single LSTM. We also compare to the following baseline models: † BC-LSTM (Poria et al., 2017b), ♣C-MKL (Poria et al., 2016), ♭DF (Nojavanasghari et al., 2016), ♡SVM (Cortes and Vapnik, 1995; Zadeh et al., 2016b; Perez-Rosas et al., 2013; Park et al., 2014), ●RF (Breiman, 2001), THMM (Morency et al., 2011), SAL-CNN (Wang et al., 2016), 3DCNN (Ji et al., 2013). For language only baseline models: ∪CNN-LSTM (Zhou et al., 2015), RNTN (Socher et al., 2013), ×: DynamicCNN (Kalchbrenner et al., 2014), ⊳DAN (Iyyer et al., 2015), ≀DHN (Srivastava et al., 2015), ⊲RHN (Zilly et al., 2016). For acoustic only baseline models: AdieuNet (Trigeorgis et al., 2016), SERLSTM (Lim et al., 2016). 3 CMU-MOSEI Dataset Understanding expressed sentiment and emotions are two crucial factors in human multimodal language. We introduce a novel dataset for multimodal sentiment and emotion recognition called CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI). In the following subsections, we first explain the details of the CMU-MOSEI data acquisition, followed by details of annotation and feature extraction. 3.1 Data Acquisition Social multimedia presents a unique opportunity for acquiring large quantities of data from various speakers and topics. Users of these social multimedia websites often post their opinions in the forms of monologue videos; videos with only one person in front of camera discussing a certain topic of interest. Each video inherently contains three modalities: language in the form of spoken text, visual via perceived gestures and facial expressions, and acoustic through intonations and prosody. During our automatic data acquisition process, videos from YouTube are analyzed for the presence of one speaker in the frame using face detection to ensure the video is a monologue. We limit the videos to setups where the speaker’s attention is exclusively towards the camera by rejecting videos that have moving cameras (such as camera on bikes or selfies recording while walking). We use a diverse set of 250 frequently used topics in online videos as the seed for acquisition. 
We restrict the 2239 reviews debates speech marketing consulting financial speeches statement advertising review faq business consumer product political loans update firms equity remarks politics retail phd person online hear topic sector talk due web ads feel stock daily dose lesson entity refining outline farms sermon like faqs debate protest QA resignation document symposium retailing forestry commercials specialization liberalism reviewing vol futures hearings shops independant participatory enterprises bechtel asset addition configuration nyse narration retailer industrial comment reply colloquium rewritten autonomous recitation ict deadpan monthly nasdaq revised crm concise discussing securities officially enclave citigroup edit investment seeing monologue analysis investing lecture social reaction multiculturalism macroeconomics consumers investors hearing home added courses stocks revision comments summary narrative summit weekly witness timing voiceover markets unified independent movie textbook upload socialist Things overview suppliers stimulus lenders testimony companies response answers editing market respond economy speakers textiles keynote tutorial premium quarterly customers financing description advertisers questions retailers updates handling sneezing responses podcast derivation Figure 1: The diversity of topics of videos in CMUMOSEI, displayed as a word cloud. Larger words indicate more videos from that topic. The most frequent 3 topics are reviews (16.2%), debate (2.9%) and consulting (1.8%) while the remaining topics are almost uniformly distributed. number of videos acquired from each channel to a maximum of 10. This resulted in discovering 1,000 identities from YouTube. The definition of a identity is proxy to the number of channels since accurate identification requires quadratic manual annotations, which is infeasible for high number of speakers. Furthermore, we limited the videos to have manual and properly punctuated transcriptions provided by the uploader. The final pool of acquired videos included 5,000 videos which were then manually checked for quality of video, audio and transcript by 14 expert judges over three months. The judges also annotated each video for gender and confirmed that each video is an acceptable monologue. A set of 3228 videos remained after manual quality inspection. We also performed automatic checks on the quality of video and transcript which are discussed in Section 3.3 using facial feature extraction confidence and forced alignment confidence. Furthermore, we balance the gender in the dataset using the data provided by the judges (57% male to 43% female). This constitutes the final set of raw videos in CMU-MOSEI. The topics covered in the final set of videos are shown in Figure 1 as a Venn-style word cloud (Coppersmith and Kelly, 2014) with the size proportional to the number of videos gathered for that topic. The most frequent 3 topics are reviews (16.2%), debate (2.9%) and consulting (1.8%). The remaining topics are almost uniformly distributed 3. 
The final set of videos is then tokenized into sentences using punctuation markers manually provided by the transcripts. Due to the high quality of the transcripts, using punctuation markers showed better sentence quality than using the Stanford CoreNLP tokenizer (Manning et al., 2014). This was verified on a set of 20 random videos by two experts. After tokenization, a set of 23,453 sentences was chosen as the final sentences in the dataset. This was achieved by restricting each identity to contribute at least 10 and at most 50 sentences to the dataset. Table 2 shows high-level summary statistics of the CMU-MOSEI dataset.

Table 2: Summary of CMU-MOSEI dataset statistics.
Total number of sentences: 23,453
Total number of videos: 3,228
Total number of distinct speakers: 1,000
Total number of distinct topics: 250
Average number of sentences in a video: 7.3
Average length of sentences in seconds: 7.28
Total number of words in sentences: 447,143
Total number of unique words in sentences: 23,026
Total number of words appearing at least 10 times in the dataset: 3,413
Total number of words appearing at least 20 times in the dataset: 1,971
Total number of words appearing at least 50 times in the dataset: 888

3.2 Annotation

Annotation of CMU-MOSEI follows closely the annotation of CMU-MOSI (Zadeh et al., 2016a) and the Stanford Sentiment Treebank (Socher et al., 2013). Each sentence is annotated for sentiment on a [-3,3] Likert scale: [−3: highly negative, −2: negative, −1: weakly negative, 0: neutral, +1: weakly positive, +2: positive, +3: highly positive]. Ekman emotions (Ekman et al., 1980) of {happiness, sadness, anger, fear, disgust, surprise} are annotated on a [0,3] Likert scale for the presence of emotion x: [0: no evidence of x, 1: weakly x, 2: x, 3: highly x]. The annotation was carried out by 3 crowdsourced judges from the Amazon Mechanical Turk platform. To avert implicitly biasing the judges and to capture the raw perception of the crowd, we avoided extreme annotation training and instead provided the judges with a 5-minute training video on how to use the annotation system. All the annotations were carried out only by master workers with a higher than 98% approval rate to assure high-quality annotations (extensive statistics of the dataset, including the crawling mechanism, the annotation UI, the training procedure for the workers, and agreement scores, are available in the supplementary material on arXiv).

Figure 2 shows the distribution of sentiment and emotions in the CMU-MOSEI dataset.

Figure 2: Distribution of sentiment and emotions in the CMU-MOSEI dataset. The distribution shows a natural skew towards more frequently used emotions. However, the least frequent emotion, fear, still has 1,900 data points which is an acceptable number for machine learning studies.

The distribution shows a slight shift in favor of positive sentiment, which is similar to the distributions of CMU-MOSI and SST. We believe that this is an implicit bias in online opinions being slightly shifted towards positive, since this is also present in CMU-MOSI. The emotion histogram shows different prevalence for different emotions.
The most common category is happiness with more than 12,000 positive sample points. The least prevalent emotion is fear with almost 1900 positive sample points which is an acceptable number for machine learning studies. 3.3 Extracted Features Data points in CMU-MOSEI come in video format with one speaker in front of the camera. The extracted features for each modality are as follows (for other benchmarks we extract the same features): Language: All videos have manual transcription. Glove word embeddings (Pennington et al., 2014) were used to extract word vectors from transcripts. Words and audio are aligned at phoneme level using P2FA forced alignment model (Yuan and Liberman, 2008). Following this, the visual and acoustic modalities are aligned to the words by interpolation. Since the utterance duration of words in English is usually short, this interpolation does not lead to substantial information loss. Visual: Frames are extracted from the full videos at 30Hz. The bounding box of the face is extracted using the MTCNN face detection algorithm (Zhang et al., 2016). We extract facial action units through Facial Action Coding System (FACS) (Ekman et al., 1980). Extracting these action units allows for accurate tracking and understanding of the facial expressions (Baltruˇsaitis et al., 2016). We also extract a set of six basic emotions purely from static faces using Emotient FACET (iMotions, 2017). MultiComp OpenFace (Baltruˇsaitis et al., 2016) is used to extract the set of 68 facial landmarks, 20 facial shape parameters, facial HoG features, head pose, head orientation and eye gaze (Baltruˇsaitis et al., 2016). Finally, we extract face embeddings from commonly used facial recognition models such as DeepFace (Taigman et al., 2014), FaceNet (Schroff et al., 2015) and SphereFace (Liu et al., 2017). Acoustic: We use the COVAREP software (Degottex et al., 2014) to extract acoustic features including 12 Mel-frequency cepstral coefficients, pitch, voiced/unvoiced segmenting features (Drugman and Alwan, 2011), glottal source parameters (Drugman et al., 2012; Alku et al., 1997, 2002), peak slope parameters and maxima dispersion quotients (Kane and Gobl, 2013). All extracted features are related to emotions and tone of speech. 4 Multimodal Fusion Study From the linguistics perspective, understanding the interactions between language, visual and audio modalities in multimodal language is a fundamental research problem. While previous works have been successful with respect to accuracy metrics, they have not created new insights on how the fusion is performed in terms of what modalities are related and how modalities engage in an interaction during fusion. Specifically, to understand the fusion process one must first understand the n-modal dynamics (Zadeh et al., 2017). n-modal dynamics state that there exists different combination of modalities and that all of these combinations must be captured to better understand the multimodal language. In this paper, we define building the n-modal dynamics as a hierarchical process and propose a new fusion model called the Dynamic Fusion Graph (DFG). DFG is easily interpretable through what is called efficacies in graph connections. To utilize this new fusion model in a multimodal language framework, we build upon Memory Fusion Network (MFN) by replacing the original fusion component in the MFN with our DFG. We call this resulting model the Graph Memory Fusion Network (Graph-MFN). 
Once the model is trained end to end, we analyze the efficacies in the DFG to study the fusion mechanism learned for modalities in multimodal language. In addition to being an interpretable fusion mechanism, Graph-MFN also outperforms previously proposed state-of-the-art models for sentiment analysis and emotion recognition on CMU-MOSEI.

Figure 3: The structure of the Dynamic Fusion Graph (DFG) for the three modalities {(l)anguage, (v)ision, (a)coustic}. Dashed lines in the DFG show the dynamic connections between vertices controlled by the efficacies (α).

Figure 4: The overview of the Graph Memory Fusion Network (Graph-MFN) pipeline. Graph-MFN replaces the fusion block in MFN with a Dynamic Fusion Graph (DFG). For a description of the variables and the memory formulation please refer to the original Memory Fusion Network paper (Zadeh et al., 2018a).

4.1 Dynamic Fusion Graph

In this section we discuss the internal structure of the proposed Dynamic Fusion Graph (DFG) neural model (Figure 3). DFG has the following properties: 1) it explicitly models the n-modal interactions, 2) it does so with an efficient number of parameters (as opposed to previous approaches such as Tensor Fusion (Zadeh et al., 2017)), and 3) it can dynamically alter its structure and choose the proper fusion graph based on the importance of each n-modal dynamic during inference.

We assume the set of modalities to be M = {(l)anguage, (v)ision, (a)coustic}. The unimodal dynamics are denoted as {l}, {v}, {a}, the bimodal dynamics as {l,v}, {v,a}, {l,a}, and the trimodal dynamics as {l,v,a}. These dynamics are in the form of latent representations and are each considered a vertex inside a graph $G = (V, E)$, with $V$ the set of vertices and $E$ the set of edges. A directional neural connection is established between two vertices $v_i$ and $v_j$ only if $v_i \subset v_j$. For example, {l} ⊂ {l,v}, which results in a connection between ⟨language⟩ and ⟨language, vision⟩. This connection is denoted as an edge $e_{ij}$. $\mathcal{D}_j$ takes as input all $v_i$ that satisfy the neural connection condition above for $v_j$. We define an efficacy for each edge $e_{ij}$, denoted $\alpha_{ij}$; $v_i$ is multiplied by $\alpha_{ij}$ before being used as input to $\mathcal{D}_j$. Each α is a sigmoid-activated probability neuron which indicates how strong or weak the connection between $v_i$ and $v_j$ is. The αs are the main source of interpretability in the DFG. The vector of all αs is inferred using a deep neural network $\mathcal{D}_\alpha$ which takes as input the singleton vertices in $V$ (l, v, and a). We leave it to the supervised training objective to learn the parameters of $\mathcal{D}_\alpha$ and make good use of the efficacies, thus dynamically controlling the structure of the graph. The singleton vertices are chosen for this purpose since they have no incoming edges and thus no efficacy associated with those edges (no efficacy is needed to infer the singleton vertices). The same singleton vertices l, v, and a are the inputs to the DFG; in the next section we discuss how these inputs are given to the DFG. All vertices are connected to the output vertex $\mathcal{T}_t$ of the network via edges scaled by their respective efficacy. The overall structure of the vertices, edges, and respective efficacies is shown in Figure 3. There are a total of 8 vertices (counting the output vertex), 19 edges, and consequently 19 efficacies.
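The following is a minimal NumPy sketch of the DFG wiring just described: subset-based edges, sigmoid efficacies inferred from the singleton vertices, and an output vertex built from efficacy-scaled inputs. The vertex networks and the efficacy network are reduced to single layers with random stand-in weights, the latent size of 32 is arbitrary, and the way the output vertex combines its inputs (a plain sum here) is an assumption, since the text does not spell it out; this illustrates the structure, not the trained Graph-MFN.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
MODS = ("l", "v", "a")
D = 32  # latent size of every vertex (illustrative choice)

def subsets():
    """The 7 dynamic vertices: unimodal, bimodal, and trimodal subsets."""
    return [frozenset(c) for k in (1, 2, 3) for c in combinations(MODS, k)]

VERTICES = subsets()
# v_i -> v_j whenever v_i is a proper subset of v_j, plus every vertex -> output T.
EDGES = [(a, b) for a in VERTICES for b in VERTICES if a < b] + \
        [(a, "T") for a in VERTICES]
assert len(EDGES) == 19  # matches the count stated in the text

def layer(x, W, b, act):
    return act(W @ x + b)

def dfg_forward(l, v, a, params):
    """One DFG pass: efficacies from the singletons, then subset-built vertices."""
    reps = {frozenset("l"): l, frozenset("v"): v, frozenset("a"): a}
    # Efficacies alpha_ij: one sigmoid unit per edge, inferred from [l; v; a].
    alphas = layer(np.concatenate([l, v, a]), *params["alpha"],
                   act=lambda x: 1.0 / (1.0 + np.exp(-x)))
    alpha = dict(zip(EDGES, alphas))
    # Build bimodal, then trimodal vertices from their efficacy-scaled inputs.
    for vert in VERTICES:
        if len(vert) == 1:
            continue
        inputs = np.concatenate([alpha[(s, vert)] * reps[s]
                                 for s in VERTICES if s < vert])
        reps[vert] = layer(inputs, *params[vert], act=np.tanh)
    # Output vertex T: a sum of efficacy-scaled vertices (combination assumed).
    T = sum(alpha[(s, "T")] * reps[s] for s in VERTICES)
    return T, alpha

def init_params():
    p = {"alpha": (rng.normal(size=(len(EDGES), 3 * D)) * 0.1,
                   np.zeros(len(EDGES)))}
    for vert in VERTICES:
        if len(vert) > 1:
            n_in = sum(D for s in VERTICES if s < vert)
            p[vert] = (rng.normal(size=(D, n_in)) * 0.1, np.zeros(D))
    return p

l, v, a = (rng.normal(size=D) for _ in range(3))
T, alpha = dfg_forward(l, v, a, init_params())
print(T.shape, len(alpha))  # (32,) 19
```

Inspecting the 19 sigmoid efficacies after training is what gives the visualizations discussed in Section 5.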
4.2 Graph-MFN To test the performance of DFG, we use a similar recurrent architecture to Memory Fusion Network (MFN). MFN is a recurrent neural model with three main components 1) System of LSTMs: a set of parallel LSTMs with each LSTM modeling a single modality. 2) Delta-memory Attention Network is the component that performs multimodal fusion 2242 Dataset MOSEI Sentiment MOSEI Emotions Task Sentiment Anger Disgust Fear Happy Sad Surprise Metric A2 F1 A5 A7 MAE r WA F1 WA F1 WA F1 WA F1 WA F1 WA F1 LANGUAGE SOTA2 74.1§ 74.1⊳ 43.1≀ 42.9≀ 0.75§ 0.46≀ 56.0∪71.0× 59.0§ 67.1⊳56.2§ 79.7§ 53.0⊳44.1⊳ 53.8≀ 49.9≀ 53.2× 70.0⊳ SOTA1 74.3⊳74.1§ 43.2§ 43.2§ 0.74⊳0.47§ 56.6≀ 71.8●64.0⊳72.6●58.8× 89.8● 54.0§ 47.0§ 54.0§ 61.2●54.3⊳85.3● VISUAL SOTA2 73.8§ 73.5§ 42.5⊳42.5⊳ 0.78≀ 0.41♡ 54.4≀ 64.6§ 54.4♡71.5⊲51.3§ 78.4§ 53.4≀ 40.8§ 54.3⊳60.8●51.3⊳84.2§ SOTA1 73.9⊳73.7⊳ 42.7≀ 42.7≀ 0.78§ 0.43≀ 60.0§ 71.0● 60.3≀ 72.4●64.2♡89.8●57.4●49.3● 57.7§ 61.5⊲51.8§ 85.4● ACOUSTIC SOTA2 74.2≀73.8△42.1△42.1△0.78⊳0.43§ 55.5⊲51.8△58.9⊳72.4●58.5⊳89.8●57.2∩55.5∩58.9⊲65.9⊲52.2♡83.6∩ SOTA1 74.2△73.9≀ 42.4∩42.4∩0.74∩0.43⊳56.4△71.9● 60.9§ 72.4● 62.7§ 89.8⊲61.5§ 61.4§ 62.0∩69.2∩54.3⊲85.4● MULTIMODAL SOTA2 76.0# 76.0# 44.7† 44.6† 0.72∗0.52∗56.0◇71.4♭65.2# 71.4# 56.7§ 89.9# 57.8§ 66.6∗58.9∗60.8# 52.2∗85.4● SOTA1 76.4◇76.4◇44.8∗44.7∗0.72# 0.52# 60.5∗72.0● 67.0♭ 73.2●60.0♡89.9●66.5∗71.0∎59.2§ 61.8●53.3# 85.4# Graph-MFN 76.9 77.0 45.1 45.0 0.71 0.54 62.6 72.8 69.1 76.6 62.0 89.9 66.3 66.3 60.4 66.9 53.7 85.5 Table 3: Results for sentiment analysis and emotion recognition on the MOSEI dataset (reported results are as of 5/11/2018. please check the CMU Multimodal Data SDK github for current state of the art and new features for CMU-MOSEI and other datasets). SOTA1 and SOTA2 refer to the previous best and second best state-of-the-art models (from Section 2) respectively. Compared to the baselines Graph-MFN achieves superior performance in sentiment analysis and competitive performance in emotion recognition. For all metrics, higher values indicate better performance except for MAE where lower values indicate better performance. by assigning coefficients to highlight cross-modal dynamics. 3) Multiview Gated Memory is a component that stores the output of multimodal fusion. We replace the Delta-memory Attention Network with DFG and refer to the modified model as Graph Memory Fusion Network (Graph-MFN). Figure 4 shows the overall architecture of the Graph-MFN. Similar to MFN, Graph-MFN employs a system of LSTMs for modeling individual modalities. cl, cv, and ca represent the memory of LSTMs for language, vision and acoustic modalities respectively. Dm, m ∈{l,v,a} is a fully connected deep neural network that takes in hm [t−1,t] the LSTM representation across two consecutive timestamps, which allows the network to track changes in memory dimensions across time. The outputs of Dl, Dv and Da are the singleton vertices for the DFG. The DFG models cross-modal interactions and encodes the cross-modal representations in its output vertex Tt for storage in the Multi-view Gated Memory ut. The Multi-view Gated Memory functions using a network Du that transforms Tt into a proposed memory update ˆut. γ1 and γ2 are the Multi-view Gated Memory’s retain and update gates respectively and are learned using networks Dγ1 and Dγ2. Finally, a network Dz transforms Tt into a multimodal representation zt to update the system of LSTMs. 
The output of Graph-MFN in all the experiments is the output of each LSTM, hm T, as well as the contents of the Multi-view Gated Memory at time T (the last recurrence timestep), uT. This output is subsequently connected to a classification or regression layer for the final prediction (for sentiment and emotion recognition).
5 Experiments and Discussion
In our experiments, we seek to evaluate how modalities interact during multimodal fusion by studying the efficacies of DFG through time. Table 3 shows the results on CMU-MOSEI. Accuracy is reported as Ax, where x is the number of sentiment classes, along with the F1 measure. For regression we report MAE and correlation (r). For emotion recognition, due to the natural imbalance across emotions, we use weighted accuracy (Tong et al., 2017) and the F1 measure. Graph-MFN shows superior performance in sentiment analysis and competitive performance in emotion recognition. Therefore, DFG is both an effective and interpretable model for multimodal fusion.
To better understand the internal fusion mechanism between modalities, we visualize the behavior of the learned DFG efficacies in Figure 5 for various cases (deep red denotes high efficacy and deep blue denotes low efficacy).
Figure 5: Visualization of DFG efficacies across time. The efficacies (and thus the DFG structure) change over time as DFG is exposed to new information. DFG is able to choose which n-modal dynamics to rely on. It also learns priors about human communication, since certain efficacies (thus edges in DFG) remain unchanged across time and across data points.
Multimodal Fusion has a Volatile Nature: The first observation is that the structure of the DFG changes from case to case and, within each case, over time. As a result, the model appears to selectively prioritize certain dynamics over others. For example, in case (I), where all modalities are informative, all efficacies seem to be high, implying that the DFG is able to find useful information in unimodal, bimodal and trimodal interactions. However, in cases (II) and (III), where the visual modality is either uninformative or contradictory, the efficacies of v →l,v, v →l,a,v and l,a →l,a,v are reduced, since no meaningful interactions involve the visual modality.
Priors in Fusion: Certain efficacies remain unchanged across cases and across time. These are priors about Human Multimodal Language that DFG learns. For example, the model always seems to prioritize fusion between language and audio in (l →l,a) and (a →l,a).
Subsequently, DFG gives low values to efficacies that rely unilaterally on language or audio alone: the (l →τ) and (a →τ) efficacies seem to be consistently low. On the other hand, the visual modality appears to have a partially isolated behavior. In the presence of informative visual information, the model increases the efficacies of (v →τ) although the values of other visual efficacies also increase. Trace of Multimodal Fusion: We trace the dominant path that every modality undergoes during fusion: 1) language tends to first fuse with audio via (l →l,a) and the language and acoustic modalities together engage in higher level fusions such as (l,a →l,a,v). Intuitively, this is aligned with the close ties between language and audio through word intonations. 2) The visual modality seems to engage in fusion only if it contains meaningful information. In cases (I) and (IV), all the paths involving the visual modality are relatively active while in cases (II) and (III) the paths involving the visual modality have low efficacies. 3) The acoustic modality is mostly present in fusion with the language modality. However, unlike language, the acoustic modality also appears to fuse with the visual modality if both modalities are meaningful, such as in case (I). An interesting observation is that in almost all cases the efficacies of unimodal connections to terminal T is low, implying that T prefers to not rely on just one modality. Also, DFG always prefers to perform fusion between language and audio as in most cases both l →l,a and a →l,a have high efficacies; intuitively in most natural scenarios language and acoustic modalities are highly aligned. Both of these cases show unchanging behaviors which we believe DFG has learned as natural priors of human communicative signal. With these observations, we believe that DFG has successfully learned how to manage its internal structure to model human communication. 6 Conclusion In this paper we presented the largest dataset of multimodal sentiment analysis and emotion recognition called CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI). CMUMOSEI consists of 23,453 annotated sentences from more than 1000 online speakers and 250 different topics. The dataset expands the horizons of Human Multimodal Language studies in NLP. One such study was presented in this paper where we analyzed the structure of multimodal fusion in sentiment analysis and emotion recognition. This was 2244 done using a novel interpretable fusion mechanism called Dynamic Fusion Graph (DFG). In our studies we investigated the behavior of modalities in interacting with each other using built-in efficacies of DFG. Aside analysis of fusion, DFG was trained in the Memory Fusion Network pipeline and showed superior performance in sentiment analysis and competitive performance in emotion recognition. Acknowledgments This material is based upon work partially supported by the National Science Foundation (Award #1833355) and Oculus VR. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of National Science Foundation or Oculus VR, and no official endorsement should be inferred. References Paavo Alku, Tom B¨ackstr¨om, and Erkki Vilkman. 2002. Normalized amplitude quotient for parametrization of the glottal flow. the Journal of the Acoustical Society of America 112(2):701–710. Paavo Alku, Helmer Strik, and Erkki Vilkman. 1997. 
Parabolic spectral parameter—a new method for quantification of the glottal flow. Speech Communication 22(1):67–79. Tadas Baltruˇsaitis, Chaitanya Ahuja, and LouisPhilippe Morency. 2017. Multimodal machine learning: A survey and taxonomy. arXiv preprint arXiv:1705.09406 . Tadas Baltruˇsaitis, Peter Robinson, and Louis-Philippe Morency. 2016. Openface: an open source facial behavior analysis toolkit. In Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on. IEEE, pages 1–10. Sanjay Bilakhia, Stavros Petridis, Anton Nijholt, and Maja Pantic. 2015. The mahnob mimicry database: A database of naturalistic human interactions. Pattern Recognition Letters 66(Supplement C):52 – 61. Pattern Recognition in Human Computer Interaction. https://doi.org/https://doi.org/10.1016/j.patrec.2015.03.005. Leo Breiman. 2001. Random forests. Mach. Learn. 45(1):5–32. https://doi.org/10.1023/A:1010933404324. Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette Chang, Sungbok Lee, and Shrikanth S. Narayanan. 2008. Iemocap: Interactive emotional dyadic motion capture database. Journal of Language Resources and Evaluation 42(4):335–359. https://doi.org/10.1007/s10579-008-9076-6. Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltruˇsaitis, Amir Zadeh, and Louis-Philippe Morency. 2017. Multimodal sentiment analysis with wordlevel fusion and reinforcement learning. In Proceedings of the 19th ACM International Conference on Multimodal Interaction. ACM, New York, NY, USA, ICMI 2017, pages 163–171. https://doi.org/10.1145/3136755.3136801. Glen Coppersmith and Erin Kelly. 2014. Dynamic wordclouds and vennclouds for exploratory data analysis. In Proceedings of the Workshop on Interactive Language Learning, Visualization, and Interfaces. Association for Computational Linguistics, Baltimore, Maryland, USA, pages 22–29. Corinna Cortes and Vladimir Vapnik. 1995. Supportvector networks. Mach. Learn. 20(3):273–297. https://doi.org/10.1023/A:1022627411411. Gilles Degottex, John Kane, Thomas Drugman, Tuomo Raitio, and Stefan Scherer. 2014. Covarep—a collaborative voice analysis repository for speech technologies. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, pages 960–964. A. Dhall, R. Goecke, S. Lucey, and T. Gedeon. 2012. Collecting large, richly annotated facial-expression databases from movies. IEEE MultiMedia 19(3):34– 41. https://doi.org/10.1109/MMUL.2012.26. Abhinav Dhall, O.V. Ramana Murthy, Roland Goecke, Jyoti Joshi, and Tom Gedeon. 2015. Video and image based emotion recognition challenges in the wild: Emotiw 2015. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction. ACM, New York, NY, USA, ICMI ’15, pages 423–426. https://doi.org/10.1145/2818346.2829994. Thomas Drugman and Abeer Alwan. 2011. Joint robust voicing detection and pitch estimation based on residual harmonics. In Interspeech. pages 1973– 1976. Thomas Drugman, Mark Thomas, Jon Gudnason, Patrick Naylor, and Thierry Dutoit. 2012. Detection of glottal closure instants from speech signals: A quantitative review. IEEE Transactions on Audio, Speech, and Language Processing 20(3):994–1006. Paul Ekman, Wallace V Freisen, and Sonia Ancoli. 1980. Facial signs of emotional experience. Journal of personality and social psychology 39(6):1125. A. Graves, A. r. Mohamed, and G. Hinton. 2013. Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. 
pages 6645–6649. https://doi.org/10.1109/ICASSP.2013.6638947. Michael Grimm, Kristian Kroschel, and Shrikanth Narayanan. 2008. The vera am mittag german audiovisual emotional speech database. In ICME. IEEE, pages 865–868. 2245 Devamanyu Hazarika, Soujanya Poria, Amir Zadeh, Erik Cambria, Louis-Philippe Morency, and Roger Zimmerman. 2018. Memn: Multimodal emotional memory network for emotion recognition in dyadic conversational videos. In NAACL. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735– 1780. iMotions. 2017. Facial expression analysis. goo.gl/1rh1JN. Mohit Iyyer, Varun Manjunatha, Jordan L BoydGraber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In ACL (1). pages 1681–1691. Shuiwang Ji, Wei Xu, Ming Yang, and Kai Yu. 2013. 3d convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 35(1):221–231. https://doi.org/10.1109/TPAMI.2012.59. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188 . John Kane and Christer Gobl. 2013. Wavelet maxima dispersion for breathy to tense voice discrimination. IEEE Transactions on Audio, Speech, and Language Processing 21(6):1170–1179. Wootaek Lim, Daeyoung Jang, and Taejin Lee. 2016. Speech emotion recognition using convolutional and recurrent neural networks. In Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2016 Asia-Pacific. IEEE, pages 1–4. Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. 2017. Sphereface: Deep hypersphere embedding for face recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Portland, Oregon, USA, pages 142–150. http://www.aclweb.org/anthology/P111015. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations. pages 55–60. http://www.aclweb.org/anthology/P/P14/P14-5010. Louis-Philippe Morency, Rada Mihalcea, and Payal Doshi. 2011. Towards multimodal sentiment analysis: Harvesting opinions from the web. In Proceedings of the 13th International Conference on Multimodal Interactions. ACM, pages 169–176. Friedrich Max M¨uller. 1866. Lectures on the science of language: Delivered at the Royal Institution of Great Britain in April, May, & June 1861, volume 1. Longmans, Green. Behnaz Nojavanasghari, Deepak Gopinath, Jayanth Koushik, Tadas Baltruˇsaitis, and Louis-Philippe Morency. 2016. Deep multimodal fusion for persuasiveness prediction. In Proceedings of the 18th ACM International Conference on Multimodal Interaction. ACM, New York, NY, USA, ICMI 2016, pages 284– 288. https://doi.org/10.1145/2993148.2993176. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? sentiment classification using machine learning techniques. In Proceedings of EMNLP. pages 79–86. Sunghyun Park, Han Suk Shim, Moitreya Chatterjee, Kenji Sagae, and Louis-Philippe Morency. 2014. 
Computational analysis of persuasiveness in social multimedia: A novel dataset and multimodal prediction approach. In Proceedings of the 16th International Conference on Multimodal Interaction. ACM, New York, NY, USA, ICMI ’14, pages 50–57. https://doi.org/10.1145/2663204.2663260. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. volume 14, pages 1532– 1543. Veronica Perez-Rosas, Rada Mihalcea, and LouisPhilippe Morency. 2013. Utterance-Level Multimodal Sentiment Analysis. In Association for Computational Linguistics (ACL). Sofia, Bulgaria. Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Mazumder, Amir Zadeh, and LouisPhilippe Morency. 2017a. Context dependent sentiment analysis in user generated videos. In Association for Computational Linguistics. Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Mazumder, Amir Zadeh, and LouisPhilippe Morency. 2017b. Context-dependent sentiment analysis in user-generated videos. In Association for Computational Linguistics. Soujanya Poria, Iti Chaturvedi, Erik Cambria, and Amir Hussain. 2016. Convolutional mkl based multimodal emotion recognition and sentiment analysis. In Data Mining (ICDM), 2016 IEEE 16th International Conference on. IEEE, pages 439–448. Shyam Sundar Rajagopalan, Louis-Philippe Morency, Tadas Baltruˇsaitis, and Roland Goecke. 2016. Extending long short-term memory for multi-view structured learning. In European Conference on Computer Vision. 2246 Fabien Ringeval, Andreas Sonderegger, J¨urgen S. Sauer, and Denis Lalanne. 2013. Introducing the recola multimodal corpus of remote collaborative and affective interactions. In FG. IEEE Computer Society, pages 1–8. Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In CVPR. IEEE Computer Society, pages 815–823. M. Schuster and K.K. Paliwal. 1997. Bidirectional recurrent neural networks. Trans. Sig. Proc. 45(11):2673–2681. https://doi.org/10.1109/78.650093. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, Christopher Potts, et al. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP). Citeseer, volume 1631, page 1642. Rupesh K Srivastava, Klaus Greff, and Juergen Schmidhuber. 2015. Training very deep networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, Curran Associates, Inc., pages 2377–2385. http://papers.nips.cc/paper/5850training-very-deep-networks.pdf. Yaniv Taigman, Ming Yang, Marc’Aurelio Ranzato, and Lior Wolf. 2014. Deepface: Closing the gap to human-level performance in face verification. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, Washington, DC, USA, CVPR ’14, pages 1701–1708. https://doi.org/10.1109/CVPR.2014.220. Edmund Tong, Amir Zadeh, Cara Jones, and LouisPhilippe Morency. 2017. Combating human trafficking with multimodal deep models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 1547–1556. George Trigeorgis, Fabien Ringeval, Raymond Brueckner, Erik Marchi, Mihalis A Nicolaou, Bj¨orn Schuller, and Stefanos Zafeiriou. 2016. Adieu features? 
end-to-end speech emotion recognition using a deep convolutional recurrent network. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, pages 5200–5204. Haohan Wang, Aaksha Meghawat, Louis-Philippe Morency, and Eric P Xing. 2016. Select-additive learning: Improving cross-individual generalization in multimodal sentiment analysis. arXiv preprint arXiv:1609.05244 . Martin W¨ollmer, Felix Weninger, Tobias Knaup, Bj¨orn Schuller, Congkai Sun, Kenji Sagae, and LouisPhilippe Morency. 2013. Youtube movie reviews: Sentiment analysis in an audio-visual context. IEEE Intelligent Systems 28(3):46–53. Jiahong Yuan and Mark Liberman. 2008. Speaker identification on the scotus corpus. Journal of the Acoustical Society of America 123(5):3878. Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2017. Tensor fusion network for multimodal sentiment analysis. In Empirical Methods in Natural Language Processing, EMNLP. Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018a. Memory fusion network for multi-view sequential learning. arXiv preprint arXiv:1802.00927 . Amir Zadeh, Paul Pu Liang, Soujanya Poria, Prateek Vij, Erik Cambria, and Louis-Philippe Morency. 2018b. Multi-attention recurrent network for human communication comprehension. arXiv preprint arXiv:1802.00923 . Amir Zadeh, Rowan Zellers, Eli Pincus, and LouisPhilippe Morency. 2016a. Mosi: Multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos. arXiv preprint arXiv:1606.06259 . Amir Zadeh, Rowan Zellers, Eli Pincus, and LouisPhilippe Morency. 2016b. Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages. IEEE Intelligent Systems 31(6):82–88. Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. 2016. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters 23(10):1499–1503. Chunting Zhou, Chonglin Sun, Zhiyuan Liu, and Francis C. M. Lau. 2015. A c-lstm neural network for text classification. CoRR abs/1511.08630. Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutn´ık, and J¨urgen Schmidhuber. 2016. Recurrent Highway Networks. arXiv preprint arXiv:1607.03474 .
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2247–2256 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2247 Efficient Low-rank Multimodal Fusion with Modality-Specific Factors Zhun Liu∗, Ying Shen∗, Varun Bharadhwaj Lakshminarasimhan, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency School of Computer Science Carnegie Mellon University {zhunl,yshen2,vbl,pliang,abagherz,morency}@cs.cmu.edu Abstract Multimodal research is an emerging field of artificial intelligence, and one of the main research problems in this field is multimodal fusion. The fusion of multimodal data is the process of integrating multiple unimodal representations into one compact multimodal representation. Previous research in this field has exploited the expressiveness of tensors for multimodal representation. However, these methods often suffer from exponential increase in dimensions and in computational complexity introduced by transformation of input into tensor. In this paper, we propose the Lowrank Multimodal Fusion method, which performs multimodal fusion using low-rank tensors to improve efficiency. We evaluate our model on three different tasks: multimodal sentiment analysis, speaker trait analysis, and emotion recognition. Our model achieves competitive results on all these tasks while drastically reducing computational complexity. Additional experiments also show that our model can perform robustly for a wide range of low-rank settings, and is indeed much more efficient in both training and inference compared to other methods that utilize tensor representations. 1 Introduction Multimodal research has shown great progress in a variety of tasks as an emerging research field of artificial intelligence. Tasks such as speech recognition (Yuhas et al., 1989), emotion recognition, (De Silva et al., 1997), (Chen et al., 1998), (W¨ollmer et al., 2013), sentiment analysis, (Morency et al., 2011) ∗equal contributions as well as speaker trait analysis and media description (Park et al., 2014a) have seen a great boost in performance with developments in multimodal research. However, a core research challenge yet to be solved in this domain is multimodal fusion. The goal of fusion is to combine multiple modalities to leverage the complementarity of heterogeneous data and provide more robust predictions. In this regard, an important challenge has been on scaling up fusion to multiple modalities while maintaining reasonable model complexity. Some of the recent attempts (Fukui et al., 2016), (Zadeh et al., 2017) at multimodal fusion investigate the use of tensors for multimodal representation and show significant improvement in performance. Unfortunately, they are often constrained by the exponential increase of cost in computation and memory introduced by using tensor representations. This heavily restricts the applicability of these models, especially when we have more than two views of modalities in the dataset. In this paper, we propose the Low-rank Multimodal Fusion, a method leveraging low-rank weight tensors to make multimodal fusion efficient without compromising on performance. The overall architecture is shown in Figure 1. We evaluated our approach with experiments on three multimodal tasks using public datasets and compare its performance with state-of-the-art models. 
We also study how different low-rank settings impact the performance of our model and show that our model performs robustly within a wide range of rank settings. Finally, we perform an analysis of the impact of our method on the number of parameters and run-time with comparison to other fusion methods. Through theoretical analysis, we show that our model can scale linearly in the number of modalities, and our experiments also show a corresponding speedup in training when compared with 2248 !" # Low-rank Multimodal Fusion ℎ %& %" # # %' !' !& (" (& (' Task output ⋯ *" (,) *" (.) *" (/) ∘ 1 # # %" + + + ⋯ *& (,) *& (.) *& (/) 1 %& + + + ∘ # ⋯ *' (,) *' (.) *' (/) 1 %' + + + Low-rank factors Low-rank factors Low-rank factors Visual Audio Language Multimodal Representation # Prediction Figure 1: Overview of our Low-rank Multimodal Fusion model structure: LMF first obtains the unimodal representation za,zv,zl by passing the unimodal inputs xa,xv,xl into three sub-embedding networks fv,fa,fl respectively. LMF produces the multimodal output representation by performing low-rank multimodal fusion with modality-specific factors. The multimodal representation can be then used for generating prediction tasks. other tensor-based models. The main contributions of our paper are as follows: • We propose the Low-rank Multimodal Fusion method for multimodal fusion that can scale linearly in the number of modalities. • We show that our model compares to state-ofthe-art models in performance on three multimodal tasks evaluated on public datasets. • We show that our model is computationally efficient and has fewer parameters in comparison to previous tensor-based methods. 2 Related Work Multimodal fusion enables us to leverage complementary information present in multimodal data, thus discovering the dependency of information on multiple modalities. Previous studies have shown that more effective fusion methods translate to better performance in models, and there’s been a wide range of fusion methods. Early fusion is a technique that uses feature concatenation as the method of fusion of different views. Several works that use this method of fusion (Poria et al., 2016) , (Wang et al., 2016) use input-level feature concatenation and use the concatenated features as input, sometimes even removing the temporal dependency present in the modalities (Morency et al., 2011). The drawback of this class of method is that although it achieves fusion at an early stage, intra-modal interactions are potentially suppressed, thus losing out on the context and temporal dependencies within each modality. On the other hand, late fusion builds separate models for each modality and then integrates the outputs together using a method such as majority voting or weighted averaging (Wortwein and Scherer, 2017), (Nojavanasghari et al., 2016). Since separate models are built for each modality, inter-modal interactions are usually not modeled effectively. Given these shortcomings, more recent work focuses on intermediate approaches that model both intra- and inter-modal dynamics. Fukui et al. (2016) proposes to use Compact Bilinear Pooling over the outer product of visual and linguistic representations to exploit the interactions between vision and language for visual question answering. Similar to the idea of exploiting interactions, Zadeh et al. (2017) proposes Tensor Fusion Network, which computes the outer product between unimodal representations from three different modalities to compute a tensor representation. 
These methods exploit tensor representations to model 2249 inter-modality interactions and have shown a great success. However, such methods suffer from exponentially increasing computational complexity, as the outer product over multiple modalities results in extremely high dimensional tensor representations. For unimodal data, the method of low-rank tensor approximation has been used in a variety of applications to implement more efficient tensor operations. Razenshteyn et al. (2016) proposes a modified weighted version of low-rank approximation, and Koch and Lubich (2010) applies the method towards temporally dependent data to obtain lowrank approximations. As for applications, Lei et al. (2014) proposes a low-rank tensor technique for dependency parsing while Wang and Ahuja (2008) uses the method of low-rank approximation applied directly on multidimensional image data (Datumas-is representation) to enhance computer vision applications. Hu et al. (2017) proposes a low-rank tensor-based fusion framework to improve the face recognition performance using the fusion of facial attribute information. However, none of these previous work aims to apply low-rank tensor techniques for multimodal fusion. Our Low-rank Multimodal Fusion method provides a much more efficient method to compute tensor-based multimodal representations with much fewer parameters and computational complexity. The efficiency and performance of our approach are evaluated on different downstream tasks, namely sentiment analysis, speaker-trait recognition and emotion recognition. 3 Low-rank Multimodal Fusion In this section, we start by formulating the problem of multimodal fusion and introducing fusion methods based on tensor representations. Tensors are powerful in their expressiveness but do not scale well to a large number of modalities. Our proposed model decomposes the weights into low-rank factors, which reduces the number of parameters in the model. This decomposition can be performed efficiently by exploiting the parallel decomposition of low-rank weight tensor and input tensor to compute tensor-based fusion. Our method is able to scale linearly with the number of modalities. 3.1 Multimodal Fusion using Tensor Representations In this paper, we formulate multimodal fusion as a multilinear function f ∶V1 × V2 × ... × VM → H where V1,V2,...,VM are the vector spaces of input modalities and H is the output vector space. Given a set of vector representations, {zm}M m=1 which are encoding unimodal information of the M different modalities, the goal of multimodal fusion is to integrate the unimodal representations into one compact multimodal representation for downstream tasks. Tensor representation is one successful approach for multimodal fusion. It first requires a transformation of the input representations into a highdimensional tensor and then mapping it back to a lower-dimensional output vector space. Previous works have shown that this method is more effective than simple concatenation or pooling in terms of capturing multimodal interactions (Zadeh et al., 2017), (Fukui et al., 2016). Tensors are usually created by taking the outer product over the input modalities. In addition, in order to be able to model the interactions between any subset of modalities using one tensor, Zadeh et al. (2017) proposed a simple extension to append 1s to the unimodal representations before taking the outer product. 
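The effect of appending 1s can be seen in a tiny two-modality example (ours), given before the formal definition in Equation 1 below: a single outer product of the augmented vectors contains the bimodal products, both unimodal vectors, and a constant, so interactions of every subset of modalities live in one tensor.

```python
import numpy as np

za = np.array([2.0, 3.0, 1.0])   # acoustic vector with a 1 appended
zv = np.array([5.0, 7.0, 1.0])   # visual vector with a 1 appended

Z = np.outer(za, zv)             # 3 x 3 input tensor for the bimodal case
# The last row equals zv, the last column equals za, and the top-left 2 x 2
# block holds the bimodal products, so one tensor covers unimodal and bimodal
# interactions (plus a constant 1 in the corner).
print(Z)
```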
The input tensor Z formed by the unimodal representation is computed by: Z = M ⊗ m=1 zm,zm ∈Rdm (1) where ⊗ M m=1 denotes the tensor outer product over a set of vectors indexed by m, and zm is the input representation with appended 1s. The input tensor Z ∈Rd1×d2×...dM is then passed through a linear layer g(⋅) to to produce a vector representation: h = g(Z;W,b) = W ⋅Z + b, h,b ∈Rdy (2) where W is the weight of this layer and b is the bias. With Z being an order-M tensor (where M is the number of input modalities), the weight W will naturally be a tensor of order-(M + 1) in Rd1×d2×...×dM×dh. The extra (M +1)-th dimension corresponds to the size of the output representation dh. In the tensor dot product W ⋅Z, the weight tensor W can be then viewed as dh order-M tensors. In other words, the weight W can be partitioned into ̃ Wk ∈Rd1×...×dM , k = 1,...,dh. Each ̃ Wk contributes to one dimension in the output vector h, i.e. hk = ̃ Wk ⋅Z. This interpretation of tensor fusion is illustrated in Figure 2 for the bi-modal case. One of the main drawbacks of tensor fusion is that we have to explicitly create the highdimensional tensor Z. The dimensionality of Z 2250 M R ? ⨂ E E !> !# E ℎ = Figure 2: Tensor fusion via tensor outer product will increase exponentially with the number of modalities as ∏M m=1 dm. The number of parameters to learn in the weight tensor W will also increase exponentially. This not only introduces a lot of computation but also exposes the model to risks of overfitting. 3.2 Low-rank Multimodal Fusion with Modality-Specific Factors As a solution to the problems of tensor-based fusion, we propose Low-rank Multimodal Fusion (LMF). LMF parameterizes g(⋅) from Equation 2 with a set of modality-specific low-rank factors that can be used to recover a low-rank weight tensor, in contrast to the full tensor W. Moreover, we show that by decomposing the weight into a set of low-rank factors, we can exploit the fact that the tensor Z actually decomposes into {zm}M m=1, which allows us to directly compute the output h without explicitly tensorizing the unimodal representations. LMF reduces the number of parameters as well as the computation complexity involved in tensorization from being exponential in M to linear. 3.2.1 Low-rank Weight Decomposition The idea of LMF is to decompose the weight tensor W into M sets of modality-specific factors. However, since W itself is an order-(M + 1) tensor, commonly used methods for decomposition will result in M + 1 parts. Hence, we still adopt the view introduced in Section 3.1 that W is formed by dh order-M tensors ̃ Wk ∈Rd1×...×dM ,k = 1,...,dh stacked together. We can then decompose each ̃ Wk separately. For an order-M tensor ̃ Wk ∈Rd1×...×dM, there always exists an exact decomposition into vectors in the form of: ̃ Wk = R ∑ i=1 M ⊗ m=1 w(i) m,k, w(i) m,k ∈Rd m (3) The minimal R that makes the decomposition valid is called the rank of the tensor. The vector sets {{w(i) m,k}M m=1}R i=1 are called the rank R decomposition factors of the original tensor. In LMF, we start with a fixed rank r, and parameterize the model with r decomposition factors {{w(i) m,k}M m=1}r i=1,k = 1,...,dh that can be used to reconstruct a low-rank version of these ̃ Wk. We can regroup and concatenate these vectors into M modality-specific low-rank factors. Let w(i) m = [w(i) m,1,w(i) m,2,...,w(i) m,dh], then for modality m, {w(i) m }r i=1 is its corresponding low-rank factors. 
And we can recover a low-rank weight tensor by: W = r ∑ i=1 M ⊗ m=1 w(i) m (4) Hence equation 2 can be computed by h = ( r ∑ i=1 M ⊗ m=1 w(i) m ) ⋅Z (5) Note that for all m, w(i) m ∈Rdm×dh shares the same size for the second dimension. We define their outer product to be over only the dimensions that are not shared: w(i) m ⊗w(i) n ∈Rdm×dn×dh. A bimodal example of this procedure is illustrated in Figure 3. Nevertheless, by introducing the low-rank factors, we now have to compute the reconstruction of W = ∑r i=1⊗ M m=1 w(i) m for the forward computation. Yet this introduces even more computation. 3.2.2 Efficient Low-rank Fusion Exploiting Parallel Decomposition In this section, we will introduce an efficient procedure for computing h, exploiting the fact that tensor Z naturally decomposes into the original input {zm}M m=1, which is parallel to the modalityspecific low-rank factors. In fact, that is the main reason why we want to decompose the weight tensor into M modality-specific factors. Using the fact that Z = ⊗ M m=1 zm, we can simplify equation 5: h = ( r ∑ i=1 M ⊗ m=1 w(i) m ) ⋅Z = r ∑ i=1 ( M ⊗ m=1 w(i) m ⋅Z) = r ∑ i=1 ( M ⊗ m=1 w(i) m ⋅ M ⊗ m=1 zm) = M Λ m=1 [ r ∑ i=1 w(i) m ⋅zm] (6) 2251 5> (S) ⨂ + + ⋯ 5# (S) 5> (T) ⨂ 5# (T) V ⨂ E E !> !# g E = ℎ Figure 3: Decomposing weight tensor into low-rank factors (See Section 3.2.1 for details.) where Λ M m=1 denotes the element-wise product over a sequence of tensors: Λ 3 t=1 xt = x1 ○x2 ○x3. An illustration of the trimodal case of equation 6 is shown in Figure 1. We can also derive equation 6 for a bimodal case to clarify what it does: h = ( r ∑ i=1 w(i) a ⊗w(i) v ) ⋅Z = ( r ∑ i=1 w(i) a ⋅za) ○( r ∑ i=1 w(i) v ⋅zv) (7) An important aspect of this simplification is that it exploits the parallel decomposition of both Z and W, so that we can compute h without actually creating the tensor Z from the input representations zm. In addition, different modalities are decoupled in the simplified computation of h, which allows for easy generalization of our approach to an arbitrary number of modalities. Adding a new modality can be simply done by adding another set of modality-specific factors and extend Equation 7. Last but not least, Equation 6 consists of fully differentiable operations, which enables the parameters {w(i) m }r i=1 m = 1,...,M to be learned end-to-end via back-propagation. Using Equation 6, we can compute h directly from input unimodal representations and their modal-specific decomposition factors, avoiding the weight-lifting of computing the large input tensor Z and W, as well as the r linear transformation. Instead, the input tensor and subsequent linear projection are computed implicitly together in Equation 6, and this is far more efficient than the original method described in Section 3.1. Indeed, LMF reduces the computation complexity of tensorization and fusion from O(dy ∏M m=1 dm) to O(dy × r × ∑M m=1 dm). In practice, we use a slightly different form of Equation 6, where we concatenate the low-rank factors into M order-3 tensors and swap the order in which we do the element-wise product and summation: h = r ∑ i=1 [ M Λ m=1 [w(1) m ,w(2) m ,...,w(r) m ] ⋅ˆzm] i,∶ (8) and now the summation is done along the first dimension of the bracketed matrix. [⋅]i,∶indicates the i-th slice of a matrix. In this way, we can parameterize the model with M order-3 tensors, instead of parameterizing with sets of vectors. 
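To sanity-check this derivation, the NumPy sketch below (ours; the dimensions and random factors are arbitrary) builds the full weight tensor W from low-rank factors, contracts it against the explicitly materialized input tensor Z, and confirms that the factorized computation of Equation 8, which takes element-wise products of the per-modality projections and then sums over the rank dimension, produces the same output without ever forming Z or W.

```python
import numpy as np

rng = np.random.default_rng(0)
d = {"a": 4, "v": 5, "l": 6}   # per-modality dims (the appended 1 is assumed included)
dh, r = 3, 2                   # output dimension and rank

# Unimodal inputs z_m and low-rank factors w_m^(i), stacked over rank: (r, d_m, dh).
z = {m: rng.normal(size=dm) for m, dm in d.items()}
w = {m: rng.normal(size=(r, dm, dh)) for m, dm in d.items()}

# --- explicit tensor fusion: materialize Z and the full weight tensor W, then contract ---
Z = np.einsum("a,v,l->avl", z["a"], z["v"], z["l"])           # d_a x d_v x d_l
W = np.einsum("rax,rvx,rlx->avlx", w["a"], w["v"], w["l"])    # d_a x d_v x d_l x dh
h_full = np.einsum("avl,avlx->x", Z, W)

# --- low-rank fusion: project each modality, multiply element-wise, sum over rank ---
proj = [np.einsum("rdx,d->rx", w[m], z[m]) for m in ("a", "v", "l")]   # each (r, dh)
h_lmf = (proj[0] * proj[1] * proj[2]).sum(axis=0)

assert np.allclose(h_full, h_lmf)   # identical output, without building Z or W
```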
4 Experimental Methodology We compare LMF with previous state-of-the-art baselines, and we use the Tensor Fusion Networks (TFN) (Zadeh et al., 2017) as a baseline for tensorbased approaches, which has the most similar structure with us except that it explicitly forms the large multi-dimensional tensor for fusion across different modalities. We design our experiments to better understand the characteristics of LMF. Our goal is to answer the following four research questions: (1) Impact of Multimodal Low-rank Fusion: Direct comparison between our proposed LMF model and the previous TFN model. (2) Comparison with the State-of-the-art: We evaluate the performance of LMF and state-of-theart baselines on three different tasks and datasets. (3) Complexity Analysis: We study the modal complexity of LMF and compare it with the TFN model. (4) Rank Settings: We explore performance of LMF with different rank settings. The results of these experiments are presented in Section 5. 4.1 Datasets We perform our experiments on the following multimodal datasets, CMU-MOSI (Zadeh et al., 2016a), 2252 Dataset CMU-MOSI IEMOCAP POM Level Segment Segment Video # Train 1284 6373 600 # Valid 229 1775 100 # Test 686 1807 203 Table 1: The speaker independent data splits for training, validation, and test sets. POM (Park et al., 2014b), and IEMOCAP (Busso et al., 2008) for sentiment analysis, speaker traits recognition, and emotion recognition task, where the goal is to identify speakers emotions based on the speakers’ verbal and nonverbal behaviors. CMU-MOSI The CMU-MOSI dataset is a collection of 93 opinion videos from YouTube movie reviews. Each video consists of multiple opinion segments and each segment is annotated with the sentiment in the range [-3,3], where -3 indicates highly negative and 3 indicates highly positive. POM The POM dataset is composed of 903 movie review videos. Each video is annotated with the following speaker traits: confident, passionate, voice pleasant, dominant, credible, vivid, expertise, entertaining, reserved, trusting, relaxed, outgoing, thorough, nervous, persuasive and humorous. IEMOCAP The IEMOCAP dataset is a collection of 151 videos of recorded dialogues, with 2 speakers per session for a total of 302 videos across the dataset. Each segment is annotated for the presence of 9 emotions (angry, excited, fear, sad, surprised, frustrated, happy, disappointed and neutral). To evaluate model generalization, all datasets are split into training, validation, and test sets such that the splits are speaker independent, i.e., no identical speakers from the training set are present in the test sets. Table 1 illustrates the data splits for all datasets in detail. 4.2 Features Each dataset consists of three modalities, namely language, visual, and acoustic modalities. To reach the same time alignment across modalities, we perform word alignment using P2FA (Yuan and Liberman, 2008) which allows us to align the three modalities at the word granularity. We calculate the visual and acoustic features by taking the average of their feature values over the word time interval (Chen et al., 2017). Language We use pre-trained 300-dimensional Glove word embeddings (Pennington et al., 2014) to encode a sequence of transcribed words into a sequence of word vectors. Visual The library Facet1 is used to extract a set of visual features for each frame (sampled at 30Hz) including 20 facial action units, 68 facial landmarks, head pose, gaze tracking and HOG features (Zhu et al., 2006). 
Acoustic We use COVAREP acoustic analysis framework (Degottex et al., 2014) to extract a set of low-level acoustic features, including 12 Mel frequency cepstral coefficients (MFCCs), pitch, voiced/unvoiced segmentation, glottal source, peak slope, and maxima dispersion quotient features. 4.3 Model Architecture In order to compare our fusion method with previous work, we adopt a simple and straightforward model architecture 2 for extracting unimodal representations. Since we have three modalities for each dataset, we simply designed three unimodal sub-embedding networks, denoted as fa,fv,fl, to extract unimodal representations za,zv,zl from unimodal input features xa,xv,xl. For acoustic and visual modality, the sub-embedding network is a simple 2-layer feed-forward neural network, and for language modality, we used an LSTM (Hochreiter and Schmidhuber, 1997) to extract representations. The model architecture is illustrated in Figure 1. 4.4 Baseline Models We compare the performance of LMF to the following baselines and state-of-the-art models in multimodal sentiment analysis, speaker trait recognition, and emotion recognition. Support Vector Machines Support Vector Machines (SVM) (Cortes and Vapnik, 1995) is a widely used non-neural classifier. This baseline is trained on the concatenated multimodal features for classification or regression task (P´erez-Rosas et al., 2013), (Park et al., 2014a), (Zadeh et al., 2016b). Deep Fusion The Deep Fusion model (DF) (Nojavanasghari et al., 2016) trains one deep neural model for each modality and then combine the output of each modality network with a joint neural network. Tensor Fusion Network The Tensor Fusion Network (TFN) (Zadeh et al., 2017) explicitly models view-specific and cross-view dynamics by creating a multi-dimensional tensor that captures uni1goo.gl/1rh1JN 2The source code of our model is available on Github at https://github.com/Justin1904/Low-rank-Multimodal-Fusion 2253 modal, bimodal and trimodal interactions across three modalities. Memory Fusion Network The Memory Fusion Network (MFN) (Zadeh et al., 2018a) accounts for view-specific and cross-view interactions and continuously models them through time with a special attention mechanism and summarized through time with a Multi-view Gated Memory. Bidirectional Contextual LSTM The Bidirectional Contextual LSTM (BC-LSTM) (Zadeh et al., 2017), (Fukui et al., 2016) performs contextdependent fusion of multimodal data. Multi-View LSTM The Multi-View LSTM (MVLSTM) (Rajagopalan et al., 2016) aims to capture both modality-specific and cross-modality interactions from multiple modalities by partitioning the memory cell and the gates corresponding to multiple modalities. Multi-attention Recurrent Network The Multiattention Recurrent Network (MARN) (Zadeh et al., 2018b) explicitly models interactions between modalities through time using a neural component called the Multi-attention Block (MAB) and storing them in the hybrid memory called the Long-short Term Hybrid Memory (LSTHM). 4.5 Evaluation Metrics Multiple evaluation tasks are performed during our evaluation: multi-class classification and regression. The multi-class classification task is applied to all three multimodal datasets, and the regression task is applied to the CMU-MOSI and the POM dataset. For binary classification and multiclass classification, we report F1 score and accuracy Acc−k where k denotes the number of classes. Specifically, Acc−2 stands for the binary classification. 
For regression, we report Mean Absolute Error (MAE) and Pearson correlation (Corr). Higher values denote better performance for all metrics except for MAE. 5 Results and Discussion In this section, we present and discuss the results from the experiments designed to study the research questions introduced in section 4. 5.1 Impact of Low-rank Multimodal Fusion In this experiment, we compare our model directly with the TFN model since it has the most similar structure to our model, except that TFN explicitly forms the multimodal tensor fusion. The comparison reported in the last two rows of Table 2 demonstrates that our model significantly outperforms TFN across all datasets and metrics. This competitive performance of LMF compared to TFN emphasizes the advantage of Low-rank Multimodal Fusion. 5.2 Comparison with the State-of-the-art We compare our model with the baselines and stateof-the-art models for sentiment analysis, speaker traits recognition and emotion recognition. Results are shown in Table 2. LMF is able to achieve competitive and consistent results across all datasets. On the multimodal sentiment regression task, LMF outperforms the previous state-of-the-art model on MAE and Corr. Note the multiclass accuracy is calculated by mapping the range of continuous sentiment values into a set of intervals that are used as discrete classes. On the multimodal speaker traits Recognition task, we report the average evaluation score over 16 speaker traits and shows that our model achieves the state-of-the-art performance over all three evaluation metrics on the POM dataset. On the multimodal emotion recognition task, our model achieves better results compared to the stateof-the-art models across all emotions on the F1 score. F1-emotion in the evaluation metrics indicates the F1 score for a certain emotion class. 5.3 Complexity Analysis Theoretically, the model complexity of our fusion method is O(dy × r × ∑M m=1 dm) compared to O(dy ∏M m=1 dm) of TFN from Section 3.1. In practice, we calculate the total number of parameters used in each model, where we choose M = 3, d1 = 32, d2 = 32, d3 = 64, r = 4, dy = 1. Under this hyper-parameter setting, our model contains about 1.1e6 parameters while TFN contains about 12.5e6 parameters, which is nearly 11 times more. Note that, the number of parameters above counts not only the parameters in the multimodal fusion stage but also the parameters in the subnetworks. Furthermore, we evaluate the computational complexity of LMF by measuring the training and testing speeds between LMF and TFN. Table 3 illustrates the impact of Low-rank Multimodal Fusion on the training and testing speeds compared with TFN model. Here we set rank to be 4 since it can generally achieve fairly competent performance. 
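As a rough illustration of the parameter counts above (ours; this counts only the fusion-stage weights, ignores bias terms and the appended 1s, and leaves out the sub-embedding networks that dominate the totals quoted in the text), the arithmetic below contrasts the O(dy × ∏ dm) weight tensor with the O(dy × r × ∑ dm) low-rank factors under the stated setting.

```python
d = [32, 32, 64]   # per-modality dimensions, M = 3
r, dy = 4, 1       # rank and output size

full_tensor_fusion = dy
for dm in d:
    full_tensor_fusion *= dm          # O(dy * prod(d_m))   -> 65,536 fusion weights

lmf = dy * r * sum(d)                 # O(dy * r * sum(d_m)) ->    512 fusion weights

print(full_tensor_fusion, lmf)
```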
2254 Dataset CMU-MOSI POM IEMOCAP Metric MAE Corr Acc-2 F1 Acc-7 MAE Corr Acc F1-Happy F1-Sad F1-Angry F1-Neutral SVM 1.864 0.057 50.2 50.1 17.5 0.887 0.104 33.9 81.5 78.8 82.4 64.9 DF 1.143 0.518 72.3 72.1 26.8 0.869 0.144 34.1 81.0 81.2 65.4 44.0 BC-LSTM 1.079 0.581 73.9 73.9 28.7 0.840 0.278 34.8 81.7 81.7 84.2 64.1 MV-LSTM 1.019 0.601 73.9 74.0 33.2 0.891 0.270 34.6 81.3 74.0 84.3 66.7 MARN 0.968 0.625 77.1 77.0 34.7 39.4 83.6 81.2 84.2 65.9 MFN 0.965 0.632 77.4 77.3 34.1 0.805 0.349 41.7 84.0 82.1 83.7 69.2 TFN 0.970 0.633 73.9 73.4 32.1 0.886 0.093 31.6 83.6 82.8 84.2 65.4 LMF 0.912 0.668 76.4 75.7 32.8 0.796 0.396 42.8 85.8 85.9 89.0 71.7 Table 2: Results for sentiment analysis on CMU-MOSI, emotion recognition on IEMOCAP and personality trait recognition on POM. Best results are highlighted in bold. Model Training Speed (IPS) Testing Speed (IPS) TFN 340.74 1177.17 LMF 1134.82 2249.90 Table 3: Comparison of the training and testing speeds between TFN and LMF. The second and the third columns indicate the number of data point inferences per second (IPS) during training and testing time respectively. Both models are implemented in the same framework with equivalent running environment. Based on these results, performing a low-rank multimodal fusion with modality-specific low-rank factors significantly reduces the amount of time needed for training and testing the model. On an NVIDIA Quadro K4200 GPU, LMF trains with an average frequency of 1134.82 IPS (data point inferences per second) while the TFN model trains at an average of 340.74 IPS. 5.4 Rank Settings To evaluate the impact of different rank settings for our LMF model, we measure the change in performance on the CMU-MOSI dataset while varying Figure 4: The Impact of different rank settings on Model Performance: As the rank increases, the results become unstable and low rank is enough in terms of the mean absolute error. the number of rank. The results are presented in Figure 4. We observed that as the rank increases, the training results become more and more unstable and that using a very low rank is enough to achieve fairly competent performance. 6 Conclusion In this paper, we introduce a Low-rank Multimodal Fusion method that performs multimodal fusion with modality-specific low-rank factors. LMF scales linearly in the number of modalities. LMF achieves competitive results across different multimodal tasks. Furthermore, LMF demonstrates a significant decrease in computational complexity from exponential to linear time. In practice, LMF effectively improves the training and testing efficiency compared to TFN which performs multimodal fusion with tensor representations. Future work on similar topics could explore the applications of using low-rank tensors for attention models over tensor representations, as they can be even more memory and computationally intensive. Acknowledgements This material is based upon work partially supported by the National Science Foundation (Award # 1833355) and Oculus VR. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of National Science Foundation or Oculus VR, and no official endorsement should be inferred. References Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette Chang, Sungbok Lee, and Shrikanth S. Narayanan. 2008. Iemocap: Interactive emotional dyadic motion capture database. Journal of Lan2255 guage Resources and Evaluation 42(4):335–359. 
https://doi.org/10.1007/s10579-008-9076-6. Lawrence S Chen, Thomas S Huang, Tsutomu Miyasato, and Ryohei Nakatsu. 1998. Multimodal human emotion/expression recognition. In Automatic Face and Gesture Recognition, 1998. Proceedings. Third IEEE International Conference on. IEEE, pages 366–371. Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltruˇsaitis, Amir Zadeh, and Louis-Philippe Morency. 2017. Multimodal sentiment analysis with wordlevel fusion and reinforcement learning. In Proceedings of the 19th ACM International Conference on Multimodal Interaction. ACM, New York, NY, USA, ICMI 2017, pages 163–171. https://doi.org/10.1145/3136755.3136801. Corinna Cortes and Vladimir Vapnik. 1995. Supportvector networks. Machine learning 20(3):273–297. Liyanage C De Silva, Tsutomu Miyasato, and Ryohei Nakatsu. 1997. Facial emotion recognition using multi-modal information. In Information, Communications and Signal Processing, 1997. ICICS., Proceedings of 1997 International Conference on. IEEE, volume 1, pages 397–401. Gilles Degottex, John Kane, Thomas Drugman, Tuomo Raitio, and Stefan Scherer. 2014. Covarepa collaborative voice analysis repository for speech technologies. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, pages 960–964. Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847 . Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput. 9(8):1735– 1780. https://doi.org/10.1162/neco.1997.9.8.1735. Guosheng Hu, Yang Hua, Yang Yuan, Zhihong Zhang, Zheng Lu, Sankha S Mukherjee, Timothy M Hospedales, Neil M Robertson, and Yongxin Yang. 2017. Attribute-enhanced face recognition with neural tensor fusion networks. Othmar Koch and Christian Lubich. 2010. Dynamical tensor approximation. SIAM Journal on Matrix Analysis and Applications 31(5):2360–2375. Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-rank tensors for scoring dependency structures. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 1381–1391. Louis-Philippe Morency, Rada Mihalcea, and Payal Doshi. 2011. Towards multimodal sentiment analysis: Harvesting opinions from the web. In Proceedings of the 13th International Conference on Multimodal Interactions. ACM, pages 169–176. Behnaz Nojavanasghari, Deepak Gopinath, Jayanth Koushik, Tadas Baltruˇsaitis, and Louis-Philippe Morency. 2016. Deep multimodal fusion for persuasiveness prediction. In Proceedings of the 18th ACM International Conference on Multimodal Interaction. ACM, pages 284–288. Sunghyun Park, Han Suk Shim, Moitreya Chatterjee, Kenji Sagae, and Louis-Philippe Morency. 2014a. Computational analysis of persuasiveness in social multimedia: A novel dataset and multimodal prediction approach. In Proceedings of the 16th International Conference on Multimodal Interaction. ACM, pages 50–57. Sunghyun Park, Han Suk Shim, Moitreya Chatterjee, Kenji Sagae, and Louis-Philippe Morency. 2014b. Computational analysis of persuasiveness in social multimedia: A novel dataset and multimodal prediction approach. In Proceedings of the 16th International Conference on Multimodal Interaction. ACM, New York, NY, USA, ICMI ’14, pages 50–57. https://doi.org/10.1145/2663204.2663260. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. 
Glove: Global vectors for word representation. Ver´onica P´erez-Rosas, Rada Mihalcea, and LouisPhilippe Morency. 2013. Utterance-level multimodal sentiment analysis. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 973–982. Soujanya Poria, Iti Chaturvedi, Erik Cambria, and Amir Hussain. 2016. Convolutional mkl based multimodal emotion recognition and sentiment analysis. In Data Mining (ICDM), 2016 IEEE 16th International Conference on. IEEE, pages 439–448. Shyam Sundar Rajagopalan, Louis-Philippe Morency, Tadas Baltruˇsaitis, and Roland Goecke. 2016. Extending long short-term memory for multi-view structured learning. In European Conference on Computer Vision. Ilya Razenshteyn, Zhao Song, and David P Woodruff. 2016. Weighted low rank approximations with provable guarantees. In Proceedings of the forty-eighth annual ACM symposium on Theory of Computing. ACM, pages 250–263. Haohan Wang, Aaksha Meghawat, Louis-Philippe Morency, and Eric P Xing. 2016. Select-additive learning: Improving cross-individual generalization in multimodal sentiment analysis. arXiv preprint arXiv:1609.05244 . 2256 Hongcheng Wang and Narendra Ahuja. 2008. A tensor approximation approach to dimensionality reduction. International Journal of Computer Vision 76(3):217–229. Martin W¨ollmer, Felix Weninger, Tobias Knaup, Bj¨orn Schuller, Congkai Sun, Kenji Sagae, and LouisPhilippe Morency. 2013. Youtube movie reviews: Sentiment analysis in an audio-visual context. IEEE Intelligent Systems 28(3):46–53. Torsten Wortwein and Stefan Scherer. 2017. What really mattersan information gain analysis of questions and reactions in automated ptsd screenings. In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, pages 15–20. Jiahong Yuan and Mark Liberman. 2008. Speaker identification on the scotus corpus. Journal of the Acoustical Society of America 123(5):3878. Ben P Yuhas, Moise H Goldstein, and Terrence J Sejnowski. 1989. Integration of acoustic and visual speech signals using neural networks. IEEE Communications Magazine 27(11):65–71. Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2017. Tensor fusion network for multimodal sentiment analysis. In Empirical Methods in Natural Language Processing, EMNLP. Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018a. Memory fusion network for multi-view sequential learning. arXiv preprint arXiv:1802.00927 . Amir Zadeh, Paul Pu Liang, Soujanya Poria, Prateek Vij, Erik Cambria, and Louis-Philippe Morency. 2018b. Multi-attention recurrent network for human communication comprehension. arXiv preprint arXiv:1802.00923 . Amir Zadeh, Rowan Zellers, Eli Pincus, and LouisPhilippe Morency. 2016a. Mosi: multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos. arXiv preprint arXiv:1606.06259 . Amir Zadeh, Rowan Zellers, Eli Pincus, and LouisPhilippe Morency. 2016b. Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages. IEEE Intelligent Systems 31(6):82–88. Qiang Zhu, Mei-Chen Yeh, Kwang-Ting Cheng, and Shai Avidan. 2006. Fast human detection using a cascade of histograms of oriented gradients. In Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on. IEEE, volume 2, pages 1491–1498.
2018
209
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 219–230 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 219 Neural Argument Generation Augmented with Externally Retrieved Evidence Xinyu Hua and Lu Wang College of Computer and Information Science Northeastern University Boston, MA 02115 [email protected] [email protected] Abstract High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both human and machines. In this work, we study a novel task on automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topicrelevant content than a popular sequence-tosequence generation model according to both automatic evaluation and human assessments. 1 Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013). A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues. For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass. In another example, online deliberation has become a popular way of soliciting public opinions on new policies’ pros and cons (Albrecht, 2006; Park et al., 2012). Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers. We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students’ essay writing skills, and proFigure 1: Sample user arguments from Reddit Change My View subcommunity that argue against original post’s thesis on “government should be allowed to view private emails”. Both arguments leverage supporting information from Wikipedia articles. viding context of controversial issues from different perspectives. As a consequence, there exists a pressing need for automating the argument construction process. To date, progress made in argument generation has been limited to retrieval-based methods— arguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017). Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015), existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output. In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure. One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains. 
Especially, most previous NLG systems rely on tem220 plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011), or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence. This makes them unwieldy to be adapted for new domains. In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance. To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence. Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references. Figure 1 displays sample arguments by users from Reddit subcommunity /r/ChangeMyView 1 who argue against the motion that “government should be allowed to view private emails”. Both replies leverage information drawn from Wikipedia, such as “political corruption” and “Fourth Amendment on protections of personal privacy”. Concretely, our neural argument generation model adopts the popular encoder-decoderbased sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014), which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017). Our encoder takes as input a statement on a disputed issue, and a set of relevant evidence automatically retrieved from English Wikipedia2. Our decoder consists of two separate parts, one of which first generates keyphrases as intermediate representation of “talking points”, and the other then generates an argument based on both input and keyphrases. Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence. We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement based on a separately trained 1 https://www.reddit.com/r/changemyview 2 https://en.wikipedia.org/ relevance estimation model. Results suggest that our model generated arguments are more likely to be predicted as on-topic, compared to other seq2seq-based generations models. The rest of this paper is organized as follows. Section 2 highlights the roadmap of our system. The dataset used for our study is introduced in Section 3. The model formulation and retrieval methods are detailed in Sections 4 and 5. We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8. Related work is discussed in Section 9. Finally, we conclude in Section 10. 2 Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2. Given a statement, a set of queries are constructed based on its topic signature words (e.g., “government” and “national security”) to retrieve a list of relevant articles from Wikipedia. A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input information for the neural argument generation model. 
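To make the retrieve-then-generate flow concrete, the following is a minimal sketch of the pipeline on toy data; every function name, the word-overlap scoring, and the example statement are illustrative stand-ins rather than the authors' implementation, which uses TF-IDF retrieval over a Wikipedia index and the neural model described in Sections 4 and 5.

```python
# Minimal sketch of the two-stage pipeline: (1) retrieve evidence for a
# statement via topic-signature queries, (2) build the generator's input.
# All names and the overlap scorer are illustrative stand-ins.

def topic_signature_queries(statement_sentences, signature_words):
    """Build one query per statement sentence from its topic-signature words."""
    queries = []
    for sent in statement_sentences:
        terms = [w.strip(".,!?") for w in sent.lower().split()
                 if w.strip(".,!?") in signature_words]
        if terms:
            queries.append(" ".join(terms))
    return queries

def retrieve_and_rerank(queries, corpus_sentences, top_k=10):
    """Toy retrieval: rank corpus sentences by query-term overlap (stand-in for TF-IDF)."""
    def overlap(query, sent):
        return len(set(query.split()) & set(sent.lower().replace(".", "").split()))
    scored = [(max(overlap(q, s) for q in queries), s) for s in corpus_sentences]
    ranked = sorted((x for x in scored if x[0] > 0), reverse=True)
    return [s for _, s in ranked[:top_k]]

def build_model_input(statement, evidence_sentences):
    """Concatenate statement and evidence with the <evd> separator token."""
    return statement + " <evd> " + " ".join(evidence_sentences)

# Toy usage.
statement = "The government should not be allowed to view private emails."
signatures = {"government", "emails", "security"}
corpus = ["Email privacy is debated in the context of government surveillance.",
          "Political corruption is the abuse of entrusted power."]
queries = topic_signature_queries([statement], signatures)
evidence = retrieve_and_rerank(queries, corpus)
print(build_model_input(statement, evidence))
```

In the actual system, the overlap scorer is replaced by TF-IDF ranking over retrieved Wikipedia articles, and the concatenated string is fed to the encoder described below.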
The generation model then encodes the statement and the evidence with a shared encoder in sequence. Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., “right to privacy”, “political corruption”), followed by a separate argument decoder which produces the final argument. 3 Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues. Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed with detailed reasons, then other users reply with counter-arguments. Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply. In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 20173. 3Dataset used in this paper is available at http:// xinyuhua.github.io/Resources/. 221 Figure 2: Overview of our system pipeline (best viewed in color). Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from statement as queries (marked in red and boldface). A reranking module then outputs top sentences as evidence. The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model. During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases. Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language4, (3) awarded with delta or with more upvotes than downvotes, and (4) not generated by system moderators. After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high quality root replies. We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated. A Focused Domain Dataset. The current dataset contains diverse domains with unbalanced numbers of arguments. We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain. However, topic labels are not available for the discussions. We thus construct a domain classifier for politics vs. non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts5. Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics. Each abstract is labeled as politics 4 We use offensive words collected by Google’s What Do You Love project: https://gist.github.com/ jamiew/1112488, last accessed on February 22nd, 2018. 5About 1.3 million English Wikipedia abstracts are downloaded from http://dbpedia.org/page/. or non-politics if its title only matches keywords from one category.6 In total, 264,670 politics abstracts and 827,437 of non-politics are labeled. Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.7 Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average. 
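The bootstrapped domain classifier described above can be sketched as follows; the keyword lists shown are the sample keywords mentioned in the paper, while the confidence threshold, number of rounds, and the use of scikit-learn are illustrative assumptions, not the authors' exact setup.

```python
# Sketch of the keyword-seeded, bootstrapped politics classifier:
# label Wikipedia abstracts by title keywords, then self-train on OPs.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

POLITICS_KW = {"congress", "election", "constitution"}   # sample keywords
NON_POLITICS_KW = {"art", "fashion", "music"}

def heuristic_label(title):
    """Return 1 (politics), 0 (non-politics), or None if the title is ambiguous."""
    words = set(title.lower().split())
    pol, non = bool(words & POLITICS_KW), bool(words & NON_POLITICS_KW)
    if pol == non:            # matches both or neither: unusable as a seed
        return None
    return 1 if pol else 0

def bootstrap(seed_texts, seed_labels, unlabeled_ops, rounds=3, thresh=0.9):
    """Self-training: add confidently classified OPs back into the training set."""
    vec = CountVectorizer()                   # unigram features
    texts, labels = list(seed_texts), list(seed_labels)
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(vec.fit_transform(texts), labels)
        if not unlabeled_ops:
            break
        probs = clf.predict_proba(vec.transform(unlabeled_ops))
        remaining = []
        for op, p in zip(unlabeled_ops, probs):
            if p.max() >= thresh:             # confident prediction: keep it
                texts.append(op)
                labels.append(int(np.argmax(p)))
            else:
                remaining.append(op)
        unlabeled_ops = remaining
    return vec, clf

# Toy usage.
abstract_titles = ["United States Congress election results", "History of baroque music"]
seeds = [(t, heuristic_label(t)) for t in abstract_titles]
seed_texts = [t for t, y in seeds if y is not None]
seed_labels = [y for t, y in seeds if y is not None]
vec, clf = bootstrap(seed_texts, seed_labels,
                     ["The senate passed the bill after the election."])
print(clf.predict(vec.transform(["Who should win the election for congress?"])))
```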
The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.

Footnote 6: Sample keywords for politics: "congress", "election", "constitution"; for non-politics: "art", "fashion", "music". Full lists are provided in the supplementary material.
Footnote 7: More details about our domain classifier are provided in the supplementary material.

4 Model

In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and produce arguments based on the input and keyphrases. Extended from the successful seq2seq attentional model (Bahdanau et al., 2015), our proposed model is novel in the following ways. First, two separate decoders are designed, one for generating keyphrases, the other for argument construction. By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input. Second, a novel attention mechanism is designed for argument decoding by attending both the input and the previously generated keyphrases. Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.

4.1 Model Formulation

Our model takes as input a sequence of tokens $x = \{x^O; x^E\}$, where $x^O$ is the statement sequence and $x^E$ contains relevant evidence that is extracted from Wikipedia based on a separate retrieval module. A special token <evd> is inserted between $x^O$ and $x^E$. Our model then first generates a set of keyphrases as a sequence $y^p = \{y^p_l\}$, followed by an argument $y^a = \{y^a_t\}$, by maximizing $\log P(y|x)$, where $y = \{y^p; y^a\}$. The objective is further decomposed into $\sum_t \log P(y_t \mid y_{1:t-1}, x)$, with each term estimated by a softmax function over a non-linear transformation of the decoder hidden states $s^a_t$ and $s^p_t$, for the argument decoder and keyphrase decoder, respectively. The hidden states are computed as done in Bahdanau et al. (2015) with attention:

$s_t = g(s_{t-1}, c_t, y_t)$  (1)
$c_t = \sum_{j=1}^{T} \alpha_{tj} h_j$  (2)
$\alpha_{tj} = \frac{\exp(e_{tj})}{\sum_{k=1}^{T} \exp(e_{tk})}$  (3)
$e_{tj} = v^\top \tanh(W_h h_j + W_s s_t + b_{attn})$  (4)

Notice that two sets of parameters and different state update functions $g(\cdot)$ are learned for the separate decoders: $\{W^a_h, W^a_s, b^a_{attn}, g^a(\cdot)\}$ for the argument decoder; $\{W^p_h, W^p_s, b^p_{attn}, g^p(\cdot)\}$ for the keyphrase decoder.

Encoder. A two-layer bidirectional LSTM (biLSTM) is used to obtain the encoder hidden states $h_i$ for each time step $i$. For the biLSTM, the hidden state is the concatenation of the forward and backward hidden states: $h_i = [\overrightarrow{h_i}; \overleftarrow{h_i}]$. Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014), and updated during training. The last hidden state of the encoder is used to initialize both decoders. In our model the encoder is shared by the argument and keyphrase decoders.

Decoders. Our model is equipped with two decoders: a keyphrase decoder and an argument decoder, each implemented with a separate two-layer unidirectional LSTM, in a similar spirit to one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015). The distinction is that our training objective is the sum of two loss functions:

$L(\theta) = -\frac{\alpha}{T_p} \sum_{(x, y^p) \in D} \log P(y^p \mid x; \theta) - \frac{1-\alpha}{T_a} \sum_{(x, y^a) \in D} \log P(y^a \mid x; \theta)$  (5)

where $T_p$ and $T_a$ denote the lengths of the reference keyphrase sequence and argument sequence. $\alpha$ is a weighting parameter, and it is set to 0.5 in our experiments.
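As a concrete illustration of the objective in Eq. (5), the sketch below combines the two decoders' length-normalized cross-entropy losses with the weight alpha; the tensor shapes, padding index, and use of PyTorch are assumptions made for the example, not the authors' code.

```python
# Sketch of the joint objective in Eq. (5): a weighted sum of the keyphrase
# and argument decoders' sequence cross-entropy, each normalized by length.
import torch
import torch.nn.functional as F

PAD = 0  # assumed padding id

def joint_loss(kp_logits, kp_targets, arg_logits, arg_targets, alpha=0.5):
    """
    kp_logits:   (batch, T_p, vocab)  keyphrase decoder scores
    kp_targets:  (batch, T_p)         gold keyphrase token ids
    arg_logits:  (batch, T_a, vocab)  argument decoder scores
    arg_targets: (batch, T_a)         gold argument token ids
    """
    def seq_nll(logits, targets):
        # Sum token-level NLL over non-pad positions, then normalize by length.
        nll = F.cross_entropy(logits.transpose(1, 2), targets,
                              ignore_index=PAD, reduction="sum")
        n_tokens = (targets != PAD).sum().clamp(min=1)
        return nll / n_tokens
    return alpha * seq_nll(kp_logits, kp_targets) + \
           (1 - alpha) * seq_nll(arg_logits, arg_targets)

# Toy usage: batch of 2 examples, vocabulary of 12 token types.
kp_logits = torch.randn(2, 5, 12)
arg_logits = torch.randn(2, 8, 12)
kp_targets = torch.randint(1, 12, (2, 5))
arg_targets = torch.randint(1, 12, (2, 8))
print(joint_loss(kp_logits, kp_targets, arg_logits, arg_targets).item())
```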
Attention over Both Input and Keyphrases. Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process. We therefore propose an attention mechanism that can attend both encoder hidden states and the keyphrase decoder hidden states. An additional context vector $c'_t$ is then computed over the keyphrase decoder hidden states $s^p_j$, which is used for computing the new argument decoder state:

$s^a_t = g'(s^a_{t-1}, [c_t; c'_t], y^a_t)$  (6)
$c'_t = \sum_{j=1}^{T_p} \alpha'_{tj} s^p_j$  (7)
$\alpha'_{tj} = \frac{\exp(e'_{tj})}{\sum_{k=1}^{T_p} \exp(e'_{tk})}$  (8)
$e'_{tj} = v'^\top \tanh(W'_p s^p_j + W'_a s^a_t + b'_{attn})$  (9)

where $s^p_j$ is the hidden state of the keyphrase decoder at position $j$, $s^a_t$ is the hidden state of the argument decoder at timestep $t$, and $c_t$ is computed as in Eq. 2.

Decoder Sharing. We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder. A special token <arg> is inserted between the two sequences, indicating the start of argument generation.

4.2 Hybrid Beam Search Decoding

Here we describe our decoding strategy on the argument decoder. We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.

Hybrid Beam Expansion. In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis. However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016), e.g., one beam often dominates and thus inhibits hypothesis diversity. Here we only pick the top n words (n < k), and randomly draw another k − n words based on the multinomial distribution after removing the n expanded words from the candidates. This leads to a more diverse set of hypotheses.

Segment-based Reranking. We also propose to rerank the beams every p steps based on the beam's coverage of content words from the input. Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., "I don't agree with you"), this operation has the potential of encouraging more informative generation. k = 10, n = 3, and p = 10 are used for experiments. The effect of parameter selection is studied in Section 7.
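The hybrid beam expansion and coverage-based reranking of Section 4.2 can be sketched as follows; the toy distributions and content-word sets are invented, and only the selection logic mirrors the description above. The values k = 10 and n = 3 follow the paper.

```python
# Sketch of hybrid beam expansion (top-n deterministic + sampled remainder)
# and segment-based reranking by input content-word coverage.
import numpy as np

def hybrid_expand(prob, k=10, n=3, rng=np.random.default_rng(0)):
    """Pick the top-n word ids deterministically, then sample k-n more ids
    from the renormalized multinomial over the remaining vocabulary."""
    top_n = np.argsort(prob)[::-1][:n]
    rest = np.setdiff1d(np.arange(len(prob)), top_n)
    rest_p = prob[rest] / prob[rest].sum()
    sampled = rng.choice(rest, size=k - n, replace=False, p=rest_p)
    return np.concatenate([top_n, sampled])

def rerank_by_coverage(beams, input_content_words):
    """Rerank partial hypotheses by how many input content words they cover."""
    def coverage(tokens):
        return len(set(tokens) & input_content_words)
    return sorted(beams, key=coverage, reverse=True)

# Toy usage: a 20-word vocabulary distribution and two partial hypotheses.
vocab_dist = np.random.default_rng(1).dirichlet(np.ones(20))
print(hybrid_expand(vocab_dist, k=10, n=3))
beams = [["i", "don", "t", "agree"], ["government", "surveillance", "privacy"]]
print(rerank_by_coverage(beams, {"government", "privacy", "emails"}))
```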
5 Relevant Evidence Retrieval

5.1 Retrieval Methodology

We take a two-step approach for retrieving evidence sentences: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences. Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics. A dump of December 21, 2016 was downloaded. For training, evidence sentences are retrieved with queries constructed from target user arguments. For test, queries are constructed from the OP.

Article Retrieval. We first create an inverted index lookup table for Wikipedia as done in Chen et al. (2017). For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles. Therefore, multiple passes of retrieval will be conducted if more than one query is created. Specifically, we first collect topic signature words of the post. Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus. We treat posts from other discussions in our dataset as background. For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word. For instance, a query "the government, my e-mails, national security" is constructed for the first sentence of the OP in the motivating example (Figure 2). The top five retrieved articles with highest TF-IDF similarity scores are kept per query.

Table 1: Statistics for evidence sentence retrieval from Wikipedia. Considering query construction from either OP or target user arguments, we show the average numbers of topic signatures collected, queries constructed, and retrieved articles and sentences.

  Queries constructed from:      OP      Argument
  Avg # Topic Sig.               17.2    9.8
  Avg # Query                    6.7     1.9
  Avg # Article Retrieved        26.1    8.0
  Avg # Sent. Retrieved          67.3    8.5

Sentence Reranking. The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement. Up to 100 top ranked paragraphs with positive scores are retained. These paragraphs are further segmented into sentences, and reranked according to TF-IDF similarity again. We only keep up to 10 top sentences with positive scores for inclusion in the evidence set.

5.2 Gold-Standard Keyphrase Construction

To create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction:
• Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP (Manning et al., 2014).
• Keep phrases of length between 2 and 10 that overlap with content words in the argument.
• If there is span overlap between phrases, the longer one is kept if it has more content word coverage of the argument; otherwise the shorter one is retained.
The resultant phrases are then concatenated with a special delimiter <phrase> and used as the gold-standard generation for training.
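A minimal sketch of the keyphrase filtering rules above is given below; candidate spans are assumed to come from an external NP/VP chunker (the paper uses Stanford CoreNLP), and the small stopword list is an illustrative stand-in for a real content-word test.

```python
# Sketch of the gold-standard keyphrase filtering rules in Section 5.2.
STOP = {"the", "a", "an", "of", "to", "is", "are", "and", "or", "in", "on"}

def content_words(text):
    return {w.strip(".,;!?") for w in text.lower().split()
            if w.strip(".,;!?") not in STOP}

def select_gold_keyphrases(candidates, argument):
    """candidates: list of (phrase, start, end) token spans over an evidence sentence."""
    arg_cw = content_words(argument)
    # Rules 1-2: keep phrases of 2-10 words sharing a content word with the argument.
    kept = [(p, s, e) for (p, s, e) in candidates
            if 2 <= len(p.split()) <= 10 and content_words(p) & arg_cw]
    # Rule 3: resolve overlapping spans; prefer the longer phrase when it covers
    # at least as many argument content words, otherwise keep the shorter one.
    kept.sort(key=lambda x: x[2] - x[1], reverse=True)   # longest first
    selected = []
    for p, s, e in kept:
        clash = [q for q in selected if not (e <= q[1] or s >= q[2])]
        if not clash:
            selected.append((p, s, e))
            continue
        q = clash[0]
        longer, shorter = ((p, s, e), q) if (e - s) >= (q[2] - q[1]) else (q, (p, s, e))
        winner = longer if len(content_words(longer[0]) & arg_cw) >= \
                           len(content_words(shorter[0]) & arg_cw) else shorter
        if winner not in selected:
            selected.remove(q)
            selected.append(winner)
    return " <phrase> ".join(p for p, _, _ in sorted(selected, key=lambda x: x[1]))

# Toy usage.
cands = [("right to privacy", 0, 3), ("privacy", 2, 3), ("political corruption", 5, 7)]
print(select_gold_keyphrases(cands, "Mass surveillance violates the right to privacy."))
```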
The model converges after about 10 epochs in total with pre-training initialization, which is described below. Component Stage 1 Stage 2 Stage 3 Encoder OP 50 150 400 Evidence 0 80 120 Decoder Keyphrases 0 80 120 Target Argument 30 80 120 Table 2: Truncation size (i.e., number of tokens including delimiters) for different stages during training. Note that in the first stage we do not include evidence and keyphrases. Adding Pre-training. We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set. After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder). Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points. We describe more detailed results in the supplementary material. 6.3 Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument. We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known. All seq2seq models use a regular beam search decoder with the same beam size as ours. Variants of Our Models. We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED). For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP). System vs. Oracle Retrieval. For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval). We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results. 7 Results 7.1 Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002), an n-gram precision-based metric (up to bigrams are considered), and METEOR (Denkowski and Lavie, 2014), measuring unigram recall and precision by considering paraphrases, synonyms, and stemming. Human arguments are used as the gold-standard. Because each OP may be paired with more than one highquality arguments, we compute BLEU and METEOR scores for the system argument compared against all arguments, and report the best. We do not use multiple reference evaluation because 225 w/ System Retrieval w/ Oracle Retrieval BLEU MTR Len BLEU MTR Len Baseline RETRIEVAL 15.32 12.19 151.2 10.24 16.22 132.7 Comparisons SEQ2SEQ 10.21 5.74 34.9 7.44 5.25 31.1 + encode evd 18.03 7.32 67.0 13.79 10.06 68.1 + encode KP 21.94 8.63 74.4 12.96 10.50 78.2 Our Models DEC-SHARED 21.22 8.91 69.1 15.78 11.52 68.2 + attend KP 24.71 10.05 74.8 11.48 10.08 40.5 DEC-SEPARATE 24.24 10.63 88.6 17.48 13.15 86.9 + attend KP 24.52 11.27 88.3 17.80 13.67 86.8 Table 3: Results on argument generation by BLEU and METEOR (MTR), with system retrieved evidence and oracle retrieval. The best performing model is highlighted in bold per metric. Our separate decoder models, with and without keyphrase attention, statistically significantly outperform all seq2seq-based models based on approximation randomization testing (Noreen, 1989), p < 0.0001. 
As can be seen from Table 3, our models produce better BLEU scores than almost all the comparisons. In particular, our models with separate decoders yield significantly higher BLEU and METEOR scores than all seq2seq-based models do (approximation randomization testing, p < 0.0001). Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments. Moreover, utilizing attention over both the input and the generated keyphrases further boosts our models' performance. Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing. The reason could be that arguments generated based on system retrieval contain fewer topic-specific words and more generic argumentative phrases. Since the latter is often observed in human-written arguments, it may lead to higher precision and thus better BLEU scores.

Decoder Strategy Comparison. We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k). From the results in Figure 3, we find that reranking with a smaller step size, e.g., p = 5, can generally lead to better METEOR scores. Although varying the number of top words for beam expansion does not yield a significant difference, we do observe more diverse beams in the system output if more candidate words are selected stochastically (i.e., with a smaller k).

Figure 3 (plot omitted; x-axis: top-k words selected deterministically, with ticks 0, 3, 5, 7, 10; y-axis: METEOR; curves: p = 5, p = 10, p = 20, and the standard decoder): Effect of our reranking-based decoder. Beams are reranked at every 5, 10, and 20 steps (p). For each step size, we also show the effect of varying k, where the top k words are selected deterministically for beam expansion, with 10 − k randomly sampled over the multinomial distribution after removing the k words. Reranking with a smaller step size yields better results.

7.2 Topic-Relevance Evaluation

During our pilot study, we observe that generic arguments, such as "I don't agree with you" or "this is not true", are prevalent among generations by seq2seq models. We believe that good arguments should include content that addresses the given topic. Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information. To achieve this goal, we first train a topic-relevance estimation model inspired by the latent semantic model in Huang et al. (2013). A pair of OP and argument, each represented as the average of its word embeddings, are separately fed into a two-layer transformation model. A dot product is computed over the two projected low-dimensional vectors, and then a sigmoid function outputs the relevance score. For model learning, we further divide our current training data into training, development, and test sets. For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples. This model achieves a Mean Reciprocal Rank (MRR) score of 0.95 on the test set.
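A minimal sketch of such a relevance scorer is shown below; the hidden dimensions, activation, and the choice of separate projections for the OP and the argument are assumptions, since the paper defers those details to its supplementary material.

```python
# Sketch of the topic-relevance estimation model: OP and argument are each
# represented by their average word embedding, passed through a two-layer
# projection, and scored by a sigmoid over the dot product.
import torch
import torch.nn as nn

class RelevanceScorer(nn.Module):
    def __init__(self, emb_dim=200, hidden=128, out=64):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(emb_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, out))
        self.op_proj = mlp()    # two-layer transformation for the OP
        self.arg_proj = mlp()   # and for the argument

    def forward(self, op_avg_emb, arg_avg_emb):
        u = self.op_proj(op_avg_emb)
        v = self.arg_proj(arg_avg_emb)
        return torch.sigmoid((u * v).sum(dim=-1))   # relevance score in (0, 1)

# Toy usage: a batch of 4 (OP, argument) pairs with 200-d averaged embeddings.
scorer = RelevanceScorer()
op = torch.randn(4, 200)
arg = torch.randn(4, 200)
print(scorer(op, arg))
```

Negative examples for training such a scorer would come from the Jaccard-based sampling described above.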
Descriptions of the model formulation and related training details are included in the supplementary material.
We then take this trained model to evaluate the relevance between an OP and the corresponding system arguments. Each system argument is treated as a positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences are most similar to those of the positive sample. Intuitively, if an argument contains more topic-relevant information, then the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score, and thus cannot be easily distinguished from negative samples. Ranking metrics of MRR and Precision at 1 (P@1) are utilized, with results reported in Table 4. The ranker yields significantly better scores for arguments generated by models trained with evidence, compared to arguments generated by the SEQ2SEQ model.

Table 4: Evaluation on topic relevance: models that generate arguments highly related to the OP should be ranked high by a separately trained relevance estimation model, i.e., receive higher Mean Reciprocal Rank (MRR) and Precision at 1 (P@1) scores. All models trained with evidence significantly outperform seq2seq trained without evidence (approximation randomization testing, p < 0.0001).

                       Standard Decoder        Our Decoder
                       MRR      P@1            MRR      P@1
  Baseline
   RETRIEVAL                81.08    65.45
  Comparisons
   SEQ2SEQ             75.29    58.85          74.46    57.06
   + encode evd        83.73    71.59          88.24    78.76
  Our Models
   DEC-SHARED          79.80    65.57          95.18    90.91
   + attend KP         94.33    89.76          93.48    87.91
   DEC-SEPARATE        86.85    76.74          91.70    84.72
   + attend KP         88.53    79.05          92.77    86.46

Moreover, we manually pick 29 commonly used generic responses (e.g., "I don't think so") and count their frequency in system outputs. For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% by our separate decoder model with attention over keyphrases. This further implies that our model generates more topic-relevant content.

7.3 Human Evaluation

We also hire three trained human judges who are fluent English speakers to rate system arguments for the following three aspects on a scale of 1 to 5 (with 5 as best): grammaticality (whether an argument is fluent), informativeness (whether the argument contains useful information and is not generic), and relevance (whether the argument contains information of a different stance or is off-topic). 30 CMV threads are randomly selected, each of which is presented with a randomly shuffled OP statement and four system arguments. Table 5 shows that our model with separate decoder and attention over keyphrases produces significantly more informative and relevant arguments than seq2seq trained without evidence.8 However, we also observe that human judges prefer the retrieved arguments over generation-based models, illustrating the gap between system arguments and human-edited text. Sample arguments are displayed in Figure 4.

Table 5: Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments. Our model with separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).

  System       Gram         Info         Rel
  RETRIEVAL    4.5 ± 0.6    3.7 ± 0.9    3.3 ± 1.1
  SEQ2SEQ      3.3 ± 1.1    1.2 ± 0.5    1.4 ± 0.7
  OUR MODEL    2.5 ± 0.8    1.6 ± 0.8    1.8 ± 0.8

8 Further Discussion

Keyphrase Generation Analysis. Here we provide further analysis over the generated keyphrases by our separate decoder model.
First, about 10% of the keyphrases output by our model also appear in the gold-standard (i.e., used by human arguments). Furthermore, 36% of generated keyphrases are reused by our system arguments. With human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high level talking points in the argument. For instance, in the first sample argument by our model in Figure 4, keyphrases “the motive” and “russian” are generated. Although not used, they suggest the topics that the argument should stay on. Sample Arguments and Future Directions. As can be seen from the sample outputs in Figure 4, our model generally captures more relevant concepts, e.g., “military army” and “wars 8Inter-rater agreement scores for these three aspects are 0.50, 0.60, and 0.48 by Krippendorff’s α. 227 Input: Putin is trying to re-form a “Soviet Union” with his past actions in Georgia and current actions in Ukraine Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia. That’s not to say that he wouldn’t go further, but I don’t think there is yet any compelling reason to believe he would. Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms. It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform. In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics. Our Model: There are two reasons you are so far beyond. There is no reason to see the military army. You can infer what they assume it, so they tend to protect up. There. If we assume it were bad, they can not be controlled. So we’re talking going to ignore the wars of the world. The fact of the matter is not bad. I would also assume it. However, the government can not be reminded of the world. Seq2seq + evd: 1. The reason is that the government isn’t the power of the war of Russia, at the time of the war of Russia. In Russia, there is no way to continue to control the military. Seq2seq + KP: I don’t think it’s a bit. I don’t think you’re forgetting. I don’t want to see the Syrian American dream in the US. I don’t see the Syrian American dream of American citizens. I don’t see the Syrian American dream of American citizens. I don’t see the Syrian American military in the world. I don’t think it wouldn’t be true. Input: Voting for either Hillary or Trump is morally reprehensible. Human: I don’t think a vote for either of them means ”I’m totally fine with...” as you say. Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal? Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior. Our Model: He’s going to happen. You’re overestimating the more premise. He’s already overestimating to this hypothetical party. He’s already killed over the supreme court positions. Looking for Bernie. This is the same reasons: Clinton is not the supreme court presidency. You’ve already sent Trump’s right to vote. The fact is that he’s going to happen. He’s already going to notice. However, it’s a long way. Seq2seq + evd: 1. If they don’t vote the vote, they would not be able to vote for any candidate. 
They don’t have the same effect on their political power. They are not voting for them. Seq2seq + KP: I agree with your view. I don’t agree with you. I don’t think it’s easy to appeal to the Clintons. If you don’t want to do this? Figure 4: Sample arguments generated by human, our system, and seq2seq trained with evidence. Only the main thesis is shown for the input OP. System generations are manually detokenized and capitalized. of the world”, as discussed in the first example. Meanwhile, our model also acquires argumentative style language, though there is still a noticeable gap between system arguments and human constructed arguments. As discovered by our prior work (Wang et al., 2017), both topical content and language style are essential elements for high quality arguments. For future work, generation models with a better control on linguistic style need to be designed. As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in the future work to enhance text planning. 9 Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017). While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied. Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000). For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica. It however only outputs a text plan, mainly relying on heuristic rules. Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system. This work aims to close the gap by proposing an end-to-end trained argument construction framework. Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries. Wachsmuth et al. (2017) build a search engine from arguments collected from various online debate portals. After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015). Nevertheless, simply merging arguments from different resources inevitably introduces redundancy. To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments. 10 Conclusion We studied the novel problem of generating arguments of a different stance for a given statement. We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia. Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument. Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models. Acknowledgements This work was partly supported by National Science Foundation Grant IIS-1566382, and a GPU gift from Nvidia. We thank three anonymous reviewers for their insightful suggestions on various aspects of this work. 228 References Steffen Albrecht. 2006. 
Whose voice is heard in online deliberation?: A study of participation and representation in political debates on the internet. Information, Community and Society 9(1):62–82. Gabor Angeli, Percy Liang, and Dan Klein. 2010. A simple domain-independent probabilistic approach to generation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Cambridge, MA, pages 502–512. http://www.aclweb.org/anthology/D10-1049. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR). Anja Belz. 2008. Automatic generation of weather forecast texts using comprehensive probabilistic generation-space models. Natural Language Engineering 14(4):431–455. Blai Bonet and Hector Geffner. 1996. Arguing for decisions: A qualitative model of decision making. In Proceedings of the Twelfth international conference on Uncertainty in artificial intelligence. Morgan Kaufmann Publishers Inc., pages 98–105. Nadjet Bouayad-Agha, Gerard Casamayor, and Leo Wanner. 2011. Content selection from an ontology-based knowledge base for the generation of football summaries. In Proceedings of the 13th European Workshop on Natural Language Generation. Association for Computational Linguistics, Nancy, France, pages 72–81. http://www.aclweb.org/anthology/W11-2810. James P Byrnes. 2013. The nature and development of decision-making: A self-regulation model. Psychology Press. Giuseppe Carenini and Johanna Moore. 2000. A strategy for generating evaluative arguments. In INLG’2000 Proceedings of the First International Conference on Natural Language Generation. Association for Computational Linguistics, Mitzpe Ramon, Israel, pages 47–54. https://doi.org/10.3115/1118253.1118261. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 1870–1879. http://aclweb.org/anthology/P171171. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the Ninth Workshop on Statistical Machine Translation. Association for Computational Linguistics, Baltimore, Maryland, USA, pages 376– 380. http://www.aclweb.org/anthology/W14-3348. Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 11–22. http://aclweb.org/anthology/P17-1002. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Advances in neural information processing systems. pages 1019–1027. Debanjan Ghosh, Smaranda Muresan, Nina Wacholder, Mark Aakhus, and Matthew Mitsui. 2014. Analyzing argumentative discourse units in online interactions. In Proceedings of the First Workshop on Argumentation Mining. Association for Computational Linguistics, Baltimore, Maryland, pages 39– 48. http://www.aclweb.org/anthology/W14-2106. Eduard H Hovy. 1993. Automated discourse generation using discourse structure relations. 
Artificial intelligence 63(1-2):341–385. Xinyu Hua and Lu Wang. 2017. Understanding and detecting supporting arguments of diverse types. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Vancouver, Canada, pages 203–208. http://aclweb.org/anthology/P17-2032. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management. ACM, pages 2333– 2338. Yangfeng Ji, Gholamreza Haffari, and Jacob Eisenstein. 2016. A latent variable recurrent neural network for discourse-driven language models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 332–342. http://www.aclweb.org/anthology/N16-1037. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR). Chin-Yew Lin and Eduard Hovy. 2000. The automated acquisition of topic signatures for text summarization. In Proceedings of the 18th conference on Computational linguistics-Volume 1. Association for Computational Linguistics, pages 495–501. 229 Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multi-task sequence to sequence learning. In Proceedings of the International Conference on Learning Representations (ICLR). Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, Baltimore, Maryland, pages 55–60. http://www.aclweb.org/anthology/P14-5010. Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. What to talk about and how? selective generation using lstms with coarse-to-fine alignment. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 720–730. http://www.aclweb.org/anthology/N16-1086. Vlad Niculae, Joonsuk Park, and Claire Cardie. 2017. Argument mining with structured svms and rnns. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 985–995. http://aclweb.org/anthology/P17-1091. Eric W Noreen. 1989. Computer-intensive methods for testing hypotheses. Wiley New York. Raquel Mochales Palau and Marie-Francine Moens. 2009. Argumentation mining: the detection, classification and structure of arguments in text. In Proceedings of the 12th international conference on artificial intelligence and law. ACM, pages 98–107. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Philadelphia, Pennsylvania, USA, pages 311–318. https://doi.org/10.3115/1073083.1073135. 
Joonsuk Park and Claire Cardie. 2014. Identifying appropriate support for propositions in online user comments. In Proceedings of the First Workshop on Argumentation Mining. Association for Computational Linguistics, Baltimore, Maryland, pages 29– 38. http://www.aclweb.org/anthology/W14-2105. Joonsuk Park, Sally Klingel, Claire Cardie, Mary Newhart, Cynthia Farina, and Joan-Josep Vallb´e. 2012. Facilitative moderation for online participation in erulemaking. In Proceedings of the 13th Annual International Conference on Digital Government Research. ACM, pages 173–182. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1532–1543. http://www.aclweb.org/anthology/D14-1162. Chris Reed. 1999. The role of saliency in generating natural language arguments. In IJCAI. pages 876– 883. Chris Reed, Derek Long, and Maria Fox. 1996. An architecture for argumentative dialogue planning. In International Conference on Formal and Applied Practical Reasoning. Springer, pages 555–566. Paul Reisert, Naoya Inoue, Naoaki Okazaki, and Kentaro Inui. 2015. A computational approach for generating toulmin model argumentation. In Proceedings of the 2nd Workshop on Argumentation Mining. Association for Computational Linguistics, Denver, CO, pages 45–55. http://www.aclweb.org/anthology/W15-0507. Ruty Rinott, Lena Dankin, Carlos Alzate Perez, Mitesh M. Khapra, Ehud Aharoni, and Noam Slonim. 2015. Show me your evidence - an automatic method for context dependent evidence detection. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 440–450. http://aclweb.org/anthology/D15-1050. Misa Sato, Kohsuke Yanai, Toshinori Miyoshi, Toshihiko Yanase, Makoto Iwayama, Qinghua Sun, and Yoshiki Niwa. 2015. End-to-end argument generation system in debating. In Proceedings of ACL-IJCNLP 2015 System Demonstrations. Association for Computational Linguistics and The Asian Federation of Natural Language Processing, Beijing, China, pages 109–114. http://www.aclweb.org/anthology/P15-4019. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Henning Wachsmuth, Martin Potthast, Khalid Al Khatib, Yamen Ajjour, Jana Puschmann, Jiani Qu, Jonas Dorsch, Viorel Morari, Janek Bevendorff, and Benno Stein. 2017. Building an argument search engine for the web. In Proceedings of the 4th Workshop on Argument Mining. Association for Computational Linguistics, Copenhagen, Denmark, pages 49–59. http://www.aclweb.org/anthology/W17-5106. Lu Wang, Nick Beauchamp, Sarah Shugars, and Kechen Qin. 2017. Winning on the merits: The joint effects of content and style on debate outcomes. Transactions of the Association for Computational Linguistics 5:219–232. https://transacl.org/ojs/index.php/tacl/article/view/1009. 230 Lu Wang and Wang Ling. 2016. Neural networkbased abstract generation for opinions and arguments. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, San Diego, California, pages 47–57. http://www.aclweb.org/anthology/N16-1007. 
Tsung-Hsien Wen, Milica Gasic, Nikola Mrkˇsi´c, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1711–1721. http://aclweb.org/anthology/D15-1199. Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search optimization. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1296–1306. https://aclweb.org/anthology/D16-1137. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 2253–2263. https://www.aclweb.org/anthology/D17-1239. Ingrid Zukerman, Richard McConachy, and Sarah George. 2000. Using argumentation strategies in automated argument generation. In INLG’2000 Proceedings of the First International Conference on Natural Language Generation. Association for Computational Linguistics, Mitzpe Ramon, Israel, pages 55–62. https://doi.org/10.3115/1118253.1118262.
2018
21
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2257–2267 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2257 Discourse Coherence: Concurrent Explicit and Implicit Relations Hannah Rohde1 Alexander Johnson1 Nathan Schneider2 Bonnie Webber1 1 University of Edinburgh 2 Georgetown University {Hannah.Rohde, Bonnie.Webber}@ed.ac.uk, [email protected], [email protected] Abstract Theories of discourse coherence posit relations between discourse segments as a key feature of coherent text. Our prior work suggests that multiple discourse relations can be simultaneously operative between two segments for reasons not predicted by the literature. Here we test how this joint presence can lead participants to endorse seemingly divergent conjunctions (e.g., but and so) to express the link they see between two segments. These apparent divergences are not symptomatic of participant naïveté or bias, but arise reliably from the concurrent availability of multiple relations between segments – some available through explicit signals and some via inference. We believe that these new results can both inform future progress in theoretical work on discourse coherence and lead to higher levels of performance in discourse parsing. 1 Introduction A question that remains unresolved in work on discourse coherence is the nature and number of relations that can hold between clauses in a coherent text (Halliday and Hasan, 1976; Stede, 2012). Our earlier work (Rohde et al., 2015, 2016) showed that, in the presence of explicit discourse adverbials, people also infer additional discourse relations that they take to hold jointly with those associated with the adverbials. For example, in: (1) It’s too far to walk. Instead let’s take the bus. people infer a RESULT relation in the context of the adverbial instead, which itself signals that the bus stands in a SUBSTITUTION relation to walking. We showed this using crowdsourced conjunctioninsertion experiments (Rohde et al., 2015, 2016), in which participants were asked to insert into the gap between two discourse segments, a conjunction that best expressed how they took the segments to be related. Rohde et al. (2017) also asked participants to select any other conjunctions that they took to convey the same sense as their “best” choice. (More details of these experiments are given in Section 3.) All three studies showed participants selecting conjunctions whose sense differed from that of the explicit discourse adverbial. But Rohde et al. (2015, 2016) also showed participants often selecting conjunctions that signal different coherence relations than those selected by other participants. And Rohde et al. (2017) showed participants often identifying very different conjunctions as conveying the same meaning. For example, in passage (2), with the discourse adverbial in other words, one large fraction of participants chose to insert OR, while another large fraction inserted SO. Since the two are neither synonymous nor representative of the same relation, either the participants have come up with different analyses of the passages (Section 2) or something more surprising is at work. (2) Unfortunately, nearly 75,000 acres of tropical forest are converted or deforested every day ______ in other words an area the size of Central Park disappears every 16 minutes. [SO∼OR] Rohde et al. 
(2017) noted other cases where different pairs of conjunctions (e.g., BECAUSE and BUT, BUT and OR, and BECAUSE and OR) appear systematically across participants and across passages for particular adverbials, and speculated on what these odd pairings may reveal, but did not provide any empirical evidence for why this happens. Here we present such evidence from an experiment on three discourse adverbials (in other words, otherwise, and instead). After describing related work on multiple discourse relations (Section 2) and then our experimental methodology (Section 3), we step through results for these three adverbials. As a final piece of evidence, we manipulate the presence and absence of a fourth adverbial, after all, in order to 2258 demonstrate that inference of the relation(s) between segments in a passage is not always driven by the presence of such an adverbial. 2 Related Work This is not the first work on discourse coherence to acknowledge the possibility of multiple relations holding between given discourse segments. For example, the developers of Rhetorical Structure Theory acknowledged that even experienced RST analysts may interpret a text differently in terms of the relations they take to hold (Mann and Thompson, 1988, p. 265). But while RST allows for multiple alternative analyses of a text in terms of discourse relations, in practice, researchers working in the RST framework standardly produce a single analysis of a text, with a single relational labeling, selecting the analysis that is “most plausible in terms of the perceived goals of the writer” (Mann et al., 1989, pp. 34–35). If that single analysis is later mapped into a different structure to support further processing – e.g., a binary branching tree structure – the mapping does not change the chosen relational labeling. Multiple relations may additionally hold in theories of discourse coherence that posit multiple levels of text analysis. For example, following Grosz and Sidner (1986), Moore and Pollack (1992) characterized text as having both an informational structure (relating information conveyed by discourse segments) and an intentional structure (relating the functions of those segments with respect to what the speaker is trying to accomplish through the text). The kinds of relations at the two levels are different, as can be seen in the following example from (Moore and Pollack, 1992, p. 540): (3) a. George Bush supports big business. b. He’s sure to veto House Bill 1711. At the level of intentions, (3a) aims to provide EVIDENCE for the claim in (3b), while at an informational level, (3a) serves as the CAUSE of the situation in (3b). RST would force annotators to choose only the analysis that best reflected the perceived goals of the writer. Additionally, multiple relations can hold where there are distinct explicit signals for distinct discourse relations holding between a pair of segments (Cuenca and Marin, 2009; Fraser, 2013), as in: (4) It’s too far to walk. So instead let’s take the bus. where the conjunction so signals a RESULT relation and the adverbial instead signals that taking the bus stands in an SUBSTITUTION relation to walking. 
Finally, a fourth way in which the previous literature has taken multiple discourse relations to hold is when a single phrase or lexico-syntactic construction jointly signals multiple discourse relations as holding over a text – for example, since as a subordinating conjunction may, in particular contexts, signal both a TEMPORAL relation and a CAUSAL relation, rather than just one or the other (Miltsakaki et al., 2005). We are aware of only two resources that allow more than one discourse relation to be annotated between two segments – the Penn Discourse TreeBank (PDTB; Prasad et al., 2008, 2014) and, more recently, the BECauSE Corpus 2.0 (Dunietz et al., 2017). The PDTB allows multiple discourse relations of the third and fourth types noted above. It also allows them to be annotated if there is no explicit connective between a pair of segments but annotators see more than one sense relation as linking them, as in the following variant of (4): (5) It’s too far to walk. Let’s take the bus. Here a RESULT relation can be associated with an implicit token of so between the clauses, while a SUBSTITUTION relation can be associated with an implicit token of instead. The above are the main cases in which PDTB annotates multiple relations. Relevant to this paper, the PDTB does not annotate implicit conjunction relations where there is already an explicit discourse adverbial. Thus the PDTB would either ignore the implicit RESULT relation for (1) or (incorrectly) annotate instead in (1) as conveying both SUBSTITUTION and RESULT. Moreover, while the PDTB has been used in training many (but not all) discourse parsers (Marcu, 2000; Lin et al., 2014; Feng and Hirst, 2012; Xue et al., 2015, 2016; Ji and Eisenstein, 2014), discourse parsing has for the most part ignored its annotations of multiple concurrent relations between clauses, except in the case of distinct explicit connectives expressing distinct relations. Instead, they have arbitrarily taken just a single relation to hold, even though the relations are simply recorded in an a priori canonical order. This practice is problematic because, for example, there may well be a difference in the properties of segments where two relations are jointly seen to hold, versus those segments in which only one or the other holds. This can result in unwanted noise in the data and lower the reliability of whatever is induced. While our previous studies showed another source of multiple discourse relations holding con2259 currently between discourse segments, the work reported here explains how, in the context of multiple relations, participants can take very different conjunctions to be conveying the same relation, and what can change participants’ selection of a conjunction to mark the relation they infer alongside that conveyed by an explicit discourse adverbial. 3 Methodology A locally crowdsourced conjunction-insertion task provided a proxy for labelling relations between adjacent discourse segments within a passage. Our materials consisted of passages containing an explicit discourse adverbial, preceded by a gap, which effectively separated the passage into two segments. The passages consisted of 16 with in other words, 16 with instead, 16 with after all, and 48 with otherwise. Participants were asked to read each passage and choose the conjunction(s) that best expressed how the two segments link together. The presentation of conjunction choices varied in order for each participant, but always consisted of AND, BECAUSE, BUT, OR, SO, NONE. 
While the task admittedly encourages participants to select one (or more) conjunctions, our prior work has shown that participants are very willing to use NONE if no conjunction is appropriate. We therefore take their insertion of a conjunction as their endorsement of the relation signaled by that conjunction. To further control data quality, we included 6 catch trials with an expected correct conjunction like “To be ______ not to be”. Three of the explicit discourse adverbials that we chose are anaphoric: in other words, otherwise, and instead (Webber et al., 2000). Unlike conjunctions such as AND, BECAUSE, BUT, OR and SO, they are not constrained by structure as to what they establish discourse relations with. So a conjunction-insertion task can be used to assess links between the segments (see also Scholman and Demberg 2017). Our three anaphoric adverbials share a core meaning of ‘otherness’ via their lexical semantics and flexibility in the relations they can participate in, making them a fruitful set to compare. The fourth adverbial, after all, allows us to test a hypothesis that the inferred connection between clauses is not driven by the adverbial alone. These particular adverbials were selected because they had yielded unexpected combinations of conjunction insertions in our prior work (e.g., OR/SO with in other words). This is in contrast to adverbials like therefore and nevertheless, for which participants’ conjunction combinations could be attributed to variation in the specificity of the conjunctions (SO/AND for therefore, BUT/AND for nevertheless). For our selection of a set of conjunctions to use as proxies for relation labels, we included all the coordinating conjunctions in English, as well as the subordinating conjunction BECAUSE as EXPLANATION relations are frequent. All participants (N=28) were monolingual native English speakers who were selected following a pre-test to measure their ability to consistently insert conjunctions that captured the underlying coherence relations in a series of passages. All gave informed consent. They each received £50 for their time. Each participant saw one of two randomly ordered lists. Passages were presented in batches of 34, one batch per day for three days. The materials were simplified variants of naturally occurring passages. Some were also manipulated systematically, in ways aimed at altering the availability of different coherence relations. Passages are available via the “dataset” link on the paper in the ACL anthology, and predictions about them are laid out in Sections 4.1–4.4. 4 Datasets 4.1 In other words Dataset Rohde et al. (2016) report an OR∼SO response split for in other words when participants could insert only their top choice of conjunction. Figure 1 shows SO dominating participants’ choice in all cases, but OR showing up among their choices in all but one passage (leftmost vertical bar). Additionally, several passages elicited BUT as the top choice of some participants. in other words 0 7 14 21 28 and because before but or so other none Figure 1: Stacked bar chart for conjunction insertions in passages with in other words (Rohde et al., 2016). Each vertical bar represents a passage with one response from each participant (N=28, no overlap with current participants). 
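To make the response format concrete, the per-passage tallies behind stacked bar charts like Figure 1 can be reproduced with a short script. The sketch below is illustrative only: the column names and toy records are hypothetical stand-ins for the released data (available via the "dataset" link mentioned above), and the plotting call is left as a comment.

```python
# Minimal sketch: tallying first-choice conjunction insertions per passage,
# as visualized in the stacked bar charts (e.g., Figure 1).
# Column names and the toy rows below are hypothetical, not the released data format.
import pandas as pd

CONJUNCTIONS = ["AND", "BECAUSE", "BUT", "OR", "SO", "NONE"]

# One row per (participant, passage) response; `first_choice` is the inserted conjunction.
responses = pd.DataFrame([
    {"participant": "p01", "passage": "A", "adverbial": "in other words", "first_choice": "SO"},
    {"participant": "p02", "passage": "A", "adverbial": "in other words", "first_choice": "OR"},
    {"participant": "p01", "passage": "B", "adverbial": "in other words", "first_choice": "SO"},
    {"participant": "p02", "passage": "B", "adverbial": "in other words", "first_choice": "BUT"},
])

# Passage-by-conjunction counts: each row sums to the number of participants (28 in the study).
counts = pd.crosstab(responses["passage"], responses["first_choice"]).reindex(
    columns=CONJUNCTIONS, fill_value=0
)
print(counts)

# A stacked bar chart in the style of Figure 1 is then one call away:
# counts.plot(kind="bar", stacked=True)
```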
2260 The in other words passages of the current experiment tested two linked hypotheses: The first is that OR∼SO response splits arise from two components of the lexical semantics of the adverbial itself: its sense of an evoked alternative and its sense of a consequence via restatement, whereby the truth of the second segment holds because it provides a reformulated restatement of the first segment’s content. For passage (2), this corresponds to the deforestation of 75,000 acres of tropical forest entailing the disappearance of an area the size of Central Park every 16 minutes. The second hypothesis is that the prevalence of and substitutability between SO and OR in (2) depends on the immediately adjacency of the two segments. This was suggested by participant choices of BUT (cf. Figure 1), as well as the observation that in other words does not always license OR via its lexical semantics and SO via entailment, as shown in (6), where BUT has become more available. Note that none of the relations conveyed by these conjunctions (CONTRAST or CONCESSION for BUT, DISJUNCTION for OR, CONSEQUENCE for SO) are already conveyed by the adverbial itself, which for in other words) would be RESTATEMENT. (6) Unfortunately, nearly 75,000 acres of tropical forest are converted or deforested every day. I don’t know where I heard that ______ in other words an area the size of Central Park disappears every 16 minutes. We tested these hypotheses by creating minimal pairs of 16 passages containing in other words. The pairs varied in the presence/absence of a metalinguistic comment intervening between the original description and its reformulation, as in (7)–(8). (7) Typically, a cast-iron wood-burning stove is 60 percent efficient ______ in other words 40 percent of the wood ends up as ash, smoke or lost heat. (8) Typically, a cast-iron wood-burning stove is 60 percent efficient. How this is measured is unclear ______ in other words 40 percent of the wood ends up as ash, smoke or lost heat. For each passage, participants identified their preferred conjunction and then any others that they took to convey the same sense. Half the participants saw a given passage with no intervening metalinguistic comment, half with. If our hypotheses are confirmed, it will show that manipulating the immediately preceding segment can shift participants’ preference from relations associated with OR and SO (ALTERNATIVE and CONSEQUENCE) to relations of CONTRAST or CONCESSION. This would then be evidence that adjacency affects what coherence relations participants take to be available. 0 7 14 21 28 and because before but or so other none otherwise Figure 2: Stacked bar chart for participants’ (N=28) conjunction insertions in otherwise passages (Rohde et al., 2016) 4.2 Otherwise Dataset Rohde et al. (2016) report surprising response splits amongst BECAUSE∼BUT∼OR for otherwise in their conjunction-insertion data (Figure 2). Given that otherwise has several different functions (described below), we hypothesize that different response splits arise from the lexical semantics of otherwise, combined with inference as to the function of the otherwise clause in a given passage. One function of otherwise is in ARGUMENTATION. Here, an otherwise clause provides a reason for a given claim, as in (9). Another function is in ENUMERATION, when the speaker first gives some preferred or more salient options, the otherwise clause introduces other alternative options, as in (10). A third use is in expressing an EXCEPTION to a generalization. 
Here, the main clause expresses a generalization, while otherwise clause specifies an exception (disjunctive alternative) to it, as in (11). (9) Proper placement of the testing device is an important issue ______ otherwise the test results will be inaccurate. (10) A baked potato, plonked on a side plate with sour cream flecked with chives, is the perfect accompaniment ______ otherwise you could serve a green salad and some good country bread. (11) Mr. Lurie and Mr. Jarmusch actually catch a shark, a thrashing 10-footer ______ otherwise the action is light. Results presented in (Rohde et al., 2017) for passages like (9) showed participant judgments of OR and BECAUSE, but not BUT. Passages like (10) yielded pairings of OR and BUT, but not BECAUSE. Lastly, passages like (11) yielded response splits between BUT and the less specific AND (Knott, 1996). Note that due to overlaps in conjunction choice, some conjunctions cannot be unambiguously associated with a single use of otherwise: While BECAUSE may unambiguously signal that a participant has inferred ARGUMENTATION, OR might indicate inference of either ARGUMENTATION or 2261 ENUMERATION. Thus we probe both participant choices of connectives and (via paraphrase) the use of otherwise that they take to hold. We chose 16 passages for each use of otherwise, based on our own category judgments. For each passage, we asked participants to select the conjunction that best expressed how its two segments were related, and then any other connectives that they took to express the same thing. A paraphrase task was then used as further evidence for the relation participants inferred in the otherwise passages. After completing a given session’s batch of passages, participants were asked to select which of three options they took to be a valid paraphrase of the passage. Each use of otherwise was assigned a distinct paraphrase to link the left-hand and right-hand segments (LHS, RHS). • ARGUMENTATION: “A reason for ⟨LHS⟩is ⟨RHS⟩.” • EXCEPTION: “Generally ⟨RHS⟩. An exception is when ⟨LHS⟩.” • ENUMERATION: “There’s more than one good option for ⟨goal⟩. They are: ⟨LHS⟩, ⟨RHS⟩.” We also allowed participants to choose a second paraphrase if they thought it appropriate. 4.3 Instead Dataset Rohde et al. (2016) report a range of participant choices in conjunction-insertion passages involving instead (Figure 3). For passages on the left of the figure, participants uniformly chose BUT, while the passage on the far right yielded a strong preference for SO. Elsewhere, some chose BUT and some chose SO. (For the current experiment, we ignore the fact that AND can contingently substitute for either BUT or SO as a connective in text (Knott, 1996), focussing only on passages where participants explicitly choose BUT and/or SO.) Rohde et al. (2017) report even more surprising participant responses to passages such as (12), where some participants selected both BUT and SO as equally expressing how the segments in the passage were related. (12) There may not be a flight scheduled to Loja today ______ instead we can go to Cuenca. 
[BUT∼SO] Neither the inter-participant split between BUT and SO in (Rohde et al., 2016) nor the intraparticipant split between them (Rohde et al., 2017) can be explained in terms of instead itself, since 0 7 14 21 28 and because before but or so other none instead 0 7 14 21 28 and because before but or so other none otherwise Figure 3: Stacked bar chart for participants’ (N=28) conjunction insertions in instead passages (Rohde et al., 2016) instead simply conveys that what follows is an alternative to an unrealised situation in the context (Prasad et al., 2008; Webber, 2013). The current experiment tests the hypothesis that this BUT∼SO split is a consequence of inference from properties of the segments themselves. To test this hypothesis, we created 16 minimal pairs of passages containing instead, one of which emphasized the information structural parallelism between the clauses, as in (13a), and another variant (13b) that de-emphasized that parallelism in favor of a causal link implied by a downward-entailing construction such as too X (Webber, 2013). For each passage, half the participants saw the parallelism variant in the conjunctioninsertion task, while half saw the causal variant. (13) a. There was no flight scheduled to Loja yesterday ______ instead there were several to Cuenca. b. There were too few flights scheduled to Loja yesterday ______ instead we went to Cuenca. 4.4 After all Dataset In (Rohde et al., 2017), we reported a BECAUSE∼BUT response split for passages containing after all. We speculated that this may be because a passage such as (14) below presents an argument in which the second segment serves as a REASON (hence, BECAUSE) for the first segment, but also serves to CONTRAST with it (hence, BUT). (14) Yes, I suppose there’s a certain element of danger in it ______ (after all) there’s a certain amount of danger in living, whatever you do. We hypothesize that the BECAUSE∼BUT split cannot be a consequence of the adverbial after all, which the Cambridge Dictionary indicates is “used to add information that shows that what you have just said is true”.1 If REASON and/or CONTRAST 1https://dictionary.cambridge.org/us/dictionary/ english/after-all 2262 A B C D E F G H I J K L M N O P no_intervening with_intervening no_intervening with_intervening no_intervening with_intervening no_intervening with_intervening no_intervening with_intervening no_intervening with_intervening no_intervening with_intervening no_intervening with_intervening no_intervening with_intervening no_intervening with_intervening no_intervening with_intervening no_intervening with_intervening no_intervening with_intervening no_intervening with_intervening no_intervening with_intervening no_intervening with_intervening 0 5 10 # responses Choice BUT SO AND BECAUSE OR [no connective] Figure 4: Distribution of participants’ first choice of conjunction for passages with in other words. Each participant saw only one variant. Each vertical bar represents a passage with the responses from each participant, color-coded by conjunction. are being conveyed, it can’t be a consequence of after all. As such, this response split must depend on the reasoning that supports the inference of coherence between the two segments, separate from the adverbial itself. We test the hypothesis that the response split is independent of the presence or absence of after all. Starting with 16 passages that originally contained after all, we created a variant of each passage without the adverbial. 
The conjunction insertion task was the same as with the other datasets. 5 Results 5.1 In other words: Inference and adjacency Section 4.1 lays out the joint hypotheses that inferred relations in passages with in other words reflect two components of the lexical semantics of the adverbial (leading to the OR∼SO split) and that the presence of intervening material before in other words reduces the availability of those relations, favoring BUT instead. Figure 4 shows the predicted pattern: The no-intervening-content condition primarily yields OR/SO responses (with variation across passages on the OR-vs.-SO preference) with a relative increase in BUT responses in the intervening-content condition.2 Passage B corresponds to the pair of examples (2)/(6), and passage C reflects (7)/(8). For the analysis here and in Section 5.3, a relevant first-choice conjunction was chosen and the binary outcome of its insertion was modeled with a mixed-effect logistic regression. Here, the insertion of OR indeed varied with the presence/absence of intervening material (β = −1.569, p < 0.005). We posit that increases in BUT associated with the intervening content indicate either an interruption of the meta-linguistic tangent or an intention to signal a contrast with the negative affect of the 2For Passage P in Figure 4, participants may have linked the in other words clause to the intervening material itself. tangent itself (e.g., “I don’t know where. . . ”, “frustrating way of putting it”, “how this is measured is unclear”). We speculate that the presence of BECAUSE in passages with intervening content may arise when that content implies that the situation is somehow surprising, which in turn merits explanation (e.g., “it’s an UNUSUAL role for her”, “their ability to actually work sensitively is perhaps QUESTIONABLE”, “it’s STRANGE to think of a planet being born”). These hypotheses will themselves need to be tested. 5.2 Otherwise: Inference from semantic features of segments As noted in Section 4.2, passages containing otherwise were used to test how semantic properties of the segments themselves influenced conjunction choice. The categorization of passages by the researchers (16 ARGUMENTATION, 16 EXCEPTION, 16 ENUMERATION) predicts the conjunctions chosen by participants. In aggregate, ≈99% of responses to ARGUMENTATION passages were BECAUSE or OR or both. ≈92% of responses to EXCEPTION passages were BUT, AND, or both BUT and AND. And ≈98% of responses to ENUMERATION passages were BUT, AND, OR, or some subset thereof. For analysis, a mixed-effect logistic regression modeled the binary outcome of BUT insertion and showed significant variation across the three categories (p < 0.001). This measure captures the difference between pairs of categories: ARGUMENTATION permits BECAUSE and OR (hence BUT is rare) while ENUMERATION permits BUT and OR (hence BUT is present) and EXCEPTION favors BUT (hence BUT is very frequent). All pairwise comparisons yielded a main effect of category on this dependent measure (p’s < 0.001). Turning to individual passages, participant choices are shown in Figures 5–7. 
For ARGUMENTATION (Figure 5), the effect is uniformly strong, with all passages showing BECAUSE or OR as 2263 A B C D E F G H I J K L M N O P first second first second first second first second first second first second first second first second first second first second first second first second first second first second first second first second 0 10 20 # responses Choice BUT SO AND BECAUSE OR OR,BUT OR,SO OR,AND OR,BECAUSE AND,OR,BUT AND,OR,SO AND,OR,SO,BUT [no connective] Figure 5: Distribution of first and second choice conjunctions for ARGUMENTATION otherwise. Labels such as “OR,BUT” are for multiple second choices. Each vertical bar represents a passage with the responses from each participant, color-coded by conjunction. Enlarged B/W versions of Figures 4–8 are available via the “notes” link on the paper in the ACL anthology. A B C D E F G H I J K L M N O P first second first second first second first second first second first second first second first second first second first second first second first second first second first second first second first second 0 10 20 # responses Choice BUT SO AND BECAUSE OR OR,AND [no connective] Figure 6: Distribution of first and second choice conjunctions for EXCEPTION otherwise. The label “OR,AND” in the legend implies both as second choices. A B C D E F G H I J K L M N O P first second first second first second first second first second first second first second first second first second first second first second first second first second first second first second first second 0 10 20 # responses Choice BUT SO AND BECAUSE OR OR,BUT SO,OR OR,AND AND,OR,SO BUT,AND [no connective] Figure 7: Distribution of first and second choice conjunctions for ENUMERATION otherwise. Labels in the legend such as “SO,OR” are for multiple second choices. 2264 A B C D E F G H I J K L M N O parallel non_parallel parallel non_parallel parallel non_parallel parallel non_parallel parallel non_parallel parallel non_parallel parallel non_parallel parallel non_parallel parallel non_parallel parallel non_parallel parallel non_parallel parallel non_parallel parallel non_parallel parallel non_parallel parallel non_parallel 0 5 10 # responses Choice BUT SO AND BECAUSE OR [no connective] Figure 8: Instead passages, pairing a parallel variant and a causal variant. Each column shows the distribution of participants’ first choice in the conjunction-insertion task. Each participant saw only one variant. participants’ top choice, with OR or BECAUSE chosen as equivalent (shown in the columns labelled “second”). For EXCEPTION (Figure 6), BUT is consistently the participants’ top choice. There are a few deviations from this near uniform endorsement of BUT for EXCEPTION (Figure 6, passages L–P). Any hypotheses, however, would require further experimentation to test. For example, in passage M (see (15)) and P (see (16)), participants rarely identified any conjunction as conveying the same sense as BUT. However, when their top choice was BECAUSE, they also selected OR as conveying the same sense. As noted above, BECAUSE and OR predominate with otherwise used in ARGUMENTATION. This raises the question of why passages M and P lead some participants to infer ARGUMENTATION and other participants, either EXCEPTION or ENUMERATION. (15) Democrats insist that the poor should be the priority, and that tax relief should be directed at them _____ otherwise they lack a cogent vision of the needs of a new economy. 
(16) He said that the proposed bill would give states more flexibility in deciding whether they wanted to use the Federal money for outright grants to municipalities or to set up loan programs _____ otherwise it left last fall’s Congressional legislation unchanged. Finally, though the pattern for ENUMERATION (Figure 7) is harder to see, combinations of BUT, OR and AND predominate as participants’ top choices, with a few tokens of BECAUSE and SO, but too few to analyse as anything but noise. The above results reflect researcher-assigned use labels. However, the confusion matrix in Table 1 shows that on the whole, participants agree with that assignment. The column labelled Multiple is for cases where participants offered two paraphrases. For ARGUMENTATION, at least one paraphrase always corresponded to EXCEPTION, while for ENUMERATION, it did so for most of these tokens (9/14). We comment on this below. While there was less agreement when participants offered multiple paraphrases for researcherassigned EXCEPTION, there may be too few tokens here to draw any kind of conclusion. In any case, the results for ARGUMENTATION and ENUMERATION agree both across participants (in what paraphrase they choose when they don’t choose the researcher-assigned label) and within participants (in what pairs of paraphrases they gave for the original passage). The above results support our hypothesis that variability in participants’ choice of conjunctions follows from both the lexical semantics of otherwise and the relation that participants infer between the segments in the passage. 5.3 Instead: Inference from a single manipulated property On aggregate, participants responded very differently to the parallel and causal variants of instead passages (cf. Section 4.3). Figure 8 shows that in all cases, the parallel variant yielded more BUT responses, whereas the non-parallel (causal) variant yielded significantly more SO responses (main effect of (non-)parallelism: β=−7.0008, p<0.001).3 Some of these results are very strong. For example, Passage A (17) drew all BUT responses for the 3We analyzed only 15 passages for instead and after all, due to a presentation error of the 16th for these adverbials. Participant Researcher ARGUMENTATION ENUMERATION EXCEPTION Multiple ARGUMENTATION 401 (91.5%) 4 25 18 ENUMERATION 23 364 (81.4%) 46 14 EXCEPTION 21 29 393 (87.7%) 5 Table 1: Researcher labels assigned to otherwise passages vs. labels implied by participant paraphrases 2265 A B C D E F G H I J K L M N O with_adverbial no_adverbial with_adverbial no_adverbial with_adverbial no_adverbial with_adverbial no_adverbial with_adverbial no_adverbial with_adverbial no_adverbial with_adverbial no_adverbial with_adverbial no_adverbial with_adverbial no_adverbial with_adverbial no_adverbial with_adverbial no_adverbial with_adverbial no_adverbial with_adverbial no_adverbial with_adverbial no_adverbial with_adverbial no_adverbial 0 5 10 # responses Choice BUT SO AND BECAUSE [no connective] Figure 9: Distribution of first choice in conjunction selection task for passages with after all parallel variant in (17a) and all SO responses for the causal variant in (17b), as did Passage B. In a few cases, however, the parallel variant drew variable responses, even while its causal variant drew strong SO responses. This is true of Passage O, with parallel and causal variants in (18a–b). (17) a. They could have been playing football in the village green _____ instead they played in the street. b. 
They didn’t like playing football in the village green _____ instead they played in the street. (18) a. Smugglers nowadays don’t use overland passages _____ instead they use the seas to transport their goods. b. Smugglers’ overland passages nowadays are too visible _____ instead they use the seas to transport their goods. One possible explanation is that participants varied in the role they assigned to the positive claim in the second segment of (18a) – either as a reason for the negative claim in the first segment (BECAUSE), as a contrast with that claim (BUT), or as its result (SO). Although manipulating the segment to enhance either parallelism or causality can change participant responses, it is clear that parallelism alone doesn’t guarantee contrast. 5.4 After all: Adverb adds little to inference Figure 9 shows participant choice of conjunction when after all is present and when it is absent. Their choice is largely the same for passages A–F and K–N, with and without the adverbial. As for passage O, since AND can contingently substitute for BUT (Knott, 1996), the response pattern can be considered the same as well. A by-passage correlation between the rate of BUT and BECAUSE responses across the two conditions confirms this similarity (R2=.70, F(1,13)=30.98, p<0.001). The outlier is passage G: (19) There was a testy moment driving over the George Washington Bridge when the toll-taker charged him $24 for his truck and trailer _____ after all it was New York. With after all, the majority of participants chose BUT as best expressing how the two segments are connected, while without it, the majority chose BECAUSE. Whatever explanation we gave here would be pure speculation. We trust that the fact that the other 14 passages demonstrate the predicted effect provides sufficient evidence that splits in participant responses are not simply a result of the presence of a discourse adverbial. 6 Conclusion While our previous work showed that multiple discourse relations can hold between two segments – relations at the same semantic level, simultaneously available to a reader – we provided no evidence as to what influences the particular relations that are taken to be available. Our current experiments have provided some such evidence. Specifically, we have shown that participant responses to systematically manipulated passages involving discourse adverbials can be explained in terms of both the lexical semantics of discourse adverbials and properties of the passages that contain them. As the conjunctions chosen by participants convey senses that differ from those of the discourse adverbials, we also provided evidence for the simultaneous availability of multiple coherence relations that arise from both explicit signals and inference. We hope the reader is now convinced that, in both psycholinguistic research on discourse coherence and computational work on discourse parsing, one needs to identify and examine evidence for coherence involving more than one discourse relation. Acknowledgments This project has been supported by a grant from the Nuance Foundation. In addition, a Leverhulme Trust Prize in Languages & Literatures to H. Rohde has enabled her to devote more time to research. We thank Amir Zeldes and the anonymous reviewers for their helpful feedback on the paper. 2266 References M.J. Cuenca and M.J. Marin. 2009. Co-occurrence of discourse markers in Catalan and Spanish oral narrative. Journal of Pragmatics, 41(5):899–914. Jesse Dunietz, Lori Levin, and Jaime Carbonell. 2017. 
The BECauSE Corpus 2.0: Annotating causality and overlapping relations. In Proceedings of the 11th Linguistic Annotation Workshop, pages 95–104, Valencia, Spain. Vanessa Wei Feng and Graeme Hirst. 2012. Text-level discourse parsing with rich linguistic features. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 60–68, Jeju Island, Korea. Bruce Fraser. 2013. Combinations of contrastive discourse markers in English. International Review of Pragmatics, 5:318–340. Barbara Grosz and Candace Sidner. 1986. Attention, intention and the structure of discourse. Computational Linguistics, 12(3):175–204. Michael Halliday and Ruqaiya Hasan. 1976. Cohesion in English. Longman. Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 13–24, Baltimore, Maryland. Alistair Knott. 1996. A Data-driven Methodology for Motivating a Set of Coherence Relations. Ph.D. dissertation, Department of Artificial Intelligence, University of Edinburgh. Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2014. A PDTB-styled end-to-end discourse parser. Natural Language Engineering, pages 151–184. William C. Mann, Christian M. I. M. Matthiessen, and Sandra A. Thompson. 1989. Rhetorical structure theory and text analysis. Technical Report ISI/RR89-242, USC-ISI, Marina del Rey CA. William C. Mann and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. Text, 8(3):243–281. Daniel Marcu. 2000. The theory and practice of discourse parsing and summarization. MIT Press. Eleni Miltsakaki, Nikhil Dinesh, Rashmi Prasad, Aravind Joshi, and Bonnie Webber. 2005. Experiments on sense annotation and sense disambiguation of discourse connectives. In Proceedings of the Fourth Workshop on Treebanks and Linguistic Theories (TLT’05), Barcelona, Spain. Johanna Moore and Martha Pollack. 1992. A problem for RST: The need for multi-level discouse analysis. Computational Linguistics, 18(4):537–544. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse TreeBank 2.0. In Proceedings of the 6th International Conference on Language Resources and Evaluation, pages 2961–2968, Marrakech, Morocco. Rashmi Prasad, Bonnie Webber, and Aravind Joshi. 2014. Reflections on the Penn Discourse TreeBank, comparable corpora and complementary annotation. Computational Linguistics, 40(4):921–950. Hannah Rohde, Anna Dickinson, Chris Clark, Annie Louis, and Bonnie Webber. 2015. Recovering discourse relations: Varying influence of discourse adverbials. In Proceedings of the First Workshop on Linking Computational Models of Lexical, Sentential and Discourse-level Semantics, pages 22–31, Lisbon, Portugal. Hannah Rohde, Anna Dickinson, Nathan Schneider, Christopher Clark, Annie Louis, and Bonnie Webber. 2016. Filling in the blanks in understanding discourse adverbials: Consistency, conflict, and context-dependence in a crowdsourced elicitation task. In Proceedings of the Tenth Linguistic Annotation Workshop (LAW-X), pages 49–58, Berlin, Germany. Hannah Rohde, Anna Dickinson, Nathan Schneider, Annie Louis, and Bonnie Webber. 2017. Exploring substitutability through discourse adverbials and multiple judgments. In Proceedings of the 12th International Conference on Computational Semantics (IWCS), Montpellier, France. Merel Scholman and Vera Demberg. 2017. 
Crowdsourcing discourse interpretations: On the influence of context and the reliability of a connective insertion task. In Proceedings of the 11th Linguistic Annotation Workshop, pages 24–33, Valencia, Spain. Manfred Stede. 2012. Discourse Processing. Morgan & Claypool Publishers. Bonnie Webber. 2013. What excludes an alternative in coherence relations? In Proceedings of the 10th International Conference on Computational Semantics, Potsdam, Germany. Bonnie Webber, Aravind Joshi, and Alistair Knott. 2000. The anaphoric nature of certain discourse connectives. In Making Sense: From Lexeme to Discourse, Groningen, The Netherlands. Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Rashmi Prasad, Christopher Bryant, and Attapol T. Rutherford. 2015. The CoNLL-2015 Shared Task on Shallow Discourse Parsing. In Proceedings of the Nineteenth Conference on Computational Language Learning: Shared Task, pages 1–16, Beijing, China. Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Attapol Rutherford, Bonnie Webber, Chuan Wang, and Hongmin Wang. 2016. CoNLL 2016 Shared Task 2267 on Multilingual Shallow Discourse Parsing. In Proceedings of the 20th Conference on Computational Natural Language Learning – Shared Task, pages 1– 19, Berlin, Germany.
2018
210
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2268–2277 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2268 A Spatial Model for Extracting and Visualizing Latent Discourse Structure in Text Shashank Srivastava∗ Carnegie Mellon University Pittsburgh, PA 15213, USA [email protected] Nebojsa Jojic Microsoft Research Redmond, WA 98052, USA [email protected] Abstract We present a generative probabilistic model of documents as sequences of sentences, and show that inference in it can lead to extraction of long-range latent discourse structure from a collection of documents. The approach is based on embedding sequences of sentences from longer texts into a 2- or 3-D spatial grids, in which one or two coordinates model smooth topic transitions, while the third captures the sequential nature of the modeled text. A significant advantage of our approach is that the learned models are naturally visualizable and interpretable, as semantic similarity and sequential structure are modeled along orthogonal directions in the grid. We show that the method can capture discourse structures in narrative text across multiple genres, including biographies, stories, and newswire reports. In particular, our method can capture biographical templates from Wikipedia, and is competitive with state-ofthe-art generative approaches on tasks such as predicting the outcome of a story, and sentence ordering. 1 Introduction The ability to identify discourse patterns and narrative themes from language is useful in a wide range of applications and data analysis. From a perspective of language understanding, learning such latent structure from large corpora can provide background information that can aid machine reading. For example, computers can use such knowledge to predict what is likely to happen next ∗*Work done while first author was an intern at Microsoft Research in a narrative (Mostafazadeh et al., 2016), or reason about which narratives are coherent and which do not make sense (Barzilay and Lapata, 2008). Similarly, knowledge of discourse is increasingly important for language generation models. Modern neural generation models, while good at capturing surface properties of text – by fusing elements of syntax and style – are still poor at modeling long range dependencies that go across sentences (Li and Jurafsky, 2017; Wang et al., 2017). Models of long range flow in the text can thus be useful as additional input to such methods. Previously, the question of modeling discourse structure in language has been explored through several lenses, including from perspectives of linguistics, cognitive science and information retrieval. Prominent among linguistic approaches are Discourse Representation Theory (Asher, 1986) and Rhetorical Structure Theory (Mann and Thompson, 1988); which formalize how discourse context can constrain the semantics of a sentence, and lay out ontologies of discourse relation types between parts of a document. This line of research has been largely constrained by the unavailability of corpora of discourse relations, which are expensive to annotate. Another line of research has focused on the task of automatic script induction, building on earlier work in the 1970’s (Schank and Abelson, 1977). More recently, methods based on neural distributed representations have been explored (Li and Hovy, 2014; Kalchbrenner and Blunsom, 2013; Le and Mikolov, 2014) to model the flow of discourse. 
While these methods have had varying degrees of success, they are largely opaque and hard to interpret. In this work, we seek to provide a scalable model that can extract latent sequential structures from a collection of documents, and can be naturally visualized to provide a summary of the learned semantics and discourse trajectories. In this work, we present an approach for extract2269 Figure 1: Modeling principle for Sequential Counting Grids. We design the method to capture semantic similarities between documents along XY planes (e.g., biographies might be more similar to literary fiction than news reports), as well extract sequential trajectories along the Z axes similar to those shown. The sequence of sentences in a document is latently aligned to positions in the grid, such that the model prefers alignments of contiguous sentences to grid cells that are spatially close. ing and visualizing sequential structure from a collection of text documents. Our method is based on embedding sentences in a document in a 3dimensional grid, such that contiguous sentences in the document are likely to be embedded in the same order in the grid. Further, sentences across documents that are semantically similar are also likely to be embedded in the same neighborhood in the grid. By leveraging the sequential order of sentences in a large document collection, the method can induce lexical semantics, as well as extract latent discourse trajectories in the documents. Figure 1 shows a conceptual schematic of our approach. The method can learn semantic similarity (across XY planes), as well as sequential discourse chains (along the Z-axis). The parameters and latent structure of the grid are learned by optimizing the likelihood of a collection of documents under a generative model. Our method outperforms state-of-the-art generative methods on two tasks: predicting the outcome of a story and coherence prediction; and is seen to yield a flexible range of interpretable visualizations in different domains of text. Our method is scalable, and can incorporate a broad range of features. In particular, the approach can work on simple tokenized text. The remainder of this paper is organized as follows. In Section 2, we briefly summarize other related work. In Section 3, we describe our method in detail. We present experimental results in Section 4, and conclude with a brief discussion. 2 Related work Building on linguistic theories of discourse and text coherence, several computational approaches have attempted to model discourse structure from multiple perspectives. Prominent among these are Narrative Event Chains (Chambers and Jurafsky, 2008) which learn chains of events that follow a pattern in a unsupervised framework, and the Entity grid model (Barzilay and Lapata, 2008), which represents sentences in a context in terms of discourse entities occurring in them and trains coherence classifiers over this representation. Other work extends these using better models of events and discourse entities (Lin et al., 2011; Pichotta and Mooney, 2015). Louis and Nenkova (2012) use manually provided syntactic patterns for sentence representation, and model transitions in text as Markov probabilities, which is related to our work. However, while they use simple HMMs over discrete topics, our method allows for a richer model that also captures smooth transition across them. Approaches such as Kalchbrenner and Blunsom (2013); Li et al. 
(2014); Li and Jurafsky (2017) model text through recurrent neural architectures, but are hard to interpret and visualize. Other approaches have explored applications related to modeling narrative discourse in context of limited tasks such as story cloze (Mostafazadeh et al., 2016) and identifying similar narratives (Chaturvedi et al., 2018). From a large scale document-mining perspective, the question of extracting intra-document structure remains largely underexplored. While early models such as LDA completely ignore ordering and discourse elements of a documents, other methods that use distributed embeddings of documents are opaque (Le and Mikolov, 2014), even while they can in principle model sequential structure within a document. Methods such as HMM-LDA (Griffiths et al., 2005) and Topics-over-time (Wang and McCallum, 2006) address the related question of topic evolution in a stream of documents, but these approaches are too coarse to model intra-document sequential structure. In terms of our technical approach, we build on previous research on gridbased models (Jojic and Perina, 2011), which have previously been used for topic-modeling for images and text as unstructured bags-of-features. 2270 3 Sequential CG model In this section, we present our method, which we call Sequential Counting Grids, or Sequential CG. We first present our notation, model formulation and training approach. We discuss how the method is designed to incorporate smoothness and sequential structure, and how the method can be efficiently scaled to train on large document collections. In Section 3.2, we present a mixture model variant that combines Sequential CG with a unigram language model. 3.1 Model description We represent a document as a sequence s of sentences, s = {s1, s2 . . . sD}, where D represents the number of sentences in the document. In general, we assume each sentence is represented as a multiset of features si = {cz}i, where ci z represents the count of the feature indexed by z in the ith sentence in the sequence.1 The Sequential CG consists of a 3-D grid G of size Ex × Ey × Ez, where Ex, Ey and Ez denote the extent of the grid along the X, Y and Z-axes (see Figure 1). Let us denote an index of a position in the grid by an integer-valued vector i = (ixiyiz). The three components of the index together specify a XY location as well as a depth in the grid. The Sequential CG model is parametrized by two sets of parameters, πi,z and Pij. Here, πi,z represents a multinomial distribution over the vocabulary of features z for each cell in the grid G, i.e. P z πi,z = 1 ∀i ∈G. To induce smoothness across XY planes, we further define histogram distributions hi,z, which average the π distributions in a 2-D neighborhood Wi (of size specified by W = [Wx, Wy]) around the grid position i. This notation follows Jojic and Perina (2011). hi,z = 1 WxWy X i′∈Wi πi′,z (1) The generative model assumes that individual sentences in a document are generated by h distributions in the grid. Movements from one position i to another j in the grid are modeled as transition probabilities Pij. The generative process consists of the following. We uniformly sample a starting location i1 in the grid. 
We sample words in the first 1These may simply consist of tokens (words, entities and MWEs) in the sentence, but can include additional information, such as sentiment or event annotations, or other discrete sentence-level representations sentence s1 from πi1, and sample the next position i2 from the distribution Pi1,:, and so on till we generate sD. The alignments I = [i1, i2 . . . iD] of individual sentences in a document with positions in the grid are latent variables in our model. Given the sequence of alignments I for a document, the conditional likelihood of generating s is given as a product of generating individual sentences: p(s| I) = D Y d p({cd z}| id) = D Y d=1 Y z (hid,z)cd z (2) Since the alignments of sequences to their positions in the grids I are latent, we marginalize over these to maximize the likelihood of an observed collection of documents S := {st}T t=1. Here, T is the total number of documents, and t is an index over individual documents. Using Jensen’s inequality, any distributions qt I over the hidden alignments It provide lower-bounds on the data log-likelihood. X t log p(st|π) = X t log X I p(st, I|π)  = X t log  X I qt I p(st|I)p(I)) qt I  ≥− X t X I qt I log qt I + X t X I qt I log p(s|I, π)p(I)) (3) Here, qt I denotes a variational distribution for each of the data sequences st. The learning algorithm consists of an iterative generalized EM procedure (which can be interpreted as a block-coordinate ascent in the latent variables qt I and the model parameters π and P). We maximize the lower bound in Eqn 3 exactly by setting qt I to the posterior distribution of the data for the current values of the parameters π (standard E step). Thus, we have qt I ∝p(s|I)p(I) = h D Y d=1 Y z (hid,z)cd z(t)ih D Y d=2 Pid−1,id i (4) We do not need to explicitly compute the posterior distribution qt I = p(I|s) at this point, but only use it to compute the relevant expectation statistics in the M-step. This can be done efficiently, as we 2271 see next. In the M-step, we consider qt I as fixed, and maximize the objective in terms of the model parameters π. Substituting this in Eqn 3, and focusing on terms that depend on the model parameters (π and P), we get L(π, P) ≥ X t X I qt I log p(s|I, π)p(I)) + Hq = X t X I qt I  X d X z cd z(t) log hid,z + X d log Pid−1,id  = X t X I Eqt I h X d X z Iit d=icd z(t) log hid,z i + X t X I Eqt I h X d Iit d−1=i,it d=j log Pij i (5) Maximizing the likelihood w.r.t. P leads to the following updates for the transition probabilities:2 Pij = P t P d P(it d−1 = i, it d = j) P t P d P(it d−1 = i) (6) Here, the pairwise state-probabilities P(it d−1 = i, it d = j) for adjacent sentences in a sequence can be efficiently calculated using the ForwardBackward algorithm. In Equation 5, rewriting the term containing h in terms of π using Eqn 1 (and ignoring constant terms WxWy), we get: X t X I Eqt I h X d X z Iit d=icd z(t) log X i′∈Wi πi′,z i = X t X I X d P(it d = i) X z cd z(t) log X i′∈Wi πi′,z (7) The presence of a summation inside of a logarithm makes maximizing this objective for π harder. For this, we simply use Jensen’s inequality introducing an additional variational distribution (for the latent grid positions within window Wi ), and maximize the lower bound. The final M-step update for π becomes: πi,z ∝  X t X d cd z(t) X k|i∈Wk P(it d = k) hk,z  πi,z (8) 2Since the optimal value for the concave problem P j yj log xj s.t. 
P j xj = 1 occurs when x∗ j ∝yj As before, the state-probabilities P(it d = i) can be computed using the Forward Backward algorithm. Intuitively, the expected alignments in the E-step are distributions over sequences of positions in the grid that best explain the structure of documents for the current value of Sequential CG parameters. In the M-step, we assume these distributions embedding documents into various parts of the grid as given, and update the multinomial parameters and transition probabilities. Modeling the transitions as having a Markov property allows us to use a dynamic programming approach (Forward Backward algorithm) to exactly compute the posterior probabilities required for parameter updates. We note that at the onset of the procedure, we need to initialize π randomly to break symmetries. Unless otherwise stated, in all experiments, we run EM to 200 iterations. Correlating space with sequential structure: The use of histogram distributions h to generate data forces smoothness in the model along XY planes due to adjacent cells in the grid sharing a large number of parameters that contribute to their histograms (due to overlapping windows). On the other hand, in order to induce spatial proximity in the grid to mimic the sequential flow of discourse in documents, we constrain the transition matrix P (which specifies transition preferences from one position in the grid to another) to a sparse banded matrix. In particular, a position i = (ix, iy, iz) in the grid can only transition to itself, its 4 neighbors in the same XY plane, and two cells in the succeeding two layers along the Z-axis ( (ix, iy, iz+1) and (ix, iy, iz+2)). This is enforced by fixing other elements in the transition matrix to 0, and only updating allowable transitions. As an important note about implementation details, we observe here that the Forward-Backward procedure (which is repeatedly invoked during model training) can be naturally formulated in terms of matrix operations.3 This allows training for the Sequential CG approach to be scalable for large document collections. In our formulation, we have presented a Sequential CG model for a 3-D grid. This can be adapted to learn 2-D grids (trellis) by setting Ey = 1. In our experiments, we found 3-D grids to be better 3To explain, if f d 1×G are forward probabilities for step d, and Od+1 G×G are observation probabilities for step d + 1, f d+1 = f d × P × Od computes forward probabilities for the next step in the sequence 2272 in terms of task performance and visualization (for a comparable number of parameters). 3.2 Mixture model The Sequential CG model described above can be combined with other generative models (e.g., language models) to get a mixture model. Here, we show how a unigram language model can be combined with Sequential CG. The rationale behind this is that since the Sequential CG is primarily designed to explain elements of documents that reflect sequential discourse structures, mixing with a context-agnostic distribution can allow it to focus specifically on elements that reflect sequential regularities. In experimental evaluation, we find that such a mixture model shows distinctly different behavior (see Section 4.1.1). Next, we briefly describe updates for this approach. Let µz denote the multinomial distribution over features for the unigram model to be mixed with the CG. Let βz be the mixing proportion for the feature z, i.e. 
an occurrence of z is presumed to come from the Sequential CG with probability βz, and from the unigram distribution with probability 1 −βz. Further, let αt z be binary variable that denotes whether a particular instance of z comes from the Sequential CG, or the unigram model. Then, Equation 2 changes to: p(s| I, α) = Y z,d  (hid,z)cd zβz αt zµcd z z (1−βz) 1−αt z Since we do not observe αt z (i.e., which distribution generated a particular feature in a particular document), they are additional latent variables in the model. Thus, we need to introduce a Bernoulli variational distribution qαzt. Doing this modifies relevant parts (containing qαzt) of Equation 5 to: X t X I qt I  X z qαzt log βz Y d hcd z(t) id,z  + (1 −qαzt) log 1 −βz)µ P d cd z z  + X d log Pid−1,id  + Hqαzt (9) This leads to the following additional updates for estimating qαzt (in the E-step)4 and βz (in the Mstep). 4Since the optimal value for the concave problem P j xj log yj xj s.t. P j xj = 1 occurs when x∗ j ∝yj qαzt = exp  PI i P(it d=i)cd z(t) log hid,z  βz exp  PI i P(it d=i)cdz(t) log hid,z  βz+µ P d cdz z (1−βz) In the M-step, βz can be estimated simply as the fraction of times z is generated from the Sequential CG component. βz = P t qαzt P t Iz 4 Evaluation In this section, we analyze the performance of our approach on text collections from several domains (including short stories, newswire text and biographies). We first qualitatively evaluate our generative method on a dataset of biographical extracts from Wikipedia, which visually illustrates biographical trajectories learned by the model, operationalizing our model concept from Figure 1 in real data (see Figure 2). Next, we evaluate our method on two standard tasks requiring document understanding: story cloze evaluation and sentence ordering. Since our method is completely unsupervised and is not tailored to specific tasks, competitive performance on these tasks would indicate that the method learns helpful regularities in text structure, useful for general-purpose language understanding. 4.1 Visualizing Wikipedia biographies We now qualitatively explore models learned by our method on a dataset of biographies from Wikipedia.5 For this, we use the data previously collected and processed by Bamman and Smith (2014). In all, the original dataset consists of extracts from biographies of about 240,000 individuals. For ease of training, we trained our method on a subset of the 50,0000 shortest documents from this set. The original paper uses the numerical order of dates mentioned in the biographies to extract biographical templates, but we do not use this information. Figure 2 visualizes a Sequential CG model learned on this dataset for on a grid of dimensions E = 8 × 8 × 5, and a histogram window W of dimensions 3 × 3 . In general, we found that using larger grids leads to smoother transitions and learning more intricate patterns including hierarchies of trajectories, but here we show a model with a 5For all our experimental evaluation, we tokenize and lemmatize text using the Stanford CoreNLP pipeline, but retain entity-names and contiguous text-spans representing MWEs as single units 2273 Figure 2: Visualization of a Sequential-CG model with grid size of 8×8×5, trained on 50,000 documents from the Wikipedia biographies dataset. Cells in the grid show words with highest probabilities (empty cells may indicate that no word has a substantially higher probability than others). smaller grid for ease of visualization. 
Here, the words in each cell in the grid denote the highest probability assignments in that cell. Larger fonts within a cell indicate higher probabilities. We observe that the method successfully extracts various biographical trajectories, as well as capture a notion of similarity between them. To explain, we observe that the lower-right part of the learned grid largely models documents about sportpersons (with discernable regions focusing on sports like soccer, American football and ice-hockey). On the other hand, the left-half of the grid is dominated by biographies of people from the arts and humanities (inlcuding artists, writers, musicians, etc.). The top-center of the grid focuses on academicians and scientists, while the top-right represents biographies of political and military leaders. We note smooth transitions between different regions, which is precisely what we would expect from the use of the smoothing filter that incorporates parameter sharing across cells in the method. Further, as we go across the layers in the figure, we note the biographical trajectories learned by the model across the entire grid. For example, from the grid, the life trajectory of a football player can be visualized as being drafted, signing and playing for a team, and eventually becoming a head-coach or a hall-of-fame inductee. 4.1.1 Effects of mixing The Sequential-CG method can be combined with other generative models in a mixture model, following the approach previously described in Section 3.2. A major reason to do this might be to allow the base model to handle general content, while allowing the Sequential-CG method to focus on modeling context-sensitive words only. Here, we empirically characterize the mixing behavior for different categories of words. Figure 3 shows the mixing proportion of different words when the Sequential-CG model is combined with a unigram model. In the figure, the X-axis corresponds to words in the dataset with decreasing frequency of occurrence, whereas the Yaxis denotes the mixing proportions βz learned by the mixture model. We note that the mixture model learns to explain frequent as well as the long-tail of rare words using the simple unigram model (as seen from low mixing proportion of Sequential-CG method). These regimes correspond to (1) stopwords and very common nouns, and (2) rare words respectively. In turn, this allows the SequentialCG component to preserve more probability mass to explain the intermediate content words. Thus, the Sequential-CG component only needs to model words that reflect useful statistical sequential patterns, without expending modeling effort on background content (common words) or noise (rare words). For the long tail of infrequent words, we observe that Sequential CG is much more likely to generate verbs and adjectives, rather than nouns. This is as we would expect, since verbs and adjectives often denote events and sentiments, which can 2274 Figure 3: Learned mixing proportion (βz) in the mixture model of Section 3.2 for words of different frequencies. βz denotes the probability of a word being generated from the Sequential CG model (rather than from the Unigram model). The Sequential CG learns to model content words (with intermediate ranks), and conserves modeling effort by avoiding modeling both very common words (that occur across contexts), as well as rare words. be important elements in discourse trajectories. 4.2 Story-cloze We next evaluate our method on the story-cloze task presented by Mostafazadeh et al. 
(2016), which tests common-sense understanding in context of children stories. The task consists of identifying the correct ending to a four-sentence long story (called context in the original paper) and two possible ending options. The dataset for the task consists of a collection of around 45K unlabeled 5-sentence long stories as well as 3742 5-sentence stories with two provided endings, with one labeled as the correct ending. For this task, we train our method on grids of dimension 15 × 15 × 6 (E), and histogram windows W of size 5 × 5 on the unlabeled collection of stories. At test time, for each story, we are provided two versions (a story-version v consists of the provided context c, followed by a possible ending e1, i.e. v = [c, e] ). For prediction, we need to define a goodness score Sv for a story-version. In the simplest case, this score can simply be the log-likelihood log pSCG(v) of the story-version, according to the Sequential-CG model. However, this is problematic since this is biased towards choosing shorter endings. To alleviate this, we define the goodness score by discounting the log-likelihood by the probability of the ending e itself, under a Accuracy Our Method variants Sequential CG + Unigram Mixture 0.602 Sequential CG + Brown clustering 0.593 Sequential CG + Sentiment 0.581 Sequential CG 0.589 Sequential CG (unnormalized) 0.531 DSSM 0.585 GenSim 0.539 Skip-thoughts 0.552 Narrative-Chain(Stories) 0.494 N-grams 0.494 Table 1: Performance of our approach on storycloze task from Mostafazadeh et al. (2016) compared with other unsupervised approaches (accuracy numbers as reported in Mostafazadeh et al. (2016)). simple unigram model. Sv = log pSCG(c, e) −log puni(e) The predicted ending is the story-version with a higher score. Table 1 shows the performance of variants of our approach for the task. Our baselines include previous approaches for the same task: DSSM is a deep-learning based approach, which maps the context and ending to the same space, and is the best-performing method in Mostafazadeh et al. (2016). GenSim and N-gram return the ending that is more similar to the context based on word2vec embeddings (Mikolov et al., 2013) and n-grams, respectively. Narrative-Chains computes the probability of each alternative based on eventchains, following the approach of Chambers and Jurafsky (2008). We note that our method improves on the previous best unsupervised methods for the task. This is quite surprising, since our Sequential-CG model in this case is trained on bag-of-lemma representations, and only needs sentence segmentation, tokenization and lemmatization for preprocessing. On the other hand, approaches such as Narrative-Chains require parsing and eventrecognition, while approaches such as GenSim require learning word embeddings on large text corpora for training. Further, we note that predicting the ending without normalizing for the probability of the words in the ending results in significantly weaker performance, as expected. We train another 2275 Figure 4: Illustrative story-cloze examples where the model correctly identifies the appropriate ending (model score in parentheses). variant of Sequential-CG with the sentence-level sentiment annotation (from Stanford CoreNLP) also added as a feature. This does not improve performance, consistent with findings in Mostafazadeh et al. (2016). 
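The prediction rule described above, scoring each candidate ending by its Sequential-CG log-likelihood discounted by the unigram log-probability of the ending, is straightforward to implement. A minimal sketch, assuming the two trained scoring functions are available (seqcg_loglik and unigram_loglik are our own names for hypothetical interfaces):

```python
def cloze_score(context, ending, seqcg_loglik, unigram_loglik):
    """Goodness score S_v = log p_SCG(c, e) - log p_uni(e) for one candidate ending.

    context : list of the four context sentences
    ending  : one candidate ending sentence
    seqcg_loglik(sentences) and unigram_loglik(sentence) are assumed to be
    provided by the trained Sequential-CG and unigram models, respectively.
    """
    return seqcg_loglik(context + [ending]) - unigram_loglik(ending)

def predict_ending(context, ending1, ending2, seqcg_loglik, unigram_loglik):
    """Return whichever candidate ending receives the higher discounted score."""
    s1 = cloze_score(context, ending1, seqcg_loglik, unigram_loglik)
    s2 = cloze_score(context, ending2, seqcg_loglik, unigram_loglik)
    return ending1 if s1 >= s2 else ending2
```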
We also experiment with a variant where we perform Brown clustering (Brown et al., 1992) of words in the unlabeled stories (K = 500 clusters), and include cluster-annotations as features for training the method. Doing this explicitly incorporates lexical similarity into the model, leading to a small improvement in performance. Finally, a mixture model consisting of the Sequential-CG and a unigram language model leads to a further improvement in performance. The performance of our unsupervised approach on this task indicates that it can learn discourse structures that are helpful for general language understanding. The story-cloze task has recently also been addressed as a shared task at EACL (Mostafazadeh et al., 2017) with a significantly expanded dataset, and achieving much higher performance. However, we note that the proposed best-performing approaches (Chaturvedi et al., 2017; Schwartz et al., 2017) for this task are all supervised, and hence not included here for comparison. Figure 4 shows examples where the model correctly identifies the ending. These show a mix of behavior such as sentiment coherence (identifying dissonance between ‘wonderful surprise’ and ‘stolen’) and modeling causation (police being called after being suspected). 4.3 Sentence Ordering We next evaluate our method on the sentence ordering task, which requires distinguishing an original Accidents Earthquakes Sequential CG 0.813 0.946 VLV-GM (2017) 0.770 0.931 HMM (2012) 0.822 0.938 HMM+Entity (2012) 0.842 0.911 HMM+Content (2012) 0.742 0.953 Discriminative approaches DM (2017) 0.930 0.992 Recursive (2014) 0.864 0.976 Entity-Grid (2008) 0.904 0.872 Graph (2013) 0.846 0.635 Table 2: Performance of our approach on sentence ordering dataset from Barzilay and Lapata (2008). document from a version consisting of permutations of sentences of the original (Barzilay and Lapata, 2008; Louis and Nenkova, 2012). For this, we use two datasets of documents and their permutations from Barzilay and Lapata (2008), which are used as standard evaluation for coherence prediction tasks. These consist of (i) reports of accidents from the National Transportation Safety Bureau (we refer to this data as accidents), and (ii) newswire reports about earthquake events from the Associated press (we refer to this as earthquakes). Each dataset consists of 100 training documents, and about 100 documents for testing. Also provided are about 20 generated permutations for each document (resulting in 1986 test pairs for accidents, and 1955 test pairs for earthquakes). Documents in accidents consist of between 6 and 19 sentences each, with a median of 11 sentences. Documents in earthquakes consist of between 4 and 30 sentences each, with a median of 10 sentences. Since the datasets for these tasks only have a relatively small number of training documents (100 each), we use Sequential-CG with smaller grids (3×3×15), and don’t train a mixture model (which needs to learn a parameter βz for each word in the vocabulary). Further, we train for a much smaller number of iterations to prevent overfitting (K = 3, chosen through cross-validation on the training set). During testing, since provided article pairs are simply permutations of each other and identical in content, we do not need to normalize as needed in Section 4.2. The score of a provided article is simply calculated as its log-likelihood. The article with higher likelihood is predicted to be the original. Table 2 shows performance of the method compared with other approaches for coherence prediction. 
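The test-time decision just described reduces to a log-likelihood comparison between an original article and each of its permutations. A short sketch of the pairwise evaluation, again assuming a seqcg_loglik scoring function from a trained model (an interface we introduce only for illustration):

```python
def ordering_accuracy(test_pairs, seqcg_loglik):
    """Fraction of (original, permuted) article pairs ranked correctly.

    test_pairs : list of (original_doc, permuted_doc), each a list of sentences.
    No unigram normalization is needed here, since the two versions of an
    article contain exactly the same words.
    """
    correct = sum(
        1 for original, permuted in test_pairs
        if seqcg_loglik(original) > seqcg_loglik(permuted)
    )
    return correct / len(test_pairs)
```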
We note that Sequential-CG performs com2276 Figure 5: Example of newswire report about an earthquake event. Bold fonts represent words that align particularly well with the learned model at corresponding points in the narrative. petitively with the state-of-the-art for generative approaches for the task, while needing no other annotation. In comparison, the HMM based approaches use significant annotation and syntactic features. Sequential-CG also outperforms several discriminative approaches for the task. In Figure 5 we illustrate the learned discourse trajectories in terms of the most salient features in each sentence. Words in bold are those identified by the model to be most context-appropriate at the corresponding point in the narrative. This is done by ranking words by the ratio between their probabilities (π:,z) in the grid weighted by alignment locations of the document (qt I), and unigram probabilities. 5 Conclusion We have presented a simple model for extracting and visualizing latent discourse structure from unlabeled documents. The approach is coarse, and does not have explicit models for important elements such as entities and events in a discourse. However, the method outperforms some previous approaches on document understanding tasks, even while ignoring syntactic structure within sentences. The ability to visualize learning is a key component of our method, which can find significant applications in data mining and data-discovery in large text collections. More generally, similar approaches can explore a wider range of scenarios involving sequences of text. While here our focus was on learning discourse structures at the document level, similar methods can also be used at other scales, such as for syntactic or morphological analysis. References Nicholas Asher. 1986. Belief in discourse representation theory. Journal of Philosophical Logic, 15(2):127–189. David Bamman and Noah Smith. 2014. Unsupervised discovery of biographical structure from text. Transactions of the Association for Computational Linguistics, 2:363–376. Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1–34. Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Classbased n-gram models of natural language. Computational linguistics, 18(4):467–479. Nathanael Chambers and Daniel Jurafsky. 2008. Unsupervised learning of narrative event chains. In ACL, pages 789–797. The Association for Computer Linguistics. Snigdha Chaturvedi, Haoruo Peng, and Dan Roth. 2017. Story comprehension for predicting what happens next. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1603–1614. Snigdha Chaturvedi, Shashank Srivastava, and Dan Roth. 2018. ‘Where have I heard this story before?’ : Identifying narrative similarity in movie remakes. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Thomas L Griffiths, Mark Steyvers, David M Blei, and Joshua B Tenenbaum. 2005. Integrating topics and syntax. In Advances in neural information processing systems, pages 537–544. Camille Guinaudeau and Michael Strube. 2013. Graphbased local coherence modeling. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 93–103. Nebojsa Jojic and Alessandro Perina. 2011. 
Multidimensional counting grids: Inferring word order from disordered bags of words. In Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, pages 547–556. AUAI Press. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse compositionality. ACL 2013, page 119. Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 2126 June 2014, pages 1188–1196. 2277 Jiwei Li and Eduard Hovy. 2014. A model of coherence based on distributed sentence representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2039–2048. Jiwei Li and Dan Jurafsky. 2017. Neural net models of open-domain discourse coherence. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 198–209. Jiwei Li, Rumeng Li, and Eduard H Hovy. 2014. Recursive deep models for discourse parsing. In EMNLP, pages 2061–2069. Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2011. Automatically evaluating text coherence using discourse relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 997–1006. Association for Computational Linguistics. Annie Louis and Ani Nenkova. 2012. A coherence model based on syntactic patterns. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1157–1168. Association for Computational Linguistics. William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse, 8(3):243–281. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of NAACLHLT, pages 839–849. Nasrin Mostafazadeh, Michael Roth, Annie Louis, Nathanael Chambers, and James Allen. 2017. LSDSem 2017 shared task: The story cloze test. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, pages 46–51, Valencia, Spain. Karl Pichotta and Raymond J Mooney. 2015. Learning statistical scripts with LSTM recurrent neural networks. In AAAI. Roger C Schank and Robert P Abelson. 1977. Scripts, plans, goals and understanding: an inquiry into human knowledge structures. Erlbaum. Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila Zilles, Yejin Choi, and Noah A Smith. 2017. The effect of different writing tasks on linguistic style: A case study of the roc story cloze task. arXiv preprint arXiv:1702.01841. Di Wang, Nebojsa Jojic, Chris Brockett, and Eric Nyberg. 2017. Steering output style and topic in neural response generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2140–2150, Copenhagen, Denmark. Association for Computational Linguistics. Xuerui Wang and Andrew McCallum. 2006. Topics over time: A non-markov continuous-time model of topical trends. 
In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’06, pages 424–433, New York, NY, USA. ACM.
2018
211
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2278–2288 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2278 Joint Reasoning for Temporal and Causal Relations Qiang Ning,1 Zhili Feng,2 Hao Wu,3 Dan Roth1,3 Department of Computer Science 1University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA 2University of Wisconsin-Madison, Madison, WI 53706, USA 3University of Pennsylvania, Philadelphia, PA 19104, USA [email protected], [email protected], haowu4,[email protected] Abstract Understanding temporal and causal relations between events is a fundamental natural language understanding task. Because a cause must occur earlier than its effect, temporal and causal relations are closely related and one relation often dictates the value of the other. However, limited attention has been paid to studying these two relations jointly. This paper presents a joint inference framework for them using constrained conditional models (CCMs). Specifically, we formulate the joint problem as an integer linear programming (ILP) problem, enforcing constraints that are inherent in the nature of time and causality. We show that the joint inference framework results in statistically significant improvement in the extraction of both temporal and causal relations from text.1 1 Introduction Understanding events is an important component of natural language understanding. An essential step in this process is identifying relations between events, which are needed in order to support applications such as story completion, summarization, and timeline construction. Among the many relation types that could exist between events, this paper focuses on the joint extraction of temporal and causal relations. It is well known that temporal and causal relations interact with each other and in many cases, the decision of one relation is made primarily based on evidence from the other. In Example 1, identifying the temporal relation between e1:died and e2:exploded is 1The dataset and code used in this paper are available at http://cogcomp.org/page/publication_ view/835 in fact a very hard case: There are no explicit temporal markers (e.g., “before”, “after”, or “since”); the events are in separate sentences so their syntactic connection is weak; although the occurrence time of e2:exploded is given (i.e., Friday) in text, it is not given for e1:died. However, given the causal relation, e2:exploded caused e1:died,it is clear that e2:exploded happened before e1:died. The temporal relation is dictated by the causal relation. Ex 1: Temporal relation dictated by causal relation. More than 10 people (e1:died) on their way to the nearest hospital, police said. A suicide car bomb (e2:exploded) on Friday in the middle of a group of men playing volleyball in northwest Pakistan. Since e2:exploded is the reason of e1:died, the temporal relation is thus e2 being before e1. Ex 2: Causal relation dictated by temporal relation. Mir-Hossein Moussavi (e3:raged) after government’s efforts to (e4:stifle) protesters. Since e3:raged is temporally after e4:stifle, e4 should be the cause of e3. On the other hand, causal relation extraction can also benefit from knowing temporal relations. In Example 2, it is unclear whether the government stifled people because people raged, or people raged because the government stifled people: both situations are logically reasonable. 
However, if we account for the temporal relation (that is, e4:stifle happened before e3:raged), it is clear that e4:stifle is the cause and e3:raged is the effect. In this case, the causal relation is dictated by the temporal relation. The first contribution of this work is proposing a joint framework for Temporal and Causal Reasoning (TCR), inspired by these examples. Assuming the availability of a temporal extraction system and a causal extraction system, the proposed joint framework combines these two using a constrained conditional model (CCM) (Chang et al., 2012) framework, with an integer linear pro2279 gramming (ILP) objective (Roth and Yih, 2004) that enforces declarative constraints during the inference phase. Specifically, these constraints include: (1) A cause must temporally precede its effect. (2) Symmetry constraints, i.e., when a pair of events, (A, B), has a temporal relation r (e.g., before), then (B, A) must have the reverse relation of r (e.g., after). (3) Transitivity constraints, i.e., the relation between (A, C) must be temporally consistent with the relation derived from (A, B) and (B, C). These constraints originate from the one-dimensional nature of time and the physical nature of causality and build connections between temporal and causal relations, making CCM a natural choice for this problem. As far as we know, very limited work has been done in joint extraction of both relations. Formulating the joint problem in the CCM framework is novel and thus the first contribution of this work. A key obstacle in jointly studying temporal and causal relations lies in the absence of jointly annotated data. The second contribution of this work is the development of such a jointly annotated dataset which we did by augmenting the EventCausality dataset (Do et al., 2011) with dense temporal annotations. This dataset allows us to show statistically significant improvements on both relations via the proposed joint framework. This paper also presents an empirical result of improving the temporal extraction component. Specifically, we incorporate explicit time expressions present in the text and high-precision knowledge-based rules into the ILP objective. These sources of information have been successfully adopted by existing methods (Chambers et al., 2014; Mirza and Tonelli, 2016), but were never used within a global ILP-based inference method. Results on TimeBank-Dense (Cassidy et al., 2014), a benchmark dataset with temporal relations only, show that these modifications can also be helpful within ILP-based methods. 2 Related Work Temporal and causal relations can both be represented by directed acyclic graphs, where the nodes are events and the edges are labeled with either before, after, etc. (in temporal graphs), or causes and caused by (in causal graphs). Existing work on temporal relation extraction was initiated by (Mani et al., 2006; Chambers et al., 2007; Bethard et al., 2007; Verhagen and Pustejovsky, 2008), Ex 3: Global considerations are needed when making local decisions. The FAA on Friday (e5:announced) it will close 149 regional airport control towers because of forced spending cuts. Before Friday’s (e6:announcement), it (e7:said) it would consider keeping a tower open if the airport convinces the agency it is in the ”national interest” to do so. which formulated the problem as that of learning a classification model for determining the label of each edge locally (i.e., local methods). 
A disadvantage of these early methods is that the resulting graph may break the symmetric and transitive constraints. There are conceptually two ways to enforce such graph constraints (i.e., global reasoning). CAEVO (Chambers et al., 2014) grows the temporal graph in a multi-sieve manner, where predictions are added sieve-by-sieve. A graph closure operation had to be performed after each sieve to enforce constraints. This is solving the global inference problem greedily. A second way is to perform exact inference via ILP and the symmetry and transitivity requirements can be enforced as ILP constraints (Bramsen et al., 2006; Chambers and Jurafsky, 2008; Denis and Muller, 2011; Do et al., 2012; Ning et al., 2017). We adopt the ILP approach in the temporal component of this work for two reasons. First, as we show later, it is straightforward to build a joint framework with both temporal and causal relations as an extension of it. Second, the relation between a pair of events is often determined by the relations among other events. In Ex 3, if a system is unaware of (e5, e6)=simultaneously when trying to make a decision for (e5, e7), it is likely to predict that e5 is before e72; but, in fact, (e5, e7)=after given the existence of e6. Using global considerations is thus beneficial in this context not only for generating globally consistent temporal graphs, but also for making more reliable pairwise decisions. Prior work on causal relations in natural language text was relatively sparse. Many causal extraction work in other domains assumes the existence of ground truth timestamps (e.g., (Sun et al., 2007; G¨uler et al., 2016)), but gold timestamps rarely exist in natural language text. In NLP, people have focused on causal relation identification using lexical features or discourse relations. For 2Consider the case that “The FAA e5:announced...it e7:said it would.. . ”. Even humans may predict that e5 is before e7. 2280 example, based on a set of explicit causal discourse markers (e.g., “because”, “due to”, and “as a result”), Hidey and McKeown (2016) built parallel Wikipedia articles and constructed an open set of implicit markers called AltLex. A classifier was then applied to identify causality. Dunietz et al. (2017) used the concept of construction grammar to tag causally related clauses or phrases. Do et al. (2011) considered global statistics over a large corpora, the cause-effect association (CEA) scores, and combined it with discourse relations using ILP to identify causal relations. These work only focused on the causality task and did not address the temporal aspect. However, as illustrated by Examples 1-2, temporal and causal relations are closely related, as assumed by many existing works (Bethard and Martin, 2008; Rink et al., 2010). Here we argue that being able to capture both aspects in a joint framework provides a more complete understanding of events in natural language documents. Researchers have started paying attention to this direction recently. For example, Mostafazadeh et al. (2016b) proposed an annotation framework, CaTeRs, which captured both temporal and causal aspects of event relations in common sense stories. CATENA (Mirza and Tonelli, 2016) extended the multi-sieve framework of CAEVO to extracting both temporal and causal relations and exploited their interaction through post-editing temporal relations based on causal predictions. In this paper, we push this idea forward and tackle the problem in a joint and more principled way, as shown next. 
3 Temporal and Causal Reasoning

In this section, we explain the proposed joint inference framework, Temporal and Causal Reasoning (TCR). To start with, we focus on introducing the temporal component, and clarify how to design the transitivity constraints and how to enforce other readily available prior knowledge to improve its performance. With this temporal component already explained, we further incorporate causal relations and complete the TCR joint inference framework. Finally, we transform the joint problem into an ILP so that it can be solved using off-the-shelf packages.

3.1 Temporal Component

Let R_T be the label set of temporal relations and E and T be the set of all events and the set of all time expressions (a.k.a. timex) in a document. For notation convenience, we use EE to represent the set of all event-event pairs; then ET and TT have obvious definitions. Given a pair in EE or ET, assume for now that we have corresponding classifiers producing confidence scores for every temporal relation in R_T. Let them be s_ee(·) and s_et(·), respectively. Then the inference formulation for all the temporal relations within this document is:

Ŷ = argmax_{Y ∈ 𝒴} ∑_{i ∈ EE} s_ee{i ↦ Y_i} + ∑_{j ∈ ET} s_et{j ↦ Y_j}    (1)

where Y_k ∈ R_T is the temporal label of pair k ∈ MM (let M = E ∪ T be the set of all temporal nodes), “k ↦ Y_k” represents the case where the label of pair k is predicted to be Y_k, Y is a vectorization of all these Y_k’s in one document, and 𝒴 is the constrained space that Y lies in. We do not include the scores for TT because the temporal relationship between timexes can be trivially determined using the normalized dates of these timexes, as was done in (Do et al., 2012; Chambers et al., 2014; Mirza and Tonelli, 2016). We impose these relations via equality constraints denoted as 𝒴_0. In addition, we add symmetry and transitivity constraints dictated by the nature of time (denoted by 𝒴_1), and other prior knowledge derived from linguistic rules (denoted by 𝒴_2), which will be explained subsequently. Finally, we set 𝒴 = ∩_{i=0}^{2} 𝒴_i in Eq. (1).

Transitivity Constraints. Let the dimension of Y be n. Then a standard way to construct the symmetry and transitivity constraints is shown in (Bramsen et al., 2006; Chambers and Jurafsky, 2008; Denis and Muller, 2011; Do et al., 2012; Ning et al., 2017):

𝒴_1 = { Y ∈ R_T^n | ∀ m_1, m_2, m_3 ∈ M: Y_(m_1,m_2) = Ȳ_(m_2,m_1), Y_(m_1,m_3) ∈ Trans(Y_(m_1,m_2), Y_(m_2,m_3)) }

where the bar sign is used to represent the reverse relation hereafter, and Trans(r_1, r_2) is a set comprised of all the temporal relations from R_T that do not conflict with r_1 and r_2. The construction of Trans(r_1, r_2) necessitates a clearer definition of R_T, the importance of which is often overlooked by existing methods. Existing approaches all followed the interval representation of events (Allen, 1984), which yields 13 temporal relations (denoted by R̃_T here) as shown in Fig. 1.

Figure 1: Two possible interpretations to the label set of R_T = {b, a, i, ii, s, v} for the temporal relations between (A, B). “x” means that the label is ignored. Brackets represent time intervals along the time axis. Scheme 2 is adopted consistently in this work.

Most systems used a reduced set, for example, {before, after, includes, is included, simultaneously, vague}.
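To make these constraints concrete before the formal ILP treatment, the toy sketch below performs the inference of Eq. (1) by brute force over the reduced label set, rejecting assignments that violate symmetry or transitivity. It is illustrative only: the paper solves the same objective as an ILP (Section 3.4), the Trans entries shown are a partial, hand-copied rendering of its Table 1, and names such as infer and consistent are ours.

```python
from itertools import product

# Reduced label set and its reverse map (b=before, a=after, i=includes,
# ii=is included, s=simultaneously, v=vague), as in Fig. 1.
LABELS = ["b", "a", "i", "ii", "s", "v"]
REVERSE = {"b": "a", "a": "b", "i": "ii", "ii": "i", "s": "s", "v": "v"}
# Partial transitivity table: TRANS[(r1, r2)] lists the labels allowed for
# (m1, m3) when (m1, m2) has r1 and (m2, m3) has r2. Entries follow Table 1;
# unlisted pairs are left unconstrained in this sketch.
TRANS = {
    ("b", "b"): {"b"}, ("a", "a"): {"a"},      # r, r -> r
    ("b", "s"): {"b"}, ("a", "s"): {"a"},      # r, s -> r
    ("b", "i"): {"b", "i", "v"},
    ("b", "ii"): {"b", "ii", "v"},
}

def consistent(y):
    """Check symmetry/transitivity for an assignment y over ordered event pairs."""
    events = sorted({e for pair in y for e in pair})
    def rel(a, b):  # uses the symmetry constraint to look up either orientation
        return y[(a, b)] if (a, b) in y else REVERSE[y[(b, a)]]
    for e1, e2, e3 in product(events, repeat=3):
        if len({e1, e2, e3}) < 3:
            continue
        allowed = TRANS.get((rel(e1, e2), rel(e2, e3)))
        if allowed is not None and rel(e1, e3) not in allowed:
            return False
    return True

def infer(scores):
    """scores[(e1, e2)][label] = local confidence; one orientation per event pair.

    Returns the consistent assignment maximizing the summed local scores
    (brute force, so only suitable for a handful of events).
    """
    pairs = list(scores)
    best, best_val = None, float("-inf")
    for labels in product(LABELS, repeat=len(pairs)):
        y = dict(zip(pairs, labels))
        val = sum(scores[p][y[p]] for p in pairs)
        if val > best_val and consistent(y):
            best, best_val = y, val
    return best
```

With six labels and only a few event pairs this exhaustive search is feasible; the ILP formulation in Section 3.4 is what makes the same objective tractable at document scale.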
For notation convenience, we denote them RT = {b, a, i, ii, s, v}. Using a reduced set is more convenient in data annotation and leads to better performance in practice. However, there has been limited discussion in the literature on how to interpret the reduced relation types. For example, is the “before” in RT exactly the same as the “before” in the original set ( ˜RT ) (as shown on the left-hand-side of Fig. 1), or is it a combination of multiple relations in ˜RT (the right-hand-side of Fig. 1)? We compare two reduction schemes in Fig. 1, where scheme 1 ignores low frequency labels directly and scheme 2 absorbs low frequency ones into their temporally closest labels. The two schemes barely have differences when a system only looks at a single pair of mentions at a time (this might explain the lack of discussion over this issue in the literature), but they lead to different Trans(r1, r2) sets and this difference can be magnified when the problem is solved jointly and when the label distribution changes across domains. To completely cover the 13 relations, we adopt scheme 2 in this work. The resulting transitivity relations are shown in Table 1. The top part of Table 1 is a compact representation of three generic rules; for instance, Line 1 means that the labels themselves are transitive. Note that during human annotation, if an annotator looks at a pair of events and decides that multiple well-defined relations can exist, he/she labels it vague; also, when aggregating the labels from multiple annotators, a label will be No. r1 r2 Trans(r1, r2) 1 r r r 2 r s r 3 r1 r2 Trans(¯r2, ¯r1) 4 b i b, i, v 5 b ii b, ii, v 6 b v b, i, ii, v 7 a i a, i, v 8 a ii a, ii, v 9 a v a, i, ii ,v 10 i v b, a, i, v 11 ii v b, a, ii, v Table 1: Transitivity relations based on the label set reduction scheme 2 in Fig. 1. If (m1, m2) 7→r1 and (m2, m3) 7→r2, then the relation of (m1, m3) must be chosen from Trans(r1, r2), ∀m1, m2, m3 ∈M. The top part of the table uses r to represent generic rules compactly. Notations: before (b), after (a), includes (i), is included (ii), simultaneously (s), vague (v), and ¯r represents the reverse relation of r. changed to vague if the annotators disagree with each other. In either case, vague is chosen to be the label when a single well-defined relation cannot be uniquely determined by the contextual information. This explains why a vague relation (v) is always added in Table 1 if more than one label in Trans(r1, r2) is possible. As for Lines 6, 9-11 in Table 1 (where vague appears in Column r2), Column Trans(r1,r2) was designed in such a way that r2 cannot be uniquely determined through r1 and Trans(r1,r2). For instance, r1 is after on Line 9, if we further put before into Trans(r1,r2), then r2 would be uniquely determined to be before, conflicting with r2 being vague, so before should not be in Trans(r1,r2). Enforcing Linguistic Rules. Besides the transitivity constraints represented by Y1 above, we also propose to enforce prior knowledge to further constrain the search space for Y . Specifically, linguistic insight has resulted in rules for predicting the temporal relations with special syntactic or semantic patterns, as was done in CAEVO (a state-of-the-art method). Since these rule predictions often have high-precision, it is worthwhile incorporating them in global reasoning methods as well. In the CCM framework, these rules can be represented as hard constraints in the search space for Y . 
Specifically, Y2 = { Yj = rule(j), ∀j ∈J (rule)} , (2) where J (rule) ⊆MM is the set of pairs that can be determined by linguistic rules, and rule(j) ∈ 2282 RT is the corresponding decision for pair j according to these rules. In this work, we used the same set of rules designed by CAEVO for fair comparison. 3.2 Full Model with Causal Relations Now we have presented the joint inference framework for temporal relations in Eq. (1). It is easier to explain our complete TCR framework on top of it. Let W be the vectorization of all causal relations and add the scores from the scoring function for causality sc(·) to Eq. (1). Specifically, the full inference formulation is now: ˆY , ˆW = arg max Y ∈Y,W ∈WY ∑ i∈EE see{i 7→Yi} (3) + ∑ j∈ET set{j 7→Yj} + ∑ k∈EE sc{k 7→Wk} where WY is the search space for W. WY depends on the temporal labels Y in the sense that WY = {W ∈Rm C |∀i, j ∈E, if W(i,j) = c, (4) then W(j,i) = ¯c, and Y(i,j) = b} where m is the dimension of W (i.e., the total number of causal pairs), RC = {c, ¯c, null} is the label set for causal relations (i.e., “causes”, “caused by”, and “no relation”), and W(i,j) is the causal label for pair (i, j). The constraint represented by WY means that if a pair of events i and j are labeled to be “causes”, then the causal relation between j and i must be “caused by”, and the temporal relation between i and j must be “before”. 3.3 Scoring Functions In the above, we have built the joint framework on top of scoring functions see(·), set(·) and sc(·). To get see(·) and set(·), we trained classifiers using the averaged perceptron algorithm (Freund and Schapire, 1998) and the same set of features used in (Do et al., 2012; Ning et al., 2017), and then used the soft-max scores in those scoring functions. For example, that means see{i 7→r} = wT r ϕ(i) ∑ r′∈RT wT r′ϕ(i), i ∈EE, r ∈RT , where {wr} is the learned weight vector for relation r ∈RT and ϕ(i) is the feature vector for pair i ∈EE. Given a pair of ordered events, we need sc(·) to estimate the scores of them being “causes” or “caused by”. Since this scoring function has the same nature as see(·), we can reuse the features from see(·) and learn an averaged perceptron for sc(·). In addition to these existing features, we also use prior statistics retrieved using our temporal system from a large corpus3, so as to know probabilistically which event happens before another event. For example, in Example 1, we have a pair of events, e1:died and e2:exploded. The prior knowledge we retrieved from that large corpus is that die happens before explode with probability 15% and happens after explode with probability 85%. We think this prior distribution is correlated with causal directionality, so it was also added as features when training sc(·). Note that the scoring functions here are implementation choice. The TCR joint framework is fully extensible to other scoring functions. 3.4 Convert the Joint Inference into an ILP Conveniently, the joint inference formulation in Eq. (3) can be rewritten into an ILP and solved using off-the-shelf optimization packages, e.g., (Gurobi Optimization, Inc., 2012). First, we define indicator variables yr i = I{Yi = r}, where I{·} is the indicator function, ∀i ∈MM, ∀r ∈ RT . Then let pr i = see{i 7→r} if i ∈EE, or pr i = set{i 7→r} if i ∈ET ; similarly, let wr j = I{Wi = r} be the indicator variables for Wj and qr j be the score for Wj = r ∈RC. Therefore, without constraints Y and WY for now, Eq. 
(3) can be written as: ˆy, ˆw = arg max ∑ i∈EE∪ET ∑ r∈RT pr i yr i + ∑ j∈EE ∑ r∈RC qr j wr j s.t. yr i , wr j ∈{0, 1}, ∑ r∈RT yr i = ∑ r∈RC wr j = 1 The prior knowledge represented as Y and WY can be conveniently converted into constraints for this optimization problem. Specifically, Y1 has two components, symmetry and transitivity: Y1 : ∀i, j, k ∈M, yr i,j = y¯r j,i, (symmetry) yr1 i,j + yr2 j,k − ∑ r3∈Trans(r1,r2) yr3 i,k ≤1 (transitivity) where ¯r is the reverse relation of r (i.e., ¯b = a, ¯i = ii, ¯s = s, and ¯v = v), and Trans(r1, r2) is defined in Table 1. As for the transitivity constraints, 3https://catalog.ldc.upenn.edu/ LDC2008T19, which is disjoint to the test set used here. Please refer to (Ning et al., 2018a) for more analysis on using this corpus to acquire prior knowledge that aids temporal relation classification. 2283 if both yr1 i,j and yr2 j,k are 1, then the constraint requires at least one of yr3 i,k, r3 ∈Trans(r1, r2) to be 1, which means the relation between i and k has to be chosen from Trans(r1, r2), which is exactly what Y1 is intended to do. The rules in Y2 is written as Y2 : yr j = I{rule(j)=r}, ∀j ∈J (rule) (linguistic rules) where rule(j) and J (rule) have been defined in Eq. (2). Converting the T T constraints, i.e., Y0, into constraints is as straightforward as Y2, so we omit it due to limited space. Last, converting the constraints WY defined in Eq. (4) can be done as following: WY : wc i,j = w¯c j,i ≤yb i,j, ∀i, j ∈E. The equality part, wc i,j = w¯c j,i represents the symmetry constraint of causal relations; the inequality part, wc i,j ≤yb i,j represents that if event i causes event j, then i must be before j. 4 Experiments In this section, we first show on TimeBank-Dense (TB-Dense) (Cassidy et al., 2014), that the proposed framework improves temporal relation identification. We then explain how our new dataset with both temporal and causal relations was collected, based on which the proposed method improves for both relations. 4.1 Temporal Performance on TB-Dense Multiple datasets with temporal annotations are available thanks to the TempEval (TE) workshops (Verhagen et al., 2007, 2010; UzZaman et al., 2013). The dataset we used here to demonstrate our improved temporal component was TB-Dense, which was annotated on top of 36 documents out of the classic TimeBank dataset (Pustejovsky et al., 2003). The main purpose of TB-Dense was to alleviate the known issue of sparse annotations in the evaluation dataset provided with TE3 (UzZaman et al., 2013), as pointed out in many previous work (Chambers, 2013; Cassidy et al., 2014; Chambers et al., 2014; Ning et al., 2017). Annotators of TB-Dense were forced to look at each pair of events or timexes within the same sentence or contiguous sentences, so that much fewer links were missed. Since causal link annotation is not available on TB-Dense, we only show our improvement in terms of temporal performance on # System P R F1 Ablation Study 1 Baseline 39.1 56.8 46.3 2 +Transitivity† 42.9 54.9 48.2 3 +ET 44.3 54.8 49.0 4 +Rules 45.4 58.7 51.2 5 +Causal 45.8 60.5 52.1 Existing Systems‡ 6 ClearTK 53.0 26.4 35.2 7 CAEVO 56.0 41.6 47.8 8 Ning et al. (2017) 47.1 53.3 50.0 †This is technically the same with Do et al. (2012), or Ning et al. (2017) without its structured learning component. ‡We added gold T T to both gold and system prediction. Without this, Systems 6-8 had F1=28.7, 45.7, and 48.5, respectively, same with the reported values in Ning et al. (2017). 
Table 2: Ablation study of the proposed system in terms of the standard temporal awareness metric. The baseline system is to make inference locally for each event pair without looking at the decisions from others. The “+” signs on lines 2-5 refer to adding a new source of information on top of its preceding system, with which the inference has to be global and done via ILP. All systems are significantly different to its preceding one with p<0.05 (McNemar’s test). TB-Dense. The standard train/dev/test split of TBDense was used and parameters were tuned to optimize the F1 performance on dev. Gold events and time expressions were also used as in existing systems. The contributions of each proposed information sources are analyzed in the ablation study shown in Table 2, where we can see the F1 score was improved step-by-step as new sources of information were added. Recall that Y1 represents transitivity constraints, ET represents taking eventtimex pairs into consideration, and Y2 represents rules from CAEVO (Chambers et al., 2014). System 1 is the baseline we are comparing to, which is a local method predicting temporal relations one at a time. System 2 only applied Y1 via ILP on top of all EE pairs by removing the 2nd term in Eq. (1); for fair comparison with System 1, we added the same ET predictions from System 1. System 3 incorporated ET into the ILP and mainly contributed to an increase in precision (from 42.9 to 44.3); we think that there could be more gain if more time expressions existed in the testset. With the help of additional high-precision rules (Y2), the temporal performance can further be improved, as shown by System 4. Finally, using the causal extraction obtained via (Do et al., 2011) in the joint framework, the proposed method 2284 achieved the best precision, recall, and F1 scores in our ablation study (Systems 1-5). According to the McNemar’s test (Everitt, 1992; Dietterich, 1998), all Systems 2-5 were significantly different to its preceding system with p<0.05. The second part of Table 2 compares several state-of-the-art systems on the same test set. ClearTK (Bethard, 2013) was the top performing system in TE3 in temporal relation extraction. Since it was designed for TE3 (not TB-Dense), it expectedly achieved a moderate recall on the test set of TB-Dense. CAEVO (Chambers et al., 2014) and Ning et al. (2017) were more recent methods and achieved better scores on TB-Dense. Compared with these state-of-the-art methods, the proposed joint system (System 5) achieved the best F1 score with a major improvement in recall. We think the low precision compared to System 8 is due to the lack of structured learning, and the low precision compared to System 7 is propagated from the baseline (System 1), which was tuned to maximize its F1 score. However, the effectiveness of the proposed information sources is already justified in Systems 1-5. 4.2 Joint Performance on Our New Dataset 4.2.1 Data Preparation TB-Dense only has temporal relation annotations, so in the evaluations above, we only evaluated our temporal performance. One existing dataset with both temporal and causal annotations available is the Causal-TimeBank dataset (Causal-TB) (Mirza and Tonelli, 2014). 
However, Causal-TB is sparse in temporal annotations and is even sparser in causal annotations: In Table 3, we can see that with four times more documents, Causal-TB still has fewer temporal relations (denoted as T-Links therein), compared to TB-Dense; as for causal relations (C-Links), it has less than two causal relations in each document on average. Note that the T-Link sparsity of Causal-TB originates from TimeBank, which is known to have missing links (Cassidy et al., 2014; Ning et al., 2017). The CLink sparsity was a design choice of Causal-TB in which C-Links were annotated based on only explicit causal markers (e.g., “A happened because of B”). Another dataset with parallel annotations is CaTeRs (Mostafazadeh et al., 2016b), which was primarily designed for the Story Cloze Test (Mostafazadeh et al., 2016a) based on common Doc Event T-Link C-Link TB-Dense 36 1.6k 5.7k EventCausality 25 0.8k 580 Causal-TB 183 6.8k 5.1k 318 New Dataset 25 1.3k 3.4k 172 Table 3: Statistics of our new dataset with both temporal and causal relations annotated, compared with existing datasets. T-Link: Temporal relation. C-Link: Causal relation. The new dataset is much denser than Causal-TB in both T-Links and C-Links. sense stories. It is different to the newswire domain that we are working on. Therefore, we decided to augment the EventCausality dataset provided in Do et al. (2011) with a modified version of the dense temporal annotation scheme proposed in Cassidy et al. (2014) and use this new dataset to showcase the proposed joint approach. The EventCausality dataset provides relatively dense causal annotations on 25 newswire articles collected from CNN in 2010. As shown in Table 3, it has more than 20 C-Links annotated per document on average (10 times denser than CausalTB). However, one issue is that its notion for events is slightly different to that in the temporal relation extraction regime. To construct parallel annotations of both temporal and causal relations, we preprocessed all the articles in EventCausality using ClearTK to extract events and then manually removed some obvious errors in them. To annotate temporal relations among these events, we adopted the annotation scheme from TB-Dense given its success in mitigating the issue of missing annotations with the following modifications. First, we used a crowdsourcing platform, CrowdFlower, to collect temporal relation annotations. For each decision of temporal relation, we asked 5 workers to annotate and chose the majority label as our final annotation. Second, we discovered that comparisons involving ending points of events tend to be ambiguous and suffer from low inter-annotator agreement (IAA), so we asked the annotators to label relations based on the starting points of each event. This simplification does not change the nature of temporal relation extraction but leads to better annotation quality. For more details about this data collection scheme, please refer to (Ning et al., 2018b) for more details. 4.2.2 Results Result on our new dataset jointly annotated with both temporal and causal relations is shown in Ta2285 Temporal Causal P R F1 Accuracy 1. Temporal Only 67.2 72.3 69.7 2. Causal Only 70.5 3. Joint System 68.6 73.8 71.1 77.3 Enforcing Gold Relations in Joint System 4. Gold Temporal 100 100 100 91.9 5. Gold Causal 69.3 74.4 71.8 100 Table 4: Comparison between the proposed method and existing ones, in terms of both temporal and causal performances. See Sec. 4.2.1 for description of our new dataset. 
Per the McNemar’s test, the joint system is significantly better than both baselines with p<0.05. Lines 4-5 provide the best possible performance the joint system could achieve if gold temporal/causal relations were given. ble 4. We split the new dataset into 20 documents for training and 5 documents for testing. In the training phase, the training parameters were tuned via 5-fold cross validation on the training set. Table 4 demonstrates the improvement of the joint framework over individual components. The “temporal only” baseline is the improved temporal extraction system for which the joint inference with causal links has NOT been applied. The “causal only” baseline is to use sc(·) alone for the prediction of each pair. That is, for a pair i, if sc{i 7→causes} > sc{i 7→caused by}, we then assign “causes” to pair i; otherwise, we assign “caused by” to pair i. Note that the “causal accuracy” column in Table 4 was evaluated only on gold causal pairs. In the proposed joint system, the temporal and causal scores were added up for all event pairs. The temporal performance got strictly better in precision, recall, and F1, and the causal performance also got improved by a large margin from 70.5% to 77.3%, indicating that temporal signals and causal signals are helpful to each other. According to the McNemar’s test, both improvements are significant with p<0.05. The second part of Table 4 shows that if gold relations were used, how well each component would possibly perform. Technically, these gold temporal/causal relations were enforced via adding extra constraints to ILP in Eq. (3) (imagine these gold relations as a special rule, and convert them into constraints like what we did in Eq. (2)). When using gold temporal relations, causal accuracy went up to 91.9%. That is, 91.9% of the C-Links satisfied the assumption that the cause is temporally before the effect. First, this number is much higher than the 77.3% on line 3, so there is still room for improvement. Second, it means in this dataset, there were 8.1% of the C-Links in which the cause is temporally after its effect. We will discuss this seemingly counter-intuitive phenomenon in the Discussion section. When gold causal relations were used (line 5), the temporal performance was slightly better than line 3 in terms of both precision and recall. The small difference means that the temporal performance on line 3 was already very close to its best. Compared with the first line, we can see that gold causal relations led to approximately 2% improvement in precision and recall in temporal performance, which is a reasonable margin given the fact that C-Links are often much sparser than T-Links in practice. Note that the temporal performance in Table 4 is consistently better than those in Table 2 because of the higher IAA in the new dataset. However, the improvement brought by joint reasoning with causal relations is the same, which further confirms the capability of the proposed approach. 5 Discussion We have consistently observed that on the TBDense dataset, if automatically tuned to optimize its F1 score, a system is very likely to have low precisions and high recall (e.g., Table 2). We notice that our system often predicts non-vague relations when the TB-Dense gold is vague, resulting in lower precision. However, on our new dataset, the same algorithm can achieve a more balanced precision and recall. This is an interesting phenomenon, possibly due to the annotation scheme difference which needs further investigation. 
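The significance statements around Tables 2 and 4 rely on McNemar's test over paired per-example correctness judgments. For reference, a minimal, dependency-free sketch of the continuity-corrected test statistic (our own illustrative code, not the authors' evaluation script):

```python
def mcnemar_statistic(gold, pred_a, pred_b):
    """Continuity-corrected McNemar statistic over paired predictions.

    b = examples only system A labels correctly, c = examples only system B
    labels correctly. The statistic is compared against 3.84, the chi-square
    critical value with one degree of freedom, for significance at p < 0.05.
    """
    b = sum(1 for g, a, p in zip(gold, pred_a, pred_b) if a == g and p != g)
    c = sum(1 for g, a, p in zip(gold, pred_a, pred_b) if a != g and p == g)
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)
```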
The temporal improvements in both Table 2 and Table 4 are relatively small (although statistically significant). This is actually not surprising because C-Links are much fewer than T-Links in newswires which focus more on the temporal development of stories. As a result, many T-Links are not accompanied with C-Links and the improvements are diluted. But for those event pairs having both T-Links and C-Links, the proposed joint framework is an important scheme to synthesize both signals and improve both. The comparison between Line 5 and Line 3 in Table 4 is a showcase of the effectiveness. We think that a deeper reason for the improvement achieved via a joint framework is that causality often encodes 2286 Ex 4: Cause happened after effect. The shares fell to a record low of ¥60 and (e8:finished) at ¥67 before the market (e9:closed) for the New Year holidays. As she (e10:prepares) to (e11:host) her first show, Crowley writes on what viewers should expect. humans prior knowledge as global information (e.g., “death” is caused by “explosion” rather than causes “explosion”, regardless of the local context), while temporality often focuses more on the local context. From this standpoint, temporal information and causal information are complementary and helpful to each other. When doing error analysis for the fourth line of Table 4, we noticed some examples that break the commonly accepted temporal precedence assumption. It turns out that they are not annotation mistakes: In Example 4, e8:finished is obviously before e9:closed, but e9 is a cause of e8 since if the market did not close, the shares would not finish. In the other sentence of Example 4, she prepares before hosting her show, but e11:host is the cause of e10:prepares since if not for hosting, no preparation would be needed. In both cases, the cause is temporally after the effect because people are inclined to make projections for the future and change their behaviors before the future comes. The proposed system is currently unable to handle these examples and we believe that a better definition of what can be considered as events is needed, as part of further investigating how causality is expressed in natural language. Finally, the constraints connecting causal relations to temporal relations are designed in this paper as “if A is the cause of B, then A must be before B”. People have suggested other possibilities that involve the includes and simultaneously relations. While these other relations are simply different interpretations of temporal precedence (and can be easily incorporated in our framework), we find that they rarely happen in our dataset. 6 Conclusion We presented a novel joint framework, Temporal and Causal Reasoning (TCR), using CCMs and ILP to the extraction problem of temporal and causal relations between events. To show the benefit of TCR, we have developed a new dataset that jointly annotates temporal and causal annotations, and then exhibited that TCR can improve both temporal and causal components. We hope that this notable improvement can foster more interest in jointly studying multiple aspects of events (e.g., event sequencing, coreference, parent-child relations) towards the goal of understanding events in natural language. Acknowledgements We thank all the reviewers for providing insightful comments and critiques. 
This research is supported in part by a grant from the Allen Institute for Artificial Intelligence (allenai.org); the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) - a research collaboration as part of the IBM AI Horizons Network; by DARPA under agreement number FA8750-132-0008; and by the Army Research Laboratory (ARL) under agreement W911NF-09-2-0053 (the ARL Network Science CTA). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, of the Army Research Laboratory or the U.S. Government. Any opinions, findings, conclusions or recommendations are those of the authors and do not necessarily reflect the view of the ARL. References James F Allen. 1984. Towards a general theory of action and time. Artificial intelligence 23(2):123–154. Steven Bethard. 2013. ClearTK-TimeML: A minimalist approach to TempEval 2013. In SemEval. volume 2, pages 10–14. Steven Bethard and James H Martin. 2008. Learning semantic links from a corpus of parallel temporal and causal relations. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers. Association for Computational Linguistics, pages 177–180. Steven Bethard, James H Martin, and Sara Klingenstein. 2007. Timelines from text: Identification of syntactic temporal relations. In IEEE International Conference on Semantic Computing (ICSC). pages 11–18. Philip Bramsen, Pawan Deshpande, Yoong Keok Lee, and Regina Barzilay. 2006. Inducing temporal 2287 graphs. In Proceedings of the Conference on Empirical Methods for Natural Language Processing (EMNLP). pages 189–198. Taylor Cassidy, Bill McDowell, Nathanel Chambers, and Steven Bethard. 2014. An annotation framework for dense event ordering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). pages 501–506. Nathanael Chambers. 2013. NavyTime: Event and time ordering from raw text. In SemEval. volume 2, pages 73–77. Nathanael Chambers, Taylor Cassidy, Bill McDowell, and Steven Bethard. 2014. Dense event ordering with a multi-pass architecture. Transactions of the Association for Computational Linguistics 2:273– 284. Nathanael Chambers and Dan Jurafsky. 2008. Jointly combining implicit constraints improves temporal ordering. In Proceedings of the Conference on Empirical Methods for Natural Language Processing (EMNLP). Nathanael Chambers, Shan Wang, and Dan Jurafsky. 2007. Classifying temporal relations between events. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions. pages 173–176. Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2012. Structured learning with constrained conditional models. Machine Learning 88(3):399–431. Pascal Denis and Philippe Muller. 2011. Predicting globally-coherent temporal structures from texts via endpoint inference and graph decomposition. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI). volume 22, page 1788. Thomas G Dietterich. 1998. Approximate statistical tests for comparing supervised classification learning algorithms. Neural computation 10(7):1895– 1923. Quang Xuan Do, Yee Seng Chan, and Dan Roth. 2011. Minimally supervised event causality identification. In Proc. 
of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Edinburgh, Scotland. Quang Xuan Do, Wei Lu, and Dan Roth. 2012. Joint inference for event timeline construction. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Jesse Dunietz, Lori Levin, and Jaime Carbonell. 2017. Automatically tagging constructions of causation and their slot-fillers. Transactions of the Association for Computational Linguistics 5:117–133. Brian S Everitt. 1992. The analysis of contingency tables. CRC Press. Yoav Freund and Robert E. Schapire. 1998. Large margin classification using the Perceptron algorithm. In Proceedings of the Annual ACM Workshop on Computational Learning Theory (COLT). pages 209– 217. Bas¸ak G¨uler, Aylin Yener, and Ananthram Swami. 2016. Learning causal information flow structures in multi-layer networks. In IEEE Global Conference on Signal and Information Processing (GlobalSIP). pages 1340–1344. Gurobi Optimization, Inc. 2012. Gurobi optimizer reference manual. http://www.gurobi.com. Christopher Hidey and Kathy McKeown. 2016. Identifying causal relations using parallel wikipedia articles. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Inderjeet Mani, Marc Verhagen, Ben Wellner, Chong Min Lee, and James Pustejovsky. 2006. Machine learning of temporal relations. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). pages 753–760. Paramita Mirza and Sara Tonelli. 2014. An analysis of causality between events and its relation to temporal information. In Proceedings the International Conference on Computational Linguistics (COLING). pages 2097–2106. Paramita Mirza and Sara Tonelli. 2016. CATENA: CAusal and TEmporal relation extraction from NAtural language texts. In The 26th International Conference on Computational Linguistics. pages 64–75. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016a. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the Annual Meeting of the North American Association of Computational Linguistics (NAACL). pages 839–849. Nasrin Mostafazadeh, Alyson Grealish, Nathanael Chambers, James Allen, and Lucy Vanderwende. 2016b. CaTeRS: Causal and temporal relation scheme for semantic annotation of event structures. In Proceedings of the 4th Workshop on Events: Definition, Detection, Coreference, and Representation. pages 51–61. Qiang Ning, Zhili Feng, and Dan Roth. 2017. A structured learning approach to temporal relation extraction. In Proceedings of the Conference on Empirical Methods for Natural Language Processing (EMNLP). Copenhagen, Denmark. Qiang Ning, Hao Wu, Haoruo Peng, and Dan Roth. 2018a. Improving temporal relation extraction with a globally acquired statistical resource. In Proceedings of the Annual Meeting of the North American Association of Computational Linguistics (NAACL). Association for Computational Linguistics. 2288 Qiang Ning, Hao Wu, and Dan Roth. 2018b. A multiaxis annotation scheme for event temporal relations. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, et al. 2003. The TIMEBANK corpus. In Corpus linguistics. volume 2003, page 40. Bryan Rink, Cosmin Adrian Bejan, and Sanda M Harabagiu. 
2010. Learning textual graph patterns to detect causal event relations. In FLAIRS Conference. Dan Roth and Wen-Tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Hwee Tou Ng and Ellen Riloff, editors, Proc. of the Conference on Computational Natural Language Learning (CoNLL). pages 1–8. Yizhou Sun, Kunqing Xie, Ning Liu, Shuicheng Yan, Benyu Zhang, and Zheng Chen. 2007. Causal relation of queries from temporal logs. In The International World Wide Web Conference. pages 1141– 1142. Naushad UzZaman, Hector Llorens, James Allen, Leon Derczynski, Marc Verhagen, and James Pustejovsky. 2013. SemEval-2013 Task 1: TempEval-3: Evaluating time expressions, events, and temporal relations. In Second Joint Conference on Lexical and Computational Semantics. volume 2, pages 1–9. Marc Verhagen, Robert Gaizauskas, Frank Schilder, Mark Hepple, Graham Katz, and James Pustejovsky. 2007. SemEval-2007 Task 15: TempEval temporal relation identification. In SemEval. pages 75–80. Marc Verhagen and James Pustejovsky. 2008. Temporal processing with the TARSQI toolkit. In 22nd International Conference on on Computational Linguistics: Demonstration Papers. pages 189–192. Marc Verhagen, Roser Sauri, Tommaso Caselli, and James Pustejovsky. 2010. SemEval-2010 Task 13: TempEval-2. In SemEval. pages 57–62.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2289–2299 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2289 Modeling Naive Psychology of Characters in Simple Commonsense Stories Hannah Rashkin†, Antoine Bosselut†, Maarten Sap†, Kevin Knight‡ and Yejin Choi†§ †Paul G. Allen School of Computer Science & Engineering, University of Washington §Allen Institute for Artificial Intelligence {hrashkin,msap,antoineb,yejin}@cs.washington.edu ‡ Information Sciences Institute & Computer Science, University of Southern California [email protected] Abstract Understanding a narrative requires reading between the lines and reasoning about the unspoken but obvious implications about events and people’s mental states — a capability that is trivial for humans but remarkably hard for machines. To facilitate research addressing this challenge, we introduce a new annotation framework to explain naive psychology of story characters as fully-specified chains of mental states with respect to motivations and emotional reactions. Our work presents a new largescale dataset with rich low-level annotations and establishes baseline performance on several new tasks, suggesting avenues for future research. 1 Introduction Understanding a story requires reasoning about the causal links between the events in the story and the mental states of the characters, even when those relationships are not explicitly stated. As shown by the commonsense story cloze shared task (Mostafazadeh et al., 2017), this reasoning is remarkably hard for both statistical and neural machine readers – despite being trivial for humans. This stark performance gap between humans and machines is not surprising as most powerful language models have been designed to effectively learn local fluency patterns. Consequently, they generally lack the ability to abstract away from surface patterns in text to model more complex implied dynamics, such as intuiting characters’ mental states or predicting their plausible next actions. In this paper, we construct a new annotation formalism to densely label commonsense short stories (Mostafazadeh et al., 2016) in terms of the mental states of the characters. The resultThe band instructor told the band to start playing. He often stopped the music when players were off-tone. They grew tired and started playing worse after a while. The instructor was furious and threw his chair. He cancelled practice and expected us to perform tomorrow. Instructor Players E M E M E M E M E M E M E M E M E M E M confident [esteem] [anger] need rest [esteem] frustrated angry afraid [disgust, fear] [esteem] M E E M [stability] Figure 1: A story example with partial annotations for motivations (dashed) and emotional reactions (solid). Open text explanations are in black (e.g., “frustrated”) and formal theory labels are in blue with brackets (e.g., “[esteem]”). ing dataset offers three unique properties. First, as highlighted in Figure 1, the dataset provides a fully-specified chain of motivations and emotional reactions for each story character as preand post-conditions of events. Second, the annotations include state changes for entities even when they are not mentioned directly in a sentence (e.g., in the fourth sentence in Figure 1, players would feel afraid as a result of the instructor throwing a chair), thereby capturing implied effects unstated in the story. 
Finally, the annotations encompass both formal labels from multiple theories of psychology (Maslow, 1943; Reiss, 2004; Plutchik, 1980) as well as open text descriptions of motivations and emotions, providing a comprehensive mapping between open text explanations and label categories (e.g., “to spend time with her son” 2290 Physiological needs Spiritual Growth Esteem Love/belonging Stability Maslow's needs curiosity, serenity, idealism, independence competition, honor, approval, power, status romance, belonging, family, social contact health, savings, order, safety food, rest Reiss' motives sadness surprise fear trust joy disgust anticipation anger Plutchik basic emotions Figure 2: Theories of Motivation (Maslow and Reiss) and Emotional Reaction (Plutchik). ! Maslow’s category love). Our corpus1 spans across 15k stories, amounting to 300k low-level annotations for around 150k character-line pairs. Using our new corpus, we present baseline performance on two new tasks focusing on mental state tracking of story characters: categorizing motivations and emotional reactions using theory labels, as well as describing motivations and emotional reactions using open text. Empirical results demonstrate that existing neural network models including those with explicit or latent entity representations achieve promising results. 2 Mental State Representations Understanding people’s actions, motivations, and emotions has been a recurring research focus across several disciplines including philosophy and psychology (Schachter and Singer, 1962; Burke, 1969; Lazarus, 1991; Goldman, 2015). We draw from these prior works to derive a set of categorical labels for annotating the step-by-step causal dynamics between the mental states of story characters and the events they experience. 2.1 Motivation Theories We use two popular theories of motivation: the “hierarchy of needs” of Maslow (1943) and the “basic motives” of Reiss (2004) to compile 5 coarse-grained and 19 fine-grained motivation categories, shown in Figure 2. Maslow’s “hierarchy of needs” are comprised of five categories, ranging from physiological needs to spiritual growth, which we use as coarse-level categories. Reiss (2004) proposes 19 more fine-grained categories that provide a more informative range of motivations. For example, even though they both relate 1We make our dataset publicly available at https:// uwnlp.github.io/storycommonsense/ to the physiological needs Maslow category, the food and rest motives from Reiss (2004) are very different. While the Reiss theory allows for finergrained annotations of motivation, the larger set of abstract concepts can be overwhelming for annotators. Motivated by Straker (2013), we design a hybrid approach, where Reiss labels are annotated as sub-categories of Maslow categories. 2.2 Emotion Theory Among several theories of emotion, we work with the “wheel of emotions” of Plutchik (1980), as it has been a common choice in prior literature on emotion categorization (Mohammad and Turney, 2013; Zhou et al., 2016). We use the eight basic emotional dimensions as illustrated in Figure 2. 2.3 Mental State Explanations In addition to the motivation and emotion categories derived from psychology theories, we also obtain open text descriptions of character mental states. These open text descriptions allow learning computational models that can explain the mental states of characters in natural language, which is likely to be more accessible and informative to end users than having theory categories alone. 
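For readers who want the label inventory of Section 2 in machine-readable form, the following minimal Python sketch lists the five Maslow categories with their Reiss sub-motives and the eight Plutchik emotions exactly as given in Figure 2; the dictionary layout and helper function are illustrative and not part of the released dataset.

```python
# Label inventory from Figure 2: five coarse Maslow categories, the 19 Reiss
# motives grouped under them, and the eight Plutchik basic emotions.
MASLOW_TO_REISS = {
    "physiological needs": ["food", "rest"],
    "stability": ["health", "savings", "order", "safety"],
    "love/belonging": ["romance", "belonging", "family", "social contact"],
    "esteem": ["competition", "honor", "approval", "power", "status"],
    "spiritual growth": ["curiosity", "serenity", "idealism", "independence"],
}

PLUTCHIK = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

def maslow_of(reiss_motive):
    """Map a fine-grained Reiss motive back to its coarse Maslow category."""
    for maslow, motives in MASLOW_TO_REISS.items():
        if reiss_motive in motives:
            return maslow
    raise KeyError(reiss_motive)

assert maslow_of("family") == "love/belonging"
assert sum(len(v) for v in MASLOW_TO_REISS.values()) == 19
```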
Collecting both theory categories and open text also allows us to learn the automatic mappings between the two, which generalizes the previous work of Mohammad and Turney (2013) on emotion category mappings. 3 Annotation Framework In this study, we choose to annotate the simple commonsense stories introduced by Mostafazadeh et al. (2016). Despite their simplicity, these stories pose a significant challenge to natural language understanding models (Mostafazadeh et al., 2017). 2291 (1) Entity Resolution (2a) Action
Resolution, (2b) Affect Resolution, (3a) Motivation, (3b) Emotional Reaction. Example outputs in the figure: character mentions: I/me (lines 1, 4, 5), Cousin (lines 1-5); character affect lines: I/me: 2-5, Cousin: 2, 5; character action lines: I/me: 1, 4, 5, Cousin: 3, 4, 5; motivation, line 1, I/me:
 love/family Emotional Reaction: … Line 3, I, me: sad/disgusted/angry Story: (1) I let my cousin stay with me. (2) He had nowhere to go. (3) However, he was a slob. (4) I was about to kick him out. (5) When he cooked me a huge breakfast, I decided he could stay. Figure 3: The annotation pipeline for the fine-grained annotations with an example story. In addition, they depict multiple interactions between story characters, presenting rich opportunities to reason about character motivations and reactions. Furthermore, there are more than 98k such stories currently available covering a wide range of everyday scenarios. Unique Challenges While there have been a variety of annotated resources developed on the related topics of sentiment analysis (Mohammad and Turney, 2013; Deng and Wiebe, 2015), entity tracking (Hoffart et al., 2011; Weston et al., 2015), and story understanding (Goyal et al., 2010; Ouyang and McKeown, 2015; Lukin et al., 2016), our study is the first to annotate the full chains of mental state effects for story characters. This poses several unique challenges as annotations require (1) interpreting discourse (2) understanding implicit causal effects, and (3) understanding formal psychology theory categories. In prior literature, annotations of this complexity have typically been performed by experts (Deng and Wiebe, 2015; Ouyang and McKeown, 2015). While reliable, these annotations are prohibitively expensive to scale up. Therefore, we introduce a new annotation framework that pipelines a set of smaller isolated tasks as illustrated in Figure 3. All annotations were collected using crowdsourced workers from Amazon Mechanical Turk. 3.1 Annotation Pipeline We describe the components and workflow of the full annotation pipeline shown in Figure 3 below. The example story in the figure is used to illustrate the output of various steps in the pipeline (full annotations for this example are in the appendix). (1) Entity Resolution The first task in the pipeline aims to discover (1) the set of characters Ei in each story i and (2) the set of sentences Sij in which a specific character j 2 Ei is explicitly mentioned. For example, in the story in Figure 3, the characters identified by annotators are “I/me” and “My cousin”, whom appear in sentences {1, 4, 5} and {1, 2, 3, 4, 5}, respectively. We use Sij to control the workflow of later parts of the pipeline by pruning future tasks for sentences that are not tied to characters. Because Sij is used to prune follow-up tasks, we take a high recall strategy to include all sentences that at least one annotator selected. (2a) Action Resolution The next task identifies whether a character j appearing in a sentence k is taking any action to which a motivation can be attributed. We perform action resolution only for sentences k 2 Sij. In the running example, we would want to know that the cousin in line 2 is not doing anything intentional, allowing us to omit this line in the next pipeline stage (3a) where a character’s motives are annotated. Description of state (e.g., “Alex is feeling blue”) or passive event participation (e.g., “Alex trips”) are not considered volitional acts for which the character may have an underlying motive. For each line and story character pair, we obtain 4 annotations. Because pairs can still be filtered out in the next stage of annotation, we select a generous threshold where only 2 annotators must vote that an intentional action took place for the sentence to be used as an input to the motivation annotation task (3a). 
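Because every stage of the pipeline relies on the same kind of vote counting (four annotations per character-line pair, with a generous threshold of two positive votes), the small Python sketch below illustrates the filtering decision that forwards a pair to the motivation task (3a); the function names and the data layout are assumptions made for this example, not the authors' tooling.

```python
def passes_action_filter(votes, min_votes=2):
    """True if at least `min_votes` of the (typically four) annotators judged
    that the character takes an intentional action in this line."""
    return sum(1 for v in votes if v) >= min_votes

def lines_for_motivation(action_votes, min_votes=2):
    """action_votes: dict mapping (story_id, line, character) to a list of
    boolean 'intentional action' votes. Keeps only the pairs that clear the
    threshold and are therefore sent on to motivation annotation (3a)."""
    return [key for key, votes in action_votes.items()
            if passes_action_filter(votes, min_votes)]

votes = {("s1", 2, "cousin"): [False, False, True, False],   # pruned
         ("s1", 1, "narrator"): [True, True, False, True]}   # kept
assert lines_for_motivation(votes) == [("s1", 1, "narrator")]
```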
(2b) Affect Resolution This task aims to identify all of the lines where a story character j has an emotional reaction. Importantly, it is often possible to infer the emotional reaction of a character j even when the character does not explicitly appear in a sentence k. For instance, in Figure 3, we want to annotate the narrator’s reaction to line 2 even though they are not mentioned because their emotional response is inferrable. We obtain 4 an2292 Spir. growth Esteem Love Stability Phys. 0.13 0.3 0.3 0.22 0.17 become experienced meet goal; to look nice to support his friends be employed; stay dry rest more; food 8740 9676 19768 11645 4953 % Annotations where selected # Unique 
[Figure 4, Plutchik panel: selection rates, example open-text responses (e.g., "outraged", "dismayed", "enraged", "touched", "excluded", "future oriented", "happier", "frozen in fear"), and counts of unique responses for the eight emotion categories: disgust, surprise, anger, trust, sadness, anticipation, joy, fear.]
 Open Text 3136 2964 2376 3205 2371 2055 2016 3868 Figure 4: Examples of open-text explanations that annotators provided corresponding with the categories they selected. The bars on the right of the categories represent the percentage of lines where annotators selected that category (out of those character-line pairs with positive motivation/emotional reaction). notations per character per line. The lines with at least 2 annotators voting are used as input for the next task: (3b) emotional reaction. (3a) Motivation We use the output from the action resolution stage (2a) to ask workers to annotate character motives in lines where they intentionally initiate an event. We provide 3 annotators a line from a story, the preceding lines, and a specific character. They are asked to produce a free response sentence describing what causes the character’s behavior in that line and to select the most related Maslow categories and Reiss subcategories. In Figure 3, an annotator described the motivation of the narrator in line 1 as wanting “to have company” and then selected the love (Maslow) and family (Reiss) as categorical labels. Because many annotators are not familiar with motivational theories, we require them to complete a tutorial the first time they attempt the task. (3b) Emotional Reaction Simultaneously, we use the output from the affect resolution stage (2b) to ask workers what the emotional response of a character is immediately following a line in which they are affected. As with the motives, we give 3 annotators a line from a story, its previous context, and a specific character. We ask them to describe in open text how the character will feel following the event in the sentence (up to three emotions). As a follow-up, we ask workers to compare their free responses against Plutchik categories by using 3-point likert ratings. In Figure 3, we include a response for the emotional reaction of the narrator in line 1. Even though the narrator was not mentioned directly in that line, an annotator recorded that they will react to their cousin being a slob by feeling “annoyed” and selected the Plutchik categories for sadness, disgust and anger. Fine-grained train dev test # annotated stories 10000 2500 2500 # characters / story 2.03 2.02 1.82 # char-lines w/ motiv 40154 8762 6831 # char-lines w/ emot 76613 14532 13785 Table 1: Annotated data statistics for each dataset 3.2 Dataset Statistics and Insights Cost The tasks corresponding to the theory category assignments are the hardest and most expensive in the pipeline (⇠$4 per story). Therefore, we obtain theory category labels only for a third of our annotated stories, which we assign to the development and test sets. The training data is annotated with a shortened pipeline with only open text descriptions of motivations and emotional reactions from two workers (⇠$1 per story). Scale Our dataset to date includes a total of 300k low-level annotations for motivation and emotion across 15,000 stories (randomly selected from the ROC story training set). It covers over 150,000 character-line pairs, in which 56k character-line pairs have an annotated motivation and 105k have an annotated change in emotion (i.e. a label other than none). Table 1 shows the break down across training, development, and test splits. Figure 4 shows the frequency of different labels being selected for motivational and emotional categories in cases with positive change. 
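To make the unit of annotation concrete, the sketch below shows one plausible in-memory representation of a character-line pair, the quantity counted in Table 1; the field names are hypothetical and do not reflect the released file format, and the theory labels are optional because they exist only for the dev and test splits.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CharLineAnnotation:
    """One character-line pair, the unit counted in Table 1 (hypothetical schema)."""
    story_id: str
    line_idx: int                     # position of the sentence in the 5-line story
    character: str
    motivation_text: List[str] = field(default_factory=list)  # open text, all splits
    emotion_text: List[str] = field(default_factory=list)     # open text, all splits
    maslow: Optional[List[str]] = None    # theory labels, dev/test only
    reiss: Optional[List[str]] = None
    plutchik: Optional[List[str]] = None

# The running example of Figure 3: the narrator's reaction to line 3.
example = CharLineAnnotation(
    story_id="roc-00001", line_idx=3, character="I (narrator)",
    emotion_text=["annoyed"],
    plutchik=["sadness", "disgust", "anger"],
)
```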
Agreements For quality control, we removed workers who consistently produced low-quality work, as discussed in the Appendix. In the categorization sets (Maslow, Reiss and Plutchik), we compare the performance of annotators by treating each individual category as a binary label (1 2293 Spiritual Growth Esteem Love Stability Physiological Other Idealism Indep Serenity Curiosity Other Approv Status Power Compet Honor Other Romance Contact Belong Family Other Health Tranq Order Savings Other Food Rest Spiritual Other -1.00 -0.11 -0.02 -0.11 0.00 -1.00 -0.02 -0.06 0.10 0.09 0.06 0.05 -0.07 -0.02 0.02 -0.06 -1.00 -0.03 0.05 0.04 0.02 -1.00 -0.10 -0.04 Idealism 0.02 0.26 0.04 0.04 -0.05 -0.09 0.01 0.01 0.06 -0.05 0.11 0.05 -0.18 0.00 0.02 -0.05 -0.07 -0.02 -0.01 0.01 -0.05 -0.03 -0.10 -0.17 Growth Indep 0.02 -0.01 0.18 0.10 0.13 0.05 0.02 0.06 0.01 0.02 0.00 0.00 -0.15 -0.06 -0.04 -0.10 -0.07 -0.09 -0.08 -0.01 -0.01 0.07 -0.14 -0.08 Serenity 0.01 0.03 0.06 0.27 0.06 0.01 -0.07 -0.07 -0.04 -0.09 -0.07 0.02 -0.07 -0.04 -0.08 -0.10 -0.07 0.01 -0.02 0.07 -0.05 0.02 -0.10 0.15 Curiosity -0.01 -0.02 0.13 0.03 0.40 -0.01 -0.04 -0.05 -0.04 -0.01 -0.08 -0.01 -0.16 -0.06 -0.07 -0.14 -0.01 -0.11 -0.12 -0.04 -0.10 0.08 -0.12 -0.07 Esteem Other -1.00 0.14 0.04 0.02 0.04 0.31 -0.05 0.03 0.10 -0.12 0.05 -1.00 -0.07 0.08 -1.00 -1.00 -1.00 -0.14 -0.05 -0.07 0.03 -1.00 -0.09 -0.04 Approv 0.02 -0.01 0.03 -0.08 -0.05 0.08 0.16 0.14 0.02 0.08 0.05 0.01 -0.03 0.02 0.08 -0.08 0.07 -0.06 -0.11 -0.04 -0.01 -0.11 -0.17 -0.18 Status -0.04 0.00 0.07 -0.05 -0.01 0.04 0.14 0.18 0.08 0.12 0.02 0.05 -0.07 -0.05 0.07 -0.14 0.06 -0.08 -0.12 -0.06 0.00 -0.13 -0.17 -0.15 Power 0.04 0.06 0.02 -0.10 -0.03 0.12 0.01 0.06 0.25 0.13 0.10 -0.06 -0.13 -0.08 -0.03 -0.05 0.02 -0.08 -0.06 -0.01 -0.01 -0.01 -0.15 -0.10 Compet -0.05 0.00 0.01 -0.12 -0.01 0.02 0.09 0.12 0.14 0.42 0.07 -0.05 -0.25 -0.09 0.07 -0.17 -0.03 -0.11 -0.14 -0.14 -0.10 -0.18 -0.26 -0.22 Honor 0.00 0.17 -0.01 -0.04 -0.06 -0.01 0.07 0.05 0.09 0.06 0.14 -0.03 -0.12 -0.04 0.07 -0.03 -0.03 -0.07 -0.05 0.01 0.01 0.04 -0.14 -0.10 Love Other 0.03 0.00 0.02 -0.07 -0.05 -1.00 0.09 0.06 -0.05 -0.03 0.04 0.14 -0.03 0.17 0.07 -0.02 -0.03 -0.09 -0.11 -0.07 -0.14 0.00 -0.14 -0.04 Romance 0.07 -0.13 -0.15 -0.07 -0.18 -1.00 -0.04 -0.09 -0.13 -0.19 -0.10 -0.01 0.65 0.03 -0.01 0.01 -0.10 -0.23 -0.20 -0.13 -0.29 -0.20 -0.20 -0.11 Contact 0.03 0.00 -0.07 -0.02 -0.07 0.01 0.00 -0.02 -0.07 -0.08 -0.02 0.15 0.04 0.39 0.11 -0.02 -0.02 -0.17 -0.17 -0.14 -0.19 -0.07 -0.15 -0.14 Belong -0.06 -0.02 -0.01 -0.01 -0.01 -0.02 0.07 0.07 0.00 0.03 0.05 0.01 0.02 0.10 0.11 -0.08 -0.02 -0.03 -0.08 -0.06 -0.06 -0.05 -0.11 -0.12 Family -0.02 -0.06 -0.12 0.00 -0.09 -0.03 -0.08 -0.09 -0.08 -0.19 0.00 0.01 -0.02 -0.03 -0.06 0.48 -0.06 -0.06 -0.08 -0.05 -0.19 -0.05 -0.11 -0.12 Stability Other -1.00 -0.02 0.07 0.03 -0.02 -1.00 -0.04 0.05 -0.07 -0.03 0.02 0.08 -0.07 -0.14 0.03 -0.07 -1.00 0.00 -0.11 0.08 0.12 0.18 -0.07 -0.03 Health -0.03 -0.04 -0.06 -0.02 -0.13 -0.08 -0.05 -0.06 -0.09 -0.10 -0.05 -0.07 -0.18 -0.20 -0.10 -0.06 0.00 0.45 0.15 -0.02 -0.16 0.14 -0.03 0.09 Tranq -0.07 -0.06 -0.05 -0.04 -0.10 0.02 -0.11 -0.08 -0.02 -0.16 -0.01 -0.15 -0.19 -0.16 -0.09 -0.08 0.01 0.10 0.42 0.13 -0.06 0.09 -0.18 -0.02 Order -0.03 0.00 -0.02 0.05 -0.07 -0.04 -0.03 -0.05 -0.01 -0.17 -0.01 -0.03 -0.15 -0.11 -0.05 -0.08 0.06 -0.02 0.11 0.24 0.14 -0.01 -0.16 0.04 Savings 0.06 0.02 0.01 -0.09 -0.10 0.01 -0.02 -0.01 0.00 -0.10 0.01 -0.16 -0.28 -0.16 -0.08 -0.16 0.09 -0.14 -0.05 0.09 0.45 -0.10 -0.12 -0.17 Physiolog. 
[Figure 5, remaining matrix rows for the Physiological block (Other, Food, Rest) and legend. Highlighted disagreement pairs: Health (Stability) vs. Physiological needs; Idealism vs. Honor; Approval vs. Belonging; Serenity vs. Rest. Axes: Annotator 1 vs. Annotator 2; NPMI color scale from below 0 up to a maximum of
 0.68 0.10 NPMI 0.34 Disagreements Figure 5: NPMI confusion matrix on motivational categories for all annotator pairs with color scaling for legibility. The highest values are generally along diagonal or within Maslow categories (outlined in black). We highlight a few common points of disagreement between thematically similar categories. Label Type PPA KA % Agree w/ Maj. Lbl Maslow Dev .77 .30 0.88 Test .77 .31 0.89 Reiss Dev .91 .24 0.95 Test .91 .24 0.95 Plutchik Dev .71 .32 0.84 Test .70 .29 0.83 Table 2: Agreement Statistics (PPA = Pairwise percent agreement of worker responses per binary category, KA= Krippendorff’s Alpha) if they included the category in their set of responses) and averaging the agreement per category. For Plutchik scores, we count ‘moderately associated’ ratings as agreeing with ‘highly’ associated’ ratings. The percent agreement and Krippendorff’s alpha are shown in Table 2. We also compute the percent agreement between the individual annotations and the majority labels.2 These scores are difficult to interpret by themselves, however, as annotator agreement in our categorization system has a number of properties that are not accounted for by these metrics (disagreement preferences – joy and trust are closer than joy and anger – that are difficult to quantify in a principled way, hierarchical categories map2Majority label for the motivation categories is what was agreed upon by at least two annotators per category. For emotion categories, we averaged the point-wise ratings and counted a category if the average rating was ≥2. ping Reiss subcategories from Maslow categories, skewed category distributions that inflate PPA and deflate KA scores, and annotators that could select multiple labels for the same examples). To provide a clearer understanding of agreement within this dataset, we create aggregated confusion matrices for annotator pairs. First, we sum the counts of combinations of answers between all paired annotations (excluding none labels). If an annotator selected multiple categories, we split the count uniformly among the selected categories. We compute NPMI over the total confusion matrix. In Figure 5, we show the NPMI confusion matrix for motivational categories. In the motivation annotations, we find the highest scores on the diagonal (i.e., Reiss agreement), with most confusions occurring between Reiss motives in the same Maslow category (outlined black in Figure 5). Other disagreements generally involve Reiss subcategories that are thematically similar, such as serenity (mental relaxation) and rest (physical relaxation). We provide this analysis for Plutchik categories in the appendix, finding high scores along the diagonal with disagreements typically occurring between categories in a “positive emotion” cluster (joy, trust) or a “negative emotion” cluster (anger, disgust,sadness). 4 Tasks The multiple modes covered by the annotations in this new dataset allow for multiple new tasks to be explored. We outline three task types below, covering a total of eight tasks on which to evaluate. 2294 Explanation Generation State Classification Annotation Classification Character, Story context, Line Encoder LogReg LSTM Open-text
 Explanation Figure 6: General model architectures for three new task types Differences between task type inputs and outputs are summarized in Figure 6. State Classification The three primary tasks involve categorizing the psychological states of story characters for each of the label sets (Maslow, Reiss, Plutchik) collected for the dev and test splits of our dataset. In each classification task, a model is given a line of the story (along with optional preceding context lines) and a character and predicts the motivation (or emotional reaction). A binary label is predicted for each of the Maslow needs, Reiss motives or Plutchik categories. Annotation Classification Because the dev and test sets contain paired classification labels and free text explanations, we propose three tasks where a model must predict the correct Maslow/Reiss/Plutchik label given an emotional reaction or motivation explanation. Explanation Generation Finally, we can use the free text explanations to train models to describe the psychological state of a character in free text (examples in Figure 4). These explanations allow for two conditional generation tasks where the model must generate the words describing the emotional reaction or motivation of the character. 5 Baseline Models The general model architectures for the three tasks are shown in Figure 6. We describe each model component below. The state classification and explanation generation models could be trained separately or in a multi-task set-up. In the state classification and explanation generation tasks, a model is given a line from a story xs containing N words {ws 0, ws 1, . . . , ws N} from vocabulary V , a character in that story ej 2 E where E is the set of characters in the story, and (optionally) the preceding sentences in the story C = {x0 . . . , xs−1} containing words from vocabulary V . A representation for a character’s psychological state is encoded as: he = Encoder(xs, C[ej]) (1) where C[ej] corresponds to the concatenated subset of sentences in C where ej appears. 5.1 Encoders While the end classifier or decoder is different for each task, we use the same set of encoders based on word embeddings, common neural network architectures, or memory networks to formulate a representation of the sentence and character, he. Unless specified, he is computed by encoding separate vector representations for the sentence (xs ! hs) and character-specific context (C[ej] ! hc) and concatenating these encodings (he = [hc; hs]). We describe the encoders below: TF-IDF We learn a TD-IDF model on the full training corpus of Mostafazadeh et al. (2016) (excluding the stories in our dev/test sets). To encode the sentence, we extract TF-IDF features for its words, yielding vs 2 RV . A projection and nonlinearity is applied to these features to yield hs: hs = φ(Wsvs + bs) (2) where Ws 2 Rd⇥H. The character vector hc is encoded in the same way on sentences in the context pertaining to the character. GloVe We extract pretrained Glove vectors (Pennington et al., 2014) for each word in V . The word embeddings are max-pooled, yielding embedding vs 2 RH, where H is the dimensionality of the Glove vectors. Using this max-pooled representation, hs and hc are extracted in the same manner as for TF-IDF features (Equation 2). CNN We implement a CNN text categorization model using the same configuration as Kim (2014) to encode the sentence words. A sentence is represented as a matrix, vs 2 RM⇥d where each row is a word embedding xs n for a word ws n 2 xs. vs = [xs 0, xs 1, . . . 
, xs N] (3) hs = CNN(vs) (4) 2295 where CNN represents the categorization model from (Kim, 2014). The character vector hc is encoded in the same way with a separate CNN. Implementation details are provided in the appendix. LSTM A two-layer bi-LSTM encodes the sentence words and concatenates the final time step hidden states from both directions to yield hs. The character vector hc is encoded the same way. REN We use the “tied” recurrent entity network from Henaff et al. (2017). A memory cell m is initialized for each of the J characters in the story, E = {e0, . . . , eJ}. The REN reads documents one sentence at a time and updates mj for ej 2 E after reading each sentence. Unlike the previous encoders, all sentences of the context C are given to the REN along with the sentence xs. The model learns to distribute encoded information to the correct memory cells. The representation passed to the downstream model is: he = {mj}s (5) where {mj}s is the memory vector in the cell corresponding to ej after reading xs. Implementation details are provided in the appendix. NPN We also include the neural process network from Bosselut et al. (2018) with “tied” entities, but “untied” actions that are not grounded to particular concepts. The memory is initialized and accessed similarly as the REN. Exact implementation details are provided in the appendix. 5.2 State Classifier Once the sentence-character encoding he is extracted, the state classifier predicts a binary label ˆyz for every category z 2 Z where Z is the set of category labels for a particular psychological theory (e.g., disgust, surprise, etc. in the Plutchik wheel). We use logistic regression as a classifier: ˆyz = σ(Wzhe + bz) (6) where Wz and bz are a label-specific set of weights and biases for classifying each label z 2 Z. 5.3 Explanation Generator The explanation generator is a single-layer LSTM (Hochreiter and Schmidhuber, 1997) that receives the encoded sentence-character representation he and predicts each word yt in the explanation using the same method from Sutskever et al. (2014). Implementation details are provided in the appendix. 5.4 Annotation Classifier For annotation classification tasks, words from open-text explanations are encoded with TF-IDF features. The same classifier architecture from Section 5.2 is used to predict the labels. 6 Experimental Setup 6.1 Training State Classification The dev set D is split into two portions of 80% (D1) and 20% (D2). D1 is used to train the classifier and encoder. D2 is used to tune hyperparameters. The model is trained to minimize the weighted binary cross entropy of predicting a class label yz for each class z: L = Z X z=1 γzyz log ˆyz +(1−γz)(1−yz) log(1−ˆyz) (7) where Z is the number of labels in each of the three classifications tasks and γz is defined as: γz = 1 −e−p P(yz) (8) where P(yz) is the marginal class probability of a positive label for z in the training set. Annotation Classification The dev set is split in the same manner as for state classification. The TF-IDF features are trained on the set of training annotations Dt coupled with those from D1. The model must minimize the same loss as in Equation 7. Details are provided in the appendix. Explanation Generation We use the training set of open annotations to train a model to predict explanations. 
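Before turning to the decoder, here is a self-contained illustration of the classification side described above: a max-pooled pretrained-embedding encoder as in Eq. 2, concatenation of sentence and character-context encodings, per-label logistic regression as in Eq. 6, and the class-weighted binary cross entropy of Eqs. 7-8. The PyTorch sketch below is one possible reading rather than the authors' implementation; the choice of tanh as the nonlinearity, all dimensions, and the exact construction of the class weights γz are assumptions.

```python
import torch
import torch.nn as nn

class MaxPoolStateClassifier(nn.Module):
    """Max-pooled embedding encoder (Eq. 2) plus per-label logistic regression (Eq. 6)."""
    def __init__(self, emb_dim=300, hidden_dim=128, num_labels=8):
        super().__init__()
        self.proj_sent = nn.Linear(emb_dim, hidden_dim)   # W_s, b_s for the sentence
        self.proj_char = nn.Linear(emb_dim, hidden_dim)   # same form for the context
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)  # one W_z, b_z per label

    def forward(self, sent_embs, char_embs):
        # sent_embs: (batch, n_words, emb_dim) pretrained vectors of the line x_s
        # char_embs: (batch, n_ctx_words, emb_dim) vectors of the context lines C[e_j]
        h_s = torch.tanh(self.proj_sent(sent_embs.max(dim=1).values))
        h_c = torch.tanh(self.proj_char(char_embs.max(dim=1).values))
        h_e = torch.cat([h_c, h_s], dim=-1)         # sentence-character encoding
        return torch.sigmoid(self.classifier(h_e))  # \hat{y}_z for every label z

def weighted_bce(y_hat, y, gamma, eps=1e-8):
    # gamma: (num_labels,) per-class weights; one reading of Eq. 8 sets
    # gamma_z = 1 - exp(-p * P(y_z)) from the marginal positive rate P(y_z).
    loss = -(gamma * y * torch.log(y_hat + eps)
             + (1.0 - gamma) * (1.0 - y) * torch.log(1.0 - y_hat + eps))
    return loss.mean()

model = MaxPoolStateClassifier(num_labels=8)                 # e.g. the Plutchik task
y_hat = model(torch.randn(4, 12, 300), torch.randn(4, 30, 300))
labels = torch.randint(0, 2, (4, 8)).float()
loss = weighted_bce(y_hat, labels, gamma=torch.full((8,), 0.9))
loss.backward()
```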
The decoder is trained to minimize the negative loglikelihood of predicting each word in the explanation of a character’s state: Lgen = − T X t=1 log P(yt|y0, ..., yt−1, he) (9) where he is the sentence-character representation produced by an encoder from Section 5.1. 6.2 Metrics Classification For the state and annotation classification task, we report the micro-averaged precision (P), recall (R), and F1 score of the Plutchik, Maslow, and Reiss prediction tasks. We report the results of selecting a label at random in the top two rows of Table 3. Note that random is low because the distribution of positive instances for each 2296 Model Maslow Reiss Plutchik P R F1 P R F1 P R F1 Random 7.45 49.99 12.96 1.76 50.02 3.40 10.35 50.00 17.15 Random (Weighted) 8.10 8.89 8.48 2.25 2.40 2.32 12.28 11.79 12.03 TF-IDF 30.10 21.21 24.88 18.40 20.67 19.46 20.05 24.11 21.90 + Entity Context 29.79 34.56 32.00 20.55 24.81 22.48 22.71 25.24 23.91 GloVe 25.15 29.70 27.24 16.65 18.83 17.67 15.19 30.56 20.29 + Entity Context 27.02 37.00 31.23 16.99 26.08 20.58 19.47 46.65 27.48 LSTM 24.64 35.30 29.02 19.91 19.76 19.84 20.27 30.37 24.31 + Entity Context 31.29 33.85 32.52 18.35 27.61 22.05 23.98 31.41 27.20 + Explanation Training 30.34 40.12 34.55 21.38 28.70 24.51 25.31 33.44 28.81 CNN (Kim, 2014) 26.21 31.09 28.44 20.30 23.24 21.67 21.15 23.36 22.20 + Entity Context 27.47 41.01 32.09 18.89 31.22 23.54 24.32 30.76 27.16 + Explanation Training 29.30 44.18 35.23 17.87 37.52 24.21 24.47 38.87 30.04 REN (Henaff et al., 2017) 26.24 42.14 32.34 16.79 22.20 19.12 26.22 33.26 29.32 + Explanation Training 26.85 44.78 33.57 16.73 26.55 20.53 25.30 37.30 30.15 NPN (Bosselut et al., 2018) 24.27 44.16 31.33 13.13 26.44 17.55 21.98 37.31 27.66 + Explanation Training 26.60 39.17 31.69 15.75 20.34 17.75 24.33 40.10 30.29 Table 3: State Classification Results category is very uneven: macro-averaged positive class probabilities of 8.2, 1.7, and 9.9% per category for Maslow, Reiss, and Plutchik respectively. Generation Because explanations tend to be short sequences (Figure 4) with high levels of synonymy, traditional metrics such as BLEU are inadequate for evaluating generation quality. We use the vector average and vector extrema metrics from Liu et al. (2016) computed using the Glove vectors of generated and reference words. We report results in Table 5 on the dev set and compare to a baseline that randomly samples an example from the dev set as a generated sequence. 6.3 Ablations Story Context vs. No Context Our dataset is motivated by the importance of interpreting story context to categorize emotional reactions and motivations of characters. To test this importance, we ablate hc, the representation of the context sentences pertaining to the character, as an input to the state classifier for each encoder (except the REN and NPN). In Table 3, this ablation is the first row for each encoder presented. Explanation Pretraining Because the state classification and explanation generation tasks use the same models to encode the story, we explore initializing a classification encoder with parameters trained on the generation task. For the CNN, LSTM, and REN encoders, we pretrain a generator to produce emotion or motivation explanations. We use the parameters from the emotion or motivation explanation generators to initialize the Plutchik or Maslow/Reiss classifiers respectively. 7 Experimental Results State Classification We show results on the test set for categorizing Maslow, Reiss, and Plutchik states in Table 3. 
Despite the difficulty of the task, all models outperform the random baseline. Interestingly, the performance boost from adding entity-specific contextual information (i.e., not ablating hc) indicates that the models learn to condition on a character’s previous experience to classify its mental state at the current time step. This effect can be seen in a story about a man whose flight is cancelled. The model without context predicts the same emotional reactions for the man, his wife and the pilot, but with context correctly predicts that the pilot will not have a reaction while predicting that the man and his wife will feel sad. For the CNN, LSTM, REN, and NPN models, we also report results from pretraining encoder parameters using the free response annotations from the training set. This pretraining offers a clear performance boost for all models on all three prediction tasks, showing that the parameters of the encoder can be pretrained on auxiliary tasks providing emotional and motivational state signal. The best performing models in each task are most effective at predicting Maslow physiological needs, Reiss food motives, and Plutchik reactions of joy. The relative ease of predicting motivations 2297 Maslow Reiss Plutchik TFIDF 64.81 48.60 53.44 Table 4: F1 scores of predicting correct category labels from free response annotations Model Motivation Emotion Avg VE Avg VE Random 56.02 45.75 40.23 39.98 LSTM 58.48 51.07 52.47 52.30 CNN 57.83 50.75 52.49 52.31 REN 58.83 51.79 53.95 53.79 NPN 57.77 51.77 54.02 53.85 Table 5: Vector average and extrema scores for generation of annotation explanations related to food (and physiological needs generally) may be because they involve a more limited and concrete set of actions such as eating or cooking. Annotation Classification Table 4 shows that a simple model can learn to map open text responses to categorical labels. This further supports our hypothesis that pretraining a classification model on the free-response annotations could be helpful in boosting performance on the category prediction. Explanation Generation Finally, we provide results for the task of generating explanations of motivations and emotions in Table 5. Because the explanations are closely tied to emotional and motivation states, the randomly selected explanation can often be close in embedding space to the reference explanations, making the random baseline fairly competitive. However, all models outperform the strong baseline on both metrics, indicating that the generated short explanations are closer semantically to the reference annotation. 8 Related work Mental State Annotations Incorporating emotion theories into NLP tasks has been explored in previous projects. Ghosh et al. (2017) modulate language model distributions by increasing the probability of words that express certain affective LIWC (Tausczik and Pennebaker, 2016) categories. More generally, various projects tackle the problem of generating text from a set of attributes like sentiment or generic-ness (Ficler and Goldberg, 2017; Dong et al., 2017). Similarly, there is also a body of research in reasoning about commonsense stories and discourse (Li and Jurafsky, 2017; Mostafazadeh et al., 2016) or detecting emotional stimuli in stories (Gui et al., 2017). Previous work in plot units (Lehnert, 1981) developed formalisms for affect and mental state in story narratives that included motivations and reactions. In our work, we collect mental state annotations for stories to used as a new resource in this space. 
Modeling Entity State Recently, novel works in language modeling (Ji et al., 2017; Yang et al., 2016), question answering (Henaff et al., 2017), and text generation (Kiddon et al., 2016; Bosselut et al., 2018) have shown that modeling entity state explicitly can boost performance while providing a preliminary interface for interpreting a model’s prediction. Entity modeling in these works, however, was limited to tracking entity reference (Kiddon et al., 2016; Yang et al., 2016; Ji et al., 2017), recognizing entity state similarity (Henaff et al., 2017) or predicting simple attributes from entity states (Bosselut et al., 2018). Our work provides a new dataset for tracking emotional reactions and motivations of characters in stories. 9 Conclusion We present a large scale dataset as a resource for training and evaluating mental state tracking of characters in short commonsense stories. This dataset contains over 300k low-level annotations for character motivations and emotional reactions. We provide benchmark results on this new resource. Importantly, we show that modeling character-specific context and pretraining on freeresponse data can boost labeling performance. While our work only use information present in our dataset, we view our dataset as a future testbed for evaluating models trained on any number of resources for learning common sense about emotional reactions and motivations. Acknowledgments We thank the reviewers for their insightful comments. We also thank Bowen Wang, xlab members, Martha Palmer, Tim O’Gorman, Susan W. Brown, and Ghazaleh Kazeminejad for helpful discussions on inter-annotator agreement and the annotation pipeline. This work was supported in part by NSF GRFP DGE-1256082, NSF IIS1714566, IIS-1524371, Samsung AI, and DARPA CwC (W911NF-15-1-0543). 2298 References Antoine Bosselut, Omer Levy, Ari Holtzman, Corin Ennis, Dieter Fox, and Yejin Choi. 2018. Simulating action dynamics with neural process networks. In Proceedings of the 6th International Conference on Learning Representations. Kenneth Burke. 1969. A Grammar of Motives. Univ of California Press. Lingjia Deng and Janyce Wiebe. 2015. Mpqa 3.0: An entity/event-level sentiment corpus. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 1323–1328. Li Dong, Shaohan Huang, Furu Wei, Mirella Lapata, Ming Zhou, and Ke Xu. 2017. Learning to generate product reviews from attributes. In EACL. Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. In EMNLP Workshop on Stylistic Variation. Sayan Ghosh, Mathieu Chollet, Eugene Laksana, Louis-Philippe Morency, and Stefan Scherer. 2017. Affect-LM: A neural language model for customizable affective text generation. In ACL. Alvin I Goldman. 2015. Theory of Human Action. Princeton University Press. Amit Goyal, Ellen Riloff, and Hal Daum´e. 2010. Automatically producing plot unit representations for narrative text. In EMNLP. Lin Gui, Jiannan Hu, Yulan He, Ruifeng Xu, Lu Qin, and Jiachen Du. 2017. A question answering approach for emotion cause extraction. In EMNLP. Association for Computational Linguistics, Copenhagen, Denmark, pages 1594–1603. Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2017. Tracking the world state with recurrent entity networks. In ICLR. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735–1780. 
Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen F¨urstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In EMNLP. Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A Smith. 2017. Dynamic entity representations in neural language models. In EMNLP. Association for Computational Linguistics, Copenhagen, Denmark, pages 1831–1840. Chlo´e Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In EMNLP. pages 329–339. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Richard S Lazarus. 1991. Progress on a cognitivemotivational-relational theory of emotion. American Psychologist 46(8):819. Wendy G. Lehnert. 1981. Plot units and narrative summarization. Cognitive Science 5:293–331. Jiwei Li and Dan Jurafsky. 2017. Neural net models for Open-Domain discourse coherence. In EMNLP. Chia-Wei Liu, Ryan Joseph Lowe, Iulian Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In EMNLP. Stephanie M. Lukin, Kevin Bowden, Casey Barackman, and Marilyn A. Walker. 2016. Personabank: A corpus of personal narratives and their story intention graphs. CoRR abs/1708.09082. Abraham H Maslow. 1943. A theory of human motivation. Psychol. Rev. 50(4):370. Saif Mohammad and Peter D. Turney. 2013. Crowdsourcing a word-emotion association lexicon. Computational Intelligence 29:436–465. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In NAACL. Nasrin Mostafazadeh, Michael Roth, Annie Louis, Nathanael Chambers, and James Allen. 2017. Lsdsem 2017 shared task: The story cloze test. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics. pages 46–51. Jessica Ouyang and Kathleen McKeown. 2015. Modeling reportable events as turning points in narrative. In EMNLP. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In In EMNLP. pages 1532–1543. Robert Plutchik. 1980. A general psychoevolutionary theory of emotion. Theories of emotion 1(3-31):4. Steven Reiss. 2004. Multifaceted nature of intrinsic motivation: The theory of 16 basic desires. Rev. Gen. Psychol. 8(3):179. Stanley Schachter and Jerome Singer. 1962. Cognitive, social, and physiological determinants of emotional state. Psychological review 69(5):379. 2299 Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1):1929–1958. David Straker. 2013. Reiss’ 16 human needs. http: //changingminds.org/explanations/ needs/reiss_16_needs.htm. Accessed: 2018-02-21. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems. pages 3104–3112. Yla R Tausczik and James W Pennebaker. 2016. The psychological meaning of words: LIWC and computerized text analysis methods. J. Lang. Soc. Psychol. . 
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merri¨enboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698 . Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2016. Reference-aware language models. CoRR abs/1611.01628. http://arxiv.org/abs/1611.01628. Mark Yatskar, Luke Zettlemoyer, and Ali Farhadi. 2016. Situation recognition: Visual semantic role labeling for image understanding. In Conference on Computer Vision and Pattern Recognition. Deyu Zhou, Xuan Zhang, Yin Zhou, Quanming Zhao, and Xin Geng. 2016. Emotion distribution learning from texts. In EMNLP.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2300–2310 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2300 A Deep Relevance Model for Zero-Shot Document Filtering Chenliang Li1, Wei Zhou2, Feng Ji2, Yu Duan1, Haiqing Chen2 1School of Cyber Science and Engineering, Wuhan University, China {cllee,duanyu}@whu.edu.cn 2Alibaba Group, Hangzhou, China {fayi.zw,zhongxiu.jf,haiqing.chenhq}@alibaba-inc.com Abstract In the era of big data, focused analysis for diverse topics with a short response time becomes an urgent demand. As a fundamental task, information filtering therefore becomes a critical necessity. In this paper, we propose a novel deep relevance model for zero-shot document filtering, named DAZER. DAZER estimates the relevance between a document and a category by taking a small set of seed words relevant to the category. With pre-trained word embeddings from a large external corpus, DAZER is devised to extract the relevance signals by modeling the hidden feature interactions in the word embedding space. The relevance signals are extracted through a gated convolutional process. The gate mechanism controls which convolution filters output the relevance signals in a category dependent manner. Experiments on two document collections of two different tasks (i.e., topic categorization and sentiment analysis) demonstrate that DAZER significantly outperforms the existing alternative solutions, including the state-of-the-art deep relevance ranking models. 1 Introduction Filtering irrelevant information and organizing relevant information into meaningful topical categories is indispensable and ubiquitous. For example, a data analyst tracking an emerging event would like to retrieve the documents relevant to a specific topic (category) from a large document collection in a short response time. In the era of big data, the potentially possible categories covered by documents would be limitless. It is unrealistic to manually identify a lot of positive examples for each possible category. However, new information needs indeed emerge everywhere in many real-world scenarios. Recent studies on dataless text classification show promising results on reducing labeling effort (Liu et al., 2004; Druck et al., 2008; Chang et al., 2008; Song and Roth, 2014; Hingmire et al., 2013; Hingmire and Chakraborti, 2014; Chen et al., 2015; Li et al., 2016). Without any labeled document, a dataless classifier performs text classification by using a small set of relevant words for each category (called “seed words”). However, existing dataless classifiers do not consider document filtering. We need to provide the seed words for each category covered by the document collection, which is often infeasible in the real world. To this end, we are particularly interested in the task of zero-shot document filtering. Here, zeroshot means that the instances of the targeted categories are unseen during the training phase. To facilitate zero-shot filtering, we take a small set of seed words to represent a category of interest. This is extremely useful when the information need (i.e., the categories of interest) is dynamic and the text collection is large and temporally updated (e.g., the possible categories are hard to know). Specifically, we propose a novel deep relevance model for zero-shot document filtering, named DAZER. In DAZER, we use the word embeddings learnt from an external large text corpus to represent each word. 
A category can then be well represented also in the embedding space (called category embedding) through some composition with the word embeddings of the provided seed words. Given a small number of seed words provided for a category as input, DAZER is devised to produce a score indicating the relevance between a document and the category. It is intuitive to connect zero-shot document filtering 2301 with the task of ad-hoc retrieval. Indeed, by treating the seed words of each category as a query, the zero-shot document filtering is equivalent to ranking documents based on their relevance to the query. The relevance ranking is a core task in information retrieval, and has been studied for many years. Although they share the same formulation, these two tasks diverge fundamentally. For ad-hoc retrieval, a user constructs a query with a specific information need. The relevant documents are assumed to contain these query words. This is confirmed by the existing works that exact keyword match is still the most important signal of relevance in ad-hoc retrieval (Fang and Zhai, 2006; Wu et al., 2007; Eickhoff et al., 2015; Guo et al., 2016a,b). For document filtering, the seed words for a category are expected to convey the conceptual meaning of the latter. It is impossible to list all the words to fully cover the relevant documents of a category. Therefore, it is essential to capture the conceptual relevance for zero-shot document filtering. The classical retrieval models simply estimate the relevance based on the query keyword matching, which is far from capturing the conceptual relevance. The existing deep relevance models for ad-hoc retrieval utilize the statistics of the hard/soft-match signals in terms of cosine similarity between two word embeddings (Guo et al., 2016a; Xiong et al., 2017). However, the scalar information like cosine similarity between two embedding vectors is too coarse or limited to reflect the conceptual relevance. On the contrary, we believe that the embedding features could provide rich knowledge towards the conceptual relevance. A key challenge is to endow DAZER a strong generalization ability to also successfully extract the relevance signals for unseen categories. To achieve this purpose, we extract the relevance signals based on the hidden feature interactions between the category and each word in the embedding space. Specifically, two element-wise operations are utilized in DAZER: element-wise subtraction and element-wise product. Since these two kinds of interactions represent the relative information encoded in hidden embedding space, we expect that the relevance signal extraction process could generalize well to unseen categories. Firstly, DAZER utilizes a gated convolutional operation with k-max pooling to extract the relevance signals. Then, DAZER abstracts higherlevel relevance features through a multi-layer perceptron, which can be considered as a relevance aggregation procedure. At last, DAZER calculates an overall score indicating the relevance between a document and the category. Without further constraints, it is possible for DAZER to encode the bias towards the category-specific features seen during the training (i.e., model overfitting). Therefore, we further introduce an adversarial learning over the output of the relevance aggregation procedure. The purpose is to ensure that the higher-level relevance features contain no category-dependent information, leading to a better zero-shot filtering performance. 
To the best of our knowledge, DAZER is the first deep model to conduct zero-shot document filtering. We conduct extensive experiments on two real-world document collections from two different domains (i.e., 20-Newsgroup for topic categorization, and Movie Review for sentiment analysis). Our experimetnal results suggest that DAZER achieves promising filtering performance and performs significantly better than the existing alternative solutions, including state-of-the-art deep relevance ranking methods. 2 Deep Zero-Shot Document Filtering Figure 1 illustrates the network structure of the proposed DAZER model. It consists of two main components: relevance signal extraction and relevance aggregation. In the following, we present each component in detail. 2.1 Relevance Signal Extraction Given a document d = (w1, w2, ..., w|d|) and a set of seed words Sc = {sc,i} for category c, we first map each word w into its dense word embedding representation ew ∈Rle where le denotes the dimension number. The embedding representation is pre-trained by using a representation learning method from an external large text corpus. Since our aim is to capture the conceptual relevance, we simply take the averaged embedding of the seed words to represent a category in the embedding space: cc = 1/|Sc| P s∈Sc es. Interaction-based Representation. It is widely recognized that word embeddings are useful because both syntactic and semantic information of words are well encoded (Mikolov et al., 2013; Pennington et al., 2014). The element-wise hidden feature difference is a kind of relative infor2302 噯 噯 噯 噯 噯 噯 6HHGZRUGV &DWHJRU\YHFWRU 'RFXPHQW ,QWHUDFWLRQEDVHG UHSUHVHQWDWLRQ &RQYROXWLRQ .0$; 322/,1* ,QWHUDFWLRQ 5HOHYDQFHVLJQDOH[WUDFWLRQ 5HOHYDQFHDJJUHJDWLRQ 噯 噯 *DWH 0/3 噯 5HOHYHQFH UHSUHVHQWDWLRQ VRIWPD[ *5/ $GYHUVDULDOOHDUQLQJ I F_G 0/3 0/3  Figure 1: The architecture of DAZER Examples catheism −eatheist ≈cbaseball −ehitter cautos −etoyata ≈cmotorcycles −eyamaha cbaseball −estadium ≈cmed −ehosptial creligion.misc −efaith ≈cmed −epatient Table 1: Examples by using embedding offset. mation that captures the offset bettwen a word and a category in the embedding space. These embedding offsets contain more intricate relationships for a word pair. A well known example is: eking −equeen ≈eman −ewoman (Mikolov et al., 2013). Similar observations are made when we calculate the embedding offset between words and categories. Table 1 lists several interesting patterns observed for the embedding offsets between a category and a word in 20-Newsgroup dataset (ref. Section 3.2 for more details). We can see that the embedding offsets are somehow consistent with a particular relation between the two category-word pairs. An effective way to measure the relatedness for two words is the inner product or cosine similarity between two corresponding word embeddings. This can be considered as a particular linear combination of corresponding feature products for the two embeddings: rel(e1, e2) = P i g(e1, e2, i)e1,i · e2,i = g(e1, e2)T (e1 ⊙ e2) where g(e1, e2, i) refers to the weight calculated for i-th dimension, and g(e1, e2) = [g(e1, e2, 1); ...; g(e1, e2, le)], ⊙is the elementwise product operation. The element-wise product between two embeddings is also a kind of relative information. The sign of a product of two embeddings in a specific dimension indicates whether the two embeddings share the same polarity in this dimension. And the resultant value manifests to what extent that this agreement/disagreement reaches. 
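Both relative signals discussed here, the embedding offset and the element-wise product between a category vector and a word vector, are inexpensive to compute once the averaged seed-word category embedding is available; the NumPy sketch below illustrates them on toy vectors, with made-up seed words and dimensions standing in for the actual pre-trained embeddings.

```python
import numpy as np

def category_embedding(seed_words, emb):
    """c_c: mean of the seed words' embeddings (emb maps word -> vector)."""
    return np.mean([emb[w] for w in seed_words], axis=0)

def interaction_signals(c_vec, w_vec):
    """The two relative signals used to complement a word's representation:
    the element-wise offset c - e_w and the element-wise product c * e_w."""
    return c_vec - w_vec, c_vec * w_vec

# Toy 4-dimensional embeddings, purely for illustration.
rng = np.random.default_rng(0)
emb = {w: rng.standard_normal(4) for w in ["baseball", "hitter", "pitcher", "stadium"]}

c_baseball = category_embedding(["baseball", "hitter", "pitcher"], emb)
offset, product = interaction_signals(c_baseball, emb["stadium"])
sign_pattern = np.sign(product)   # the sign pattern discussed around Table 2
```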
It is intuitive that the element-wise Examples sign(cmideast ⊙emuslim) ≈sign(cmed ⊙edoctor) sign(cspace ⊙eorbit) ≈sign(chockey ⊙eespn) sign(celectronics ⊙ecircuit) ≈sign(cpc ⊙econtroller) sign(ccrypt ⊙ealgorithm) ≈sign(cspace ⊙eburning) Table 2: Examples by using element-wise product. product offers some kinds of semantic relations. We conduct the element-wise product for each category-word pair in 20-Newsgroup dataset. Table 2 lists some interesting patterns we observe. The sign(x) function returns 1 when x ≥0, otherwise return −1. Shown in the table, the sign pattern of the element-wise product encodes the relevance information between a category and its related words. Inspired by these observations, we use these two kinds of element-wise interactions to complement the representation of a word in a document. Specifically, for each word w in document d, we derive its interaction-based representation ec w towards category c as follows: ediff c,w = cc −ew (1) eprod c,w = cc ⊙ew (2) ec w = [ew ⊕ediff c,w ⊕eprod c,w ] (3) where ⊕is the vector concatenation operation. Note that these two kinds of feature interactions are mainly overlooked by the existing literature. The embedding offsets are used in deriving word semantic hierarchies in (Fu et al., 2014). However, there is no existing work incorporating these two kinds of feature interactions for relevance estimation. Here, we expect that these two kinds of feature interactions can magnify the relevance information regarding the category. Convolution with k-max Pooling. We utilize 2303 m convolution filters to extract the relevance signals for each word based on its local window of size l in the document. Specifically, after calculating the interaction-based representation d = (ec 1, ec 2, ..., ec |d|) for document d and category c, we apply the convolution operation as follows: ri = W1ec i−l:i+l + b1 (4) where ri ∈Rm is the hidden features regarding the relevance signal extracted for i-th word, W1 ∈Rm×3le(2l+1) and b1 ∈Rm are the weight matrix and the corresponding bias vector respectively, ec i−l:i+l refers to the concatenation from ec i−l to ec i+l. Both l zero vectors are padded to the begining and the end of the document. With a local window of size l, the convolution operation can extract more accurate relevance information by taking the consecutive words (e.g., phrases) into account. We then apply k-max pooling strategy to obtain the k most active features for each filter. Let rj k−max denote the k largest values for filter j, we form the overall relevance signals rd extracted by all m filters through the concatenation: rc,d = [r1 k−max ⊕r2 k−max... ⊕rm k−max]. Category-specific Gating Mechanism. Given a specific word w, the interaction-based representation ec w for each category c could be very different. Therefore, for a specific local context, the extracted relevance signal from a particular convolution filter could be also distinct for different categories. It is then reasonable to assume that the relevance signals for a specific category are captured by a subset of filters. We propose to identify which filters are relevant to a category through a category-specific gating mechanism. Given category c, category-specific gates ac ∈Rm are calculated as follows: ac = σ(W2ec + b2) (5) where W2 ∈Rm×3le and b2 ∈Rm are the weight matrix and bias vectors respectively, σ(·) is the sigmoid function. 
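A minimal PyTorch sketch of the relevance signal extraction step is given below: the interaction-based representation of Equations (1)–(3), the windowed convolution of Equation (4), and k-max pooling. The category-specific gate of Equation (5) is omitted here; it is applied to the filter activations as described next. Class and argument names are ours, and the defaults simply mirror the hyperparameters reported in Section 3.2, so treat this as a sketch rather than the released implementation.

```python
import torch
import torch.nn as nn

class RelevanceSignalExtractor(nn.Module):
    """Equations (1)-(4) plus k-max pooling (category-specific gate omitted)."""

    def __init__(self, emb_dim=300, n_filters=50, window=2, k_max=3):
        super().__init__()
        self.k_max = k_max
        # The interaction-based representation triples the embedding size
        # (word, difference, product); each filter sees 2*window + 1 positions,
        # and padding=window reproduces the zero-padding at both document ends.
        self.conv = nn.Conv1d(3 * emb_dim, n_filters,
                              kernel_size=2 * window + 1, padding=window)

    def forward(self, word_emb, category_emb):
        # word_emb: (batch, doc_len, emb_dim); category_emb: (batch, emb_dim)
        c = category_emb.unsqueeze(1)                    # broadcast over positions
        e_diff = c - word_emb                            # Eq. (1)
        e_prod = c * word_emb                            # Eq. (2)
        e_c = torch.cat([word_emb, e_diff, e_prod], -1)  # Eq. (3)
        r = self.conv(e_c.transpose(1, 2))               # Eq. (4): (batch, n_filters, doc_len)
        r_kmax, _ = torch.topk(r, self.k_max, dim=-1)    # k largest values per filter
        return r_kmax.flatten(1)                         # (batch, n_filters * k_max)
```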
With category-specific gating mechanism, Equation 4 can be rewritten as follows: ri = ac ⊙(W1ec i−l:i+l + b1) Here, ac works as on-off switches for m filters. While ac,j →1 indicates that j-th filter should be turned on to capture the relevance singals under category c to its fullness, ac,j →0 indicates that the filter is turned off due to its irrelevance. This collaboration of the convolution operation and gating mechanism is similar to the Gated Linear Units (GLU) recently proposed in (Dauphin et al., 2017). Given an input X, GLU calculates the output as follow: h(X) = (XW + b) ⊙ σ(XV + c) where the first term in the right side refers to the convolution operation and the second term in the right side refers to the gating mechanism. In GLU, both the convolution operation and the gates share the same input X. In contrast, in this work, we aim to identify which filters capture the relevance signals in a category-dependent manner. The experimental results validate that this category-dependent setting brings significant benefit for zero-shot filtering performance (ref. Section 3). 2.2 Relevance Aggregation The raw relevance signals rc,d are somehow category-dependent, since the relevant filters are category-dependent. The hidden features regarding the relevance are distilled through a fullyconnected hidden layer with nonlinearity: hc,d = ga(W3rc,d + b3) (6) where W3 ∈Rla×3km and b3 ∈Rla are the weight matrix and bias vector respectively, ga(·) is the tanh function. This procedure can be considered as a relevance aggregation process. Then, the overall relevance score is then estimated as follow: f(c|d) = tanh(wT hc,d + b) (7) where w ∈Rla and b are the parameters and bias respective. 2.3 Model Training Adversarial Learning The hidden features hc,d are expected to be category-independent. However, there is no guarantee that the categoryspecific information is not mixed with the relevance information extracted in hc,d. Here, we introduce an adversarial learning mechanism to ensure that no category-specific information can be memorized during the training. Otherwise, the proposed DAZER may not generalize well to unseen categories. Specifically, we introduce an category classifier over hc,d to calculate the probability that hc,d belongs to each category seen during the training: pcat(·|hc,d) = softmax(W4hc,d + 2304 b4) where W4 ∈RC×la and b4 ∈RC are the weight matrix and bias vector for the classifier, C is the number of categories covered by the training set. We aim to optimize parameters φ = {W4, b4} to successfully classify hc,d to its true category. Let θ denote the parameters regarding the calculation of hc,d, i.e., θ = {W1, W2, W3, b1, b2, b3}, φ is optimized to minimize the negative log-likelihood: Lcat(θ, φ) = 1 |T| X (d,y)∈T −pcat(y|hy,d) (8) where T denotes the training set {(d, y)} such that document d is relevant to category y. On the other hand, we expect that hc,d carries no category specific information, such that the classifier can not perform the category classification precisely. Hence, we add the Gradient Reversal Layer (GRL) (Ganin and Lempitsky, 2015; Ganin et al., 2016) between hc,d and the category classifier. We can consider GRL as a pseudo-function Rλ(x): Rλ(x) = x; ∂Rλ ∂x = −λI (9) It means that θ is optimized to make hc,d indistinguishable by the classifier. In Equation 9, parameter λ controls the importance of the adversarial learning. 
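The Gradient Reversal Layer of Equation (9) is commonly implemented as a custom autograd function that acts as the identity in the forward pass and multiplies the incoming gradient by −λ in the backward pass. The sketch below follows the generic GRL recipe of Ganin and Lempitsky (2015); it is not taken from the authors' code.

```python
import torch

class GradReverse(torch.autograd.Function):
    """R_lambda(x): identity forward, gradient scaled by -lambda backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversing the gradient trains the feature extractor to make h_{c,d}
        # indistinguishable to the category classifier.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=0.1):
    return GradReverse.apply(x, lambd)

# Usage sketch: the adversarial classifier sees the reversed-gradient features.
# category_logits = category_classifier(grad_reverse(h_cd, lambd=0.1))
```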
DAZER is devised to return a relevance score, we utilize the pairwise margin loss for model training: Lhinge(θ, δ) = 1 |T| X (d,y)∈T max(0, ∆−f(y|d) + f(y|d− y )) (10) where document d− y is the negative sample for category y, ∆is the margin and set to be 1 in this work, and δ = {w, b}. Overall, the proposed DAZER is an end-to-end neural network model. The parameters Θ = {θ, φ, δ} are optimized via back propagation and stochastic gradient descent. Specifically, we utilize Adam (Kingma and Ba, 2014) algorithm for parameter update over minibatches. The final objective loss used in the training is as follow: L(Θ) =Lhinge(θ, δ) + Lcat(θ, φ) + λΘ∥Θ∥2 (11) where λΘ controls the importance of the regularizaton term. Label Seed Words very negative bad, horrible, negative, disgusting negative bad, confused, unexpected, useless, negative neutral normal, moderate, neutral, objective, impersonal positive good, positive, outstanding, satisfied, pleased very positive positive, impressive, unbelievable, awesome Table 3: Seed words selected for Movie Review. 3 Experiment In this section, we conduct experiments on two real-world document collections to evaluate the effectiveness of the proposed DAZER1. 3.1 Existing Alternative Methods Here, we compare the proposed DAZER against the following alternative solutions. BM25 Model: BM25 is a widely known retrieval model based on keyword matching (Robertson and Walker, 1994). The default parameter setting is used in the experiments. DSSM: DSSM utilizes a multi-layer perceptron to extract hidden representations for both the document and the query (Huang et al., 2013). Then, cosine similarity is calculated as the relevance score based on the representation vectors. Since we use pre-trained word embeddings from a large text corpus, we choose to replace the letter-tri-grams representation with the word embedding representation instead. We use the recommended network setting by its authors. DRMM: DRMM calculates the relevance based on the histogram information of the semantic relatedness scores between each word in the document and each query word (Guo et al., 2016a). The recommended network setting (i.e., LCH×IDF) and parameter setting are used. K-NRM: K-NRM is a kernel based neural model for relevance ranking based on word-level hard/soft matching signals (Xiong et al., 2017). We use the recommended setting as in their paper. DeepRank: DeepRank is a neural relevance ranking model based on the query-centric context (Pang et al., 2017). The recommended setting is used for evaluation. Seed-based Support Vector Machines (SSVM): We build a seed-driven training set by labeling a training document with a category if the document 1The implementation is available at https://github.com/WHUIR/ DAZER 2305 contains any seed word of that category. Then, we adopt a one-class SVM implemented by sklearn2 for document filtering3. The optimal performance is reported by tuning the hyper-parameter. 3.2 Datasets and Experimental Setup 20-Newsgroup (20NG)4 is a widely used benchmark for document classification research (Li et al., 2016). It consists of approximately 20K newsgroup articles from 20 different categories. The bydate version with 18, 846 documents is used here. As provided, the training set and test set contain 60% and 40% documents respectively. Movie Review5 is a collection of movie reviews in English (Pang and Lee, 2005). The scale dataset v1.0 is used in the experiments. 
Based on the numerical ratings, we split these reviews into five sentiment labels: very negative, negative, neutral, positive and very positive, which contains 167, 1030, 1786, 1682, 341 reviews respectively. For each sentiment label, we randomly split the reviews into a training set (80%) and a test set (20%). Since our work targets at zero-shot document filtering for unseen categories, the word embeddings pre-trained by Glove over a large text corpus with total 840 billion tokens6 are used across all the methods and the two datasets. The dimension of the word embeddings is le = 300. No further word embedding fine-tuning is applied. For both datasets, the stop words are removed firstly. Then, all the words are converted into their lowercased forms. We further remove the words whose word embeddings are not supported by Glove. Evaluation Protocol. With the specified unseen categories, we take all the training documents of the other categories to train a model. Then, all documents in the test set are used for evaluation. For each unseen category, the task is to rank the documents of that category higher than the others. Here, we choose to report mean average precision (MAP) for performance evaluation. MAP is a widely used metric to evaluate the ranking quality. The higher the relevant documents are ranked, 2http://scikit-learn.org 3Signed distance to the separating hyperplan is used for ranking documents. 4http://qwone.com/˜jason/20Newsgroups/ 5The Movie Review dataset is available at http://www.cs. cornell.edu/people/pabo/movie-review-data/ 6https://nlp.stanford.edu/projects/glove/ the larger the MAP value is, which means a better filtering performance. For all neural networks based models, the training documents from one randomly sampled training category work as the validation set for early stop. We report the averaged results over 5 runs for all the methods (excluding SSVM and BM25). The statistical significance is conducted by applying the student t-test. Seed Word Selection. For 20NG dataset, we directly use the seed words7 manually compiled in (Song and Roth, 2014). These seed words are selected from the category descriptions and widely used in the works of dataless text classification (Song and Roth, 2014; Chen et al., 2015; Li et al., 2016). For Movie Review, following the seed word selection process (i.e., assisted by standard LDA) proposed in (Chen et al., 2015), we manually select the seed words for each sentiment label. Table 3 lists the seed words selected for each sentiment label for Movie Review dataset. There are on average 5.2 and 4.6 seed words for each category over 20NG and Movie Review respectively. It is worthwhile to highlight that no category information is exploited within the seed word selection process. Parameter Setting. For DAZER, the number of convolution filters is m = 50 and k = 3 is used for k-max pooling. The dimension size for relevance aggregation is la = 75. The local window size l is set to be 2. The learning rate is 0.00001. The models are trained with a batch size of 16 and λΘ = 0.0001, λ = 0.1. 3.3 Performance Comparison For 20NG dataset, we randomly create 9 document filtering tasks which cover 10 out of 20 categories. For Movie Review, we take each sentiment label as an unseen category for evaluation. Table 4 lists the performance of 7 methods in terms of MAP for these filtering tasks. Here, we make the following observations. First, the proposed DAZER significantly achieves much better filtering performance on all 14 tasks across the two datasets. 
The averaged MAP of DAZER over these 14 filtering tasks is 0.671. Note that only 5.2 and 4.6 seed words are used on average for each task. The second best performer is K-NRM, which achieves the second 7The seed words are available at https://github.com/WHUIR/ STM 2306 Dataset Category DAZER DRMM K-NRM DeepRank DSSM SSVM BM25 20NG pc 0.535 0.382† 0.369† 0.144† 0.222† 0.117 0.313 med 0.826 0.662† 0.645† 0.033† 0.192† 0.104 0.403 baseball 0.764 0.731† 0.735† 0.294† 0.373† 0.291 0.414 space 0.780 0.593† 0.671† 0.285† 0.142† 0.140 0.641 med-space 0.805 0.640† 0.666† 0.101† 0.174† 0.122 0.522 atheismelectronics 0.464 0.242† 0.346† 0.418† 0.219† 0.132 0.263 christianmideast 0.712 0.662† 0.657† 0.298† 0.327† 0.161 0.579 baseballhockey 0.782 0.642† 0.736† 0.332† 0.135† 0.438 0.444 pc-windowxelectronics 0.489 0.274† 0.379† 0.183† 0.278† 0.120 0.314 Movie Review very negative 0.290 0.119† 0.114† 0.097† 0.216† 0.080 0.134 negative 0.807 0.528† 0.557† 0.423† 0.478† 0.236 0.090 neutral 0.798 0.764† 0.749† 0.686† 0.678† 0.365 0.007 positive 0.862 0.696† 0.706† 0.655† 0.753† 0.300 0.090 very positive 0.479 0.250† 0.339† 0.217† 0.271† 0.063 0.066 Avg 0.671 0.513 0.548 0.298 0.318 0.191 0.306 Table 4: Performance of the 7 methods for zero-shot document filtering in terms of MAP. The best and second best results are highlighted in boldface and underlined respectively, on each task. † indicates that the difference to the best result is statistically significant at 0.05 level. Avg: averaged MAP over all tasks. best on 7 tasks. Overall, the averaged performance gain for DAZER over K-NRM is about 30.8%. Second, We observe that DSSM performs signficantly better for sentiment analysis than for topic categorization. As discussed in Section 4, DSSM is designed to perform semantic matching. Compared with topic categorization, sentiment analysis is more like a semantic matching task. SSVM delivers the worst performance on both datasets. This illustrates that the quality of the labeled documents is essential for supervised learning techniques. Apparently, recruiting training documents with the provided seed words in a simple fashion is error-prone. We also note that BM25 achieves inconsistent performance over the two kinds of tasks. It performs especially worse for sentiment analysis. This is reasonable because there are more diverse ways to express a specific sentiment. It is hard to cover a reasonable proportion of documents with limited number of sentimental seed words. In comparison, the proposed DAZER obtains a consistent performance for both topic categorization and sentiment analysis. 3.4 Analysis of DAZER Component Setting. Here, we further discuss the impact of different component settings of DAZER on both 20NG and Movie Review datasets. Table 5 and 6 report the impacts of each component setting via an ablation test on the two datasets respectively. We can see that each component brings significantly positive benefit for document filtering. First, we can see that either element-wise subtraction or product contributes signifcantly to the performance improvement. Specifically, from Table 6, we can see that both the element-wise subtraction and element-wise product play equally on Movie Review dataset. On the other hand, it is observed that DAZER experiences significantly a much larger performance degradation on 20NG dataset. For example, a MAP of only 0.154 is achieved when eprod c,w is excluded from DAZER for the filtering task space. A much severer case is for the filtering task baseball-hockey. 
By excluding eprod c,w , the MAP performance of DAZER is reduced from 0.782 to 0.045. That is, the element-wise product is more critical for extracting relevance signals for topical categorization. We also observe that these two hidden feature interactions together play a more important role for DAZER. For example, without both ediff c,w and eprod c,w , DAZER only achieves a MAP of 0.126 for filtering task space. The large performance deterioration is also observed for other filtering tasks on 20NG dataset. Either adversarial learning or category-specific gate mechanism enhances the filtering performance of DAZER, which validates the effectiveness of the two components for enhancing con2307 Setting pc med baseball space med-space atheism-electronics christian-mideast baseball-hockey pc-windowx-electronics DAZER 0.535 0.826 0.764 0.780 0.805 0.464 0.712 0.782 0.489 - ediff c,w 0.524 0.810 0.755 0.785 0.802 0.454 0.705 0.788 0.462 - eprod c,w 0.219 0.043 0.200 0.154 0.139 0.217 0.244 0.045 0.141 - Gate 0.518 0.819 0.715 0.780 0.803 0.443 0.695 0.784 0.489 - Adv 0.531 0.819 0.749 0.775 0.795 0.458 0.701 0.779 0.485 Table 5: Impact of different settings for DAZER on 20NG. The best results are highlighted in boldface. - ediff c,w : no element-wise subtraction; - eprod c,w : no element-wise product; - Gate: no category-specific gate mechanism; - Adv: no adversarial learning. Setting very negative negative neutral positive very positive DAZER 0.290 0.807 0.798 0.862 0.479 - ediff c,w 0.246 0.773 0.776 0.847 0.453 - eprod c,w 0.258 0.779 0.785 0.847 0.430 - Gate 0.278 0.755 0.785 0.848 0.429 - Adv 0.261 0.779 0.776 0.827 0.444 Table 6: Impact of different settings for DAZER on Movie Review. The best results are highlighted in boldface. - ediff c,w : no element-wise subtraction; - eprod c,w : no element-wise product; - Gate: no categoryspecific gate mechanism; - Adv: no adversarial learning. ceptual relevance extraction. Also, without using adversarial learning, DAZER still achieves much better filtering performance than the existing baseline methods compared in Section 3.3. This observation is also held on 20NG dataset. This further validates that the two kinds of hidden feature interactions indeed encode rich knowledge towards the conceptual relevance. Impact of Seed Words. It has been recognized that the less seed words incur worse document classification performance in the existing dataless document classification techniques (Song and Roth, 2014; Chen et al., 2015; Li et al., 2016). Following these works, we also use the words appearing in the category name of 20NG dataset as the corresponding seed words8. There are on average 2.75 seed words for a category of 20NG. Table 7 reports the MAP performace of each method on 20NG dataset. The experimental results show that all methods investigated in Section 3.3 experience signficant performance degradation for most filtering tasks. We plan to incorporate the pseudo-relevance feedback into DAZER to tackle the scarcity of the seed words. One possible solution is to enrich the architecture of DAZER to allow few-shot document filtering. That is, the filtering decisions of high-confidence are utilized to derive more seed words for better filtering performance. 
8The seed words based on the category name are available at https://github.com/WHUIR/STM 4 Related Work Document filtering is the task to separate relevant documents from the irrelevant ones for a specific topic (Robertson and Soboroff, 2002; Nanas et al., 2010; Gao et al., 2013, 2015; Proskurnia et al., 2017). Both ranking and classification based solutions have been developed (Harman, 1994; Robertson and Soboroff, 2002; Soboroff and Robertson, 2003). In earlier days, a filtering system is mainly devised to facilitate the document retrieval for the long-term information needs (Mostafa et al., 1997). The term-based pattern mining techniques are widely developed to perform document filtering. A network-based topic profile is built to exploit the term correlation patterns for document filtering (Nanas et al., 2010). Frequent term patterns in terms of finegrained hidden topics are proposed in (Gao et al., 2013, 2015) for doucment filtering. Very recently, frequent term patterns are also utilized to perform event-based microblog filtering (Proskurnia et al., 2017). However, these approaches are all based on supervised-learning, which requires a significant amount of positive documents for each topic. In the era of big data, the information space and new information needs are continuously growing. Retrieval of the relevance information in a short response time becomes a fundamental need. Recently, many works have been proposed to conduct document filtering in an entity-centric manner (Frank et al., 2012; Balog and Ramampiaro, 2013; Zhou and Chang, 2013; Reinanda et al., 2016). The task is to identify the documents relevant to a specific entity that is well defined in an 2308 Dataset Category DEZA DRMM KNRM DeepRank DSSM SSVM BM25 20NG pc 0.316 0.170 0.144 0.104 0.316 0.057 0.092 med 0.831 0.369 0.267 0.183 0.089 0.040 0.000 baseball 0.519 0.315 0.301 0.299 0.419 0.066 0.161 space 0.641 0.337 0.326 0.414 0.212 0.049 0.329 med-space 0.670 0.348 0.331 0.279 0.076 0.044 0.165 atheismelectronics 0.359 0.266 0.253 0.499 0.141 0.042 0.091 christianmideast 0.564 0.582 0.492 0.196 0.418 0.061 0.093 baseballhockey 0.577 0.409 0.391 0.336 0.154 0.061 0.194 pc-windowxelectronics 0.346 0.176 0.194 0.185 0.227 0.067 0.124 Table 7: Performance of the 7 methods for zero-shot document filtering in terms of MAP. The words appearing in the category name are used as the seed words. The best and second best results are highlighted in boldface and underlined respectively, on each task. external knowledge base. Specifically, Balog and Ramampiaro (2013) examine the choice of classification against ranking approaches. They found that ranking approach is more suitable for the filtering task. Following this conclusion, we formulate the zero-shot document filtering as a relevance ranking task. Many information needs may not be well represented by a specific entity. Hence, these entity-centric solutions are restricted to knowledge base related tasks. Many ad-hoc retrieval models can be used to perform zero-shot document filtering. Indeed, traditional term-based document filtering approaches utilize many term-weighting schemes developed for ad-hoc retrieval. Traditional adhoc retrieval models mainly estimate the relevance based on keyword matching. BM25 (Robertson and Walker, 1994) can be considered as the optimal practice in this line of literature. The recent advances in word embedding offer effective learning of word semantic relations from a large external corpus. 
Several neural relevance ranking models are proposed to preform ad-hoc retrieval based on word embeddings. Both K-NRM (Xiong et al., 2017) and DRMM (Guo et al., 2016a) estimate the relevance based on the macro-statistics of the hard/soft-match signals in terms of cosine similarity between two word embeddings. DeepRank (Pang et al., 2017) first measures the relevance signals from the query-centric context of each query keyword matching point through convolutional operations. Then, RNN based networks are adopted to aggregate these relevance signals. These works achieve significantly better retrieval performance than the keyword matching based solutions and represent the new state-of-the-art. The relevance between a query and a document can also be considered as a matching task between two pieces of text. There are many deep matching models, e.g., DSSM (Huang et al., 2013), ARCII (Hu et al., 2014), MatchPyramid (Pang et al., 2016), Match-SRNN (Wan et al., 2016). These models are mainly developed for some specific semantic matching tasks, e.g., paraphrase identification. Therefore, information like grammatical structure or sequence of words are often taken into consideration, which is not applicable to seed word based zero-shot document filtering. 5 Conclusion In this paper, we propose a novel deep relevance model for zero-shot document filtering, named DAZER. To enable DAZER to capture conceptual relevance and generalize well to unseen categories, two kinds of feature interactions, a gated convolutional network and an categoryindependent adversarial learning are devised. The experimental results over two different tasks validate the superiority of the proposed model. In the future, we plan to enrich the architecture of DAZER to allow few-shot document filtering by incorporating several labeled examples. 6 Acknowledgement This research was supported by National Natural Science Foundation of China (No.61502344), Natural Science Foundation of Hubei Province (No.2017CFB502), Natural Scientific Research Program of Wuhan University (No.2042017kf0225). Chenliang Li is the corresponding author. 2309 References Krisztian Balog and Heri Ramampiaro. 2013. Cumulative citation recommendation: classification vs. ranking. In SIGIR. pages 941–944. Ming-Wei Chang, Lev-Arie Ratinov, Dan Roth, and Vivek Srikumar. 2008. Importance of semantic representation: Dataless classification. In AAAI. pages 830–835. Xingyuan Chen, Yunqing Xia, Peng Jin, and John A. Carroll. 2015. Dataless text classification with descriptive LDA. In AAAI. pages 2224–2231. Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In ICML. pages 933– 941. Gregory Druck, Gideon S. Mann, and Andrew McCallum. 2008. Learning from labeled features using generalized expectation criteria. In SIGIR. pages 595–602. Carsten Eickhoff, Sebastian Dungs, and Vu Tran. 2015. An eye-tracking study of query reformulation. In SIGIR. pages 13–22. Hui Fang and ChengXiang Zhai. 2006. Semantic term matching in axiomatic approaches to information retrieval. In SIGIR. pages 115–122. John R. Frank, Max Kleiman-Weiner, Daniel A. Roberts, Feng Niu, Ce Zhang, Christopher R´e, and Ian Soboroff. 2012. Building an entity-centric stream filtering test collection for TREC 2012. In TREC. Ruiji Fu, Jiang Guo, Bing Qin, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning semantic hierarchies via word embeddings. In ACL. pages 1199– 1209. Yaroslav Ganin and Victor S. Lempitsky. 2015. 
Unsupervised domain adaptation by backpropagation. In ICML. pages 1180–1189. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario Marchand, and Victor S. Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research 17:59:1–59:35. Yang Gao, Yue Xu, and Yuefeng Li. 2013. Patternbased topic models for information filtering. In ICDM Workshops. pages 921–928. Yang Gao, Yue Xu, and Yuefeng Li. 2015. Patternbased topics for document modelling in information filtering. IEEE Trans. Knowl. Data Eng. 27(6):1629–1642. Jiafeng Guo, Yixing Fan, Qingyao Ai, and W. Bruce Croft. 2016a. A deep relevance matching model for ad-hoc retrieval. In CIKM. pages 55–64. Jiafeng Guo, Yixing Fan, Qingyao Ai, and W. Bruce Croft. 2016b. Semantic matching by non-linear word transportation for information retrieval. In CIKM. pages 701–710. Donna Harman. 1994. Overview of the third text retrieval conference (TREC-3). In TREC. pages 1–20. Swapnil Hingmire and Sutanu Chakraborti. 2014. Topic labeled text classification: A weakly supervised approach. In SIGIR. pages 385–394. Swapnil Hingmire, Sandeep Chougule, Girish K. Palshikar, and Sutanu Chakraborti. 2013. Document classification by topic labeling. In SIGIR. pages 877–880. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In NIPS. pages 2042–2050. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry P. Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In CIKM. pages 2333–2338. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. Chenliang Li, Jian Xing, Aixin Sun, and Zongyang Ma. 2016. Effective document labeling with very few seed words: A topic model approach. In CIKM. Bing Liu, Xiaoli Li, Wee Sun Lee, and Philip S. Yu. 2004. Text classification by labeling words. In AAAI. pages 425–430. Tomas Mikolov, Kai Chen, Greg Corrada, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 . Javed Mostafa, Snehasis Mukhopadhyay, Wai Lam, and Mathew J. Palakal. 1997. A multilevel approach to intelligent information filtering: Model, system, and evaluation. ACM Trans. Inf. Syst. 15(4):368– 399. Nikolaos Nanas, Manolis Vavalis, and Anne N. De Roeck. 2010. A network-based model for highdimensional information filtering. In SIGIR. pages 202–209. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL. pages 115– 124. Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, and Xueqi Cheng. 2016. Text matching as image recognition. In AAAI. pages 2793–2799. 2310 Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Jingfang Xu, and Xueqi Cheng. 2017. Deeprank: A new deep architecture for relevance ranking in information retrieval. In CIKM. pages 257–266. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP. pages 1532–1543. Julia Proskurnia, Ruslan Mavlyutov, Carlos Castillo, Karl Aberer, and Philippe Cudr´e-Mauroux. 2017. Efficient document filtering using vector space topic expansion and pattern-mining: The case of event detection in microposts. In CIKM. pages 457–466. Ridho Reinanda, Edgar Meij, and Maarten de Rijke. 2016. Document filtering for long-tail entities. In CIKM. 
pages 771–780. Stephen E. Robertson and Ian Soboroff. 2002. The TREC 2002 filtering track report. In TREC. Stephen E. Robertson and Steve Walker. 1994. Some simple effective approximations to the 2-poisson model for probabilistic weighted retrieval. In SIGIR. pages 232–241. Ian Soboroff and Stephen E. Robertson. 2003. Building a filtering test collection for TREC 2002. In SIGIR. pages 243–250. Yangqiu Song and Dan Roth. 2014. On dataless hierarchical text classification. In AAAI. pages 1579– 1585. Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, and Xueqi Cheng. 2016. Match-srnn: Modeling the recursive matching structure with spatial RNN. In IJCAI. pages 2922–2928. Ho Chung Wu, Robert W. P. Luk, Kam-Fai Wong, and K. L. Kwok. 2007. A retrospective study of a hybrid document-context based retrieval model. Inf. Process. Manage. 43(5):1308–1331. Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. 2017. End-to-end neural ad-hoc ranking with kernel pooling. In SIGIR. pages 55–64. Mianwei Zhou and Kevin Chen-Chuan Chang. 2013. Entity-centric document filtering: boosting feature mapping through meta-features. In CIKM. pages 119–128.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2311–2320 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2311 Disconnected Recurrent Neural Networks for Text Categorization Baoxin Wang Joint Laboratory of HIT and iFLYTEK, iFLYTEK Research, Beijing, China [email protected] Abstract Recurrent neural network (RNN) has achieved remarkable performance in text categorization. RNN can model the entire sequence and capture long-term dependencies, but it does not do well in extracting key patterns. In contrast, convolutional neural network (CNN) is good at extracting local and position-invariant features. In this paper, we present a novel model named disconnected recurrent neural network (DRNN), which incorporates position-invariance into RNN. By limiting the distance of information flow in RNN, the hidden state at each time step is restricted to represent words near the current position. The proposed model makes great improvements over RNN and CNN models and achieves the best performance on several benchmark datasets for text categorization. 1 Introduction Text categorization is a fundamental and traditional task in natural language processing (NLP), which can be applied in various applications such as sentiment analysis (Tang et al., 2015), question classification (Zhang and Lee, 2003) and topic classification (Tong and Koller, 2001). Nowadays, one of the most commonly used methods to handle the task is to represent a text with a low dimensional vector, then feed the vector into a softmax function to calculate the probability of each category. Recurrent neural network (RNN) and convolutional neural network (CNN) are two kinds of neural networks usually used to represent the text. RNN can model the whole sequence and capture long-term dependencies (Chung et al., 2014). However, modeling the entire sequence sometimes case1: One of the seven great unsolved mysteries of mathematics may have been cracked by a reclusive Russian. case2: A reclusive Russian may have cracked one of the seven great unsolved mysteries of mathematics. Table 1: Examples of topic classification can be a burden, and it may neglect key parts for text categorization (Yin et al., 2017). In contrast, CNN is able to extract local and position-invariant features well (Scherer et al., 2010; Collobert et al., 2011). Table 1 is an example of topic classification, where both sentences should be classified as Science and Technology. The key phrase that determines the category is unsolved mysteries of mathematics, which can be well extracted by CNN due to position-invariance. RNN, however, doesn’t address such issues well because the representation of the key phrase relies on all the previous terms and the representation changes as the key phrase moves. In this paper, we incorporate positioninvariance into RNN and propose a novel model named Disconnected Recurrent Neural Network (DRNN). Concretely, we disconnect the information transmission of RNN and limit the maximal transmission step length as a fixed value k, so that the representation at each step only depends on the previous k −1 words and the current word. In this way, DRNN can also alleviate the burden of modeling the entire document. To maintain the position-invariance, we utilize max pooling to extract the important information, which has been suggested by Scherer et al. (2010). 
Our proposed model can also be regarded as a special 1D CNN where convolution kernels are replaced with recurrent units. Therefore, the maximal transmission step length can also be consid2312 ered as the window size in CNN. Another difference to CNN is that DRNN can increase the window size k arbitrarily without increasing the number of parameters. We also find that there is a trade-off between position-invariance and long-term dependencies in the DRNN. When the window size is too large, the position-invariance will disappear like RNN. By contrast, when the window size is too small, we will lose the ability to model long-term dependencies just like CNN. We find that the optimal window size is related to the type of task, but affected little by training dataset sizes. Thus, we can search the optimal window size by training on a small dataset. We conduct experiments on seven large-scale text classification datasets introduced by Zhang et al. (2015). The experimental results show that our proposed model outperforms the other models on all of these datasets. Our contributions can be concluded as follows: 1. We propose a novel model to incorporate position-variance into RNN. Our proposed model can both capture long-term dependencies and local information well. 2. We study the effect of different recurrent units, pooling operations and window sizes on model performance. Based on this, we propose an empirical method to find the optimal window size. 3. Our proposed model outperforms the other models and achieves the best performance on seven text classification datasets. 2 Related Work Deep neural networks have shown great success in many NLP tasks such as machine translation (Bahdanau et al., 2015; Tu et al., 2016), reading comprehension (Hermann et al., 2015), sentiment classification (Tang et al., 2015), etc. Nowadays, nearly most of deep neural networks models are based on CNN or RNN. Below, we will introduce some important works about text classification based on them. Convolutional Neural Networks CNN has been used in natural language processing because of the local correlation and position-invariance. Collobert et al. (2011) first utilize 1D CNN in part of speech (POS), named entity recognition (NER) and semantic role labeling (SRL). Kim (2014) proposes to classify sentence by encoding a sentence with multiple kinds of convolutional filters. To capture the relation between words, Kalchbrenner et al. (2014) propose a novel CNN model with a dynamic k-max pooling. Zhang et al. (2015) introduce an empirical exploration on the use of character-level CNN for text classification. Shallow CNN cannot encode long-term information well. Therefore, Conneau et al. (2017) propose to use very deep CNN in text classification and achieve good performance. Similarly, Johnson and Zhang (2017) propose a deep pyramid CNN which both achieves good performance and reduces training time. Recurrent Neural Networks RNN is suitable for handling sequence input like natural language. Thus, many RNN variants are used in text classification. Tang et al. (2015) utilize LSTM to model the relation of sentences. Similarly, Yang et al. (2016) propose hierarchical attention model which incorporates attention mechanism into hierarchical GRU model so that the model can better capture the important information of a document. Wang and Tian (2016) incorporate the residual networks (He et al., 2016) into RNN, which makes the model handle longer sequence. Xu et al. 
(2016) propose a novel LSTM with a cache mechanism to capture long-range sentiment information.

Hybrid model Some researchers attempt to combine the advantages of CNN and RNN. Xiao and Cho (2016) extract local and global features with CNN and RNN separately. Lai et al. (2015) first model sentences with an RNN and then use a CNN to obtain the final representation. Shi et al. (2016) replace convolution filters with deep LSTMs, which is similar to what is proposed in this paper. The main differences are as follows. Firstly, they regard their model as a CNN and set a small window size of 3, while we propose to use a large window size. We argue that a small window size makes the model lose the ability to capture long-term dependencies. Secondly, we utilize max pooling rather than mean pooling, because max pooling can maintain position-invariance better (Scherer et al., 2010). Finally, our DRNN model is more general and can make use of different kinds of recurrent units. We find that using GRU as the recurrent unit outperforms the LSTM utilized by Shi et al. (2016).

[Figure 1: Three model architectures: (a) RNN, (b) DRNN, (c) CNN. In order to ensure the consistency of the hidden output, we pad k − 1 zero vectors on the left of the input sequence for DRNN and CNN. Here the window size k is 3.]

3 Method

3.1 Recurrent Neural Network (RNN)

RNN is a class of neural networks that models a sequence by incorporating the notion of time steps (Lipton et al., 2015). Figure 1(a) shows the structure of RNN. The hidden state at each step depends on all the previous inputs, which can sometimes be a burden and cause the model to neglect key information (Yin et al., 2017). A variant of RNN has been introduced by Cho et al. (2014) under the name of gated recurrent unit (GRU). GRU is a special type of RNN capable of learning potential long-term dependencies by using gates. The gating units control the flow of information and mitigate the vanishing gradient problem. GRU has two types of gates: a reset gate r_t and an update gate z_t. The hidden state h_t of GRU is computed as

h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t   (1)

where h_{t−1} is the previous state, h̃_t is the candidate state computed from the new input information, and ⊙ is element-wise multiplication. The update gate z_t decides how much new information is incorporated and is computed as

z_t = σ(W_z x_t + U_z h_{t−1})   (2)

where x_t is the input vector at step t. The candidate state h̃_t is computed by

h̃_t = tanh(W x_t + U (r_t ⊙ h_{t−1}))   (3)

where r_t is the reset gate, which controls the flow of previous information. Similarly to the update gate, the reset gate r_t is computed as

r_t = σ(W_r x_t + U_r h_{t−1})   (4)

We can see that the representation at step t depends on all the previous input vectors. Thus, we can also express the state at step t as shown in Equation (5):

h_t = GRU(x_t, x_{t−1}, x_{t−2}, ..., x_1)   (5)

3.2 Disconnected Recurrent Neural Networks (DRNN)

To reduce the burden of modeling the entire sentence, we limit the distance of information flow in RNN. Like other RNN variants, we feed the input sequence into an RNN model and generate an output vector at each step. One important difference from RNN is that the state of our model at each step is only related to the previous k − 1 words rather than all the previous words. Here k is a hyperparameter called the window size that we need to set. Our proposed model DRNN is illustrated in Figure 1(b).
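Before moving on to how DRNN restricts this recurrence, a minimal sketch of the GRU update in Equations (1)–(4) is shown below (bias terms omitted). In practice one would rely on a library cell such as torch.nn.GRUCell; the explicit version is only meant to make the gating equations concrete.

```python
import torch

def gru_step(x_t, h_prev, W_z, U_z, W_r, U_r, W, U):
    """One GRU update following Equations (1)-(4); bias terms are omitted."""
    z_t = torch.sigmoid(x_t @ W_z.T + h_prev @ U_z.T)        # Eq. (2): update gate
    r_t = torch.sigmoid(x_t @ W_r.T + h_prev @ U_r.T)        # Eq. (4): reset gate
    h_tilde = torch.tanh(x_t @ W.T + (r_t * h_prev) @ U.T)   # Eq. (3): candidate state
    return (1 - z_t) * h_prev + z_t * h_tilde                # Eq. (1): new hidden state
```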
Since the output at each step only depends on the previous k − 1 words and the current word, the output can also be regarded as a representation of a phrase of k words. Phrases consisting of the same k words will always have the same representation no matter where they appear. That is, we incorporate position-invariance into RNN by disconnecting the information flow of RNN. Similarly, we can get the state h_t as follows:

h_t = RNN(x_t, x_{t−1}, x_{t−2}, ..., x_{t−k+1})   (6)

Here k is the window size, and RNN can be a naive RNN, LSTM (Hochreiter and Schmidhuber, 1997), GRU or any other kind of recurrent unit.

3.3 Comparison with Convolutional Neural Network (CNN)

DRNN can be considered as a special 1D CNN in which the convolution filters are replaced with recurrent units. Let x_t denote the t-th input word vector. Then for each position t we can get a window vector c_t:

c_t = [x_t, x_{t−1}, x_{t−2}, ..., x_{t−k+1}]   (7)

Here, we concatenate k word vectors to generate the vector c_t. Then we can get the output of the convolution as follows:

h_t = W c_t + b   (8)

where W is a set of convolution filters and b is a bias vector. A pooling operation can then be applied after the convolutional layer to generate a fixed-size vector (Kim, 2014). Similarly to RNN and DRNN, we can also represent the context vector of CNN as follows:

h_t = Conv(x_t, x_{t−1}, x_{t−2}, ..., x_{t−k+1})   (9)

Obviously, the parameters of the convolution filters W increase as the window size k increases. By contrast, for DRNN the parameters do not increase with the window size. Hence, DRNN can mitigate the overfitting problem caused by the increase in parameters.

3.4 DRNN for Text Classification

DRNN is a general model framework which can be used for a variety of tasks. In this paper, we only discuss how to apply DRNN to text categorization. We utilize GRU as the recurrent unit of DRNN and get a context representation at each step. Every context vector can be considered as a representation of a text fragment. Then we feed the context vectors into a multi-layer perceptron (MLP) to extract high-level features, as illustrated in Figure 2.

[Figure 2: Model architecture. The input sequence is fed into a DGRU layer, followed by an MLP, max pooling, another MLP, and a softmax output.]

Before feeding the vectors into the MLP, we apply Batch Normalization (Ioffe and Szegedy, 2015) after DRNN so that the model can alleviate the internal covariate shift problem. To get the text representation vector, we apply max pooling after the MLP layer to extract the most important information and position-invariant features (Scherer et al., 2010). Finally, we feed the text representation vector into an MLP with rectified linear unit (ReLU) activation and send the output of the MLP to a softmax function to predict the probability of each category. We use the cross entropy loss function:

H(y, ŷ) = −Σ_i y_i log ŷ_i   (10)

where ŷ_i is the predicted probability and y_i is the true probability of class i. To alleviate the overfitting problem, we apply dropout regularization (Srivastava et al., 2014) in the DRNN model. Dropout is usually applied in the input and output layers but not in the hidden states of RNN, because the number of previous states is variable (Zaremba et al., 2014). In contrast, our DRNN model has a fixed window size for the output at each step, so we also apply dropout in the hidden states.

[Figure 3: Dropout in DRNN. The dashed arrows indicate connections where dropout is applied. The left model only applies dropout in the input and output layers, while the right model also applies dropout in the hidden states.]
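Putting Sections 3.2–3.4 together, the sketch below shows one way to realise the classifier in PyTorch: every position is encoded by running a shared GRU over its own k-word window, and the per-position states pass through Batch Normalization, a position-wise MLP, max pooling over positions, and a final MLP with softmax output. The window-unfolding strategy, layer sizes and exact dropout placement are our reading of the description above, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DRNNClassifier(nn.Module):
    def __init__(self, vocab_size, n_classes, emb_dim=300, hidden=300,
                 window=15, dropout=0.5):
        super().__init__()
        self.window = window
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.bn = nn.BatchNorm1d(hidden)
        self.mlp = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, n_classes)
        self.drop = nn.Dropout(dropout)

    def forward(self, tokens):
        # tokens: (batch, seq_len) of word ids
        x = self.drop(self.embed(tokens))                 # input dropout
        b, n, e = x.shape
        # Pad k-1 zero vectors on the left, then slice a k-word window
        # ending at every position (the "disconnected" recurrence).
        x = F.pad(x, (0, 0, self.window - 1, 0))          # (b, n + k - 1, e)
        win = x.unfold(1, self.window, 1)                 # (b, n, e, k)
        win = win.permute(0, 1, 3, 2).reshape(b * n, self.window, e)
        _, h = self.gru(win)                              # h: (1, b*n, hidden)
        h = self.drop(h.squeeze(0)).view(b, n, -1)        # per-position states
        h = self.bn(h.transpose(1, 2)).transpose(1, 2)    # BatchNorm over features
        h = F.relu(self.mlp(h))                           # position-wise MLP
        doc, _ = h.max(dim=1)                             # max pooling over positions
        return self.out(self.drop(doc))                   # class logits

# Training sketch: loss = nn.CrossEntropyLoss()(model(batch_tokens), batch_labels)
```

Because the GRU weights are shared across all windows, enlarging k changes the amount of computation per position but not the number of parameters, which is exactly the property contrasted with CNN in Section 3.3.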
In this paper, we apply dropout in the input layer, output layer, and hidden states. The Figure 3 shows the difference to apply dropout between 2315 AG DBP Yelp P. Yelp F. Yah. A. Amz. F. Amz. P. Tasks News Ontology SA SA QA SA SA Train dataset 120k 560k 560k 650k 1.4M 3.6M 3M Test dataset 7.6k 70k 38k 50k 60k 400k 650k Average Lengths 45 55 153 155 112 93 91 Classes Number 4 14 2 5 10 5 2 Table 2: Dataset information. Here SA refers to sentiment analysis, and QA refers to question answering. RNN and DRNN. 4 Experiments 4.1 Experimental Settings Datasets Introduction We use 7 large-scale text classification datasets which are proposed by Zhang et al. (2015). We summarize the datasets in Table 2. AG corpus is news and DBPedia is an ontology which comes from the Wikipedia. Yelp and Amazon corpus are reviews for which we should predict the sentiment. Here P. means that we only need to predict the polarities of the dataset, while F. indicates that we need predict the star number of the review. Yahoo! Answers (Yah. A.) is a question answering dataset. We can see that these datasets contain various domains and sizes, which would be credible to validate our models. Implementation Details We tokenize all the corpus with NLTK’s tokenizer (Bird and Loper, 2004). We limit the vocabulary size of each dataset as shown in Table 3. The words not in vocabulary are replaced with a special token UNK. Table 3 also shows the window sizes that we set for these datasets. We utilize the 300D GloVe 840B vectors (Pennington et al., 2014) as our pre-trained word embeddings. For words that do not appear in GloVe, we average the vector representations of 8 words around the word in training dataset as its word vector, which has been applied by Wang and Jiang (2016). When training our model, word embeddings are updated along with other parameters. We use Adadelta (Zeiler, 2012) to optimize all the trainable parameters. The hyperparameter of Adadelta is set as Zeiler (2012) suggest that ϵ is 1e −6 and ρ is 0.95. To avoid the gradient explosion problem, we apply gradient norm clipping (Pascanu et al., 2013). The batch size is set to 128 and all the dimensions of input vectors and hidden Corpus Window size Vocabulary size AG 15 100k DBP. 15 500k Yelp P. 20 200k Yelp F. 20 200k Yah. A. 20 500k Amz. F. 15 500k Amz. P. 15 500k Table 3: Experimental settings states are set to 300. 4.2 Experimental Results Table 4 shows that our proposed model significantly outperforms all the other models in 7 datasets. DRNN does not have too many hyperparameters. The main hyperparameter is the window size which can be determined by an empirical method. The top block shows the traditional methods and some other neural networks which are not based on RNN or CNN. The linear model (Zhang et al., 2015) achieves a strong baseline in small datasets, but performs not well in large data. FastText (Joulin et al., 2017) and region embedding methods (Qiao et al., 2018) achieve comparable performance with other CNN and RNN based models. The RNN based models are listed in the second block and CNN based models are in the third block. The D-LSTM (Yogatama et al., 2017) is a discriminative LSTM model. Hierarchical attention network (HAN) (Yang et al., 2016) is a hierarchical GRU model with attentive pooling. We can see that very deep CNN (VDCNN) (Conneau et al., 2017) performs well in large datasets. However, VDCNN is a CNN model with 29 convolutional layers, which needs to be tuned more carefully. 
By contrast, our proposed model can achieve 2316 Models AG DBP. Yelp P. Yelp F. Yah. A. Amz. F. Amz. P. Linear model (Zhang et al., 2015) 7.64 1.31 4.36 40.14 28.96 44.74 7.98 FastText (Joulin et al., 2017) 7.5 1.4 4.3 36.1 27.7 39.8 5.4 Region.emb (Qiao et al., 2018) 7.2 1.1 4.7 35.1 26.3 39.1 4.7 D-LSTM (Yogatama et al., 2017) 7.9 1.3 7.4 40.4 26.3 HAN (Yang et al., 2016) 24.2 36.4 char-CNN (Zhang et al., 2015) 9.51 1.55 4.88 37.95 28.80 40.43 4.93 word-CNN (Zhang et al., 2015) 8.55 1.37 4.60 39.58 28.84 42.39 5.51 VDCNN (Conneau et al., 2017) 8.67 1.29 4.28 35.28 26.57 37.00 4.28 char-CRNN (Xiao and Cho, 2016) 8.64 1.43 5.51 38.18 28.26 40.77 5.87 DRNN 5.53 0.81 2.73 30.85 23.74 35.57 3.51 Table 4: Error rates (%) on seven datasets 3 5 10 15 20 30 40 Window size 5.6 5.8 6.0 6.2 6.4 6.6 6.8 Error rate (%) DGRU CNN Figure 4: DGRU compared with CNN better performance in these datasets by simply setting a large window size. Char-CRNN (Xiao and Cho, 2016) in the fourth block is a model which combines positioninvariance of CNN and long-term dependencies of RNN. Nevertheless, they do not achieve great improvements over other models. They first utilize convolution operation to extract position-invariant features, and then use RNN to capture long-term dependencies. Here, modeling the whole sequence with RNN leads to a loss of position-invariance. Compared with their model, our model can better maintain the position-invariance by max pooling (Scherer et al., 2010). Table 4 shows that our model achieves 10-50% relative error reduction compared with char-CRNN in these datasets. 4.3 Comparison with RNN and CNN In this section, we compare DRNN with CNN, GRU and LSTM (Hochreiter and Schmidhuber, 1997). To make these models comparable, we imModels AG DBP. Yelp P. CNN 6.30 1.13 4.08 GRU 6.25 0.96 3.41 LSTM 6.20 0.90 3.20 DRNN 5.53 0.81 2.73 Table 5: Comparison with RNN and CNN. Table shows the error rate (%) on three datasets. plement these models with the same architecture shown in Figure 2. We just replace the DRNN with CNN or RNN. we firstly compare DRNN with CNN on AG dataset. Figure 4 shows that DRNN performs far better than CNN. In addition, the optimal window size of CNN is 3, while for DRNN the optimal window size is 15. It indicates that DRNN can model longer sequence as window size increases. By contrast, simply increasing the window size of CNN only results in overfitting. That is also why Conneau et al. (2017) design complex CNN models to learn long-term dependencies other than simply increase the window size of convolution filters. In addition, we also compare our model with GRU and LSTM. The experimental results are shown in Table 5. Our model DRNN achieves much better performance than GRU and LSTM. Qualitative Analysis To investigate why DGRU performs better than CNN and GRU, we do some error analysis on Yelp P. dataset. Table 6 shows two examples which have been both 2317 case1: I love Hampton Inn but this location is in serious need of remodeling and some deep cleaning. Musty smell everywhere. case2: Pretty good service, but really busy and noisy!! It gets a little overwhelming because the sales people are very knowledgeable and bombard you with useless techy information to I guess impress you?? Anyways I bought the Ipad 3 and it is freaking awesome and makes up for the store. I would give the Ipad 3 a gazillion stars if I could. I left it at home today and got really sad when I was driving away. Boo Hoo!! Table 6: Examples of error analysis. 
The case 1 is a negative review and case 2 is a positive review. The first example is misclassified by CNN and classified correctly by GRU. The second one is just the contrary. DGRU classify both examples correctly. 3 5 10 15 20 30 40 Window size 5.6 5.8 6.0 6.2 6.4 Error rate (%) DGRU DLSTM (a) Comparison of recurrent units 3 5 10 15 20 30 40 Window size 5.6 5.8 6.0 6.2 6.4 Error rate (%) Max Mean Attentive (b) Comparison of pooling methods Figure 5: Component comparison classified correctly by DRNN. The first example is misclassified by CNN and classified correctly by GRU. It is just contrary to the second example. Considering the first example, CNN may extract some key phrases such as I love and misclassifies the example as Positive, while GRU can model long sequence and capture the information after but. For the second example, however, GRU still captures the information after but and neglects the key phrases such as pretty good service and freaking awesome, which leads to the wrong classification. DGRU can both extract the local key features such as pretty good service and capture long-term information such as the sentence after but, which makes it perform better than GRU and CNN. 4.4 Component Analysis Recurrent Unit In this part, we study the impact of different recurrent units on the effectiveness of DRNN. We choose three types of recurrent units: naive RNN, LSTM and GRU which have been compared by Chung et al. (2014). We carry out the experiments with different window sizes to eliminate the impact of window sizes. All the experiments in this part are conducted on the AG dataset. We find that the disconnected naive RNN performs just a little worse than disconnected LSTM (DLSTM) and disconnected GRU (DGRU) when the window size is lower than 5. However, when the window size is more than 10, its performance decreases rapidly and the error rate becomes even more than 20%. We believe that it is due to vanishing gradient problem of naive RNN. From Figure 5(a), we can see that window sizes affect the performance of DGRU and DLSTM. DGRU achieves the best performance when the window size is 15, while the best window size for DLSTM is 5. The performance of DGRU is always better than DLSTM no matter what the window size is. We also find that the DGRU model converges faster than DLSTM in the process of training. Therefore, we apply GRU as recurrent units of DRNN in this paper for all the other experiments. Pooling Method Pooling is a kind of method to subsample the values to capture more important information. In NLP, pooling can also convert a variable-length tensor or vector into a fixed-length 2318 3 5 10 15 20 30 40 Window size −0.15 −0.10 −0.05 0.00 Error reduction rate (%) AG DBP. Yelp P. (a) Different tasks 3 5 10 15 20 30 40 Window size 1.0 1.2 1.4 1.6 1.8 Error rate (%) 480K 120K 30K (b) Different training sets of DBP. 3 5 10 15 20 30 40 Window size 3.0 3.5 4.0 4.5 5.0 5.5 Error rate (%) 480K 120K 30K (c) Different training sets of Yelp P. Figure 6: Window size analysis. For better comparing the trends of different tasks, (a) shows the error reduction rates with different window sizes. (b) and (c) show the error rates of DBP. and Yelp P. with different training set numbers. one, so that it can be dealt with more easily. There’re several kinds of pooling methods such as max pooling, mean pooling and attentive pooling (dos Santos et al., 2016). We still conduct the experiments on AG dataset. 
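For reference, the three pooling variants compared in this section can be swapped in over the per-position features (shape batch × positions × dim) as in the sketch below; the attentive variant here is a generic single-query attention and is not necessarily the exact formulation of dos Santos et al. (2016).

```python
import torch
import torch.nn as nn

class Pooling(nn.Module):
    """Max, mean, or a simple attentive pooling over positions."""

    def __init__(self, dim, mode="max"):
        super().__init__()
        self.mode = mode
        self.att = nn.Linear(dim, 1) if mode == "attentive" else None

    def forward(self, h):                                 # h: (batch, positions, dim)
        if self.mode == "max":
            return h.max(dim=1).values
        if self.mode == "mean":
            return h.mean(dim=1)
        weights = torch.softmax(self.att(h), dim=1)       # (batch, positions, 1)
        return (weights * h).sum(dim=1)                   # weighted sum over positions
```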
Figure 5(b) shows the experimental results of three pooling methods along with different window sizes. From Figure 5(b), we can see that the DRNN model with max pooling performs better than the others. This may be because that max pooling can capture position-invariant features better (Scherer et al., 2010). We find attentive pooling is not significantly affected by window sizes. However, the performance of mean pooling becomes worse as the window becomes larger. 4.5 Window size analysis In this section, we mainly study what factors affect the optimal window size. In addition to the recurrent units and pooling methods discussed above, we believe the optimal window size may be also related to the amount of training data and the type of task. In order to study the factors that affect the optimal window size, we conduct experiments on three datasets: AG, DBP and Yelp Polarity. To eliminate the influence of differrnt training data sizes, we conduct experiments with the same training data size. From Figure 6(a) we can see that the type of task has a great impact on the optimal window size. For AG and DBPedia, the optimal window size is 15. However, for Yelp P. the optimal window size is 40 or even larger. The result is intuitive, because sentiment analysis such as Yelp often involves long-term dependencies (Tang et al., 2015), while topic classification such as AG and DBPedia relys more on the key phrases. From Figure 6(b) and Figure 6(c) we can see the effect of different training data sizes on the optimal window size. Surprisingly, the effect of different training data sizes on the optimal window size seems little. We can see that for both DBPedia and Yelp corpus, the trend of error rate with the window size is similar. This shows that the number of training data has little effect on the choice of the optimal window size. It also provides a good empirical way for us to choose the optimal window size. That is, conducting experiments on a small dataset first to select the optimal window size. 5 Conclusion In this paper, we incorporate position-invariance into RNN, so that our proposed model DRNN can both capture key phrases and long-term dependencies. We conduct experiments to compare the effects of different recurrent units and pooling operations. In addition, We also analyze what factors affect the optimal window size of DRNN and present an empirical method to search it. The experimental results show that our proposed model outperforms CNN and RNN models, and achieve the best performance in seven large-scale text classification datasets. Acknowledgments This work was supported by the National Key Research and Development Program of China (No. 2016YFC0800806). I would like to thank Jianfeng Li, Shijin Wang, Ting liu, Guoping Hu, Shangmin Guo, Ziyue Wang, Xiaoxue Wang and the anonymous reviewers for their insightful comments and suggestions. 2319 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. Steven Bird and Edward Loper. 2004. Nltk: the natural language toolkit. In Proceedings of the ACL 2004 on Interactive poster and demonstration sessions. Association for Computational Linguistics, page 31. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 . 
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 . Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12:2493–2537. Alexis Conneau, Holger Schwenk, Lo¨ıc Barrault, and Yann Lecun. 2017. Very deep convolutional networks for text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. volume 1, pages 1107–1116. Cıcero Nogueira dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. Attentive pooling networks. CoRR, abs/1602.03609 2(3):4. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition. pages 770– 778. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. pages 1693– 1701. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning. pages 448–456. Rie Johnson and Tong Zhang. 2017. Deep pyramid convolutional neural networks for text categorization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 562–570. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. volume 2, pages 427–431. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188 . Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 1746–1751. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In AAAI. volume 333, pages 2267– 2273. Zachary C Lipton, John Berkowitz, and Charles Elkan. 2015. A critical review of recurrent neural networks for sequence learning. arXiv preprint arXiv:1506.00019 . Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning. pages 1310–1318. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). pages 1532–1543. Chao Qiao, Bo Huang, Guocheng Niu, Daren Li, Daxiang Dong, Wei He, Dianhai Yu, and Hua Wu. 2018. A new method of region embedding for text classification. In International Conference on Learning Representations. Dominik Scherer, Andreas M¨uller, and Sven Behnke. 2010. Evaluation of pooling operations in convolutional architectures for object recognition. 
In International conference on artificial neural networks. Springer, pages 92–101. Yangyang Shi, Kaisheng Yao, Le Tian, and Daxin Jiang. 2016. Deep lstm based feature mapping for query classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 1501–1511. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1):1929–1958. 2320 Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 conference on empirical methods in natural language processing. pages 1422–1432. Simon Tong and Daphne Koller. 2001. Support vector machine active learning with applications to text classification. Journal of machine learning research 2(Nov):45–66. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. arXiv preprint arXiv:1601.04811 . Shuohang Wang and Jing Jiang. 2016. Learning natural language inference with lstm. In Proceedings of NAACL-HLT. pages 1442–1451. Yiren Wang and Fei Tian. 2016. Recurrent residual learning for sequence classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 938–943. Yijun Xiao and Kyunghyun Cho. 2016. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367 . Jiacheng Xu, Danlu Chen, Xipeng Qiu, and Xuanjing Huang. 2016. Cached long short-term memory neural networks for document-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 1660–1669. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 1480–1489. Wenpeng Yin, Katharina Kann, Mo Yu, and Hinrich Sch¨utze. 2017. Comparative study of cnn and rnn for natural language processing. arXiv preprint arXiv:1702.01923 . Dani Yogatama, Chris Dyer, Wang Ling, and Phil Blunsom. 2017. Generative and discriminative text classification with recurrent neural networks. arXiv preprint arXiv:1703.01898 . Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329 . Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 . Dell Zhang and Wee Sun Lee. 2003. Question classification using support vector machines. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval. ACM, pages 26–32. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems. pages 649–657.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2321–2331 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2321

Joint Embedding of Words and Labels for Text Classification
Guoyin Wang, Chunyuan Li∗, Wenlin Wang, Yizhe Zhang, Dinghan Shen, Xinyuan Zhang, Ricardo Henao, Lawrence Carin
Duke University
{gw60,cl319,ww107,yz196,ds337,xz139,r.henao,lcarin}@duke.edu
∗Corresponding author

Abstract

Word embeddings are effective intermediate representations for capturing semantic regularities between words, when learning the representations of text sequences. We propose to view text classification as a label-word joint embedding problem: each label is embedded in the same space as the word vectors. We introduce an attention framework that measures the compatibility of embeddings between text sequences and labels. The attention is learned on a training set of labeled samples to ensure that, given a text sequence, the relevant words are weighted higher than the irrelevant ones. Our method maintains the interpretability of word embeddings, and enjoys a built-in ability to leverage alternative sources of information, in addition to input text sequences. Extensive results on several large text datasets show that the proposed framework outperforms the state-of-the-art methods by a large margin, in terms of both accuracy and speed.

1 Introduction

Text classification is a fundamental problem in natural language processing (NLP). The task is to annotate a given text sequence with one (or multiple) class label(s) describing its textual content. A key intermediate step is the text representation. Traditional methods represent text with hand-crafted features, such as sparse lexical features (e.g., n-grams) (Wang and Manning, 2012). Recently, neural models have been employed to learn text representations, including convolutional neural networks (CNNs) (Kalchbrenner et al., 2014; Zhang et al., 2017b; Shen et al., 2017) and recurrent neural networks (RNNs) based on long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997; Wang et al., 2018). To further increase the representation flexibility of such models, attention mechanisms (Bahdanau et al., 2015) have been introduced as an integral part of models employed for text classification (Yang et al., 2016). The attention module is trained to capture the dependencies that make significant contributions to the task, regardless of the distance between the elements in the sequence. It can thus provide complementary information to the distance-aware dependencies modeled by RNN/CNN. The increasing representation power of the attention mechanism comes with increased model complexity. Alternatively, several recent studies show that the success of deep learning on text classification largely depends on the effectiveness of the word embeddings (Joulin et al., 2016; Wieting et al., 2016; Arora et al., 2017; Shen et al., 2018a). Particularly, Shen et al. (2018a) quantitatively show that word-embeddings-based text classification tasks can have a similar level of difficulty regardless of the employed models, using the concept of intrinsic dimension (Li et al., 2018). Thus, simple models are preferred. As the basic building blocks in neural-based NLP, word embeddings capture the similarities/regularities between words (Mikolov et al., 2013; Pennington et al., 2014).
This idea has been extended to compute embeddings that capture the semantics of word sequences (e.g., phrases, sentences, paragraphs and documents) (Le and Mikolov, 2014; Kiros et al., 2015). These representations are built upon various types of compositions of word vectors, ranging from simple averaging to sophisticated architectures. Further, they suggest that simple models are efficient and interpretable, and have the potential to outperform sophisticated deep neural models. It is therefore desirable to leverage the best of both lines of work: learning text representations to capture the dependencies that make significant contributions to the task, while maintaining low computational cost. For the task of text classification, labels play a central role in the final performance. A natural question to ask is how we can directly use label information in constructing the text-sequence representations.

1.1 Our Contribution

Our primary contribution is therefore to propose such a solution by making use of the label embedding framework, and propose the Label-Embedding Attentive Model (LEAM) to improve text classification. While there is abundant literature in the NLP community on word embeddings (how to describe a word) for text representations, much less work has been devoted to label embeddings (how to describe a class). The proposed LEAM is implemented by jointly embedding the word and label in the same latent space, and the text representations are constructed directly using the text-label compatibility. Our label embedding framework has the following salutary properties: (i) Label-attentive text representation is informative for the downstream classification task, as it directly learns from a shared joint space, whereas traditional methods proceed in multiple steps by solving intermediate problems. (ii) The LEAM learning procedure only involves a series of basic algebraic operations, and hence it retains the interpretability of simple models, especially when the label description is available. (iii) Our attention mechanism (derived from the text-label compatibility) has fewer parameters and less computation than related methods, and thus is much cheaper in both training and testing, compared with sophisticated deep attention models. (iv) We perform extensive experiments on several text-classification tasks, demonstrating the effectiveness of our label-embedding attentive model, providing state-of-the-art results on benchmark datasets. (v) We further apply LEAM to predict the medical codes from clinical text. As an interesting by-product, our attentive model can highlight the informative key words for prediction, which in practice can reduce a doctor's burden of reading clinical notes.

2 Related Work

Label embedding has been shown to be effective in various domains and tasks. In computer vision, there has been a vast amount of research on leveraging label embeddings for image classification (Akata et al., 2016), multimodal learning between images and text (Frome et al., 2013; Kiros et al., 2014), and text recognition in images (Rodriguez-Serrano et al., 2013). It is particularly successful on the task of zero-shot learning (Palatucci et al., 2009; Yogatama et al., 2015; Ma et al., 2016), where the label correlation captured in the embedding space can improve the prediction when some classes are unseen.
In NLP, label embedding for text classification has been studied in the context of heterogeneous networks (Tang et al., 2015) and multitask learning (Zhang et al., 2017a), respectively. To the authors' knowledge, there is little research investigating the effectiveness of label embeddings for designing efficient attention models, and how to jointly embed words and labels to make full use of label information for text classification has not been studied previously, representing a contribution of this paper.

For text representation, the current best-performing models usually consist of an encoder and a decoder connected through an attention mechanism (Vaswani et al., 2017; Bahdanau et al., 2015), with successful applications to sentiment classification (Zhou et al., 2016), sentence pair modeling (Yin et al., 2016) and sentence summarization (Rush et al., 2015). Based on this success, more advanced attention models have been developed, including hierarchical attention networks (Yang et al., 2016), attention over attention (Cui et al., 2016), and multi-step attention (Gehring et al., 2017). The idea of attention is motivated by the observation that different words in the same context are differentially informative, and the same word may be differentially important in a different context. The realization of "context" varies in different applications and model architectures. Typically, the context is chosen as the target task, and the attention is computed over the hidden layers of a CNN/RNN. Our attention model is directly built in the joint embedding space of words and labels, and the context is specified by the label embedding.

Several recent works (Vaswani et al., 2017; Shen et al., 2018b,c) have demonstrated that simple attention architectures can alone achieve state-of-the-art performance with less computational time, dispensing with recurrence and convolutions entirely. Our work is in the same direction, sharing a similar spirit of retaining model simplicity and interpretability. The major difference is that the aforementioned work focused on self-attention, which applies attention to each pair of word tokens from the text sequences. In this paper, we investigate the attention between words and labels, which is more directly related to the target task. Furthermore, the proposed LEAM has far fewer model parameters.

3 Preliminaries

Throughout this paper, we denote vectors as bold, lower-case letters, and matrices as bold, upper-case letters. We use ⊘ for element-wise division when applied to vectors or matrices, ◦ for function composition, and Δ^p for the set of one-hot vectors in dimension p. Given a training set S = {(X_n, y_n)}_{n=1}^{N} of pair-wise data, X ∈ 𝒳 is the text sequence and y ∈ 𝒴 is its corresponding label. Specifically, y is a one-hot vector in the single-label problem and a binary vector in the multi-label problem, as defined later in Section 4.1. Our goal for text classification is to learn a function f : 𝒳 ↦ 𝒴 by minimizing an empirical risk of the form:

\min_{f \in \mathcal{F}} \frac{1}{N} \sum_{n=1}^{N} \delta(y_n, f(X_n))    (1)

where δ : 𝒴 × 𝒴 ↦ ℝ measures the loss incurred from predicting f(X) when the true label is y, and f belongs to the functional space ℱ. In the evaluation stage, we use the 0/1 loss as a target loss: δ(y, z) = 0 if y = z, and 1 otherwise. In the training stage, we consider surrogate losses commonly used for structured prediction in different problem setups (see Section 4.1 for details on the surrogate losses used in this paper).
More specifically, an input sequence X of length L is composed of word tokens: X = {x_1, · · · , x_L}. Each token x_l is a one-hot vector in the space Δ^D, where D is the dictionary size. Performing learning in Δ^D is computationally expensive and difficult. An elegant framework in NLP, initially proposed in (Mikolov et al., 2013; Le and Mikolov, 2014; Pennington et al., 2014; Kiros et al., 2015), allows one to concisely perform learning by mapping the words into an embedding space. The framework relies on so-called word embeddings: Δ^D ↦ ℝ^P, where P is the dimensionality of the embedding space. Therefore, the text sequence X is represented via the respective word embedding for each token: V = {v_1, · · · , v_L}, where v_l ∈ ℝ^P. A typical text classification method proceeds in three steps, end-to-end, by considering a function decomposition f = f_0 ◦ f_1 ◦ f_2, as shown in Figure 1(a):

• f_0 : X ↦ V, the text sequence is represented as its word-embedding form V, which is a matrix of size P × L.
• f_1 : V ↦ z, a compositional function f_1 aggregates word embeddings into a fixed-length vector representation z.
• f_2 : z ↦ y, a classifier f_2 annotates the text representation z with a label.

A vast amount of work has been devoted to devising the proper functions f_0 and f_1, i.e., how to represent a word or a word sequence, respectively. The success of NLP largely depends on the effectiveness of the word embeddings in f_0 (Bengio et al., 2003; Collobert and Weston, 2008; Mikolov et al., 2013; Pennington et al., 2014). They are often pre-trained offline on a large corpus, then refined jointly via f_1 and f_2 for task-specific representations. Furthermore, the design of f_1 can be broadly cast into two categories. The popular deep learning models consider the mapping as a "black box," and have employed sophisticated CNN/RNN architectures to achieve state-of-the-art performance (Zhang et al., 2015; Yang et al., 2016). On the contrary, recent studies show that simple manipulation of the word embeddings, e.g., mean or max-pooling, can also provide surprisingly excellent performance (Joulin et al., 2016; Wieting et al., 2016; Arora et al., 2017; Shen et al., 2018a). Nevertheless, these methods only leverage the information from the input text sequence.

4 Label-Embedding Attentive Model

4.1 Model

By examining the three steps in the traditional pipeline of text classification, we note that the use of label information only occurs in the last step, when learning f_2, and its impact on learning the representations of words in f_0 or word sequences in f_1 is ignored or indirect.
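To make the traditional three-step decomposition above concrete before turning to the label-attentive variant, the following minimal sketch composes f_0, f_1 and f_2 for a single-label classifier, using mean-pooling as the compositional function f_1. It is only an illustration of the pipeline, not the authors' implementation; the shapes and random initialization are hypothetical.

    import numpy as np

    D, P, K, L = 10000, 300, 4, 50           # vocab size, embedding dim, #classes, sequence length
    rng = np.random.default_rng(0)
    E = rng.normal(0.0, 0.1, (D, P))         # word embedding table used by f_0 (e.g. GloVe in practice)
    W2 = rng.normal(0.0, 0.1, (P, K))        # classifier parameters of f_2
    b2 = np.zeros(K)

    def f0(tokens):                          # f_0: token ids -> word embeddings V (L x P)
        return E[tokens]

    def f1(V):                               # f_1: aggregate word embeddings into a fixed-length z
        return V.mean(axis=0)                # mean-pooling, one common simple choice

    def f2(z):                               # f_2: annotate z with a class distribution
        logits = z @ W2 + b2
        e = np.exp(logits - logits.max())
        return e / e.sum()

    tokens = rng.integers(0, D, size=L)      # a toy document of token ids
    probs = f2(f1(f0(tokens)))               # the end-to-end composition of the three steps
    print(probs.argmax(), probs.round(3))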
[Figure 1: Illustration of different schemes for document representations z. Panel (a), the traditional method: much work in NLP has been devoted to directly aggregating the word embeddings V for z. Panel (b), the proposed joint embedding method: we focus on learning the label embedding C (how to embed class labels in a Euclidean space), and leveraging the "compatibility" G between embedded words and labels to derive the attention score β for an improved z. Note that ⊗ denotes the cosine similarity between C and V. In this figure, there are K = 2 classes.]

Hence, we propose a new pipeline by incorporating label information in every step, as shown in Figure 1(b):
• f_0: Besides embedding words, we also embed all the labels in the same space, which act as the "anchor points" of the classes to influence the refinement of word embeddings.
• f_1: The compositional function aggregates word embeddings into z, weighted by the compatibility between labels and words.
• f_2: The learning of f_2 remains the same, as it directly interacts with labels.

Under the proposed label embedding framework, we specifically describe a label-embedding attentive model.

Joint Embeddings of Words and Labels We propose to embed both the words and the labels into a joint space, i.e., Δ^D ↦ ℝ^P and 𝒴 ↦ ℝ^P. The label embeddings are C = [c_1, · · · , c_K], where K is the number of classes. A simple way to measure the compatibility of label-word pairs is via the cosine similarity

G = (C^{\top} V) \oslash \hat{G},    (2)

where \hat{G} is the normalization matrix of size K × L, with each element obtained as the product of the ℓ2 norms of the k-th label embedding and the l-th word embedding: \hat{g}_{kl} = ‖c_k‖ ‖v_l‖. To further capture the relative spatial information among consecutive words (i.e., phrases; we use "phrase" loosely here, as it could be any longer word sequence, such as a sentence or paragraph, when a larger window size r is considered) and to introduce non-linearity in the compatibility measure, we consider a generalization of (2). Specifically, for a text phrase of length 2r + 1 centered at l, the local matrix block G_{l−r:l+r} in G measures the label-to-token compatibility for the "label-phrase" pairs. To learn a higher-level compatibility score u_l between the l-th phrase and all labels, we have:

u_l = \mathrm{ReLU}(G_{l-r:l+r} W_1 + b_1),    (3)

where W_1 ∈ ℝ^{2r+1} and b_1 ∈ ℝ^K are parameters to be learned, and u_l ∈ ℝ^K. The largest compatibility value of the l-th phrase with respect to the labels is collected:

m_l = \text{max-pooling}(u_l).    (4)

Together, m is a vector of length L. The compatibility/attention score for the entire text sequence is:

\beta = \mathrm{SoftMax}(m),    (5)

where the l-th element of the SoftMax is β_l = exp(m_l) / Σ_{l′=1}^{L} exp(m_{l′}). The text sequence representation can then be obtained simply via averaging the word embeddings, weighted by the label-based attention score:

z = \sum_{l} \beta_l v_l.    (6)

Relation to Predictive Text Embeddings Predictive Text Embeddings (PTE) (Tang et al., 2015) is the first method to leverage label embeddings to improve the learned word embeddings. We discuss three major differences between PTE and our LEAM: (i) The general settings are different. PTE casts the text representation through heterogeneous networks, while we consider text representation through an attention model. (ii) In PTE, the text representation z is the averaging of word embeddings. In LEAM, z is a weighted averaging of word embeddings through the proposed label-attentive score in (6). (iii) PTE only considers the linear interaction between individual words and labels. LEAM greatly improves the performance by considering the nonlinear interaction between phrases and labels. Specifically, we note that the text embedding in PTE is similar to a very special case of LEAM, when our window size is r = 1 and the attention score β is uniform. As shown later in Figure 2(c) of the experimental results, LEAM can be significantly better than the PTE variant.
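The attention construction in equations (2) to (6) amounts to a cosine-compatibility map followed by a small one-dimensional convolution over positions, a max over labels, and a softmax. The sketch below is an illustrative NumPy re-implementation with hypothetical shapes, not the authors' TensorFlow code; in particular, the zero-padding at the sequence boundaries is our assumption, since boundary handling is not specified in the text.

    import numpy as np

    def leam_attention(V, C, W1, b1, r):
        """Label-attended representation z from word embeddings V (L x P) and
        label embeddings C (K x P), following Eqs. (2)-(6)."""
        L = V.shape[0]
        # Eq. (2): cosine compatibility G of size K x L
        norms = np.outer(np.linalg.norm(C, axis=1), np.linalg.norm(V, axis=1)) + 1e-8
        G = (C @ V.T) / norms
        # Eqs. (3)-(4): phrase-level compatibility, then max over labels
        Gpad = np.pad(G, ((0, 0), (r, r)))          # zero-pad the word axis (assumption)
        m = np.empty(L)
        for l in range(L):
            block = Gpad[:, l:l + 2 * r + 1]        # K x (2r+1) block around position l
            u = np.maximum(block @ W1 + b1, 0.0)    # ReLU(G_{l-r:l+r} W1 + b1), shape (K,)
            m[l] = u.max()                          # max-pooling over the K labels
        # Eq. (5): attention over positions
        beta = np.exp(m - m.max())
        beta /= beta.sum()
        # Eq. (6): attention-weighted average of word embeddings
        return (beta[:, None] * V).sum(axis=0), beta

    # toy usage with hypothetical sizes
    rng = np.random.default_rng(0)
    L, P, K, r = 30, 300, 4, 5
    V, C = rng.normal(size=(L, P)), rng.normal(size=(K, P))
    W1, b1 = rng.normal(size=2 * r + 1), np.zeros(K)
    z, beta = leam_attention(V, C, W1, b1, r)
    print(z.shape, round(beta.sum(), 3))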
Training Objective The proposed joint embedding framework is applicable to various text classification tasks. We consider two setups in this paper. For a learned text sequence representation z = f_1 ◦ f_0(X), we jointly optimize f = f_0 ◦ f_1 ◦ f_2 over ℱ, where f_2 is defined according to the specific task:

• Single-label problem: categorizes each text instance into precisely one of K classes, y ∈ Δ^K:

\min_{f \in \mathcal{F}} \frac{1}{N} \sum_{n=1}^{N} \mathrm{CE}(y_n, f_2(z_n)),    (7)

where CE(·, ·) is the cross entropy between two probability vectors, and f_2(z_n) = SoftMax(z′_n), with z′_n = W_2 z_n + b_2 and W_2 ∈ ℝ^{K×P}, b_2 ∈ ℝ^K trainable parameters.

• Multi-label problem: categorizes each text instance into a set of K target labels {y_k ∈ Δ^2 | k = 1, · · · , K}; there is no constraint on how many of the classes the instance can be assigned to, and

\min_{f \in \mathcal{F}} \frac{1}{NK} \sum_{n=1}^{N} \sum_{k=1}^{K} \mathrm{CE}(y_{nk}, f_2(z_{nk})),    (8)

where f_2(z_{nk}) = 1 / (1 + exp(z′_{nk})), and z′_{nk} is the k-th column of z′_n.

To summarize, the model parameters are θ = {V, C, W_1, b_1, W_2, b_2}. They are trained end-to-end during learning. {W_1, b_1} and {W_2, b_2} are weights in f_1 and f_2, respectively, which are treated as standard neural networks. For the joint embeddings {V, C} in f_0, the pre-trained word embeddings are used as initialization if available.

4.2 Learning & Testing with LEAM

Learning and Regularization The quality of the jointly learned embeddings is key to the model performance and interpretability. Ideally, we hope that each label embedding acts as the "anchor" point for its class: closer to the word/sequence representations that are in the same class, while farther from those in different classes. To best achieve this property, we regularize each learned label embedding c_k to be on its corresponding manifold. This is imposed by requiring that c_k should be easily classified as the correct label y_k:

\min_{f \in \mathcal{F}} \frac{1}{K} \sum_{k=1}^{K} \mathrm{CE}(y_k, f_2(c_k)),    (9)

where f_2 is specified according to the problem, as in either (7) or (8). This regularization is used as a penalty in the main training objective in (7) or (8), and the default weighting hyperparameter is set to 1. It leads to a meaningful interpretability of the learned label embeddings, as shown in the experiments. Interestingly, in text classification the class itself is often described as a set of E words {e_i, i = 1, · · · , E}. These words are considered the most representative description of each class, and are highly distinguishing between different classes. For example, the Yahoo! Answers Topic dataset (Zhang et al., 2015) contains ten classes, most of which have two words that precisely describe their class-specific features, such as "Computers & Internet", "Business & Finance" and "Politics & Government". We consider using each label's corresponding pre-trained word embeddings as the initialization of the label embeddings. For datasets without representative class descriptions, one may initialize the label embeddings as random samples drawn from a standard Gaussian distribution.

Testing Both the learned word and label embeddings are available in the testing stage. We clarify that the label embeddings C of all class candidates 𝒴 are considered as input in the testing stage; one should distinguish this from the use of the ground-truth label y in prediction. For a text sequence X, one may feed it through the proposed pipeline for prediction: (i) f_0: harvesting the word embeddings V, (ii) f_1: V interacts with C to obtain G, pooled as β, which further attends V to derive z, and (iii) f_2: assigning labels based on the task. To speed up testing, one may store G offline, and avoid its online computational cost.
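As a companion to the objectives in (7) to (9), the sketch below shows one way the loss for a minibatch could be assembled, with the label-regularization term weighted by the default value of 1. It is a hedged illustration with hypothetical shapes, not the authors' TensorFlow code: the multi-label branch uses the standard sigmoid, which differs from the printed form of f_2(z_{nk}) only by the sign convention on the logits, and the regularizer is written in its single-label (softmax) form.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def cross_entropy(p, q, eps=1e-12):
        # CE between a target distribution p and a predicted distribution q, per row
        return -(p * np.log(q + eps)).sum(axis=-1)

    def leam_loss(Z, Y, C, W2, b2, multi_label=False, reg_weight=1.0):
        """Z: (N, P) attended text representations, Y: (N, K) one-hot or binary labels,
        C: (K, P) label embeddings, W2/b2: parameters of the classifier f_2."""
        logits = Z @ W2 + b2                                   # z'_n for every instance
        if multi_label:                                        # Eq. (8): per-label binary CE
            probs = 1.0 / (1.0 + np.exp(-logits))
            main = -(Y * np.log(probs + 1e-12)
                     + (1 - Y) * np.log(1 - probs + 1e-12)).mean()
        else:                                                  # Eq. (7): softmax CE
            main = cross_entropy(Y, softmax(logits)).mean()
        # Eq. (9): each label embedding c_k should be classified as its own class y_k
        label_logits = C @ W2 + b2                             # (K, K)
        reg = cross_entropy(np.eye(C.shape[0]), softmax(label_logits)).mean()
        return main + reg_weight * reg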
4.3 Model Complexity

We compare CNN, LSTM, Simple Word Embeddings-based Models (SWEM) (Shen et al., 2018a) and our LEAM with respect to the number of parameters and the computational speed. For the CNN, we assume the same size m for all filters. Specifically, h represents the dimension of the hidden units in the LSTM or the number of filters in the CNN; R denotes the number of blocks in the Bi-BloSAN; P denotes the final sequence representation dimension. Similar to (Vaswani et al., 2017; Shen et al., 2018a), we examine the number of compositional parameters, the computational complexity and the number of sequential steps of the four methods. As shown in Table 1, both the CNN and LSTM have a large number of compositional parameters. Since K ≪ m, h, the number of parameters in our model is much smaller than for the CNN and LSTM models. In terms of computational complexity, our model is of almost the same order as the simplest SWEM model, and is smaller than the CNN or LSTM by a factor of mh/K or h/K.

Table 1: Comparisons of CNN, LSTM, SWEM and our model architecture. Columns correspond to the number of compositional parameters, computational complexity and sequential operations.
  Model      | Parameters     | Complexity                        | Seq. Operation
  CNN        | m·h·P          | O(m·h·L·P)                        | O(1)
  LSTM       | 4·h·(h+P)      | O(L·h^2 + h·L·P)                  | O(L)
  SWEM       | 0              | O(L·P)                            | O(1)
  Bi-BloSAN  | 7·P^2 + 5·P    | O(P^2·L^2/R + P^2·L + P^2·R^2)    | O(1)
  Our model  | K·P            | O(K·L·P)                          | O(1)

5 Experimental Results

Setup We use 300-dimensional GloVe word embeddings (Pennington et al., 2014) as the initialization for the word embeddings and label embeddings in our model. Out-Of-Vocabulary (OOV) words are initialized from a uniform distribution with range [−0.01, 0.01]. The final classifier is implemented as an MLP layer followed by a sigmoid or softmax function, depending on the specific task. We train our model's parameters with the Adam optimizer (Kingma and Ba, 2014), with an initial learning rate of 0.001 and a minibatch size of 100. Dropout regularization (Srivastava et al., 2014) is employed on the final MLP layer, with dropout rate 0.5. The model is implemented in TensorFlow and is trained on a Titan X GPU. The code to reproduce the experimental results is at https://github.com/guoyinwang/LEAM

Table 2: Summary statistics of the five datasets, including the number of classes, the number of training samples and the number of testing samples.
  Dataset      | # Classes | # Training | # Testing
  AGNews       | 4         | 120k       | 7.6k
  Yelp Binary  | 2         | 560k       | 38k
  Yelp Full    | 5         | 650k       | 38k
  DBPedia      | 14        | 560k       | 70k
  Yahoo        | 10        | 1400k      | 60k

5.1 Classification on Benchmark Datasets

We test our model on the same five standard benchmark datasets as in (Zhang et al., 2015). The summary statistics of the data are shown in Table 2, with content specified below:

• AGNews: Topic classification over four categories of Internet news articles (Del Corso et al., 2005), composed of titles plus descriptions classified into: World, Entertainment, Sports and Business.
• Yelp Review Full: The dataset is obtained from the Yelp Dataset Challenge in 2015; the task is sentiment classification of star labels ranging from 1 to 5.
• Yelp Review Polarity: The same set of text reviews from the Yelp Dataset Challenge in 2015, except that a coarser sentiment definition is considered: 1 and 2 are treated as negative, and 4 and 5 as positive.
• DBPedia: Ontology classification over fourteen non-overlapping classes picked from DBpedia 2014 (Wikipedia).
• Yahoo! Answers Topic: Topic classification over the ten largest main categories from Yahoo! Answers Comprehensive Questions and Answers version 1.0, including question title, question content and best answer.
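The Setup paragraph above translates into a small amount of bookkeeping before training; the sketch below shows one way the word and label embedding matrices could be initialized as described (GloVe where available, uniform noise for OOV words, class-description words or a Gaussian fallback for the labels). The glove lookup, vocabulary and class names are hypothetical placeholders, and the released TensorFlow code at the URL above remains the authoritative reference.

    import numpy as np

    def init_embeddings(vocab, class_names, glove, P=300, seed=0):
        """Initialize word embeddings V and label embeddings C as in the Setup paragraph."""
        rng = np.random.default_rng(seed)
        V = np.empty((len(vocab), P))
        for i, w in enumerate(vocab):
            # GloVe vector if available, otherwise Uniform[-0.01, 0.01] for OOV words
            V[i] = glove[w] if w in glove else rng.uniform(-0.01, 0.01, P)
        C = np.empty((len(class_names), P))
        for k, name in enumerate(class_names):
            # average of the class-description words (e.g. "Business & Finance"),
            # falling back to a standard Gaussian when no description word is in GloVe
            words = [w for w in name.lower().split() if w in glove]
            C[k] = np.mean([glove[w] for w in words], axis=0) if words else rng.normal(size=P)
        return V, C

    # training hyper-parameters reported in the Setup paragraph
    TRAIN_CONFIG = {"optimizer": "adam", "learning_rate": 1e-3,
                    "batch_size": 100, "dropout": 0.5}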
We compare with a variety of methods, including (i) the bag-of-words in (Zhang et al., 2015); (ii) sophisticated deep CNN/RNN models: large/small word CNN, LSTM reported in (Zhang et al., 2015; Dai and Le, 2015) and deep CNN (29 layer) (Conneau et al., 2017); (iii) simple compositional methods: fastText (Joulin et al., 2016) and simple word embedding models (SWEM) (Shen et al., 2018a); (iv) deep attention models: hierarchical attention network (HAN) (Yang et al., 2016); (v) simple attention models: bi-directional block self-attention network (Bi-BloSAN) (Shen et al., 2018c). The results are shown in Table 3.

Table 3: Test Accuracy on document classification tasks, in percentage. ⋄We ran Bi-BloSAN using the authors' implementation; all other results are directly cited from the respective papers.
  Model                                        | Yahoo | DBPedia | AGNews | Yelp P. | Yelp F.
  Bag-of-words (Zhang et al., 2015)            | 68.90 | 96.60   | 88.80  | 92.20   | 58.00
  Small word CNN (Zhang et al., 2015)          | 69.98 | 98.15   | 89.13  | 94.46   | 58.59
  Large word CNN (Zhang et al., 2015)          | 70.94 | 98.28   | 91.45  | 95.11   | 59.48
  LSTM (Zhang et al., 2015)                    | 70.84 | 98.55   | 86.06  | 94.74   | 58.17
  SA-LSTM (word-level) (Dai and Le, 2015)      | –     | 98.60   | –      | –       | –
  Deep CNN (29 layer) (Conneau et al., 2017)   | 73.43 | 98.71   | 91.27  | 95.72   | 64.26
  SWEM (Shen et al., 2018a)                    | 73.53 | 98.42   | 92.24  | 93.76   | 61.11
  fastText (Joulin et al., 2016)               | 72.30 | 98.60   | 92.50  | 95.70   | 63.90
  HAN (Yang et al., 2016)                      | 75.80 | –       | –      | –       | –
  Bi-BloSAN⋄ (Shen et al., 2018c)              | 76.28 | 98.77   | 93.32  | 94.56   | 62.13
  LEAM                                         | 77.42 | 99.02   | 92.45  | 95.31   | 64.09
  LEAM (linear)                                | 75.22 | 98.32   | 91.75  | 93.43   | 61.03

Testing accuracy Simple compositional methods indeed achieve performance comparable to the sophisticated deep CNN/RNN models. On the other hand, the deep hierarchical attention model can improve on the pure CNN/RNN models. The recently proposed self-attention networks generally yield higher accuracy than previous methods. All approaches are better than the traditional bag-of-words method. Our proposed LEAM outperforms the state-of-the-art methods on the two largest datasets, i.e., Yahoo and DBPedia. On the other datasets, LEAM ranks 2nd or 3rd best, with accuracy similar to the top method. This is probably due to two reasons: (i) the number of classes on these datasets is smaller, and (ii) there is no explicit corresponding word embedding available for the label embedding initialization during learning. The potential of label embedding may not be fully exploited. As an ablation study, we replace the nonlinear compatibility in (3) with the linear one in (2). The degraded performance demonstrates the necessity of spatial dependency and nonlinearity in constructing the attention. Nevertheless, we argue that LEAM is favorable for text classification, as can be seen by comparing the model size and time cost in Table 4, as well as the convergence speed in Figure 2(a). The time cost is reported as the wall-clock time for 1000 iterations. LEAM maintains the simplicity and low cost of SWEM, compared with other models. LEAM uses far fewer model parameters, and converges significantly faster than Bi-BloSAN.

Table 4: Comparison of model size and speed.
  Model      | # Parameters | Time cost (s)
  CNN        | 541k         | 171
  LSTM       | 1.8M         | 598
  SWEM       | 61K          | 63
  Bi-BloSAN  | 3.6M         | 292
  LEAM       | 65K          | 65

We also compare the performance when only part of the dataset is labeled; the results are shown in Figure 2(b). LEAM consistently outperforms the other methods across different proportions of labeled data.

Hyper-parameter Our method has one additional hyper-parameter, the window size r, which defines the length of the "phrase" used to construct the attention.
Larger r captures long-term dependencies, while smaller r enforces local dependencies. We study its impact in Figure 2(c). Topic classification tasks generally require a larger r, while sentiment classification tasks allow a relatively smaller r. One may safely choose r around 50 if not fine-tuning. We report the optimal results in Table 3.

[Figure 2: Comprehensive study of LEAM, including convergence speed, performance vs. proportion of labeled data, and impact of the hyper-parameter. Panels: (a) Convergence speed, (b) Partially labeled data, (c) Effects of window size.]

5.2 Representational Ability

Label embeddings are highly meaningful To provide insight into the meaningfulness of the learned representations, in Figure 3 we visualize the correlation between label embeddings and document embeddings based on the Yahoo dataset. First, we compute the averaged document embedding per class:

\bar{z}_k = \frac{1}{|S_k|} \sum_{i \in S_k} z_i,

where S_k is the set of sample indices belonging to class k. Intuitively, z̄_k represents the center of the embedded text manifold for class k. Ideally, the perfect label embedding c_k should be the representative anchor point for class k. We compute the cosine similarity between z̄_k and c_k across all the classes, shown in Figure 3(a).

[Figure 3: Correlation between the learned text sequence representation z and label embedding V. (a) Cosine similarity matrix between the averaged z̄ per class and label embedding V, and (b) t-SNE plot of the joint embedding of text z and labels V.]

The rows are the averaged per-class document embeddings, while the columns are the label embeddings. Therefore, the on-diagonal elements measure how representative the learned label embeddings are of their own classes, while the off-diagonal elements reflect how distinctive the label embeddings are from other classes. The high on-diagonal elements and low off-diagonal elements in Figure 3(a) indicate the superb ability of the label representations learned by LEAM. Further, since both the document and label embeddings live in the same high-dimensional space, we use t-SNE (Maaten and Hinton, 2008) to visualize them on a 2D map in Figure 3(b). Each color represents a different class, the point clouds are document embeddings, and the label embeddings are the large dots with black circles. As can be seen, each label embedding falls into the internal region of the respective manifold, which again demonstrates the strong representative power of the label embeddings.

Interpretability of attention Our attention score β can be used to highlight the most informative words with respect to the downstream prediction task. We visualize two examples in Figure 4(a) for the Yahoo dataset. Darker yellow indicates more important words. The first text sequence is on the topic of "Sports", and the second text sequence is on "Entertainment". The attention score correctly detects the key words with proper scores.
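Given the attention vector β produced by the model, this kind of word-level highlighting is straightforward to reproduce. The small sketch below is only an illustration; the leam_attention function from the earlier sketch and the tokenized input are hypothetical.

    def highlight(words, beta, top_k=5):
        """Render each word with its attention weight, marking the top-k most informative ones."""
        top = set(sorted(range(len(words)), key=lambda i: -beta[i])[:top_k])
        return " ".join(f"[{w}:{beta[i]:.2f}]" if i in top else w
                        for i, w in enumerate(words))

    # e.g. print(highlight(tokens, beta)) after z, beta = leam_attention(V, C, W1, b1, r)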
5.3 Applications to Clinical Text

To demonstrate the practical value of label embeddings, we apply LEAM to a real health care scenario: medical code prediction on an Electronic Health Records dataset. A given patient may have multiple diagnoses, and thus multi-label learning is required. Specifically, we consider an open-access dataset, MIMIC-III (Johnson et al., 2016), which contains text and structured records from a hospital intensive care unit. Each record includes a variety of narrative notes describing a patient's stay, including diagnoses and procedures. They are accompanied by a set of metadata codes from the International Classification of Diseases (ICD), which present a standardized way of indicating diagnoses/procedures. To compare with previous work, we follow (Shi et al., 2017; Mullenbach et al., 2018), and preprocess a dataset consisting of the 50 most common labels. This results in 8,067 documents for training, 1,574 for validation, and 1,730 for testing.

Results We compare against three baselines: a logistic regression model with bag-of-words, a bidirectional gated recurrent unit (Bi-GRU) and a single-layer 1D convolutional network (Kim, 2014). We also compare with three recent methods for multi-label classification of clinical text, including Condensed Memory Networks (C-MemNN) (Prakash et al., 2017), Attentive LSTM (Shi et al., 2017) and Convolutional Attention (CAML) (Mullenbach et al., 2018). To quantify the prediction performance, we follow (Mullenbach et al., 2018) and consider the micro-averaged and macro-averaged F1 and area under the ROC curve (AUC), as well as the precision at n (P@n). Micro-averaged values are calculated by treating each (text, code) pair as a separate prediction. Macro-averaged values are calculated by averaging metrics computed per label. P@n is the fraction of the n highest-scored labels that are present in the ground truth. The results are shown in Table 5. LEAM provides the best AUC scores, and better F1 and P@5 values than all methods except the CNN. The CNN consistently outperforms the basic Bi-GRU architecture, and the logistic regression baseline performs worse than all deep learning architectures.

Table 5: Quantitative results for the doctor-notes multi-label classification task.
  Model                               | AUC (Macro) | AUC (Micro) | F1 (Macro) | F1 (Micro) | P@5
  Logistic Regression                 | 0.829       | 0.864       | 0.477      | 0.533      | 0.546
  Bi-GRU                              | 0.828       | 0.868       | 0.484      | 0.549      | 0.591
  CNN (Kim, 2014)                     | 0.876       | 0.907       | 0.576      | 0.625      | 0.620
  C-MemNN (Prakash et al., 2017)      | 0.833       | –           | –          | –          | 0.42
  Attentive LSTM (Shi et al., 2017)   | –           | 0.900       | –          | 0.532      | –
  CAML (Mullenbach et al., 2018)      | 0.875       | 0.909       | 0.532      | 0.614      | 0.609
  LEAM                                | 0.881       | 0.912       | 0.540      | 0.619      | 0.612

[Figure 4: Visualization of the learned attention β. (a) Yahoo dataset, (b) Clinical text.]

We emphasize that the learned attention can be very useful for reducing a doctor's reading burden. As shown in Figure 4(b), the health-related words are highlighted.

6 Conclusions

In this work, we first investigate label embeddings for text representations, and propose the label-embedding attentive model. It embeds the words and labels in the same joint space, and measures the compatibility of word-label pairs to attend over the document representations. The learning framework is tested on several large standard datasets and a real clinical text application. Compared with the previous methods, our LEAM algorithm requires much lower computational cost, and achieves comparable, if not better, performance relative to the state-of-the-art.
The learned attention is highly interpretable: highlighting the most informative words in the text sequence for the downstream classification task. Acknowledgments This research was supported by DARPA, DOE, NIH, ONR and NSF. 2330 References Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid. 2016. Label-embedding for image classification. IEEE transactions on pattern analysis and machine intelligence. Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. ICLR. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. ICLR. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of machine learning research. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning. Alexis Conneau, Holger Schwenk, Lo¨ıc Barrault, and Yann Lecun. 2017. Very deep convolutional networks for text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 1107–1116. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2016. Attention-overattention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pages 3079–3087. Gianna M Del Corso, Antonio Gulli, and Francesco Romani. 2005. Ranking a stream of news. In Proceedings of the 14th international conference on World Wide Web. ACM. Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. 2013. Devise: A deep visual-semantic embedding model. In NIPS. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation. Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. Scientific data. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. EACL. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. ACL. Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. 2014. Unifying visual-semantic embeddings with multimodal neural language models. NIPS 2014 deep learning workshop. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems. Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International Conference on Machine Learning, pages 1188–1196. Chunyuan Li, Heerad Farkhoor, Rosanne Liu, and Jason Yosinski. 2018. 
Measuring the intrinsic dimension of objective landscapes. In International Conference on Learning Representations. Yukun Ma, Erik Cambria, and Sa Gao. 2016. Label embedding for zero-shot fine-grained named entity typing. In COLING. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable prediction of medical codes from clinical text. arXiv preprint arXiv:1802.05695. Mark Palatucci, Dean Pomerleau, Geoffrey E Hinton, and Tom M Mitchell. 2009. Zero-shot learning with semantic output codes. In NIPS. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. 2331 Aaditya Prakash, Siyuan Zhao, Sadid A Hasan, Vivek V Datla, Kathy Lee, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2017. Condensed memory networks for clinical diagnostic inferencing. In AAAI. Jose A Rodriguez-Serrano, Florent Perronnin, and France Meylan. 2013. Label embedding for text recognition. In Proceedings of the British Machine Vision Conference. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685. Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, and Lawrence Carin. 2018a. Baseline needs more love: On simple word-embedding-based models and associated pooling mechanisms. In ACL. Dinghan Shen, Yizhe Zhang, Ricardo Henao, Qinliang Su, and Lawrence Carin. 2017. Deconvolutional latent-variable model for text sequence matching. AAAI. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018b. Disan: Directional self-attention network for rnn/cnn-free language understanding. AAAI. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang. 2018c. Bi-directional block selfattention for fast and memory-efficient sequence modeling. ICLR. Haoran Shi, Pengtao Xie, Zhiting Hu, Ming Zhang, and Eric P Xing. 2017. Towards automated icd coding using deep learning. arXiv preprint arXiv:1711.04075. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research. Jian Tang, Meng Qu, and Qiaozhu Mei. 2015. Pte: Predictive text embedding through large-scale heterogeneous text networks. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1165–1174. ACM. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Sida Wang and Christopher D Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL. Wenlin Wang, Zhe Gan, Wenqi Wang, Dinghan Shen, Jiaji Huang, Wei Ping, Sanjeev Satheesh, and Lawrence Carin. 2018. Topic compositional neural language model. AISTATS. 
John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. ICLR. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Wenpeng Yin, Hinrich Sch¨utze, Bing Xiang, and Bowen Zhou. 2016. Abcnn: Attention-based convolutional neural network for modeling sentence pairs. TACL. Dani Yogatama, Daniel Gillick, and Nevena Lazic. 2015. Embedding methods for fine grained entity type classification. In ACL. Honglun Zhang, Liqiang Xiao, Wenqing Chen, Yongkun Wang, and Yaohui Jin. 2017a. Multitask label embedding for text classification. arXiv preprint arXiv:1710.07210. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS. Yizhe Zhang, Dinghan Shen, Guoyin Wang, Zhe Gan, Ricardo Henao, and Lawrence Carin. 2017b. Deconvolutional paragraph representation learning. In NIPS. Xinjie Zhou, Xiaojun Wan, and Jianguo Xiao. 2016. Attention-based lstm network for cross-lingual sentiment classification. In EMNLP.
Neural Sparse Topical Coding Min Peng1, Qianqian Xie1,*, Yanchun Zhang2, Hua Wang2, Xiuzheng Zhang3, Jimin Huang1 and Gang Tian1 1School of Computer Science, WuHan University, WuHan, China 2Centre for Applied Informatics, Victoria University, Melbourne, Australia 3School of Science, RMIT University, Melbourne, Australia *Corresponding Author {pengm, xieq, huangjimin, tiang2008}@whu.edu.cn {yanchun.zhang, hua.wang}@vu.edu.au, [email protected] Abstract Topic models with sparsity enhancement have been proven to be effective at learning discriminative and coherent latent topics of short texts, which is critical to many scientific and engineering applications. However, the extensions of these models require carefully tailored graphical models and re-deduced inference algorithms, limiting their variations and applications. We propose a novel sparsityenhanced topic model, Neural Sparse Topical Coding (NSTC) base on a sparsityenhanced topic model called Sparse Topical Coding (STC). It focuses on replacing the complex inference process with the back propagation, which makes the model easy to explore extensions. Moreover, the external semantic information of words in word embeddings is incorporated to improve the representation of short texts. To illustrate the flexibility offered by the neural network based framework, we present three extensions base on NSTC without re-deduced inference algorithms. Experiments on Web Snippet and 20Newsgroups datasets demonstrate that our models outperform existing methods. 1 Introduction Topic models with sparsity enhancement have proven to be effective tools for exploratory analysis of the overload of short text content. The latent representations learned by these models are central to many applications. However, these models have trouble to rapidly explore variations for the approximate inference methods of them. Even subtle variations on models can increase the complexity of the inference methods, which is especially apparent for non-conjugate models. With the development of deep learning, many works combine topic models with neural language model to overcome the computation complexity of topic models (Larochelle and Lauly, 2012a; Cao et al., 2015; Tian et al., 2016). Most of these methods adopt multiple neural network layers to model the generation process of each document. Nevertheless, these methods yield the same poor performance in short texts as traditional topic models. There are also many works introducing new techniques such as word embeddings to traditional topic models. Word embeddings has proven to be effective at capturing syntactic and semantic information of words. Many works (Das et al., 2015; Hu and Tsujii, 2016; Li et al., 2016) have shown that the additional semantics in word embeddings can enhance the performance of traditional topic models. Yet these models have the same trouble in making extensions as traditional topic models. Base on the above observations, we propose Neural Sparse Topical Coding (NSTC) by jointly utilizing word embeddings and neural network with a sparsity-enhanced topic model, Sparse Topical Coding (STC). We adopt neural network to model the generation process of STC to simplify the complex inference and improve flexibility, and incorporate external semantics provided by word embeddings to improve the overall accuracy. To illustrate the dramatic flexibility offered by the end-to-end neural network, we present three extensions base on NSTC. 
Our proposed models all benefit from both sides: 1) when compared with the neural based topic models, which stuck in the sparseness of word co-occurrence information, they show how the sparsity mechanism and word embeddings enrich the features and improve the performance; 2) while with topic models with sparsity enhancement, our models present how the black-box inference method of neural network accelerates the training process and increases the flexibility. To evaluate the effectiveness of our models by conducting experiments on 20 Newsgroups and Web Snippet datasets. 2 Related Work Topic models with sparsity enhancement: The performance of traditional topic models are compromised by the sparse word co-occurrence information when applied in short texts. To overcome the bottleneck, there have been many efforts to address the problem of sparsity in short texts. Based on traditional LDA, (Williamson et al., 2010) introduced a Spike and Slab prior to model the sparsity in finite and infinite latent topic structures of text. To consider the dual-sparsity of topics per document and terms per topic, (Lin et al., 2014) proposed a dual-sparse topic model that addresses the sparsity in both the topic mixtures and the word usage. There are also some non-probabilistic sparse topic models, which can directly control the sparsity by imposing regularizers. For example, the non-negative matrix factorization (NMF) (Heiler and Schn¨orr, 2006) formalized topic modeling as a problem of minimizing loss function regularized by lasso. Similarly, (Zhu and Xing, 2011) presented sparse topical coding (STC) by utilizing the Laplacian prior to directly control the sparsity of inferred representations. Additionally, (Peng et al., 2016) presented sparse topical coding with sparse groups (STCSG) to find sparse word and document representations of texts. However, over complicated inference procedure of these sparse topic models make them difficult to rapidly explore variations. Topic models with word embeddings: There are many works tried to incorporate word embeddings with topic models to improve the performance. (Das et al., 2015) proposed a new technique for topic modeling by treating the document as a collection of word embeddings and topics itself as multivariate Gaussian distributions in the embedding space. However, the assumption that topics are unimodal in the embedding space is not appropriate, since topically related words can occur distantly from each other in the embedding space. Therefore, (Hu and Tsujii, 2016) proposed latent concept topic model (LCTM), which modeled a topic as a distribution of concepts, where each concept defined another distribution of word vectors. (Nguyen et al., 2015) proposed Latent Feature Topic Modeling (LFTM), which extended LDA to incorporate word embeddings as latent features. (Li et al., 2016) focused on combing the local information of word embeddings and the global information of LDA, thus proposed a model TopicVec yielded by the variational inference method. However, these models also have trouble to rapidly explore variations. Neural Topic Models: There are also some efforts trying to combine topic models with neural networks to represent words and documents simultaneously. (Larochelle and Lauly, 2012a) proposed a neural network topic model that is similarly inspired by the Replicated Softmax. (Cao et al., 2015) proposed a novel neural topic model (NTM) where the representation of words and documents are efficiently and naturally combined into a uniform framework. 
(Das et al., 2015) proposed a new technique for topic modeling by treating the document as a collection of word embeddings and topics itself as multivariate Gaussian distributions in the embedding space. To address the limitations of the bag-of-words assumption, (Tian et al., 2016) proposed Sentence Level Recurrent Topic Model (SLRTM) by using a Recurrent Neural Networks (RNN) based framework to model long range dependencies between words. Nevertheless, most of aforementioned works yield poor performance in short texts. 3 Neural Sparse Topical Coding Firstly, we define that D = {1, ..., M} is a document set with size M, T = {1, ..., K} is a topic collection with K topics, V = {1, .., N} is a vocabulary with N words, and wd = {wd,1, ..., wd,|I|} is a vector of terms representing a document d, where I is the index of words in document d, and wd,n(n ∈I) is the frequency of word n in document d. Moreover, we denote β ∈RN×K as a global topic dictionary for the whole document set with K bases, θd ∈RK is the document code of each document d and sd,n ∈RK is the word code of each word n in each document d. To yield interpretable patterns, (θ, s, β) are constrained to be non-negative. 3.1 Sparse Topical Coding STC is a hierarchical non-negative matrix factorization for learning hierarchical latent representations of input samples. In STC, each document and each word is represented as a low-dimensional code in topic space, which can be employed in many tasks. Based on the global topic dictionary β of all documents with K topic bases sampled from a uniform distribution, the generative process of each document d is described as follows: 1. Sample the document code θd from a prior p(θd) ∼Laplace(λ−1). 2. For each observed word n in document d: (a) Sample the word code sd,n from a conditional distribution p(sd,n|θd) ∼ supergaussian(θd, γ−1, ρ−1). (b) Sample the observed word count wd,n from a distribution p(wd,n|sd,n ∗βn) ∼ Poisson(sd,n ∗βn) To achieve sparse word codes, STC defines p(sd,n|θd) as a product of two component distributions p(sd,n|θd) ∼ p(sd,n|θd, γ)p(sd,n|ρ), where p(sd,n|θd, γ) is an isotropic Gaussian distribution, and p(sd,n|ρ) is a Laplace distribution. The composite distribution is super-Gaussian: p(sd,n|θd) ∝exp(γ||sd,nθd||2 2ρ||sd,n||1). With the Laplace term, the composite distribution tends to yield sparse word codes. For the same purpose, the prior distribution p(θd) of document codes is a Laplace prior. Although STC has closed form coordinate descent equations for parameters (θ, s, β), it is inflexible for its complex inference process. 3.2 Neural Network View of Sparse Topical Coding We devote to rebuild STC with a neural network to simplify it’s inference process via BackPropogation. After generating the topic dictionary from neural network, our model follows the generative story below for each document d: 1. For each word n in document d: (a) Sample a latent variable word code sd,n ∼fg(d, n). (b) Sample the observed word count wd,n from p(wd,n|sd,n, βn) ∼ Poisson(sd,n ∗βn) In our model, we have several assumptions: 1) To simplify our model and acclerate the inference process, we collapse the document code from our model. 
As illustrated in (Bai et al., 2013) and the STC paper (Zhu and Xing, 2011), we can naturally generate each document code as an aggregation of the sampled word codes over all topics, after inferring the global topic dictionary and the word codes of the words belonging to each document: θ_{d,k} = Σ_{n=1}^{N_d} s_{d,nk} β_{kn} / Σ_{n=1}^{N_d} Σ_{k=1}^{K} s_{d,nk} β_{kn}; 2) We replace the composite super-Gaussian prior of the word codes and the uniform distribution of the topic dictionary with neural networks. In the topic dictionary network, we introduce word semantic information via word embeddings to enrich the feature space for short texts; 3) Similar to STC, the observed word count is sampled from a Poisson distribution, which is more appropriate for discrete count data than other exponential-family distributions.

3.3 Neural Sparse Topical Coding

In this section, we introduce the detailed layer structure of NSTC, illustrated in Figure 1.

Figure 1: Schematic overview of NSTC (the lookup table WE feeds a word code layer s(d, n) and a topic dictionary layer β(n), whose dot product gives the score C(d, n)).

We explain each layer of NSTC below:

Input layer (n, d): a word n of document d ∈ D, where D is a document set.

Word embedding layer (WE ∈ R^{N×300}): supposing the vocabulary contains N words, this layer transforms each word into a distributed embedding representation. Here, we adopt the embeddings pre-trained with GloVe on a large Wikipedia dataset (http://nlp.stanford.edu/projects/glove/).

Word code layers (s_d ∈ R^{N×K}): these layers generate the K-dimensional word code of input word n in document d:

s(d, n) = f_s(d, n)    (1)

where f_s is a multilayer perceptron. In order to obtain interpretable word codes as in STC, we constrain s to be non-negative by applying the relu activation function to the output of the network. Although imposing non-negativity constraints can already yield sparser and more interpretable patterns, we additionally place an l1-norm regularizer on s to maintain the sparsity assumption.

Topic dictionary layers (β ∈ R^{N×K}): these layers convert WE into a topic dictionary similar to the one in STC:

β(n) = f_β(WE)    (2)

where f_β is a multilayer perceptron. We apply a simplex projection to the output of the topic dictionary network, normalizing each column of the dictionary as follows:

β_{·k} = project(β_{·k}), ∀k    (3)

The simplex projection is the same as the sparsemax activation function of (Martins and Astudillo, 2016), which provides the theoretical basis for its use in a neural network trained with backpropagation. After the simplex projection, each column of the topic dictionary is guaranteed to be sparse, non-negative and normalized.

Score layer (C_{d,n} ∈ R^{1×1}): NSTC outputs the matching score of a word n and a document d as the dot product of s(d, n) and β(n) in this layer. The output score is used to approximate the observed word count w_{d,n}:

C(d, n) = s(d, n) · β(n)    (4)

Given the count w_{d,n} of word n in document d, we can directly use it to supervise the training process. According to the architecture of our model, for each word n and each document d, the cost function is:

L = l(w_{d,n}, C(d, n)) + λ||s_{d,n}||_1    (5)

where l is the log-Poisson loss and λ is the regularization factor.
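To make the layer structure above concrete, the following NumPy sketch runs one forward pass and evaluates the loss of Eq. (5) for a single (document, word) pair. It is a minimal illustration, not the released TensorFlow implementation: the two-layer perceptrons for f_s and f_β, the hidden size, and the simplification that the word code depends only on the word's embedding (the document index is dropped) are our assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def simplex_project(v):
    # Euclidean projection onto the probability simplex (the sparsemax-style
    # projection applied to each column of the topic dictionary, Eq. (3)).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / ks > 0)[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - tau, 0.0)

class NSTCSketch:
    def __init__(self, vocab_size, n_topics, emb_dim=300, hidden=100, seed=0):
        rng = np.random.RandomState(seed)
        # WE would be the pre-trained GloVe table in the paper; random here.
        self.WE = rng.randn(vocab_size, emb_dim) * 0.01
        # f_s: word-code MLP (weights initialized in [0, 0.001] as in Section 4.1).
        self.Ws1 = rng.rand(emb_dim, hidden) * 0.001
        self.Ws2 = rng.rand(hidden, n_topics) * 0.001
        # f_beta: topic-dictionary MLP.
        self.Wb1 = rng.rand(emb_dim, hidden) * 0.001
        self.Wb2 = rng.rand(hidden, n_topics) * 0.001

    def word_code(self, n):
        # s(d, n), Eq. (1): relu keeps the K-dim word code non-negative.
        return relu(relu(self.WE[n] @ self.Ws1) @ self.Ws2)

    def topic_dictionary(self):
        # beta, Eq. (2): N x K matrix, each column projected onto the simplex, Eq. (3).
        B = relu(relu(self.WE @ self.Wb1) @ self.Wb2)
        return np.stack([simplex_project(B[:, k]) for k in range(B.shape[1])], axis=1)

    def loss(self, n, count, lam=1.0):
        s = self.word_code(n)
        beta_n = self.topic_dictionary()[n]            # beta(n): row of the dictionary
        score = float(s @ beta_n)                      # C(d, n) = s(d, n) . beta(n), Eq. (4)
        poisson_nll = score - count * np.log(score + 1e-8)   # log-Poisson loss l(w, C)
        return poisson_nll + lam * np.abs(s).sum()            # Eq. (5)

# toy usage
model = NSTCSketch(vocab_size=2000, n_topics=50)
print(model.loss(n=42, count=3.0))
```

In the full model the same computation is carried out for every (d, n) pair with a nonzero count, and all weights are trained end to end by backpropagation.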
3.4 Extensions of NSTC

To further illustrate the benefits of a black-box inference method, which allows one to rapidly explore new models, we present three variants of NSTC.

Deep l1 Approximation. STC is based on the idea of sparse coding, in which the sparse code s of the input w can be obtained by minimizing the loss function for a given dictionary β. In (Gregor and LeCun, 2010), a parameterized encoder, named learned ISTA (LISTA), was proposed to efficiently approximate the l1-based sparse code. Based on this idea, we present an enhanced NSTC, named NSTCE, which employs a deep l1-regularized encoder similar to LISTA. We devise a feed-forward neural network, illustrated in Figure 2, to efficiently approximate the l1-based sparse code s of the input w:

F(w_d; W_d, b_d) = relu(w_d W_d + b_d)    (6)

The goal is to make the prediction of the neural network predictor F, after the fixed depth, as close as possible to the optimal set of coefficients s* in Eq. (5). To jointly optimize all parameters together with the dictionary β, we add another term to the loss function in Eq. (5), enforcing the representation s to be as close as possible to the feed-forward prediction (Kavukcuoglu et al., 2010):

L = l(w_{d,n}, C(d, n)) + λ||s_{d,n}||_1 + α Σ_n ||s_d − F(w_d; W_d, b_d)||²_2    (7)

Minimizing this loss with respect to s produces a sparse representation that simultaneously reconstructs the word count and stays close to the predicted representation.

Figure 2: Deep l1 encoder.

Group Sparse Regularization. Based on STC, (Bai et al., 2013) presented GSTC to discover document-level sparse or admixture proportions for short texts, in which group sparsity is employed to achieve sparse codes at the document level by taking into account the bag-of-words structure. Here, we only need to add a group-sparse regularizer on the word codes to obtain a neural network extension of GSTC, called NGSTC. We consider each column of s_d as a group:

L = l(w_{d,n}, C(d, n)) + λ Σ_{k=1}^{K} ||s_{d,·k}||_2    (8)

Sparse Group Lasso. Similar to GSTC, STCSG (Peng et al., 2016) was proposed to learn sparse word and document codes, relaxing the normalization constraint of the inferred representations with a sparse group lasso. Based on STCSG, we propose a novel neural topic model called NSTCSG. We impose the sparse group lasso on the word code and obtain the following cost function:

L = l(w_{d,n}, C(d, n)) + λ_1||s_{d,n}||_1 + λ_2 Σ_{k=1}^{K} ||s_{d,·k}||_2    (9)

These three models have the same neural network structure as NSTC and can be trained end to end without re-derived inference algorithms. Moreover, group and sparse-group sparsity help reduce the intrinsic complexity of the model by eliminating neurons, as shown in Figure 3, and thus can yield practical speed-ups in deep neural networks.

3.5 Optimization

For the first two models, which use the lasso regularizer, we can directly apply end-to-end stochastic gradient descent (SGD). The objectives of the last two models, NGSTC and NSTCSG, combine smooth and non-smooth terms, and they can be solved via proximal stochastic gradient descent.
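For concreteness, the extra terms that the three extensions add to the objective can be written down directly. The NumPy sketch below is schematic, under the same assumptions as the previous sketch: shapes and names are illustrative, and the lam1/lam2 defaults simply mirror the settings later reported in Section 4.1; it is not the authors' implementation. The proximal updates used for the non-smooth group terms are described next.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def deep_l1_encoder(w_d, W_d, b_d):
    # NSTCE, Eq. (6): feed-forward approximation of the l1-regularized code.
    return relu(w_d @ W_d + b_d)

def nstce_loss(base_loss, s_dn, s_d, s_pred, lam=1.0, alpha=1.0):
    # Eq. (7): base log-Poisson loss + l1 on the word code
    # + a term tying the document's codes s_d to the encoder prediction s_pred.
    return base_loss + lam * np.abs(s_dn).sum() + alpha * np.sum((s_d - s_pred) ** 2)

def ngstc_penalty(s_d, lam=1.0):
    # Eq. (8): group lasso over the topic columns of a document's word codes;
    # s_d has shape (n_words_in_doc, K), one group per column.
    return lam * np.sum(np.linalg.norm(s_d, axis=0))

def nstcsg_penalty(s_d, s_dn, lam1=0.6, lam2=0.4):
    # Eq. (9): sparse group lasso = element-wise l1 on the word code
    # + column-wise group norm on the document's codes.
    return lam1 * np.abs(s_dn).sum() + lam2 * np.sum(np.linalg.norm(s_d, axis=0))
```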
The proximal gradient algorithm first obtains the intermediate solution via SGD on the loss only, and then optimize for the regularization term via performing Euclidean projection of it to the solution space, as in the following formulation: min st+1 d,n R(st+1 d,n ) + 1 2||st+1 d,n −s t+ 1 2 d,n ||2 2 (10) where R is the regularization, s t+ 1 2 d,n the intermediate solution obtained by SGD, st+1 d,n is the variable to obtain after the current iteration. For the group lasso, the above problem has the closed-form solution. The proximal operator for the group lasso regularizer in Eq.8 is given as follow: proxGL(sd,nk) = (1 − λ ||sd,.k||2 )+sd,nk (11) The proximal operator for the sparse group lasso regularizer in Eq.9 is given as follow: proxSGL(sd,nk) =(1 − λ2 ||sign(sd,.k, λ1)||2 )+ sign(sd,nk, λ1) (12) The detailed algorithm framework of NGSTC and NSTCSG is shown in Algorithm 1. Algorithm 1 Training Algorithm for our models Require: a document d ∈D 1: t = t + 1 2: repeat 3: Compute the partial derivatives of weight matrices,s, and β with a non-regularized objective; 4: Update weight matrices, s, and β using SGD. 5: Update s using proximal operator 6: Update β using simplex projection. 7: until convergence 4 Experiments 4.1 Data and Setting We perform our experiments on two benchmark datasets: • 20Newsgroups: is comprised of 18775 newsgroup articles with 20 categories, and contains 60698 unique words2. • Web Snippet: includes 12340 Web search snippets with 8 categories, we remove the words with fewer than 3 characters and with document frequency less than 3 in the dataset3. We compare the performance of the NSTC with the following baselines: • LDA (Blei et al., 2001). A classical probabilistic topic model. We use the LDA package4 for its implementation. We use the settings with iteration number n = 2000, the Dirichlet parameter for distribution over topics α = 0.1 and the Dirichlet parameter for distribution over words η = 0.01. • STC (Zhu and Xing, 2011). It is a sparsityenhanced non-probabilistic topic model. We use the code released by the authors5. We set the regularization constants as λ = 0.3, ρ = 0.0001 and the maximum number of iterations of hierarchical sparse coding, dictionary learning as 100. 2http://www.qwone.com/ jason/20Newsgroups/ 3http://jwebpro.sourceforge.net/data-web-snippets.tar.gz 4https://pypi.python.org/pypi/lda 5http://bigml.cs.tsinghua.edu.cn/ jun/stc.shtml/ (a) (b) (c) Figure 3: (a) Lasso: the Lasso penalty removes elements without optimizing neuron-level considerations (highlighted in red). (b) Group lasso: when grouping weights from the the same input neuron into each group, the group sparsity has an effect of completely removing some neurons (highlighted in red). (c) Sparse group lasso: it combines the advantages of the former two formulations, which can remove some neurons and elements (highlighted in red). • DocNADE (Larochelle and Lauly, 2012b). An unsupervised neural network topic model of documents and has shown that it is a competitive model both as a generative model and as a document representation learning algorithm6. In DocNADE, the hidden size is 50, the learning rate is 0.0004 , the bath size is 64 and the max training number is 50000. • GaussianLDA (Das et al., 2015). A new technique for topic modeling by treating the document as a collection of word embeddings and topics itself as multivariate Gaussian distributions in the embedding space7. We use default values for the parameters. • TopicVec (Li et al., 2016). 
A model incorporates generative word embedding model with LDA 8. We also use default values for the parameters. Our three models are implemented in Python using TensorFlow9. For both datasets, we use the pretrained 300-dimensional word embeddings from Wikipedia by GloVe, and it is fixed during training. For each out-of-vocab word, we sample a random vector from a normal distribution. In practice, we use a regular learning rate 0.00001 for both dataset. We set the regularization factor λ = 1, α = 1, λ1 = 0.6, λ2 = 0.4. In initialization, all weight matrices are randomly initialized with the uniformed distribution in the interval [0, 0.001] for web snippet, and [0, 0.0001] for 20Newsgroups. 6https://github.com/huashiyiqike/TMBP/tree/master/DocN ADE 7https://github.com/rajarshd/Gaussian LDA 8https://github.com/askerlee/topicvec 9https://www.tensorflow.org/ 4.2 Classification Accuracy We perform text classification tasks on Web Snippet dataset and 20Newsgroups. For the Web Snippet, we follow its original partition that consists of 10060 training documents and 2280 test documents. On 20Newsgroups, we we keep 60% documents for training and 40% for testing as in (Zhu and Xing, 2011). We adopt the SVM as the classifier with the document representations learned by topic models. Figure 4 reports the convergence curves of loss and accuracy over iterations. The results show that the loss and accuracy of our method can achieve convergence after almost 100 epochs on web snippets and 50 epochs on 20Newsgroups with appropriate learning rate. Table 1 reports the classification accuracy on both datasets under different methods with different settings on the number of topics K = {50, 100, 150, 200, 250}. We can found that 1) The NSTCSG yields the highest accuracy, followed by NGSTC, NSTCE and NSTC which all outperform the DocNADE, GLDA, STC and LDA. 2) The embedding based models (NSTCSG, NGSTC, NSTCE, NSTC, DocNADE and GLDA) generate better document representations than STC and LDA separately, demonstrating the representative power of neural networks based on word embeddings. 3) Sparse models (NSTCSG, NGSTC, NSTCE, NSTC and STC) are superior to non-sparse models NTM and LDA separately. It indicates that sparse topic models are more suitable to short documents. 4) The NSTCSG perform better than NGSTC, followed by NSTC, which illustrates both sparse group lasso and group lasso penalty are contribute to learning the document representations with clear semantic explanations. 5) The accuracies of DocNADE decrease with the increasing of the topic K. This is may because DocNADE may generate the document topic distribution with many indistinct non-zeros due to the data sparsity caused by the increasing number of topics. Notice that LDA has the same performance on the web snippet dataset. 0 20 40 60 80 100 0.00 0.17 0.34 0.51 0.68 0.85 Accuracy K k=200 k=150 k=100 k=50 0 20 40 60 80 100 0 15 30 45 60 75 Avg Loss iterations k=200 k=150 k=100 k=50 (a) web snippet 0 10 20 30 40 50 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy K k=50 k=100 k=150 k=200 0 10 20 30 40 50 0 30 60 90 120 Avg Loss iterations k=50 k=100 k=150 k=200 (b) 20Newsgroup Figure 4: The loss and accuracy curves with the iterations on two datasets,on different number of topic K settings. 50 100 150 200 250 0 20 40 60 80 100 Sparsity K NSTCSG NSTC LDA NTM STC (a) word codes 50 100 150 200 250 0 20 40 60 80 100 Sparsity K NSTCSG NGSTC LDA NTM STC (b) document codes Figure 5: The average sparsity ratio of word and document codes. 
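As a side note on the evaluation protocol, the SVM classification step described above can be sketched with scikit-learn as follows; the document codes and labels below are random placeholders, and LinearSVC merely stands in for whichever SVM variant was actually used.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import precision_recall_fscore_support

# doc_codes: document representations produced by a topic model (theta_d);
# labels: category ids. Random placeholders stand in for both here.
rng = np.random.RandomState(0)
doc_codes = rng.rand(200, 50)
labels = rng.randint(0, 8, size=200)

train, test = slice(0, 160), slice(160, 200)   # e.g. the fixed snippet split
clf = LinearSVC().fit(doc_codes[train], labels[train])
pred = clf.predict(doc_codes[test])
p, r, f1, _ = precision_recall_fscore_support(labels[test], pred, average="macro")
print("precision=%.3f recall=%.3f f1=%.3f" % (p, r, f1))
```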
4.3 Sparse Ratio We further compare the sparsity of the learned latent representations of words and documents from different models on Web Snippet. Word code: For each word n, we compute the average word code and average sparsity ratio of them as in (Zhu and Xing, 2011). Figure 5a presents the average word sparse ratio of word codes discovered by different models for Web Snippet. Note that the NGSTC can not yield sparse word codes but sparse document codes. We can see that 1) The NSTCSG learns the sparsest word codes, followed by NSTC and STC, which perform much better than NTM and LDA. 2) The word codes discovered by LDA and NTM are very dense for lacking the mechanism to learn the focused topics. The sparsity in both models is mainly caused by the data scarcity. 3)The representations learned by sparse models (NSTCSG, NSTC and STC) are much sparser, which indicates each word concentrates on only a small number of topics in these models, and therefore the word codes are more clear and semantically concentrated. 4) Meanwhile, the sparse ratio of STC and NSTC are lower than NSTCSG. It proves the sparse group lasso penalty can easily allow to provide networks with a high level of sparsity. Document code: We further quantitatively evaluate the average sparse ratio on latent representations of documents from different models, as shown in Figure 5b. We can see that 1) The NSTCSG yields the highest sparsity ratio, followed by NGSTC and STC, which outperform NTM and LDA by a large margin. 2) The document codes discovered by LDA and NTM are still very dense, while the representations learned by sparse models (NSTC and STC) are much sparser. It indicates the sparse models can discover focused topics and obtain discriminative representations of documents. 3) Similar to the word code, NGSTC outperforms NGSTC and STC. It demonstrates that the sparse group lasso penalty can achieve better sparsity than group lasso and lasso. 4) The sparsity ratios of sparse models increase with the topic numbers. The possible reason is that the sparse models tend to learn the focused topic number which approaches to the real topic number, and an increasing number of redundant topics can be discarded. 5) The NSTCSG inherits the advantages of NSTC and NGSTC, which can achieve the sparse topic representations of words and documents. 4.4 Generative Model Evaluation To further evaluate our models as a generative model of documents, we show the test document perplexity achieved by each topic model on the 20NewsGroups with 50 topic numbers in table 2. Notice that the topic number in TopicVec can not be specified to a fixed value, thus we follow the set in (Li et al., 2016) with 281 topics. In table 3, we show the top-9 words of learned focused topics in 20 Newsgroups datasets. For each topic, we list top-9 words according to their probabiliTable 1: Classification accuracy of different models on Web snippet and 20NG, with different number of topic K settings. 
Dataset Snippet 20NG k 50 100 150 200 250 50 100 150 200 250 LDA 0.682 0.592 0.573 0.615 0.583 0.545 0.615 0.607 0.613 0.623 STC 0.678 0.699 0.724 0.731 0.723 0.602 0.631 0.647 0.652 0.654 DocNADE 0.656 0.656 0.645 0.646 0.647 0.682 0.670 0.646 0.583 0.573 GLDA 0.669 0.689 0.675 0.670 0.623 0.367 0.438 0.465 0.496 0.526 NSTC 0.734 0.756 0.791 0.793 0.789 0.634 0.671 0.682 0.690 0.72 NSTCE 0.739 0.778 0.801 0.803 0.810 0.631 0.681 0.682 0.701 0.721 NGSTC 0.773 0.792 0.813 0.811 0.821 0.670 0.681 0.701 0.712 0.737 NSTCSG 0.788 0.813 0.821 0.823 0.829 0.665 0.687 0.691 0.717 0.735 Table 2: Perplexity on test dataset. Model 20NG LDA 1091 STC 611 DocNADE 896 TopicVec 650 NSTC 517 Table 3: Top Words of Learned Topics for 20Newsgroups. computer sport drug weapon space-flight computer hockey tobacco nuclear nasa windows games drug guns flyers ibm motorcycl fallacy crime space drive team aids booming air disk play hiv controller statelite system groups dades firearms send dos came illeg military launch key rom same wiring apartment hardware ball adict neutral la ties under the corresponding topic. It is obvious that the learned topics are clear and meaningful. Such as economics, hockey, games, play, ball in the topic about sport. In Figure 6, we also use the 2-dimensional t-SNE method to get the visualization of the learned latent representations for Web Snippet and 20Newsgroups Dataset with 200 topics. For Web Snippet, we sample 10% of the whole dataset. For 20newsgroups, we sample 30% of the dataset. It is obvious to see that all documents are clustered into 8 and 20 distinct categories. It proves the semantic effectiveness of the documents codes learned by our model. 5 Conclusion In this paper, we propose a novel neural sparsityenhanced topic model NSTC, which improves STC by incorporating the neural network and word embeddings. Compared with other word embedding based and neural network based topic models, it overcomes the computation complexity of topic models, and improve the generation of representation over short documents. We present 40 20 0 20 40 40 20 0 20 40 75 50 25 0 25 50 75 80 60 40 20 0 20 40 60 80 Figure 6: T-SNE embeddings of learned document representations for Web Snippet and 20NewsGroups. Different colors mean different categories. three variants of NSTC to illustrate the great flexibility of our framework. Experimental results demonstrate the effectiveness and efficiency of our models. For future work, we are interested in various extensions, including combining STC with autoencoding variational Bayes (AVB). Acknowledgments This work is supported by the National Science Foundation of China, under grant No.61472291 and grant No.61272110. References Lu Bai, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng. 2013. Group sparse topical coding: from code to topic. In Proceedings of the sixth ACM international conference on Web search and data mining. ACM, pages 315–324. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2001. Latent dirichlet allocation. Journal of Machine Learning Research 3:993–1022. Ziqiang Cao, Sujian Li, Yang Liu, Wenjie Li, and Heng Ji. 2015. A novel neural topic model and its supervised extension. In AAAI. Rajarshi Das, Manzil Zaheer, and Chris Dyer. 2015. Gaussian lda for topic models with word embeddings. In ACL. Karol Gregor and Yann LeCun. 2010. Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning (ICML-10). pages 399–406. Matthias Heiler and Christoph Schn¨orr. 2006. 
Learning sparse representations by non-negative matrix factorization and sequential cone programming. Journal of Machine Learning Research 7(Jul):1385– 1407. Weihua Hu and Jun’ichi Tsujii. 2016. A latent concept topic model for robust topic inference using word embeddings. In The 54th Annual Meeting of the Association for Computational Linguistics. page 380. Koray Kavukcuoglu, Marc’Aurelio Ranzato, and Yann LeCun. 2010. Fast inference in sparse coding algorithms with applications to object recognition. arXiv preprint arXiv:1010.3467 . Hugo Larochelle and Stanislas Lauly. 2012a. A neural autoregressive topic model. In NIPS. Hugo Larochelle and Stanislas Lauly. 2012b. A neural autoregressive topic model. In Advances in Neural Information Processing Systems. pages 2708–2716. Shaohua Li, Tat-Seng Chua, Jun Zhu, and Chunyan Miao. 2016. Generative topic embedding: a continuous representation of documents. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). volume 1, pages 666–675. Tianyi Lin, Wentao Tian, Qiaozhu Mei, and Hong Cheng. 2014. The dual-sparse topic model: mining focused topics and focused terms in short text. In WWW. Andre Martins and Ramon Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In International Conference on Machine Learning. pages 1614–1623. Dat Quoc Nguyen, Richard Billingsley, Lan Du, and Mark Johnson. 2015. Improving topic models with latent feature word representations. Transactions of the Association for Computational Linguistics 3:299–313. Min Peng, Qianqian Xie, Jiajia Huang, Jiahui Zhu, Shuang Ouyang, Jimin Huang, and Gang Tian. 2016. Sparse topical coding with sparse groups. In WAIM. Fei Tian, Bin Gao, Di He, and Tie-Yan Liu. 2016. Sentence level recurrent topic model: Letting topics speak for themselves. CoRR abs/1604.02038. Sinead Williamson, Chong Wang, Katherine A. Heller, and David M. Blei. 2010. The ibp compound dirichlet process and its application to focused topic modeling. In ICML. Jun Zhu and Eric P. Xing. 2011. Sparse topical coding. CoRR abs/1202.3778.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2341–2351 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2341 Document Similarity for Texts of Varying Lengths via Hidden Topics Hongyu Gong* Tarek Sakakini* Suma Bhat* Jinjun Xiong † *University of Illinois at Urbana-Champaign, USA †T. J. Watson Research Center, IBM *{hgong6, sakakini, spbhat2}@illinois.edu †[email protected] Abstract Measuring similarity between texts is an important task for several applications. Available approaches to measure document similarity are inadequate for document pairs that have non-comparable lengths, such as a long document and its summary. This is because of the lexical, contextual and the abstraction gaps between a long document of rich details and its concise summary of abstract information. In this paper, we present a document matching approach to bridge this gap, by comparing the texts in a common space of hidden topics. We evaluate the matching algorithm on two matching tasks and find that it consistently and widely outperforms strong baselines. We also highlight the benefits of the incorporation of domain knowledge to text matching. 1 Introduction Measuring the similarity between documents is of key importance in several natural processing applications including information retrieval (Salton and Buckley, 1988), book recommendation (Gopalan et al., 2014), news categorization (Ontrup and Ritter, 2002) and essay scoring (Landauer, 2003). A range of document similarity approaches have been proposed and effectively used in recent applications including (Lai et al., 2015; Bordes et al., 2015). Central to the tasks discussed above is the assumption that the documents being compared are of comparable lengths. Advances in language processing approaches to transform natural language understanding, such as text summarization and recommendation, have generated new requirements for comparing documents. For instance, summarization techniques Table 1: A Sample Concept-Project Matching Concept Heredity: Inheritance and Variation of Traits All cells contain genetic information in the form of DNA molecules. Genes are regions in the DNA that contain the instructions that code for the formation of proteins. Project Pedigree Analysis: A Family Tree of Traits Do you have the same hair color or eye color as your mother? When we look at members of a family it is easy to see that some physical characteristics or traits are shared. To start this project, you should draw a pedigree showing the different members of your family. Ideally you should include multiple people from at least three generations. (extractive and abstractive) are capable of automatically generating textual summaries by converting a long document of several hundred words into a condensed text of only a few words while preserving the core meaning of the original text (Kedzie and McKeown, 2016). Conceivably, a related aspect of summarization is the task of bidirectional matching of a summary and a document or a set of documents, which is the focus of this study. The document similarity considered in this paper is between texts that have significant differences not only in length, but also in the abstraction level (such as a definition of an abstract concept versus a detailed instance of that abstract concept). As an illustration, consider the task of matching a Concept with a Project as shown in Table 1. 
Here a Concept is a grade-level science curriculum item and represents the summary. A Project, listed in a collection of science projects, represents the document. Projects typically are long texts including an introduction, materials and procedures, whereas science concepts are much shorter in comparison having a title and a concise and abstract description. The concepts and projects are described in detail in Section 5.1. The matching 2342 task here is to automatically suggest a hands-on project for a given concept in the curriculum, such that the project can help reinforce a learner’s basic understanding of the concept. Conversely, given a science project, one may need to identify the concept it covers by matching it to a listed concept in the curriculum. This would be conceivable in the context of an intelligent tutoring system. Challenges to the matching task mentioned above include: 1) The mismatch in the relative lengths of the documents being compared – a long piece of text (henceforth termed document) and a short piece of text (termed summary) – gives rise to the vocabulary mismatch problem, where the document and the summary do not share a majority of terms. 2) The context mismatch problem arising because a document provides a reasonable amount of text to infer the contextual meaning of a term, but a summary only provides a limited context, which may or may not involve the same terms considered in the document. These challenges render existing approaches to comparing documents–for instance, those that rely on document representations (e.g., Doc2Vec (Le and Mikolov, 2014))–inadequate, because the predominance of non-topic words in the document introduces noise to its representation while the summary is relatively noise-free, rendering Doc2Vec inadequate for comparing them. Our approach to the matching problem is to allow a multi-view generalization of the document, where multiple hidden topics are used to establish a common ground to capture as much information of the document and the summary as possible and use this to score the relevance of the pair. We empirically validate our approach on two tasks – that of project-concept matching in gradelevel science and that of scientific paper-summary matching – using both custom-made and publicly available datasets. The main contributions of this paper are: 1. We propose an embedding-based hidden topic model to extract topics and measure their importance in long documents. 2. We present a novel geometric approach to compare documents with differing modality (a long document to a short summary) and validate its performance relative to strong baselines. 3. We explore the use of domain-specific word embeddings for the matching task and show the explicit benefit of incorporating domain knowledge in the algorithm. 4. We make available the first dataset1 on projectconcept matching in the science domain to help further research in this area. 2 Related Works Document similarity approaches quantify the degree of relatedness between two pieces of texts of comparable lengths and thus enable matching between documents. Traditionally, statistical approaches (e.g., (Metzler et al., 2007)) and vectorspace-based methods (including the robust Latent Semantic Analysis (LSA) (Dumais, 2004)) have been used for text similarity. 
More recently, neural network-based methods have been used for document representation and these include average word embeddings (Mikolov et al., 2013), Doc2Vec (Le and Mikolov, 2014), Skip-Thought vectors (Kiros et al., 2015), recursive neural networkbased methods (Socher et al., 2014), LSTM architectures (Tai et al., 2015), and convolutional neural networks (Blunsom et al., 2014). Considering works that avoid using an explicit document representation for comparing documents, the state-of-the-art method is Word Mover’s Distance (WMD), which relies on pretrained word embeddings (Kusner et al., 2015). Given these embeddings, the WMD defines the distance between two documents as the best transport cost of moving all words from one document to another within the space of word embeddings. The advantages of WMD are that it is hyperparameter free and achieves high retrieval accuracy on document classification tasks with documents of comparable lengths. However, it is computationally expensive for long documents (Kusner et al., 2015). Clearly, what is lacking in prior literature is a study of document similarity approaches that match documents with widely different sizes. It is this gap in literature that we expect to fill by way of this study. Latent Variable Models. Latent variable models including count-based and probabilistic models have been studied in many previous works. Countbased models such as Latent Semantic Indexing (LSI) compare two documents based on their combined vocabulary (Deerwester et al., 1990). When 1Our code and data are available at: https: //github.com/HongyuGong/DocumentSimilarity-via-Hidden-Topics.git 2343 (a) word geometry of general embedding (b) word geometry of science domain embeddings Figure 1: Two key words “forces” and “matters” are shown in red and blue respectively. Red words represent different senses of “forces”, and blue words carry senses of “matters”. “forces” mainly refers to “army” and “matters” refers to “issues” in general embedding of (a), whereas “forces” shows its sense of “gravity” and “matters” shows the sense of “solids” in science-domain embedding of (b) documents have highly mismatched vocabularies such as those that we study, relevant documents might be classified as irrelevant. Our model is built upon word-embeddings which is more robust to such a vocabulary mismatch. Probabilistic models such as Latent Dirichlet Analysis (LDA) define topics as distributions over words (Blei et al., 2003). In our model, topics are low-dimensional real-valued vectors (more details in Section 4.2). 3 Domain Knowledge Domain information pertaining to specific areas of knowledge is made available in texts by the use of words with domain-specific meanings or senses. Consequently, domain knowledge has been shown to be critical in many NLP applications such as information extraction and multi-document summarization (Cheung and Penn, 2013a), spoken language understanding (Chen et al., 2015), aspect extraction (Chen et al., 2013) and summarization (Cheung and Penn, 2013b). As will be described later, our distance metric for comparing a document and a summary relies on word embeddings. We show in this work, that embeddings trained on a science-domain corpus lead to better performance than embeddings on the general corpus (WikiCorpus). Towards this, we extract a science-domain sub-corpus from the WikiCorpus, and the corpus extraction will be detailed in Section 5. 
To motivate the domain-specific behavior of polysemous words, we will qualitatively explore how domain-specific embeddings differ from the general embeddings on two polysemous science terms: forces and matters. Considering the fact that the meaning of a word is dictated by its neighbors, for each set of word embeddings, we plot the neighbors of these two terms in Figure 1 on to 2 dimensions using Locally Linear Embedding (LLE), which preserves word distances (Roweis and Saul, 2000). We then analyze the sense of the focus terms–here, forces and matters. From Figure 1(a), we see that for the word forces, its general embedding is close to army, soldiers, allies indicating that it is related with violence and power in a general domain. Shifting our attention to Figure 1(b), we see that for the same term, its science embedding is closer to torque, gravity, acceleration implying that its science sense is more about physical interactions. Likewise, for the word matters, its general embedding is surrounded by affairs and issues, whereas, its science embedding is closer to particles and material, prompting that it represents substances. Thus, we conclude that domain specific embeddings (here, science), is capable of incorporating domain knowledge into word representations. We use this observation in our document-summary matching system to which we turn next. 4 Model Our model that performs the matching between document and summary is depicted in Figure 2. It is composed of three modules that perform preprocessing, document topic generation, and relevance measurement between a document and a summary. Each of these modules is discussed below. 2344 Figure 2: The system for document-summary matching 4.1 Preprocessing The preprocessing module tokenizes texts and removes stop words and prepositions. This step allows our system to focus on the content words without impacting the meaning of original texts. 4.2 Topic Generation from Documents We assume that a document (a long text) is a structured collection of words, with the ‘structure’ brought about by the composition of topics. In some sense, this ‘structure’ is represented as a set of hidden topics. Thus, we assume that a document is generated from certain hidden “topics”, analogous to the modeling assumption in LDA. However, unlike in LDA, the “topics” here are neither specific words nor the distribution over words, but are are essentially a set of vectors. In turn, this means that words (represented as vectors) constituting the document structure can be generated from the hidden topic vectors. Introducing some notation, the word vectors in a document are {w1, . . . , wn}, and the hidden topic vectors of the document are {h1, . . . , hK}, where wi, hk 2 Rd, d = 300 in our experiments. Linear operations using word embeddings have been empirically shown to approximate their compositional properties (e.g. the embedding of a phrase is nearly the sum of the embeddings of its component words) (Mikolov et al., 2013). This motivates the linear reconstruction of the words from the document’s hidden topics while minimizing the reconstruction error. We stack the K topic vectors as a topic matrix H = [h1, . . . , hK](K < d). We define the reconstructed word vector ˜wi for the word wi as the optimal linear approximation given by topic vectors: ˜wi = H ˜↵i, where ˜↵i = argmin ↵i2RK kwi −H↵ik2 2. (1) The reconstruction error E for the whole document is the sum of each word’s reconstruction error and is given by: E = nP i=1 kwi −˜wik2 2. 
This being a function of the topic vectors, our goal is to find the optimal H⇤so as to minimize the error E: H⇤= argmin H2Rd⇥K E(H) = argmin H2Rd⇥K n X i=1 min ↵i kwi −H↵ik2 2, (2) where k·k is the Frobenius norm of a matrix. Without loss of generality, we require the topic vectors {hi}K i=1 to be orthonormal, i.e., hT i hj = 1(i=j). As we can see, the optimization problem (2) describes an optimal linear space spanned by the topic vectors, so the norm and the linear dependency of the vectors do not matter. With the orthonormal constraints, we simplify the form of the reconstructed vector ˜wi as: ˜wi = HHT wi. (3) We stack word vectors in the document as a matrix W = [w1, . . . , wn]. The equivalent formulation to problem (2) is: min H kW −HHT Wk2 2 s.t. HT H = I, (4) where I is an identity matrix. The problem can be solved by Singular Value Decomposition (SVD), using which, the matrix W can be decomposed as W = U⌃VT , where UT U = I,VT V = I, and ⌃is a diagonal matrix where the diagonal elements are arranged in a decreasing order of absolute values. We show in the supplementary material that the first K vectors in the matrix U are exactly the solution to H⇤= [h⇤ 1, . . . , h⇤ K]. We find optimal topic vectors H⇤ = [h⇤ 1, . . . , h⇤ K] by solving problem (4). We note that these topic vectors are not equally important, and we say that one topic is more important than another if it can reconstruct words 2345 with smaller error. Define Ek as the reconstruction error when we only use topic vector h⇤ k to reconstruct the document: Ek = kW −h⇤ kh⇤ k T Wk2 2. (5) Now define ik as the importance of topic h⇤ k, which measures the topic’s ability to reconstruct the words in a document: ik = kh⇤ k T Wk2 2 (6) We show in the supplementary material that the higher the importance ik is, the smaller the reconstruction error Ek is. Now we normalize ik as¯ik so that the importance does not scale with the norm of the word matrix W, and so that the importances of the K topics sum to 1. Thus, ¯ik = ik/( K X j=1 ij). (7) The number of topics K is a hyperparameter in our model. A small K may not cover key ideas of the document, whereas a large K may keep trivial and noisy information. Empirically we find that K = 15 captures most important information from the document. 4.3 Topic Mapping to Summaries We have extracted K topic vectors {h⇤ k}K k=1 from the document matrix W, whose importance is reflected by {¯ik}K k=1. In this module, we measure the relevance of a document-summary pair. Towards this, a summary that matches the document should also be closely related with the “topics” of that document. Suppose the vectors of the words in a summary are stacked as a d ⇥m matrix S = [s1, . . . , sm], where sj is the vector of the j-th word in a summary. Similar to the reconstruction of the document, the summary can also be reconstructed from the documents’ topic vectors as shown in Eq. (3). Let ˜sk j be the reconstruction of the summary word sj given by one topic h⇤ k: ˜sk j = h⇤ kh⇤ k T sj. Let r(h⇤ k, sj) be the relevance between a topic vector h⇤ k and summary word sj. It is defined as the cosine similarity between ˜sk j and sj: r(h⇤ k, sj) = sjT ˜sk j /(ksjk2 · k˜sk j k2). (8) Furthermore, let r(h⇤ k, S) be the relevance between a topic vector and the summary, defined to be the average similarity between the topic vector and the summary words: r(h⇤ k, S) = 1 m m X j=1 r(h⇤ k, sj). (9) The relevance between a topic vector and a summary is a real value between 0 and 1. 
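Since, as noted above, the first K left singular vectors of W solve problem (4), the topic extraction and the per-topic summary relevance of Eqs. (5)–(9) can be computed in a few lines. The following NumPy sketch is only an illustration of that computation (the function names, the random toy inputs and the default K = 15 reflect our reading of this section, not the authors' released code).

```python
import numpy as np

def hidden_topics(W, K=15):
    """W: d x n matrix whose columns are the word vectors of one document.
    Returns the K topic vectors (Eq. 4) and their normalized importances (Eqs. 6-7)."""
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    H = U[:, :K]                                                   # top-K left singular vectors
    imp = np.array([np.sum((H[:, k] @ W) ** 2) for k in range(K)])  # i_k, Eq. (6)
    return H, imp / imp.sum()                                       # normalized, Eq. (7)

def topic_summary_relevance(h_k, S):
    """Average cosine similarity between a topic h_k and the summary words S (d x m), Eqs. (8)-(9)."""
    rel = []
    for j in range(S.shape[1]):
        s_j = S[:, j]
        s_tilde = h_k * (h_k @ s_j)                # reconstruction of s_j from the single topic
        rel.append(s_j @ s_tilde /
                   (np.linalg.norm(s_j) * np.linalg.norm(s_tilde) + 1e-12))
    return float(np.mean(rel))

# toy usage with random stand-ins for 300-dim word vectors
rng = np.random.RandomState(0)
W = rng.randn(300, 40)        # a 40-word document
S = rng.randn(300, 6)         # a 6-word summary
H, importance = hidden_topics(W)
per_topic = [topic_summary_relevance(H[:, k], S) for k in range(H.shape[1])]
```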
As we have shown, the topics extracted from a document are not equally important. Naturally, a summary relevant to more important topics is more likely to better match the document. Therefore, we define r(W, S) as the relevance between the document W and the summary S, and r(W, S) is the sum of topic-summary relevance weighted by the importance of the topic: r(W, S) = K X k=1 ¯ik · r(h⇤ k, S), (10) where ¯ik is the importance of topic h⇤ k as defined in (7). The higher r(W, S) is, the better the summary matches the document. We provide a visual representation of the documents as shown in Figure 3 to illustrate the notion of hidden topics. The two documents are from science projects: a genetics project, Pedigree Analysis: A Family Tree of Traits (ScienceBuddies, 2017a), and a weather project, How Do the Seasons Change in Each Hemisphere (ScienceBuddies, 2017b). We project all embeddings to a three-dimensional space for ease of visualization. As seen in Figure 3, the hidden topics reconstruct the words in their respective documents to the extent possible. This means that the words of a document lie roughly on the plane formed by their corresponding topic vectors. We also notice that the summary words (heredity and weather respectively for the two projects under consideration) lie very close to the plane formed by the hidden topics of the relevant project while remaining away from the plane of the irrelevant project. This shows that the words in the summary (and hence the summary itself) can also be reconstructed from the hidden topics of documents that match the summary (and are hence ‘relevant’ to the summary). Figure 3 visually explains the geometric relations between the summaries, the hidden topics and the documents. It also validates the representation power of the extracted hidden topic vectors. 2346 Figure 3: Words mode and genes from the document on genetics and words storm and atmospheric from document on weather are represented by pink and blue points respectively. Linear space of hidden topics in genetics form the pink plane, where summary word heredity (the red point) roughly lies. Topic vectors of the document on weather form the blue plane, and the summary word weather (the darkblue point) lies almost on the same plane. 5 Experiments In this section, we evaluate our documentsummary matching approach on two specific applications where texts of different sizes are compared. One application is that of concept-project matching useful in science education and the other is that of summary-research paper matching. Word Embeddings. Two sets of 300dimension word embeddings were used in our experiments. They were trained by the Continuous Bag-of-Words (CBOW) model in word2vec (Mikolov et al., 2013) but on different corpora. One training corpus is the full English WikiCorpus of size 9 GB (Al-Rfou et al., 2013). The second consists of science articles extracted from the WikiCorpus. To extract these science articles, we manually selected the science categories in Wikipedia and considered all subcategories within a depth of 3 from these manually selected root categories. We then extracted all articles in the aforementioned science categories resulting in a science corpus of size 2.4 GB. The word vectors used for documents and summaries are both from the pretrained word2vec embeddings. Baselines We include two state-of-the-art methods of measuring document similarity for comparison using their implementations available in gensim ( ˇReh˚uˇrek and Sojka, 2010). 
(1) Word movers’ distance (WMD) (Kusner et al., 2015). WMD quantifies the distance between a pair of documents based on word embeddings as introduced previously (c.f. Related Work). We take the negative of their distance as a measure of document similarity (here between a document and a summary). (2) Doc2Vec (Le and Mikolov, 2014). Document representations have been trained with neural networks. We used two versions of doc2vec: one trained on the full English Wikicorpus and a second trained on the science corpus, same as the corpora used for word embedding training. We used the cosine similarity between two text vectors to measure their relevance. For a given document-summary pair, we compare the scores obtained using the above two methods with that obtained using our method. 5.1 Concept-Project matching Science projects are valuable resources for learners to instigate knowledge creation via experimentation and observation. The need for matching a science concept with a science project arises when learners intending to delve deeper into certain concepts search for projects that match a given concept. Additionally, they may want to identify the concepts with which a set of projects are related. We note that in this task, science concepts are highly concise summaries of the core ideas in projects, whereas projects are detailed instructions of the experimental procedures, including an introduction, materials and a description of the procedure, as shown in Table 1. Our matching method provides a way to bridge the gap between abstract concepts and detailed projects. The format of the concepts and the projects is discussed below. Concepts. For the purpose of this study we use the concepts available in the Next Generation Science Standards (NGSS) (NGSS, 2017). Each concept is accompanied by a short description. For example, one concept in life science is Heredity: Inheritance and Variation of Traits. Its description is All cells contain genetic information in the form of DNA molecules. Genes are regions in the DNA that contain the instructions that code for the formation of proteins. Typical lengths of concepts are around 50 words. Projects. The website Science Buddies (ScienceBuddies, 2017c) provides a list of projects from a variety of science and engineering disciplines such 2347 Table 2: Classification results for the Concept-Project Matching task. All performance differences were statistically significant at p = 0.01. method topic science topic wiki wmd science wmd wiki doc2vec science doc2vec wiki precision 0.758 ± 0.012 0.750 ± 0.009 0.643 ± 0.070 0.568 ± 0.055 0.615 ± 0.055 0.661 ± 0.084 recall 0.885 ± 0.071 0.842 ± 0.010 0.735 ± 0.119 0.661 ± 0.119 0.843 ± 0.066 0.737 ± 0.149 fscore 0.818 ± 0.028 0.791 ± 0.007 0.679 ± 0.022 0.595 ± 0.020 0.695 ± 0.019 0.681 ± 0.032 as physical sciences, life sciences and social sciences. A typical project consists of an abstract, an introduction, a description of the experiment and the associated procedures. A project typically has more than 1000 words. Dataset. We prepared a representative dataset 537 pairs of projects and concepts involving 53 unique concepts from NGSS and 230 unique projects from Science Buddies. Engineering undergraduate students annotated each pair with the decision whether it was a good match or not and received research credit. As a result, each conceptproject pair received at least three annotations, and upon consolidation, we considered a conceptproject pair to be a good match when a majority of the annotators agreed. 
Otherwise, it was not considered a good match. The ratio between good matches and bad matches in the collected data was 44:56.

Classification Evaluation. Annotations from students provided the ground-truth labels for the classification task. We randomly split the dataset into tuning and test instances with a ratio of 1:9. A threshold score was tuned on the tuning data, and concept-project pairs with scores higher than this threshold were classified as good matches during testing. We performed 10-fold cross validation, and report the average precision, recall, F1 score and their standard deviations in Table 2. Our topic-based metric is denoted as "topic", and the general-domain and science-domain embeddings are denoted as "wiki" and "science" respectively. We show the performance of our method against the two baselines while varying the underlying embeddings, resulting in 6 different combinations. For example, "topic science" refers to our method with science embeddings. From the table (column 1) we notice the following: 1) Our method significantly outperforms the two baselines by a wide margin (≈10%) in both the general-domain setting and the domain-specific setting. 2) Using science domain-specific word embeddings instead of the general word embeddings results in the best performance across all algorithms. This was observed despite the science word embeddings being trained on a significantly smaller corpus than the general-domain corpus. Besides the classification metrics, we also evaluated the directed matching from concepts to projects with ranking metrics.

Ranking Evaluation. Our collected dataset resulted in a many-to-many matching between concepts and projects. This is because the same concept was found to be a good match for multiple projects, and the same project was found to match many concepts. The previously described classification task evaluated the bidirectional concept-project matching. Next, we evaluated the directed matching from concepts to projects, to see how relevant the top-ranking projects are to a given input concept. Here we use precision@k (Radlinski and Craswell, 2010) as the evaluation metric, considering the percentage of relevant projects among the top-ranking ones. For this part, we only considered the methods using science-domain embeddings, as they showed superior performance in the classification task. For each concept, we check the precision@k of the matched projects and place the concept in one of k+1 bins accordingly. For example, for k=3, if only two of the three top projects are a correct match, the concept is placed in the bin corresponding to 2/3. In Figure 4, we show the percentage of concepts that fall into each bin for the three different algorithms for k=1, 3, 6. We observe that recommendations using the hidden topic approach fall more into the high-value bins than the others, performing consistently better than two strong baselines. The advantage becomes more obvious at precision@6. It is worth mentioning that wmd science falls behind doc2vec science in the classification task while it outperforms it in the ranking task.

Figure 4: Ranking performance of all methods: (a) Precision@1, (b) Precision@3, (c) Precision@6.

5.2 Text Summarization

The task of matching summaries and documents is commonly seen in real life. For example, we use an event summary "Google's AlphaGo beats Korean player Lee Sedol in Go" to search for relevant news, or use the summary of a scientific paper to look for related research publications.
Such matching constitutes an ideal task for evaluating our matching method between texts of different sizes.

Dataset. We use a dataset from the CL-SciSumm Shared Task (Jaidka et al., 2016). The dataset consists of 730 ACL Computational Linguistics research papers covering 50 categories in total. Each category consists of a reference paper (RP) and around 10 citing papers (CP) that contain citations to the RP. A human-generated summary for the RP is provided, and we treat the 10 CPs as relevant to the summary. The matching task here is between the summary and all CPs in each category.

Evaluation. For each paper, we keep all of its content except the experiments and acknowledgement sections (these sections were omitted because their content is often less related to the topic of the summary). The typical summary length is about 100 words, while a paper has more than 2000 words. For each topic, we rank all 730 papers in terms of the relevance generated by our method and the baselines, using both sets of embeddings. For evaluation, we use the information-retrieval measure precision@k, which considers the number of relevant matches among the top-k matchings (Manning et al., 2008). For each combination of the text similarity approaches and embeddings, we show precision@k for different k's in Figure 5. We observe that our method with science embeddings achieves the best performance compared to the baselines, once again showing not only the benefits of our method but also those of incorporating domain knowledge.

Figure 5: Summary-Article Matching.

6 Discussion

Analysis of Results. From the results of the two tasks we observe that our method outperforms two strong baselines. The reason for WMD's poor performance could be that the many uninformative words (those unrelated to the central topic) make WMD overestimate the distance between the document-summary pair. As for doc2vec, its single-vector representation may not be able to capture all the key topics of a document. A project can contain multifaceted information; e.g., a project studying how climate change affects grain production is related to both environmental science and agricultural science.

Effect of Topic Number. The number of hidden topics K is a hyperparameter in our setting. We empirically evaluate the effect of the topic number in the task of concept-project matching. Figure 6 shows the F1 scores and the standard deviations at different K. As we can see, the optimal K is 18. When K is too small, the hidden topics are too few to capture the key information in projects; thus the increase of the topic number from 3 to 6 brings a big improvement in performance. Topic numbers larger than the optimal value degrade the performance, since additional topics incorporate noisy information. We note that the performance changes are mild when the number of topics is in the range [18, 31]. Since topics are weighted by their importance, the effect of noisy information from extra hidden topics is mitigated.

Figure 6: F1 score on concept-project matching with different topic numbers K.

Interpretation of Hidden Topics. We consider summary-paper matching as an example, with around 10 papers per category. We extracted the hidden topics from each paper, reconstructed words with these topics as shown in Eq. (3), and selected the words with the smallest reconstruction errors. These words are thus closely related to the hidden topics, and we call them topic words; they serve as an interpretation of the hidden topics.
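A minimal sketch of this topic-word selection is given below. Eq. (3) is not reproduced in this excerpt, so the sketch assumes the hidden topics are the top-K directions of a truncated SVD of the document's word-embedding matrix and that a word's reconstruction error is its distance to the topic subspace; the function and parameter names are illustrative only.

```python
# Hedged sketch: interpreting hidden topics via "topic words".
# Assumption (not spelled out in this excerpt): hidden topics are the top-K
# right singular vectors of the word-embedding matrix, and the reconstruction
# error of a word is the distance between its embedding and its projection
# onto the topic subspace (cf. Eq. (3) of the paper).
import numpy as np

def topic_words(word_vectors, words, K=18, top_n=10):
    """Return the words with the smallest reconstruction error w.r.t. K hidden topics."""
    X = np.asarray(word_vectors, dtype=float)      # shape: (n_words, dim)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    H = Vt[:K]                                     # (K, dim) hidden topic vectors
    X_hat = X @ H.T @ H                            # project words onto the topic subspace
    errors = np.linalg.norm(X - X_hat, axis=1)     # per-word reconstruction error
    ranked = np.argsort(errors)                    # smallest error first
    return [words[i] for i in ranked[:top_n]]
```

K defaults to 18 here only because that value is reported as optimal for concept-project matching; top_n is an arbitrary illustration choice.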
We visualize the cloud of such topic words for the set of papers about word sense disambiguation in Figure 7. We see that the words selected based on the hidden topics cover key ideas such as disambiguation, represent, classification and sentence. This qualitatively validates the representation power of hidden topics. More examples are available in the supplementary material. We interpret this to mean that the proposed idea of multiple hidden topics captures the key information of a document. The extracted "hidden topics" represent the essence of documents, suggesting the appropriateness of our relevance metric for measuring the similarity between texts of different sizes. Even though our focus in this study was the science domain, we point out that the results are more generally valid, since we made no domain-specific assumptions.

Figure 7: Topic words from papers on word sense disambiguation.

Varying Sensitivity to Domain. As shown in the results, the science-domain embeddings improved the classification of concept-project matching for the topic-based method by 2% in F1-score, for WMD by 8% and for doc2vec by 1%, underscoring the importance of domain-specific word embeddings. Doc2vec is less sensitive to the domain because it provides a document-level representation. Even if some words cannot be disambiguated due to the lack of domain knowledge, other words in the same document can provide complementary information, so that the document embedding does not deviate too much from its true meaning. Our method, also a word-embedding method, is not as sensitive to domain as WMD. It is robust to polysemous words with domain-sensitive semantics, since hidden topics are extracted at the document level. Broader contexts beyond single words provide complementary information for word sense disambiguation.

7 Conclusion

We propose a novel approach to matching documents and summaries. The challenge we address is to bridge the gap between detailed long texts and their abstractions with hidden topics. We incorporate domain knowledge into the matching system to gain further performance improvement. Our approach outperforms two strong baselines in two downstream applications, concept-project matching and summary-research paper matching.

Acknowledgments

This work is supported by the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR), a research collaboration as part of the IBM AI Horizons Network. We thank the ACL anonymous reviewers for their constructive suggestions. We thank Mathew Monfort for helping deploy the annotation tasks, and Jong Yoon Lee for the dataset collection.

References

Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual nlp. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 183–192, Sofia, Bulgaria. Association for Computational Linguistics.

David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022.

Phil Blunsom, Edward Grefenstette, and Nal Kalchbrenner. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics.

Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075.
Yun-Nung Chen, William Yang Wang, Anatole Gershman, and Alexander I Rudnicky. 2015. Matrix factorization with knowledge graph propagation for unsupervised spoken language understanding. In ACL (1), pages 483–494. Zhiyuan Chen, Arjun Mukherjee, Bing Liu, Meichun Hsu, Malu Castellanos, and Riddhiman Ghosh. 2013. Exploiting domain knowledge in aspect extraction. In EMNLP, pages 1655–1667. Jackie Chi Kit Cheung and Gerald Penn. 2013a. Probabilistic domain modelling with contextualized distributional semantic vectors. In ACL (1), pages 392– 401. Jackie Chi Kit Cheung and Gerald Penn. 2013b. Towards robust abstractive multi-document summarization: A caseframe analysis of centrality and domain. In ACL (1), pages 1233–1242. Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American society for information science, 41(6):391. Susan T Dumais. 2004. Latent semantic analysis. Annual review of information science and technology, 38(1):188–230. Prem K Gopalan, Laurent Charlin, and David Blei. 2014. Content-based recommendations with poisson factorization. In Advances in Neural Information Processing Systems, pages 3176–3184. Kokil Jaidka, Muthu Kumar Chandrasekaran, Sajal Rustagi, and Min-Yen Kan. 2016. Overview of the cl-scisumm 2016 shared task. In In Proceedings of Joint Workshop on Bibliometric-enhanced Information Retrieval and NLP for Digital Libraries (BIRNDL 2016). Chris Kedzie and Kathleen McKeown. 2016. Extractive and abstractive event summarization over streaming web text. In IJCAI, pages 4002–4003. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294–3302. Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In International Conference on Machine Learning, pages 957–966. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In AAAI, volume 333, pages 2267– 2273. Thomas K Landauer. 2003. Automatic essay assessment. Assessment in education: Principles, policy & practice, 10(3):295–308. Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1188–1196. Christopher D Manning, Prabhakar Raghavan, Hinrich Sch¨utze, et al. 2008. Introduction to information retrieval, volume 1. Cambridge university press Cambridge. Donald Metzler, Susan Dumais, and Christopher Meek. 2007. Similarity measures for short segments of text. In European Conference on Information Retrieval, pages 16–27. Springer. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. NGSS. 2017. Available at: https: //www.nextgenscience.org. Accessed: 2017-06-30. Jorg Ontrup and Helge Ritter. 2002. Hyperbolic selforganizing maps for semantic navigation. In Advances in neural information processing systems, pages 1417–1424. Filip Radlinski and Nick Craswell. 2010. Comparing the sensitivity of information retrieval metrics. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, pages 667–674. ACM. 
Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New 2351 Challenges for NLP Frameworks, pages 45–50, Valletta, Malta. ELRA. http://is.muni.cz/ publication/884893/en. Sam T Roweis and Lawrence K Saul. 2000. Nonlinear dimensionality reduction by locally linear embedding. science, 290(5500):2323–2326. Gerard Salton and Christopher Buckley. 1988. Termweighting approaches in automatic text retrieval. Information processing & management, 24(5):513– 523. ScienceBuddies. 2017a. Available at: https: //www.sciencebuddies.org/sciencefair-projects/project-ideas/ Genom p010/genetics-genomics/ pedigree-analysis-a-family-treeof-traits. Accessed: 2017-06-30. ScienceBuddies. 2017b. Available at: https: //www.sciencebuddies.org/sciencefair-projects/project-ideas/ Weather p006/weather-atmosphere/ how-do-the-seasons-change-in-eachhemisphere. Accessed: 2017-06-30. ScienceBuddies. 2017c. Available at: http:// www.sciencebuddies.org. Accessed: 201706-30. Richard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics, 2:207–218. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1556–1566.
2018
218
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2352–2362 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2352 Eyes are the Windows to the Soul: Predicting the Rating of Text Quality Using Gaze Behaviour ⋆Sandeep Mathias, ⋆,♣,♦Diptesh Kanojia, ⋆Kevin Patel, ⋆Samarth Agrawal ♠Abhijit Mishra, ⋆Pushpak Bhattacharyya ⋆CSE Department, IIT Bombay ♣IITB-Monash Research Academy ♦Monash University, Australia ♠IBM Research, India ⋆,♣{sam,diptesh,kevin.patel,samartha,pb}@cse.iitb.ac.in ♠[email protected] Abstract Predicting a reader’s rating of text quality is a challenging task that involves estimating different subjective aspects of the text, like structure, clarity, etc. Such subjective aspects are better handled using cognitive information. One such source of cognitive information is gaze behaviour. In this paper, we show that gaze behaviour does indeed help in effectively predicting the rating of text quality. To do this, we first model text quality as a function of three properties - organization, coherence and cohesion. Then, we demonstrate how capturing gaze behaviour helps in predicting each of these properties, and hence the overall quality, by reporting improvements obtained by adding gaze features to traditional textual features for score prediction. We also hypothesize that if a reader has fully understood the text, the corresponding gaze behaviour would give a better indication of the assigned rating, as opposed to partial understanding. Our experiments validate this hypothesis by showing greater agreement between the given rating and the predicted rating when the reader has a full understanding of the text. 1 Introduction Automatically rating the quality of a text is an interesting challenge in NLP. It has been studied since Page’s seminal work on automatic essay grading in the mid-1960s (Page, 1966). This is due to the dependence of quality on different aspects such as the overall structure of the text, clarity, etc. that are highly qualitative in nature, and whose scoring can vary from person to person (Person, 2013). Scores for such qualitative aspects cannot be inferred solely from the text and would benefit from psycholinguistic information, such as gaze behaviour. Gaze based features have been used for co-reference resolution (Ross et al., 2016), sentiment analysis (Joshi et al., 2014) and translation annotation complexity estimation (Mishra et al., 2013). They could also be very useful for education applications, like evaluating readability (Mishra et al., 2017) and in automatic essay grading. In this paper, we consider the following qualitative properties of text: Organization, Coherence and Cohesion. A text is well-organized if it begins with an introduction, has a body and ends with a conclusion. One of the other aspects of organization is the fact that it takes into account how the content of the text is split into paragraphs, with each paragraph denoting a single idea. If the text is too long, and not split into paragraphs, one could consider the text to be badly organized1. A text is coherent if it makes sense to a reader. A text is cohesive if it is well connected. Coherence and cohesion are two qualities that are closely related. A piece of text that is well-connected usually makes sense. Conversely, a piece of text that makes sense is usually well-connected. However, it is possible for texts to be coherent but lack cohesion. 
Table 1 provides some examples of texts that are coherent and cohesive, as well as texts that lack one of those qualities. There are different ways to model coherence and cohesion. Since coherence is a measure of how much sense the text makes, it is a semantic property of the text. It requires sentences within the text to be interpreted, by themselves as well as with the other sentences in the text (Van Dijk, 1980). On the other hand, cohesion makes use of linguistic cues, such as references (demonstratives, pronouns, etc.), ellipsis (leaving out implicit words, e.g. Sam can type and I can [type] too), substitution (use of a word or phrase to replace something mentioned earlier, e.g. How's the croissant? I'd like to have one too.), conjunction (and, but, therefore, etc.), cohesive nouns (problem, issue, investment, etc.) and lexis (linking different pieces of text by synonyms, hyponyms, lexical chains, etc.) (Halliday and Hasan, 1976).

1 Refer to the supplementary material for an example. We have placed it there due to space constraints.

Table 1: Examples of coherence and cohesion2.
Example: "My favourite colour is blue. I like it because it is calming and it relaxes me. I often go outside in the summer and lie on the grass and look into the clear sky when I am stressed. For this reason, I'd have to say my favourite colour is blue." Comments: Coherent and cohesive.
Example: "My favourite colour is blue. I'm calm and relaxed. In the summer I lie on the grass and look up." Comments: Coherent but not cohesive. There is no link between the sentences. However, the text makes sense due to a lot of implicit clues (blue, favourite, relaxing, look up (and see the blue sky)).
Example: "My favourite colour is blue. Blue sports cars go very fast. Driving in this way is dangerous and can cause many car crashes. I had a car accident once and broke my leg. I was very sad because I had to miss a holiday in Europe because of the injury." Comments: Cohesive but not coherent. Every pair of adjacent sentences is connected by words / phrases, but the text does not make sense, since it starts with blue and ends with missing a holiday due to an injury.

2 We took the examples from this site for explaining coherence and cohesion: http://gordonscruton.blogspot.in/2011/08/what-is-cohesion-coherence-cambridge.html

Using these properties, we model the overall text quality rating. We make use of a Likert scale (Likert, 1932) with a range of 1 to 4 for measuring each of these properties; the higher the score, the better the text is in terms of that property. We model the text quality rating on a scale of 1 to 10, using the three scores as input. In other words, Quality(T) = Org(T) + Chr(T) + Chs(T) − 2, where Quality(T) is the text quality rating of the text T, and Org(T), Chr(T), and Chs(T) correspond to the Organization, Coherence, and Cohesion scores, respectively, for the text T, as given by a reader. We subtract 2 to scale the scores from a range of 3-12 to a range of 1-10 for quality.

Texts with poor organization and/or cohesion can force readers to regress, i.e. go back to previous sentences or paragraphs. Texts with poor coherence may lead readers to fixate more on different portions of the text to understand them. In other words, such gaze behaviour indirectly captures the effort needed by human readers to comprehend the text (Just and Carpenter, 1980), which, in turn, may influence the ratings given by them. Hence, these properties seem to be good indicators of the overall quality of texts.
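As a small worked example of the quality mapping defined above (the function name below is illustrative only):

```python
# The overall quality rating as defined above: each property is rated on a
# 1-4 Likert scale, and quality is their sum minus 2, mapping 3-12 onto 1-10.
def quality(org, coherence, cohesion):
    return org + coherence + cohesion - 2

# A text rated 4, 4 and 3 for organization, coherence and cohesion
# receives a quality rating of 4 + 4 + 3 - 2 = 9.
print(quality(4, 4, 3))  # -> 9
```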
In this paper, we address the following question: Can information obtained from gaze behaviour help predict a reader's rating of the quality of a text by estimating the text's organization, coherence, and cohesion? Our work answers that question in the affirmative. We found that using gaze features does contribute to improving the prediction of qualitative ratings of text by users.

Our work has the following contributions. Firstly, we propose a novel way to predict readers' ratings of text by recording their eye movements as they read the texts. Secondly, we show that if a reader has understood the text completely, their gaze behaviour is more reliable. Thirdly, we also release our dataset3 to help further research into using gaze features in other tasks involving predicting the quality of texts.

3 The dataset can be downloaded from http://www.cfilt.iitb.ac.in/cognitive-nlp/

In this paper, we use the following terms related to eye tracking. The interest area (IA) is an area of the screen that is of interest; we mainly look at words as interest areas. A fixation takes place when the gaze is focused on a point of the screen. A saccade is the movement of gaze between two fixations. A regression is a special type of saccade in which the reader refers back to something that they had read earlier.

The rest of the paper is organized as follows. Section 2 describes the motivation behind our work. Section 3 describes related work in this field. Section 4 describes the different features that we used. Sections 5 and 6 describe our experiments and results. Section 6 also contains an analysis of our experiments. Section 7 concludes the paper and mentions future work.

Figure 1: Sample text showing fixations, saccades and regressions. This text was given scores of 4, 4, and 3 for organization, coherence and cohesion. The circles denote fixations, and the lines are saccades. The radius of a circle denotes the duration of the fixation (in milliseconds), which is centred at the centre of the circle. This is the output from the SR Research Data Viewer software.

2 Motivation

A reader's perception of text quality is subjective and varies from person to person. Using cognitive information from the reader can help in predicting the score he or she will assign to the text. A well-written text would not make people fixate too long on certain words, or regress a lot, to understand it, while a badly written text would. Figure 1 shows the gaze behaviour for a sample text. The circles denote fixations, and the arrows denote saccades. If we capture the gaze behaviour, as well as see how well the reader has understood the text, we believe that we can get a clearer picture of the quality rating of the text.

One of the major concerns is: how are we going to get the gaze data? The capability to gather eye-tracking data is not yet available to the masses. However, top mobile device manufacturers, like Samsung, have started integrating basic eye-tracking software into their smartphones (Samsung Smart Scroll) that is able to detect where the eye is fixated, and can be used in applications like scrolling through a web page. Start-ups, like Cogisen4, have started using gaze features in their applications, such as using gaze information to improve the input to image processing systems. Recently, SR Research has come up with a portable eye-tracking system5.

4 www.cogisen.com
5 https://www.sr-research.com/products/eyelink-portable-duo/
3 Related Work

A number of studies have shown how eye tracking can model aspects of text. Word length has been shown to be positively correlated with fixation count (Rayner, 1998) and fixation duration (Henderson and Ferreira, 1993). Word predictability (i.e. how well the reader can predict the next word in a sentence) was also studied by Rayner (1998), who found that unpredictable words are less likely to be skipped than predictable words.

Shermis and Burstein (2013) give a brief overview of how text-based features are used in multiple aspects of essay grading, including grammatical error detection, sentiment analysis, short-answer scoring, etc. Their work also describes a number of current essay grading systems that are available on the market, like E-rater® (Attali and Burstein, 2004). In recent years, there has been a lot of work on evaluating the holistic scores of essays using deep learning techniques (Alikaniotis et al., 2016; Taghipour and Ng, 2016; Dong and Zhang, 2016).

There has been little work on modelling text organization, such as Persing et al. (2010) (using machine learning) and Taghipour (2017) (using neural networks). However, there has been a lot of work on modelling coherence and cohesion, using methods like lexical chains (Somasundaran et al., 2014), an entity grid (Barzilay and Lapata, 2005), etc. An interesting piece of work to model coherence was done by Soricut and Marcu (2006), who used a machine-translation-based approach. Zesch et al. (2015) use topical overlap to model coherence for essay grading. Discourse connectors are used as a heuristic to model cohesion by Zesch et al. (2015) and Persing and Ng (2015). Our work is novel because it makes use of gaze behaviour to model and predict coherence and cohesion in text.

In recent years, there has also been some work on using eye tracking to evaluate certain aspects of text, like readability (Gonzalez-Garduño and Søgaard, 2017; Mishra et al., 2017) and grammaticality (Klerke et al., 2015). Our work uses eye tracking to predict the score given by a reader to a complete piece of text (rather than just a sentence, as done by Klerke et al. (2015)) and shows that the scoring is more reliable if the reader has understood the text.

4 Features

In order to predict the scores for the different properties of the text, we use the following text and gaze features.

4.1 Text-based Features

We use a set of text-based features to build a baseline system for predicting the scores for the different properties. The first set of features that we use are length and count-based features, such as word length, word count, sentence length, count of transition phrases6, etc. (Persing and Ng, 2015; Zesch et al., 2015).

6 https://writing.wisc.edu/Handbook/Transitions.html

The next set of features are complexity features, namely the degree of polysemy, coreference distance, and the Flesch Reading Ease Score (FRES) (Flesch, 1948). These features help in normalizing the gaze features for text complexity. They were extracted using Stanford CoreNLP (Manning et al., 2014) and MorphAdorner (Burns, 2013).

The third set of features are stylistic features, such as the ratios of the number of adjectives, nouns, prepositions, and verbs to the number of words in the text. These features are used to model the distributions of PoS tags in good and bad texts. They were extracted using NLTK7 (Loper and Bird, 2002).

7 http://www.nltk.org/
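To make the stylistic features above concrete, here is a minimal sketch of how such PoS-tag ratios could be computed with NLTK; the exact tag groupings used in the paper are not specified in this excerpt, so the mapping below is an assumption, and the function name is illustrative only.

```python
# Hedged sketch: PoS-tag ratio features with NLTK (requires the 'punkt' and
# 'averaged_perceptron_tagger' resources). The tag groupings are assumptions.
import nltk

def pos_ratios(text):
    tokens = nltk.word_tokenize(text)
    tags = [tag for _, tag in nltk.pos_tag(tokens)]
    n = len(tokens) or 1
    groups = {
        "adjective_ratio": ("JJ",),   # JJ, JJR, JJS
        "noun_ratio": ("NN",),        # NN, NNS, NNP, NNPS
        "preposition_ratio": ("IN",),
        "verb_ratio": ("VB",),        # VB, VBD, VBG, VBN, VBP, VBZ
    }
    return {name: sum(tag.startswith(prefix) for tag in tags) / n
            for name, prefix in groups.items()}
```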
The fourth set of features are word embedding features. We use the average of the word vectors of each word in the essay, using Google News word vectors (Mikolov et al., 2013). These word embeddings have 300 dimensions. We also calculate the mean and maximum similarities between the word vectors of the content words in adjacent sentences of the text, using GloVe word embeddings8 (Pennington et al., 2014).

8 We found that using GloVe here and Google News for the mean word vectors worked best.

The fifth set of features are language modelling features. We use the count of words that are absent from the Google News word vectors, and the count of misspelled words, using the PyEnchant9 library. In order to check the grammaticality of the text, we construct a 5-gram language model using the Brown Corpus (Francis and Kucera, 1979).

9 https://pypi.python.org/pypi/pyenchant/

The sixth set of features are sequence features. These features are particularly useful in modelling organization (sentence and paragraph sequence similarity) (Persing et al., 2010) as well as coherence and cohesion (PoS and lemma similarity). Pitler et al. (2010) showed that the cosine similarity of adjacent sentences is one of the best predictors of linguistic quality. Hence, we also create vectors of the PoS tags and lemmas for each sentence in the text. The dimension of each vector is the number of distinct PoS tags / lemmas.

The last set of features that we look at are entity grid features. We define entities as the nouns in the document, and perform coreference resolution to resolve pronouns. We then construct an entity grid (Barzilay and Lapata, 2005), a 1-or-0 grid that records whether an entity is present or not in a given sentence. We take into account sequences of entities across sentences that contain at least one 1, and that are either bigrams, trigrams or 4-grams. A sequence with multiple 1s denotes entities that occur close to each other, while a sequence with a solitary 1 denotes an entity that is mentioned once and does not appear again for a number of sentences.

4.2 Gaze-based Features

The gaze-based features depend on the gaze behaviour of the participant with respect to interest areas.

Fixation Features. The First Fixation Duration (FFD) is the time the reader fixates on a word when he or she first encounters it. An increased FFD intuitively could mean that the word is more complex and the reader spends more time in understanding it (Mishra et al., 2016). The Second Fixation Duration (SFD) is the duration for which the reader fixates on a particular interest area the second time. This happens during a regression, when a reader is trying to link the word he or she just read with an earlier word. The Last Fixation Duration (LFD) is the duration for which the reader fixates on a particular interest area the final time. At this point, we believe that the interest area has been processed. The Dwell Time (DT) is the total time the reader fixates on a particular interest area. Like the first fixation, this also measures the complexity of the word, not just by itself, but also with regard to the entire text (since it takes into account fixations when the word was regressed to, etc.). The Fixation Count (FC) is the number of fixations on a particular interest area. A larger fixation count could mean that the reader frequently goes back to read that particular interest area.
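A minimal sketch of how these fixation-level measures might be aggregated per interest area is given below. It assumes a simple chronological fixation log of (interest-area id, duration) pairs, which is an abstraction of the eye-tracker report rather than the actual SR Research Data Viewer output format.

```python
# Hedged sketch: aggregating fixation measures per interest area from a simple
# chronological fixation log [(interest_area_id, duration_ms), ...].
# The real eye-tracker reports contain many more fields; this is an abstraction.
from collections import defaultdict

def fixation_features(fixation_log):
    per_ia = defaultdict(list)
    for ia, duration in fixation_log:  # the log is assumed to be in reading order
        per_ia[ia].append(duration)
    features = {}
    for ia, durations in per_ia.items():
        features[ia] = {
            "FFD": durations[0],                               # first fixation duration
            "SFD": durations[1] if len(durations) > 1 else 0,  # second fixation duration
            "LFD": durations[-1],                              # last fixation duration
            "DT": sum(durations),                              # dwell time
            "FC": len(durations),                              # fixation count
        }
    return features
```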
Regression Features. IsRegression (IR) is the number of interest areas from which a regression happened before reading ahead, and IsRegressionFull (IRF) is the number of interest areas from which a regression happened at any point. The Regression Count (RC) is the total number of regressions. The Regression Time (RT) is the duration of the regressions from an interest area. These regression features could help in modelling the semantic links underlying coherence and cohesion.

Interest Area Features. The Skip Count (SC) is the number of interest areas that were skipped. The Run Count (RC) is the number of interest areas that have at least one fixation. A larger run count means that more interest areas were fixated on. Badly written texts would have higher run counts (and lower skip counts), as well as higher fixation counts, because the reader fixates on such texts for a longer time in order to understand them.

5 Experiment Details

In this section, we describe our experimental setup, the creation of the dataset, the evaluation metric, classifier details, etc.

5.1 Ordinal Classification vs. Regression

For each of the properties - organization, coherence and cohesion - we make use of a Likert scale with scores of 1 to 4. Details of the scores are given in Table 2. For scoring the overall quality, we use the formula described in the Introduction. Since we used a Likert scale, we make use of ordinal classification rather than regression. This is because each of the grades is a discrete value that can be represented as an ordinal class (where 1 < 2 < 3 < 4), as opposed to a continuous real number.

5.2 Evaluation Metric

For the predictions of our experiments, we use Cohen's Kappa with quadratic weights, the quadratic weighted Kappa (QWK) (Cohen, 1968), for the following reasons. Firstly, unlike accuracy and F-score, Cohen's Kappa takes into account whether or not agreements happen by chance. Secondly, weights (either linear or quadratic) take into account the distance between the given score and the expected score, unlike accuracy and F-score, where mismatches (either 1 vs. 4, or 1 vs. 2) are penalized the same. Quadratic weights reward matches and penalize mismatches more than linear weights. To measure the inter-annotator agreement of our raters, we make use of Gwet's second-order agreement coefficient (Gwet's AC2), as it can handle ordinal classes, weights, missing values, and multiple raters rating the same document (Gwet, 2014).

5.3 Creation of the Dataset

In this subsection, we describe how we created our dataset: the way we constructed the texts, the way they were annotated, and the inter-annotator agreements for the different properties.

Details of Texts. To the best of our knowledge, there is no publicly available dataset with gaze features for textual quality, so we decided to create our own. Our dataset consists of a diverse set of 30 texts from Simple English Wikipedia (10 articles), English Wikipedia (8 articles), and online news articles (12 articles)10. We did not wish to overburden the readers, so we kept the size of the texts to approximately 200 words each.

10 The sources for the articles were https://simple.wikipedia.org, https://en.wikipedia.org, and https://newsela.com

Table 2: Annotation guidelines for the different properties of text.
Organization: 1 - Bad. There is no organization in the text. 2 - OK. There is little / no link between the paragraphs, but they each describe an idea. 3 - Good. Some paragraphs may be missing, but there is an overall link between them. 4 - Very Good. All the paragraphs follow a flow from the Introduction to the Conclusion.
Coherence: 1 - Bad. The sentences do not make sense. 2 - OK.
Groups of sentences may make sense together, but the text still may not make sense. 3 - Good. Most of the sentences make sense. The text, overall, makes sense. 4 - Very Good. The sentences and the overall text make sense.
Cohesion: 1 - Bad. There is little / no link between any 2 adjacent sentences in the same paragraph. 2 - OK. There is little / no link between adjacent paragraphs; however, each paragraph is cohesive. 3 - Good. All the sentences in a paragraph are linked to each other and contribute to understanding the paragraph. 4 - Very Good. The text is well connected. All the sentences are linked to each other and help in understanding the text.

The original articles ranged from a couple of hundred words (Simple English Wikipedia) to over a thousand words (English Wikipedia). We first summarized the longer articles manually. Then, for the many articles still over 200 words, we removed a few of the paragraphs and sentences. In this way, despite all the texts being published material, we were able to introduce some poor-quality texts into our dataset. The articles were sampled from a variety of genres, such as History, Science, Law, Entertainment, Education, Sports, etc.

Details of Annotators. The dataset was annotated by 20 annotators in the age group of 20-25. Out of the 20 annotators, 9 were high school graduates (current college students), 8 were college graduates, and 3 held a post-graduate degree. In order to check the eyesight of the annotators, we had each annotator look at different parts of the screen while we recorded how their fixations were being detected. Only if their fixations on particular parts of the screen tallied with our requests would we let them participate in the annotation. All the participants in the experiment were fluent speakers of English. A few of them scored over 160 in the GRE Verbal test and/or over 110 in TOEFL. Irrespective of their appearance in such exams, each annotator was made to take an English test before doing the experiments. The participants had to read a couple of passages, answer comprehension questions and score the passages for organization, coherence and cohesion (as either good / medium / bad). In case they either got both comprehension questions wrong, or labeled a good passage bad (or vice versa), they failed the test11.

11 25 annotators applied, but we chose only 20. 2 of the rejected annotators failed the test, while the other 3 had bad eyesight.

In order to help the annotators, they were given 5 sample texts to differentiate between good and bad organization, coherence and cohesion. Table 1 has some of those texts12.

12 The texts for good and bad organization are too long to provide in this paper. They will be uploaded in the supplementary material.

Table 3: Inter-annotator agreement (Gwet's AC2) for each of the properties.
Property        Full     Overall
Organization    0.610    0.519
Coherence       0.688    0.633
Cohesion        0.675    0.614

Inter-Annotator Agreement. Each of the properties was scored in the range of 1 to 4. In addition, we also evaluated the participants' understanding of the text by asking them a couple of questions about it. Table 3 gives the inter-annotator agreement for each of the 3 properties that they rated. The Full column shows the agreement only for cases where the participant answered both questions correctly. The Overall column shows the agreement irrespective of the participant's comprehension of the text.

5.4 System Details

We conducted the experiment following standard norms in eye-movement research (Holmqvist et al., 2011). The display screen is kept about 2 feet from the reader, and the camera is placed midway between the reader and the screen.
The reader is seated and the position of his or her head is fixed using a chin rest. Before the text is displayed, we calibrate the camera by having the participant fixate on 13 points on the screen, and we validate the calibration so that the camera is able to predict the location of the eye on the screen accurately. After calibration and validation, the text is displayed on the screen in the Times New Roman typeface with font size 23. The reader reads the text and, while that happens, we record the reader's eye movements. The readers were allowed to take as much time as they needed to finish the text. Once the reader has finished, the reader moves to the next screen. The next two screens each contain a question based on the passage. These questions are used to verify that the reader did not just skim through the passage, but understood it as well. The questions were multiple choice, with 4 options13. They test literal comprehension (where the reader has to recall something they read) and interpretive comprehension (where the reader has to infer the answer from the text they read). After this, the reader scores the text for organization, coherence and cohesion. The participants then take a short break (about 30 seconds to a couple of minutes) before proceeding with the next text. This is done to prevent reading fatigue over a period of time. After each break, we recalibrate the camera and validate the calibration again.

13 Example - Passage Text: the text in Figure 1. Question: "How many states did Ronald Reagan win in both his Presidential campaigns?" Correct Answer: "93" (44+49).

For obtaining gaze features from a participant, we collect gaze movement patterns using an SR Research EyeLink 1000 eye-tracker (monocular stabilized head mode, sampling rate 500 Hz). It is able to collect all the gaze details that we require for our experiments. Reports are generated for keyboard events (message report) and gaze behaviour (interest area report) using the SR Research Data Viewer software.

5.5 Classification Details

We also process the articles to obtain the text features described in Section 4. Given that we want to show the utility of gaze features, we ran each classifier with 3 feature sets: only text, only gaze, and all features. We split the data into a training-test split of 70% and 30%. We used a feed-forward neural network with 1 hidden layer containing 100 neurons (Bebis and Georgiopoulos, 1994)14.

14 We also used other classifiers, like Naive Bayes, Logistic Regression and Random Forest. However, the neural network outperformed them.

The size of the input vector was 361 features. Out of these, there were 49 text features, plus 300-dimension word embedding features, 11 gaze features, and 1 class label. The data was split using stratified sampling, to ensure a similar distribution of classes in each of the training and test splits. The feed-forward neural network was implemented using TensorFlow (Abadi et al., 2015) in Python. We ran the neural network for 10,000 epochs, with a learning rate of 0.001, in 10 batches. The loss function that we used was the mean square error.
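A minimal sketch of a classifier with this shape is given below, written with tf.keras for brevity. The paper reports using TensorFlow directly, and details not stated above (activation functions, optimizer, output encoding) are assumptions of this sketch rather than the authors' exact setup.

```python
# Hedged sketch of the classifier described above: 361-dimensional input,
# one hidden layer of 100 neurons, mean-squared-error loss, learning rate 0.001,
# 10,000 epochs, 10 batches. Activations, optimizer and output encoding are assumptions.
import tensorflow as tf

def build_model(input_dim=361, hidden=100, n_classes=4):
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(hidden, activation="relu", input_shape=(input_dim,)),
        tf.keras.layers.Dense(n_classes, activation="softmax"),  # ordinal grades 1-4, one-hot encoded
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="mean_squared_error")
    return model

# Example usage (X_train: feature matrix, y_train: one-hot encoded grades):
# model = build_model()
# model.fit(X_train, y_train, epochs=10000, batch_size=max(1, len(X_train) // 10))
```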
In order to see how much the participants' understanding of the text is reflected in their scoring, we also looked at the data based on how the participant scored on the comprehension questions after reading the article. We split the instances into 2 subsets: Full, denoting that the participant answered both questions correctly, and Partial, denoting that they were able to answer only one of the questions correctly. The readers showed Full understanding in 269 instances and Partial understanding in 261 instances. We used the same setup here (same training-test split, stratified sampling, and feed-forward neural network). We omit the remaining 70 instances where the participant got none of the questions correct, as the participant could have scored those texts completely randomly.

6 Results and Analysis

Table 4 shows the results of our experiments using the feed-forward neural network classifier. The first column is the property being evaluated. The next 3 columns give the results for the Text, Gaze and Text+Gaze feature sets.

Table 4: QWK scores for the three feature sets on different properties.
Property        Text     Gaze     Text+Gaze
Organization    0.237    0.394    0.563
Coherence       0.261    0.285    0.550
Cohesion        0.120    0.229    0.451
Quality         0.230    0.304    0.552

The QWK scores are for the predictions obtained with respect to the scores of all 30 documents, as scored by all 20 raters. Textual features, when augmented with gaze-based features, show a significant improvement for all the properties.

Figure 2: Relation between some of the different gaze features and the score. The gaze features are (a) RD, (b) SFD, (c) FC and (d) RC. For figures (a) and (b), the units on the y-axis are milliseconds. For figures (c) and (d), the numbers are a ratio to the number of interest areas in the text. The x-axis in all 4 graphs is the score given by the annotators.

We check the statistical significance of the improvement obtained by adding gaze-based features for the results in Table 4. To test our hypothesis, that adding gaze features yields a statistically significant improvement, we run a t-test. Our null hypothesis is that gaze-based features do not help in prediction any more than text features themselves, and that whatever improvements occur when gaze-based features are added to the textual features are not statistically significant. We choose a significance level of p < 0.001. All the improvements were found to be statistically significant at this α level, rejecting our null hypothesis.

We also evaluate how the participant's understanding of the text affects the way they score the text. Table 5 shows the results of our experiments taking the reader's comprehension into account. The first column is the property being evaluated. The second column is the level of comprehension: Full for the passages where the participant answered both questions correctly, and Partial for the passages where the participant answered one question correctly. The next 3 columns show the results using the Text feature set, the Gaze feature set, and both (Text+Gaze) feature sets. From this table, we see that wherever the gaze features are used, there is far greater agreement for those with Full understanding as compared to Partial understanding.
Table 5: QWK scores for the three feature sets on different properties, categorized on the basis of reader comprehension.
Property        Comp.      Text     Gaze     Text+Gaze
Organization    Full       0.319    0.319    0.563
                Partial    0.115    0.179    0.283
Coherence       Full       0.255    0.385    0.601
                Partial    0.365    0.343    0.446
Cohesion        Full       0.313    0.519    0.638
                Partial    0.161    0.155    0.230
Quality         Full       0.216    0.624    0.645
                Partial    0.161    0.476    0.581

Figure 2 shows a clear relationship between some of the gaze features and the scores given by readers for the properties of organization, cohesion and coherence. In all the charts, we see that texts with the lowest scores have the longest durations (regression / fixation) as well as the highest counts (of fixations and of interest areas fixated).

Figure 3 shows the fixation heat maps for 3 texts whose quality scores were good (10), medium (6) and bad (3), read by the same participant. From these heat maps, we see that the text rated good has highly dense fixations for only a part of the text, as compared to the medium and bad texts. This shows that badly written texts force the readers to fixate a lot more than well-written texts.

Figure 3: Fixation heatmap examples for one of the participants, from the SR Research Data Viewer software: (a) Good (rated 10), (b) Medium (rated 6), (c) Bad (rated 3).

6.1 Ablation Tests

In order to see which of the gaze feature sets is important, we ran a set of ablation tests. We ablated the fixation, regression and interest area feature sets one at a time. We also ablated each of the individual gaze features.

Table 6: Difference in QWK scores when ablating the three gaze behaviour feature sets for the different properties.
Property        Fixation   Regression   Interest Areas
Organization    -0.102     -0.017       -0.103
Coherence       -0.049     -0.077       -0.088
Cohesion        -0.015     -0.040        0.037
Quality          0.002      0.016       -0.056

Table 6 gives the results of our ablation tests on the three feature sets: the fixation, regression and interest area feature sets. The first column is the property that we are measuring. The next 3 columns give the difference in predicted QWK when ablating the fixation, regression and interest area feature sets, respectively. We found that the interest area feature set was the most important, followed by fixation and regression. Among the individual features, Run Count (RC) was found to be the most important for organization and quality. First Fixation Duration (FFD) was the most important feature for coherence, and IsRegressionFull (IRF) was the most important feature for cohesion. We believe that this is because the number of interest areas that are fixated on at least once and the number of interest areas that are skipped play an important role in determining how much of the text was read and how much was skipped. For cohesion, however, regression features are the most important, because they capture the links between cohesive clues (like lexis, references, etc.) in adjacent sentences.

7 Conclusion and Future Work

We presented a novel approach to predicting readers' ratings of texts. The approach estimates the overall quality on the basis of three properties: organization, coherence and cohesion. Although well defined, predicting the score of these properties for a text is quite challenging. It has been established that cognitive information such as gaze behaviour can help in such subjective tasks (Mishra et al., 2013, 2016). We hypothesized that gaze behaviour would assist in predicting the scores of text quality.
To evaluate this hypothesis, we collected gaze behaviour data and evaluated the predictions using only the text-based features. When we took gaze behaviour into account, we were able to significantly improve our predictions of organization, coherence, cohesion and quality. We found out that, in all cases, there was an improvement in the agreement scores when the participant who rated the text showed full understanding, as compared to partial understanding, using only the Gaze features and the Text+Gaze features. This indicated that gaze behaviour is more reliable when the reader has understood the text. To the best of our knowledge, our work is pioneering in using gaze information for predicting text quality rating. In future, we plan to use use approaches, like multi-task learning (Mishra et al., 2018), in estimating gaze features and using those estimated features for text quality prediction. Acknowledgements We’d like to thank all the anonymous reviewers for their constructive feedback in helping us improve our paper. We’d also like to thank Anoop Kunchukuttan, a research scholar from the Centre for Indian Language Technology, IIT Bombay for his valuable input. 2361 References Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org. https://www.tensorflow.org/. Dimitrios Alikaniotis, Helen Yannakoudakis, and Marek Rei. 2016. Automatic text scoring using neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 715–725. Yigal Attali and Jill Burstein. 2004. Automated essay scoring with e-rater R⃝v. 2.0. ETS Research Report Series 2004(2). Regina Barzilay and Mirella Lapata. 2005. Modeling local coherence: An entity-based approach. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05). Association for Computational Linguistics, Ann Arbor, Michigan, pages 141–148. https://doi.org/10.3115/1219840.1219858. George Bebis and Michael Georgiopoulos. 1994. Feed-forward neural networks. IEEE Potentials 13(4):27–31. Philip R Burns. 2013. Morphadorner v2: A java library for the morphological adornment of english language texts. Northwestern University, Evanston, IL . Jacob Cohen. 1968. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological bulletin 70(4):213. Fei Dong and Yue Zhang. 2016. Automatic features for essay scoring – an empirical study. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1072–1077. Rudolph Flesch. 1948. A new readability yardstick. Journal of applied psychology 32(3):221. W Nelson Francis and Henry Kucera. 1979. The brown corpus: A standard corpus of present-day edited american english. 
Providence, RI: Department of Linguistics, Brown University [producer and distributor] . Ana Valeria Gonzalez-Gardu˜no and Anders Søgaard. 2017. Using gaze to predict text readability. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications. Association for Computational Linguistics, Copenhagen, Denmark, pages 438–443. http://www.aclweb.org/anthology/W17-5050. Kilem L Gwet. 2014. Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters. Advanced Analytics, LLC. Michael Alexander Kirkwood Halliday and Ruqaiya Hasan. 1976. Cohesion in english. Longman Group Ltd. John M Henderson and Fernanda Ferreira. 1993. Eye movement control during reading: Fixation measures reflect foveal but not parafoveal processing difficulty. Canadian Journal of Experimental Psychology/Revue canadienne de psychologie exp´erimentale 47(2):201. Kenneth Holmqvist, Marcus Nystr¨om, Richard Andersson, Richard Dewhurst, Halszka Jarodzka, and Joost Van de Weijer. 2011. Eye tracking: A comprehensive guide to methods and measures. OUP Oxford. Aditya Joshi, Abhijit Mishra, Nivvedan Senthamilselvan, and Pushpak Bhattacharyya. 2014. Measuring sentiment annotation complexity of text. In ACL (2). pages 36–41. Marcel A Just and Patricia A Carpenter. 1980. A theory of reading: From eye fixations to comprehension. Psychological review 87(4):329. Sigrid Klerke, H´ector Mart´ınez Alonso, and Anders Søgaard. 2015. Looking hard: Eye tracking for detecting grammaticality of automatically compressed sentences. In Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015). Link¨oping University Electronic Press, Sweden, Vilnius, Lithuania, pages 97– 105. http://www.aclweb.org/anthology/W15-1814. Rensis Likert. 1932. A technique for the measurement of attitudes. Archives of psychology . Edward Loper and Steven Bird. 2002. Nltk: The natural language toolkit. In Proceedings of the ACL-02 Workshop on Effective tools and methodologies for teaching natural language processing and computational linguistics-Volume 1. Association for Computational Linguistics, pages 63–70. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations. pages 55–60. http://www.aclweb.org/anthology/P/P14/P14-5010. 2362 Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. pages 3111–3119. Abhijit Mishra, Pushpak Bhattacharyya, and Michael Carl. 2013. Automatically predicting sentence translation difficulty. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Sofia, Bulgaria, pages 346–351. Abhijit Mishra, Diptesh Kanojia, and Pushpak Bhattacharyya. 2016. Predicting readers’ sarcasm understandability by modeling gaze behavior. In AAAI. pages 3747–3753. Abhijit Mishra, Diptesh Kanojia, Seema Nagar, Kuntal Dey, and Pushpak Bhattacharyya. 2017. Scanpath complexity: Modeling reading effort using gaze information. In AAAI. pages 4429–4436. Abhijit Mishra, Srikanth Tamilselvam, Riddhiman Dasgupta, Seema Nagar, and Kuntal Dey. 2018. 
Cognition-cognizant sentiment analysis with multitask subjectivity summarization based on annotators gaze behavior . Ellis B Page. 1966. The imminence of... grading essays by computer. The Phi Delta Kappan 47(5):238–243. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1532–1543. Isaac Persing, Alan Davis, and Vincent Ng. 2010. Modeling organization in student essays. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Cambridge, MA, pages 229–239. Isaac Persing and Vincent Ng. 2015. Modeling argument strength in student essays. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Beijing, China, pages 543–552. http://www.aclweb.org/anthology/P15-1053. Robert Person. 2013. Blind truth: An examination of grading bias. Emily Pitler, Annie Louis, and Ani Nenkova. 2010. Automatic evaluation of linguistic quality in multidocument summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Uppsala, Sweden, pages 544– 554. http://www.aclweb.org/anthology/P10-1056. Keith Rayner. 1998. Eye movements in reading and information processing: 20 years of research. Psychological bulletin 124(3):372. Joe Cheri Ross, Abhijit Mishra, and Pushpak Bhattacharyya. 2016. Leveraging annotators gaze behaviour for coreference resolution. In Proceedings of the 7th Workshop on Cognitive Aspects of Computational Language Learning. pages 22–26. Mark D Shermis and Jill Burstein. 2013. Handbook of automated essay evaluation: Current applications and new directions. Routledge. Swapna Somasundaran, Jill Burstein, and Martin Chodorow. 2014. Lexical chaining for measuring discourse coherence quality in test-taker essays. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. Dublin City University and Association for Computational Linguistics, Dublin, Ireland, pages 950–961. Radu Soricut and Daniel Marcu. 2006. Discourse generation using utility-trained coherence models. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions. Association for Computational Linguistics, Sydney, Australia, pages 803– 810. http://www.aclweb.org/anthology/P/P06/P062103. Kaveh Taghipour. 2017. Robust Trait-Specific Essay Scoring Using Neural Networks and Density Estimators. Ph.D. thesis. Kaveh Taghipour and Hwee Tou Ng. 2016. A neural approach to automated essay scoring. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1882–1891. Teun Adrianus Van Dijk. 1980. Text and context explorations in the semantics and pragmatics of discourse . Torsten Zesch, Michael Wojatzki, and Dirk ScholtenAkoun. 2015. Task-independent features for automated essay grading. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications. Association for Computational Linguistics, Denver, Colorado, pages 224– 232. http://www.aclweb.org/anthology/W15-0626.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 231–240 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 231 A Stylometric Inquiry into Hyperpartisan and Fake News Martin Potthast Johannes Kiesel Kevin Reinartz Janek Bevendorff Benno Stein Leipzig University [email protected] Bauhaus-Universität Weimar <first>.<last>@uni-weimar.de Abstract We report on a comparative style analysis of hyperpartisan (extremely one-sided) news and fake news. A corpus of 1,627 articles from 9 political publishers, three each from the mainstream, the hyperpartisan left, and the hyperpartisan right, have been fact-checked by professional journalists at BuzzFeed: 97% of the 299 fake news articles identified are also hyperpartisan. We show how a style analysis can distinguish hyperpartisan news from the mainstream (F1 =0.78), and satire from both (F1 =0.81). But stylometry is no silver bullet as style-based fake news detection does not work (F1 =0.46). We further reveal that left-wing and right-wing news share significantly more stylistic similarities than either does with the mainstream. This result is robust: it has been confirmed by three different modeling approaches, one of which employs Unmasking in a novel way. Applications of our results include partisanship detection and pre-screening for semi-automatic fake news detection. 1 Introduction The media and the public are currently discussing the recent phenomenon of “fake news” and its potential role in swaying elections, how it may affect society, and what can and should be done about it. Prone to misunderstanding and misue, the term “fake news” arose from the observation that, in social media, a certain kind of ‘news’ spreads much more successfully than others, and this kind of ‘news’ is typically extremely one-sided (hyperpartisan), inflammatory, emotional, and often riddled with untruths. Although traditional yellow press has been spreading ‘news’ of varying degrees of truthfulness long before the digital revolution, its amplification over real news within social media gives many people pause. The fake news hype caused a widespread disillusionment about social media, and many politicians, news publishers, IT companies, activists, and scientists concur that this is where to draw the line. For all their good intentions, however, it must be drawn very carefully (if at all), since nothing less than free speech is at stake—a fundamental right of every free society. Many favor a two-step approach where fake news items are detected and then countermeasures are implemented to foreclose rumors and to discourage repetition. While some countermeasures are already tried in practice, such as displaying warnings and withholding ad revenue, fake news detection is still in its infancy. At any rate, a nearreal time reaction is crucial: once a fake news item begins to spread virally, the damage is done and undoing it becomes arduous. Since knowledge-based and context-based approaches to fake news detection can only be applied after publication, i.e., as news events unfold and as social interactions occur, they may not be fast enough. We have identified style-based approaches as a viable alternative, allowing for instantaneous reactions, albeit not to fake news, but to hyperpartisanship. 
In this regard we contribute (1) a large news corpus annotated by experts with respect to veracity and hyperpartisanship, (2) extensive experiments on discriminating fake news, hyperpartisan news, and satire based solely on writing style, and (3) validation experiments to verify our finding that the writing style of the left and the right have more in common than any of the two have with the mainstream, applying Unmasking in a novel way. After a review of related work, Section 3 details the corpus and its construction, Section 4 introduces our methodology, and Section 5 reports the results of the aforementioned experiments. 232 2 Related Work Approaches to fake news detection divide into three categories (Figure 1): they can be knowledge-based (by relating to known facts), context-based (by analyzing news spread in social media), and stylebased (by analyzing writing style). Knowledge-based fake news detection. Methods from information retrieval have been proposed early on to determine the veracity of web documents. For example, Etzioni et al. (2008) propose to identify inconsistencies by matching claims extracted from the web with those of a document in question. Similarly, Magdy and Wanas (2010) measure the frequency of documents that support a claim. Both approaches face the challenges of web data credibility, namely expertise, trustworthiness, quality, and reliability (Ginsca et al., 2015). Other approaches rely on knowledge bases, including the semantic web and linked open data. Wu et al. (2014) “perturb” a claim in question to query knowledge bases, using the result variations as indicator of the support a knowledge base offers for the claim. Ciampaglia et al. (2015) use the shortest path between concepts in a knowledge graph, whereas Shi and Weninger (2016) use a link prediction algorithm. However, these approaches are unsuited for new claims without corresponding entries in a knowledge base, whereas knowledge bases can be manipulated (Heindorf et al., 2016). Context-based fake news detection. Here, fake news items are identified via meta information and spread patterns. For example, Long et al. (2017) show that author information can be a useful feature for fake news detection, and Derczynski et al. (2017) attempt to determine the veracity of a claim based on the conversation it sparks on Twitter as one of the RumourEval tasks. The Facebook analysis of Mocanu et al. (2015) shows that unsubstantiated claims spread as widely as well-established ones, and that user groups predisposed to conspiracy theories are more open to sharing the former. Similarly, Acemoglu et al. (2010), Kwon et al. (2013), Ma et al. (2017), and Volkova et al. (2017) model the spread of (mis-)information, while Budak et al. (2011) and Nguyen et al. (2012) propose algorithms to limit its spread. The efficacy of countermeasures like debunking sites is studied by Tambuscio et al. (2015). While achieving good results, context-based approaches suffer from working only a posteriori, requiring large amounts of data, and disregarding the actual news content. Knowledge-based (also called fact checking) Style-based Information retrieval Semantic web / LOD Text categorization Deception detection Context-based Social network analysis Fake news detection Long et al., 2017 Mocanu et al., 2015 Acemoglu et al., 2010 Kwon et al., 2013 Ma et al., 2017 Volkova et al., 2017 Budak et al., 2011 Nguyen et al. 
2012 Derczynski et al., 2017 Tambuscio et al., 2015 Afroz et al., 2012 Badaskar et al., 2008 Rubin et al., 2016 Yang et al., 2017 Rashkin et al., 2017 Horne and Adali, 2017 Pérez-Rosas et al., 2017 Wei et al., 2013 Chen et al., 2015 Rubin et al., 2015 Wang et al., 2017 Bourgonje et al., 2017 Wu et al., 2014 Ciampaglia et al, 2015 Shi and Weninger, 2016 Etzioni et al., 2018 Magdy and Wanas, 2010 Ginsca et al., 2015 Figure 1: Taxonomy of paradigms for fake news detection alongside a selection of related work. Style-based fake news detection. Deception detection originates from forensic linguistics and builds on the Undeutsch hypothesis—a result from forensic psychology which asserts that memories of reallife, self-experienced events differ in content and quality from imagined events (Undeutsch, 1967). The hypothesis led to the development of forensic tools to assess testimonies at the statement level. Some approaches operationalize deception detection at scale to detect uncertainty in social media posts, for example Wei et al. (2013) and Chen et al. (2015). In this regard, Rubin et al. (2015) use rhetorical structure theory as a measure of story coherence and as an indicator for fake news. Recently, Wang (2017) collected a large dataset consisting of sentence-length statements along their veracity from the fact-checking site PolitiFact.com, and then used style features to detect false statements. A related task is stance detection, where the goal is to detect the relation between a claim about an article, and the article itself (Bourgonje et al., 2017). Most prominently, stance detection was the task of the Fake News Challenge1 which ran in 2017 and received 50 submissions, albeit hardly any participants published their approach. 1http://www.fakenewschallenge.org/ 233 Where deception detection focuses on single statements, style-based text categorization as proposed by Argamon-Engelson et al. (1998) assesses entire texts. Common applications are author profiling (age, gender, etc.) and genre classification. Though susceptible to authors who can modify their writing style, such obfuscations may be detectable (e.g., Afroz et al. (2012)). As an early precursor to fake news detection, Badaskar et al. (2008) train models to identify news items that were automatically generated. Currently, text categorization methods for fake news detection focus mostly on satire detection (e.g., Rubin et al. (2016), Yang et al. (2017)). Rashkin et al. (2017) perform a statistical analysis of the stylistic differences between real, satire, hoax, and propaganda news. We make use of their results by incorporating the bestperforming style features identified. Finally, two preprint papers have been recently shared. Horne and Adali (2017) use style features for fake news detection. However, the relatively high accuracies reported must be taken with a grain of salt: their two datasets comprise only 70 news articles each, whose ground-truth is based on where an article came from, instead of resulting from a per-article expert review as in our case; their final classifier uses only 4 features (number of nouns, type-token ratio, word count, number of quotes), which can be easily manipulated; and based on their experimental setup, it cannot be ruled out that the classifier simply differentiates news portals rather than fake and real articles. We avoid this problem by testing our classifiers on articles from portals which were not represented in the training data. Similarly, Pérez-Rosas et al. 
(2017) also report on constructing two datasets comprising around 240 and 200 news article excerpts (i.e., the 5-sentence lead) with a balanced distribution of fake vs. real. The former was collected via crowdsourcing, asking workers to write a fake news item based on a real news item, the latter was collected from the web. For style analysis, the former dataset may not be suitable, since the authors note themselves that “workers succeeded in mimicking the reporting style from the original news”. The latter dataset encompasses only celebrity news (i.e., yellow press), which introduces a bias. Their feature selection follows that of Rubin et al. (2016), which is covered by our experiments, but also incorporates topic features, rendering the resulting classifier not generalizable. 3 The BuzzFeed-Webis Fake News Corpus This section introduces the BuzzFeed-Webis Fake News Corpus 2016, detailing its construction and annotation by professional journalists employed at BuzzFeed, as well as key figures and statistics.2 3.1 Corpus Construction The corpus encompasses the output of 9 publishers on 7 workdays close to the US presidential elections 2016, namely September 19 to 23, 26, and 27. Table 1 gives an overview. Among the selected publishers are six prolific hyperpartisan ones (three left-wing and three right-wing), and three mainstream ones. All publishers earned Facebook’s blue checkmark , indicating authenticity and an elevated status within the network. Every post and linked news article has been fact-checked by 4 BuzzFeed journalists, including about 19% of posts forwarded from third parties. Having checked a total of 2,282 posts, 1,145 mainstream, 471 leftwing, and 666 right-wing, Silverman et al. (2016) reported key insights as a data journalism article. The annotations were published alongside the article.3 However, this data only comprises URLs to the original Facebook posts. To construct our corpus, we archived the posts, the linked articles, and attached media as well as relevant meta data to ensure long-term availability. Due to the rapid pace at which the publishers change their websites, we were able to recover only 1,627 articles, 826 mainstream, 256 left-wing, and 545 right-wing. Manual fact-checking. A binary distinction between fake and real news turned out to be infeasible, since hardly any piece of fake news is entirely false, and pieces of real news may not be flawless. Therefore, posts were rated “mostly true,” “mixture of true and false,” “mostly false,” or, if the post was opinion-driven or otherwise lacked a factual claim, “no factual content.” Four BuzzFeed journalists worked on the manual fact-checks of the news articles: to minimize costs, each article was reviewed only once and articles were assigned round robin. The ratings “mixture of true and false” and “mostly false” had to be justified, and, when in doubt about a rating, a second opinion was collected, whereas disagreements were resolved by a third one. Finally, all news rated “mostly false” underwent a final check to ensure the rating was justified, lest the respective publishers would contest it. 2Corpus download: https://doi.org/10.5281/zenodo.1239675 3http://github.com/BuzzFeedNews/2016-10-facebook-fact-check 234 The journalists were given the following guidance: Mostly true: The post and any related link or image are based on factual information and portray it accurately. 
The authors may interpret the event/info in their own way, so long as they do not misrepresent events, numbers, quotes, reactions, etc., or make information up. This rating does not allow for unsupported speculation or claims. Mixture of true and false (mix, for short): Some elements of the information are factually accurate, but some elements or claims are not. This rating should be used when speculation or unfounded claims are mixed with real events, numbers, quotes, etc., or when the headline of the link being shared makes a false claim but the text of the story is largely accurate. It should also only be used when the unsupported or false information is roughly equal to the accurate information in the post or link. Finally, use this rating for news articles that are based on unconfirmed information. Mostly false: Most or all of the information in the post or in the link being shared is inaccurate. This should also be used when the central claim being made is false. No factual content (n/a, for short): This rating is used for posts that are pure opinion, comics, satire, or any other posts that do not make a factual claim. This is also the category to use for posts that are of the “Like this if you think...” variety. 3.2 Limitations Given the significant workload (i.e., costs) required to carry out the aforementioned annotations, the corpus is restricted to the given temporal period and biased toward the US culture and political landscape, comprising only English news articles from a limited number of publishers. Annotations were recorded at the article level, not at statement level. For text categorization, this is sufficient. At the time of writing, our corpus is the largest of its kind that has been annotated by professional journalists. 3.3 Corpus Statistics Table 1 shows the fact-checking results and some key statistics per article. Unsurprisingly, none of the mainstream articles are mostly false, whereas 8 across all three publishers are a mixture of true and false. Disregarding non-factual articles, a little more than a quarter of all hyperpartisan left-wing articles were found faulty: 15 articles mostly false, and 51 a mixture of true and false. Publisher “The Other 98%” sticks out by achieving an almost perOrientation Fact-checking results Key statistics per article Publisher true mix false n/a Σ Paras. Links Words extern all quoted all Mainstream 806 8 0 12 826 20.1 2.2 3.7 18.1 692.0 ABC News 90 2 0 3 95 21.1 1.0 4.8 21.0 551.9 CNN 295 4 0 8 307 19.3 2.4 2.5 15.3 588.3 Politico 421 2 0 1 424 20.5 2.3 4.3 19.9 798.5 Left-wing 182 51 15 8 256 14.6 4.5 4.9 28.6 423.2 Addicting Info 95 25 8 7 135 15.9 4.4 4.5 30.5 430.5 Occupy Democrats 55 23 6 0 91 10.9 4.1 4.7 29.0 421.7 The Other 98% 32 3 1 1 30 20.2 6.4 7.2 21.2 394.5 Right-wing 276 153 72 44 545 14.1 2.5 3.1 24.6 397.4 Eagle Rising 107 47 25 36 214 12.9 2.6 2.8 17.3 388.3 Freedom Daily 48 24 22 4 99 14.6 2.2 2.3 23.5 419.3 Right Wing News 121 82 25 4 232 15.0 2.5 3.6 33.6 396.6 Σ 1264 212 87 64 1627 17.2 2.7 3.7 20.6 551.0 Table 1: The BuzzFeed-Webis Fake News Corpus 2016 at a glance (“Paras.” short for “paragraphs”). fect score. By contrast, almost 45% of the rightwing articles are a mixture of true and false (153) or mostly false (72). Here, publisher “Right Wing News” sticks out by supplying more than half of mixtures of true and false alone, whereas mostly false articles are equally distributed. 
Regarding key statistics per article, it is interesting that the articles from all mainstream publishers are on average about 20 paragraphs long with word counts ranging from 550 words on average at ABC News to 800 at Politico. Except for one publisher, left-wing articles and right-wing articles are shorter on average in terms of paragraphs as well as word count, averaging at about 420 words and 400 words, respectively. Left-wing articles quote on average about 10 words more than the mainstream, and right-wing articles 6 words more. When articles comprise links, they are usually external ones, whereas ABC News rather uses internal links, and only half of the links found at Politico articles are external. Left-wing news articles stick out by containing almost double the amount of links across publishers than mainstream and right-wing ones. 3.4 Operationalizing Fake News In our experiments, we operationalize the category of fake news by joining the articles that were rated mostly false with those rated a mixture of true and false. Arguably, the latter may not be exactly what is deemed “fake news” (as in: a complete fabrication), however, practice shows fake news are hardly ever devoid of truth. More often, true facts are misconstrued or framed badly. In our experiments, we hence call mostly true articles real news, mostly false plus mixtures of true and false—except for satire—fake news, and disregard all articles rated non-factual. 235 4 Methodology This section covers our methodology, including our feature set to capture writing style, and a brief recap of Unmasking by Koppel et al. (2007), which we employ for the first time to distinguish genre styles as opposed to author styles. For sake of reproducibility, all our code has been published.4 4.1 Style Features and Feature Selection Our writing style model incorporates common features as well as ones specific to the news domain. The former are n-grams, n in [1, 3], of characters, stop words, and parts-of-speech. Further, we employ 10 readability scores5 and dictionary features, each indicating the frequency of words from a tailor-made dictionary in a document, using the General Inquirer Dictionaries as a basis (Stone et al., 1966). The domain-specific features include ratios of quoted words and external links, the number of paragraphs, and their average length. In each of our experiments, we carefully select from the aforementioned features the ones worthwhile using: all features are discarded that are hardly represented in our corpus, namely word tokens that occur in less than 2.5% of the documents, and n-gram features that occur in less than 10% of the documents. Discarding these features prevents overfitting and improves the chances that our model will generalize. If not stated otherwise, our experiments share a common setup. In order to avoid biases from the respective training sets, we balance them using oversampling. Furthermore, we perform 3-fold cross-validation where each fold comprises one publisher from each orientation, so that the classifier does not learn a publisher’s style. For nonUnmasking experiments we use WEKA’s random forest implementation with default settings. 4.2 Unmasking Genre Styles Unmasking, as proposed by Koppel et al. (2007), is a meta learning approach for authorship verification. We study for the first time whether it can be used to assess the similarity of more broadly defined style categories, such as left-wing vs. rightwing vs. mainstream news. 
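To make this adaptation concrete, a minimal sketch of a genre-style Unmasking loop (the procedure itself is detailed in the paragraphs below) might look as follows. It assumes chunk-level frequency vectors over the 250 most frequent words have already been extracted, and all function and parameter names are illustrative rather than the authors' implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def unmasking_curve(chunks_a, chunks_b, n_iterations=15, k_remove=3):
    """Sketch of Unmasking applied to two *sets* of documents (genre styles).

    chunks_a, chunks_b: arrays of shape (n_chunks, n_features), e.g. relative
    frequencies of the 250 most frequent words per chunk. Returns the
    cross-validated accuracy curve; a steep drop suggests similar styles.
    """
    X = np.vstack([chunks_a, chunks_b]).astype(float)
    y = np.array([0] * len(chunks_a) + [1] * len(chunks_b))
    active = np.ones(X.shape[1], dtype=bool)   # features still in play
    curve = []
    for _ in range(n_iterations):
        clf = LinearSVC(max_iter=10000)
        curve.append(cross_val_score(clf, X[:, active], y, cv=5).mean())
        # Remove the most discriminative features on both sides of the
        # separating hyperplane, as in Koppel et al. (2007).
        clf.fit(X[:, active], y)
        weights = clf.coef_[0]
        idx = np.where(active)[0]
        strongest = np.concatenate([idx[np.argsort(weights)[-k_remove:]],
                                    idx[np.argsort(weights)[:k_remove]]])
        active[strongest] = False
    return curve
```

Under this sketch, a steeper decline of the returned accuracy curve across iterations would indicate that the two document sets share a more similar style.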
This way, we uncover relations between the writing styles that people may involuntarily adopt as per their political orientation. 4Code download: http://www.github.com/webis-de/ACL-18 5Automated Readability Index, Coleman Liau Index, Flesh Kincaid Grade Level and Reading Ease, Gunning Fog Index, LIX, McAlpine EFLAW Score, RIX, SMOG Grade, Strain Index Originally, Unmasking takes two documents as input and outputs its confidence whether they have been written by the same author. Three steps are taken to accomplish this: first, each document is chunked into a set of at least 500-word long chunks; second, classification errors are measured while iteratively removing the most discriminative features of a style model consisting of the 250 most frequent words, separating the two chunk sets with a linear classifier; and third, the resulting classification accuracy curves are analyzed with regard to their slope. A steep decrease is more likely than a shallow decrease if the two documents have been written by the same author, since there are presumably less discriminating features between documents written by the same author than between documents written by different authors. Training a classifier on many examples of error curves obtained from same-author document pairs and differentauthor document pairs yields an effective authorship verifier—at least for long documents that can be split up into a sufficient number of chunks. It turns out that what applies to the style of authors also applies to genre styles. We adapt Unmasking by skipping its first step and using two sets of documents (e.g., left-wing articles and rightwing articles) as input. When plotting classification error curves for visual inspection, steeper decreases in these plots, too, indicate higher style similarity of the two input document sets, just as with chunk sets of two documents written by the same author. 4.3 Baselines We employ four baseline models: a topic-based bag of words model, often used in the literature, but less practical since news topics change frequently and drastically; a model using only the domain-specific news style features to check whether the differences between categories measured as corpus statistics play a significant role; and naive baselines that classify all items into one of the categories in question, relating our results to the class distributions. 4.4 Performance Measures Classification performance is measured as accuracy, and class-wise precision, recall, and F1. We favor these measures over, e.g., areas under the ROC curve or the precision recall curve for simplicity sake. Also, the tasks we are tackling are new, so that little is known to date about user preferences. This is also why we chose the evenly-balanced F1. 236 5 Experiments We report on the results of two series of experiments that investigate style differences and similarities between hyperpartisan and mainstream news, and between fake, real, and satire news, shedding light on the following questions: 1. Can (left/right) hyperpartisanship be distinguished from the mainstream? 2. Is style-based fake news detection feasible? 3. Can fake news be distinguished from satire? Our first experiment addressing the first question uncovered an odd behavior of our classifier: it would often misjudge left-wing for right-wing news, while being much better at distinguishing both combined from the mainstream. 
To explain this behavior, we hypothesized that maybe the writing style of the hyperpartisan left and right are more similar to one another than to the mainstream. To investigate this hypothesis, we devised two additional validation experiments, yielding three sources of evidence instead of just one. 5.1 Hyperpartisanship vs. Mainstream A. Predicting orientation. Table 2 shows the classification performance of a ternary classifier trained to discriminate left, right, and mainstream—an obvious first experiment for our dataset. Separating the left and right orientation from the mainstream does not work too well: the topic baseline outperforms the style-based models with regard to accuracy, whereas the results for class-wise precision and recall are a mixed bag. The left-wing articles are apparently significantly more difficult to be identified compared to articles from the other two orientations. When we inspected the confusion matrix (not shown), it turned out that 66% of misclassifications of left-wing articles are falsely classified as right-wing articles, whereas 60% of all misclassified right-wing articles are classified as mainstream articles. Misclassified mainstream articles spread almost evenly across the other classes. The poor performance of the domain-specific news style features by themselves demonstrate that orientation cannot be discriminated based on the basic corpus characteristics observed with respect to paragraphs, quotations, and hyperlinks. This holds for all subsequent experiments. B. Predicting hyperpartisanship. Given the apparent difficulty of telling apart individual orientations, we did not frantically add features or switch classifiers to make it work. Rather, we trained a binary Features Accuracy Precision Recall F1 all left right main. left right main. left right main. Style 0.60 0.21 0.56 0.75 0.20 0.59 0.74 0.20 0.57 0.75 Topic 0.64 0.24 0.62 0.72 0.15 0.54 0.86 0.19 0.58 0.79 News style 0.39 0.09 0.35 0.59 0.14 0.36 0.49 0.11 0.36 0.53 All-left 0.16 0.16 1.00 0.0 0.0 0.27 All-right 0.33 0.33 0.0 1.00 0.0 0.50 All-main. 0.51 0.51 0.0 0.0 1.00 0.68 Table 2: Performance of predicting orientation. Features Accuracy Precision Recall F1 all hyp. main. hyp. main. hyp. main. Style 0.75 0.69 0.86 0.89 0.62 0.78 0.72 Topic 0.71 0.66 0.79 0.83 0.60 0.74 0.68 News style 0.56 0.54 0.58 0.65 0.47 0.59 0.52 All-hyp. 0.49 0.49 1.00 0.0 0.66 All-main. 0.51 0.51 0.0 1.00 0.68 Table 3: Performance of predicting hyperpartisanship. Features Left Right Trained on: right+main. all left+main. all Style 0.74 0.90 0.66 0.89 Topic 0.68 0.79 0.48 0.85 News style 0.52 0.61 0.47 0.66 Table 4: Ratio of left articles misclassified right when omitting left articles from training, and vice versa. classifier to discriminate hyperpartisanship in general from the mainstream. Table 3 shows the performance values. This time, the best classification accuracy of 0.75 at a remarkable 0.89 recall for the hyperpartisan class is achieved by the style-based classifier, outperforming the topic baseline. Comparing Table 2 and Table 3, we were left with a riddle: all other things being equal, how could it be that hyperpartisanship in general can be much better discriminated from the mainstream than individual orientation? Attempts to answer this question gave rise to our aforementioned hypothesis that, perhaps, the writing style of hyperpartisan left and right are not altogether different, despite their opposing agendas. 
Or put another way, if style and topic are orthogonal concepts, then being an extremist should not exert a different style dependent on political orientation. Excited, we sought ways to independently disprove the hypothesis, and found two: Experiments C and D. C. Validation using leave-out classification. If leftwing and right-wing articles have a more similar style than either of them compared to mainstream articles, then what class would a binary classifier assign to a left-wing article, if it were trained to distinguish only the right-wing from the mainstream, and vice versa? Table 4 shows the results of this experiment. As indicated by proportions well above 0.50, full style-based classifiers have a tendency of clas237 left vs right mainstream vs left mainstream vs right 0.0 0.2 0.4 0.6 Nomralized accuracy 0 3 6 9 12 15 Iterations Figure 2: Unmasking applied to pairs of political orientations. The steeper a curve, the more similar the respective styles. sifying left as right and right as left. The topic baseline, though, gets confused especially when omitting right articles from the training set with performance close to random. The fact that the topic baseline works better when omitting left from the training set may be explainable: leading up to the elections, the hyperpartisan left was often merely reacting to topics prompted by the hyperpartisan right, instead of bringing up their own. D. Validation using Unmasking. Based on Koppel et al.’s original approach in the context of authorship verification, for the first time, we generalize Unmasking to assess genre styles: just like author style similarity, genre style similarity will be characterized by the slope of a given Unmasking curve, where a steeper decrease indicates higher similarity. We apply Unmasking as described in Section 4.2 onto pairs of sets of left, right, and mainstream articles. Figure 2 shows the resulting Unmasking curves (Unmasking is symmetrical, hence three curves). The curves are averaged over 5 runs, where each run comprised sets of 100 articles from each orientation. In case of the left-wing orientation, where less than 500 articles are available in our corpus, once all of them had been used, they were shuffled again to select articles for the remainder of the runs. As can be seen, the curve comparing left vs. right has a distinctly steeper slope than either of the others. This result hence matches the findings of the previous experiments. With caution, we conclude that the evidence gained from our three independent experimental setups supports our hypothesis that the hyperpartisan left and the hyperpartisan right have more in common in terms of writing style than any of the two have with the mainstream. Another more tangible (e.g., practical) outcome of Experiment B is the finding that hyperpartisan news can apparently be 0.0 0.2 0.4 0.6 Nomralized accuracy 0 3 6 9 12 15 Iterations fake vs real fake vs satire real vs satire Figure 3: Unmasking applied to pairs of sets of news that are fake, real, and satire. discriminated well from the mainstream: in particular the high recall of 0.89 at a reasonable precision of 0.69 gives us confidence that, with some further effort, a practical classifier can be built that detects hyperpartisan news at scale and in real time, since an article’s style can be assessed immediately without referring to external information. 5.2 Fake vs. Real (vs. Satire) This series of experiments targets research questions (2) and (3). 
Again, we conduct three experiments, where the first is about predicting veracity, and the last two about discriminating satire. A. Predicting veracity. When taking into account that the mainstream news publishers in our corpus did not publish any news items that are mostly false, and only very few instances that are mixtures of true and false, we may safely disregard them for the task of fake news detection. A reliable classifier for hyperpartisan news can act as a prefilter for a subsequent, more in-depth fake news detection approach, which may in turn be tailored to a much more narrowly defined classification task. We hence use only the left-wing articles and the right-wing articles of our corpus for our attempt at a style-based fake news classifier. Table 5 shows the performance values for a generic classifier that predicts fake news across orientations, and orientation-specific classifiers that have been individually trained on articles from either orientation. Although all classifiers outperform the naive baselines of classifying everything into one of the classes in terms of precision, the slight increase comes at the cost of a large decrease in recall. While the orientation-specific classifiers are slightly better for most metrics, none of them outperform the naive baselines regarding the FMeasure. We conclude that style-based fake news classification simply does not work in general. 238 Features Accuracy Precision Recall F1 all fake real fake real fake real Generic classifier Style 0.55 0.42 0.62 0.41 0.64 0.41 0.63 Topic 0.52 0.41 0.62 0.48 0.55 0.44 0.58 Orientation-specific classifier Style 0.55 0.43 0.64 0.49 0.59 0.46 0.61 Topic 0.58 0.46 0.65 0.45 0.66 0.46 0.66 All-fake 0.39 0.39 1.00 0.0 0.56 All-real 0.61 0.61 0.0 1.00 0.76 Table 5: Performance of predicting veracity. Features Accuracy Precision Recall F1 all sat. real sat. real sat. real Style 0.82 0.84 0.80 0.78 0.85 0.81 0.82 Topic 0.77 0.78 0.75 0.74 0.79 0.76 0.77 All-sat. 0.50 0.50 1.00 0.0 0.67 All-real 0.50 0.50 0.00 1.00 0.67 Rubin et al. n/a 0.90 n/a 0.84 n/a 0.87 n/a Table 6: Performance of predicting satire (sat.). B. Predicting satire. Yet, not all fake news are the same. One should distinguish satire from the rest, which takes the form of news but lies more or less obviously to amuse its readers. Regardless the problems that spreading fake news may cause, satire should never be filtered, but be discriminated from other fakes. Table 6 shows the performance values of our classifier in the satire-detection setting used by Rubin et al. (2016) (the S-n-L News DB corpus), distinguishing satire from real news. This setting uses a balanced 3:1 training-to-test set split over 360 articles (180 per class). As can be seen, our style-based model significantly outperforms all baselines across the board, achieving an accuracy of 0.82, and an F score of 0.81. It clearly improves over topic classification, but does not outperform Rubin et al.’s classifier, which includes features based on topic, absurdity, grammar, and punctuation. We argue that incorporating topic into satire detection is not appropriate, since the topics of satire change along the topics of news. A classifier with topic features therefore does not generalize. Apparently, a style-based model is competitive, and we believe that satire can be detected at scale this way, so as to prevent other fake news detection technology from falsely filtering it. C. Unmasking satire. 
Given the above results on stylistic similarities between left and right news, the question remains how satire fits into the picture. We assess the style similarity of satire from Rubin et al.’s corpus compared to fake news and real news from ours, again applying Unmasking to compare pairs of the three categories of news as described above. Figure 3 shows the resulting Unmasking curves. The curve for the pair of fake vs. real news drops faster compared to the other two pairs. Apparently, the style of fake news has more in common with that of real news than either of the two have with satire. These results are encouraging: satire is distinct enough from fake and real news, so that, just like with hyperpartisan news compared to mainstream news, it can be discriminated with reasonable accuracy. 6 Conclusion Fact-checking for fake news detection poses an interdisciplinary challenge: technology is required to extract factual statements from text, to match facts with a knowledge base, to dynamically retrieve and maintain knowledge bases from the web, to reliably assess the overall veracity of an entire article rather than individual statements, to do so in real time as news events unfold, to monitor the spread of fake news within and across social media, to measure the reputation of information sources, and to raise awareness in readers. These are only the most salient things that need be done to tackle the problem, and as our cross-section of related work shows, a large body of work must be covered. Notwithstanding the many attacks on fake news by developing one way or another of fact-checking, we believe it worthwhile to mount our attack from another angle: writing style. We show that news articles conveying a hyperpartisan world view can be distinguished from more balanced news by writing style alone. Moreover, for the first time, we found quantifiable evidence that the writing styles of news of the two opposing orientations are in fact very similar: there appears to be a common writing style of left and right extremism. We further show that satire can be distinguished well from other news, ensuring that humor will not be outcast by fake news detection technology. All of these results offer new, tangible, short-term avenues of development, lest large-scale fact-checking is still far out of reach. Employed as pre-filtering technologies to separate hyperpartisan news from mainstream news, our approach allows for directing the attention of human fact checkers to the most likely sources of fake news. Acknowledgements We thank Craig Silverman, Lauren Strapagiel, Hamza Shaban, Ellie Hall, and Jeremy Singer-Vine from BuzzFeed for making their data available, enabling our research. 239 References Daron Acemoglu, Asuman Ozdaglar, and Ali ParandehGheibi. 2010. Spread of (Mis)Information in Social Networks. Games and Economic Behavior, 70(2):194–227. Sadia Afroz, Michael Brennan, and Rachel Greenstadt. 2012. Detecting Hoaxes, Frauds, and Deception in Writing Style Online. In 2012 IEEE Symposium on Security and Privacy, pages 461–475. Shlomo Argamon-Engelson, Moshe Koppel, and Galit Avneri. 1998. Style-based text categorization: What newspaper am i reading. In Proc. of the AAAI Workshop on Text Categorization, pages 1–4. Sameer Badaskar, Sachin Agarwal, and Shilpa Arora. 2008. Identifying real or fake articles: Towards better language modeling. In Third International Joint Conference on Natural Language Processing, IJCNLP 2008, Hyderabad, India, January 7-12, 2008, pages 817–822. 
The Association for Computer Linguistics. Peter Bourgonje, Julián Moreno Schneider, and Georg Rehm. 2017. From clickbait to fake news detection: An approach based on detecting the stance of headlines to articles. In Proceedings of the 2017 Workshop: Natural Language Processing meets Journalism, NLPmJ@EMNLP, Copenhagen, Denmark, September 7, 2017, pages 84–89. Ceren Budak, Divyakant Agrawal, and Amr El Abbadi. 2011. Limiting the spread of misinformation in social networks. In Proceedings of the 20th International Conference on World Wide Web, WWW ’11, pages 665–674, New York, NY, USA. ACM. Yimin Chen, Niall J. Conroy, and Victoria L. Rubin. 2015. News in an Online World: The Need for an "Automatic Crap Detector". In Proceedings of the 78th ASIS&T Annual Meeting: Information Science with Impact: Research in and for the Community, ASIST ’15, pages 81:1–81:4, Silver Springs, MD, USA. American Society for Information Science. Giovanni Luca Ciampaglia, Prashant Shiralkar, Luis M Rocha, Johan Bollen, Filippo Menczer, and Alessandro Flammini. 2015. Computational Fact Checking from Knowledge Networks. PloS one, 10(6):e0128193. Leon Derczynski, Kalina Bontcheva, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Arkaitz Zubiaga. 2017. Semeval-2017 task 8: Rumoureval: Determining rumour veracity and support for rumours. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval@ACL 2017, Vancouver, Canada, August 3-4, 2017, pages 69–76. Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S. Weld. 2008. Open Information Extraction from the Web. Commun. ACM, 51(12):68–74. Alexandru L. Ginsca, Adrian Popescu, and Mihai Lupu. 2015. Credibility in Information Retrieval. Found. Trends Inf. Retr., 9(5):355–475. Stefan Heindorf, Martin Potthast, Benno Stein, and Gregor Engels. 2016. Vandalism Detection in Wikidata. In Proceedings of the 25th ACM International Conference on Information and Knowledge Management (CIKM 16), pages 327–336. ACM. Benjamin D. Horne and Sibel Adali. 2017. This just in: Fake news packs a lot in title, uses simpler, repetitive content in text body, more similar to satire than real news. CoRR, abs/1703.09398. Moshe Koppel, Jonathan Schler, and Elisheva Bonchek-Dokow. 2007. Measuring differentiability: Unmasking pseudonymous authors. J. Mach. Learn. Res., 8:1261–1276. Sejeong Kwon, Meeyoung Cha, Kyomin Jung, Wei Chen, and Yajun Wang. 2013. Prominent Features of Rumor Propagation in Online Social Media. In Data Mining (ICDM), 2013 IEEE 13th International Conference on, pages 1103–1108. IEEE. Yunfei Long, Qin Lu, Rong Xiang, Minglei Li, and Chu-Ren Huang. 2017. Fake news detection through multi-perspective speaker profiles. In Proceedings of the Eighth International Joint Conference on Natural Language Processing, IJCNLP 2017, Taipei, Taiwan, November 27 - December 1, 2017, Volume 2: Short Papers, pages 252–256. Jing Ma, Wei Gao, and Kam-Fai Wong. 2017. Detect rumors in microblog posts using propagation structure via kernel learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 August 4, Volume 1: Long Papers, pages 708–717. Amr Magdy and Nayer Wanas. 2010. Web-based Statistical Fact Checking of Textual Documents. In Proceedings of the 2Nd International Workshop on Search and Mining User-generated Contents, SMUC ’10, pages 103–110, New York, NY, USA. ACM. Delia Mocanu, Luca Rossi, Qian Zhang, Marton Karsai, and Walter Quattrociocchi. 2015. 
Collective Attention in the Age of (Mis)Information. Comput. Hum. Behav., 51(PB):1198–1204. Nam P. Nguyen, Guanhua Yan, My T. Thai, and Stephan Eidenbenz. 2012. Containment of Misinformation Spread in Online Social Networks. In Proceedings of the 4th Annual ACM Web Science Conference, WebSci ’12, pages 213–222, New York, NY, USA. ACM. Verónica Pérez-Rosas, Bennett Kleinberg, Alexandra Lefevre, and Rada Mihalcea. 2017. Automatic detection of fake news. CoRR, abs/1708.07104. Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2931–2937. 240 Victoria Rubin, Niall Conroy, and Yimin Chen. 2015. Towards News Verification: Deception Detection Methods for News Discourse. In Proceedings of the Hawaii International Conference on System Sciences (HICSS48) Symposium on Rapid Screening Technologies, Deception Detection and Credibility Assessment Symposium, Kauai, Hawaii, USA. Victoria Rubin, Niall Conroy, Yimin Chen, and Sarah Cornwell. 2016. Fake News or Truth? Using Satirical Cues to Detect Potentially Misleading News. In Proceedings of the Second Workshop on Computational Approaches to Deception Detection, pages 7–17, San Diego, California. Association for Computational Linguistics. Baoxu Shi and Tim Weninger. 2016. Fact Checking in Heterogeneous Information Networks. In Proceedings of the 25th International Conference Companion on World Wide Web, WWW ’16 Companion, pages 101–102, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee. Craig Silverman, Lauren Strapagiel, Hamza Shaban, Ellie Hall, and Jeremy Singer-Vine. 2016. Hyperpartisan Facebook Pages are Publishing False and Misleading Information at an Alarming Rate. https://www.buzzfeed.com/craigsilverman/partisan-fbpages-analysis. BuzzFeed. Philip J. Stone, Dexter C. Dunphy, and Marshall S. Smith. 1966. The General Inquirer: A Computer Approach to Content Analysis. MIT press. Marcella Tambuscio, Giancarlo Ruffo, Alessandro Flammini, and Filippo Menczer. 2015. Fact-checking Effect on Viral Hoaxes: A Model of Misinformation Spread in Social Networks. In Proceedings of the 24th International Conference on World Wide Web, WWW ’15 Companion, pages 977–982, New York, NY, USA. ACM. Udo Undeutsch. 1967. Beurteilung der glaubhaftigkeit von aussagen. Handbuch der Psychologie, 11:26–181. Svitlana Volkova, Kyle Shaffer, Jin Yea Jang, and Nathan Oken Hodas. 2017. Separating facts from fiction: Linguistic models to classify suspicious and trusted news posts on twitter. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 2: Short Papers, pages 647–653. William Yang Wang. 2017. "liar, liar pants on fire": A new benchmark dataset for fake news detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 2: Short Papers, pages 422–426. Zhongyu Wei, Junwen Chen, Wei Gao, Binyang Li, Lanjun Zhou, Yulan He, and Kam-Fai Wong. 2013. An empirical study on uncertainty identification in social media context. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 58–62, Sofia, Bulgaria. 
Association for Computational Linguistics. You Wu, Pankaj K. Agarwal, Chengkai Li, Jun Yang, and Cong Yu. 2014. Toward Computational Fact-checking. Proc. VLDB Endow., 7(7):589–600. Fan Yang, Arjun Mukherjee, and Eduard Constantin Dragut. 2017. Satirical news detection and analysis using attention mechanism and linguistic features. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1979–1989.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2363–2372 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2363 Multi-Input Attention for Unsupervised OCR Correction Rui Dong David A. Smith College of Computer Information and Science Northeastern University {dongrui, dasmith}@ccs.neu.edu Abstract We propose a novel approach to OCR post-correction that exploits repeated texts in large corpora both as a source of noisy target outputs for unsupervised training and as a source of evidence when decoding. A sequence-to-sequence model with attention is applied for single-input correction, and a new decoder with multi-input attention averaging is developed to search for consensus among multiple sequences. We design two ways of training the correction model without human annotation, either training to match noisily observed textual variants or bootstrapping from a uniform error model. On two corpora of historical newspapers and books, we show that these unsupervised techniques cut the character and word error rates nearly in half on single inputs and, with the addition of multi-input decoding, can rival supervised methods. 1 Introduction Optical character recognition (OCR) software has made vast quantities of printed material available for retrieval and analysis, but severe recognition errors in corpora with low quality of printing and scanning or physical deterioration often hamper accessibility (Chiron et al., 2017). Many digitization projects have employed manual proofreading to further correct OCR output (Holley, 2009), but this is time consuming and depends on fostering a community of volunteer workers. These problems with OCR are exacerbated in library-scale digitization by commercial (e.g., Google Books, Newspapers.com), government (e.g., Library of Congress, Biblioth`eque nationale de France), and nonprofit (e.g., Internet Archive) organizations. The scale of these projects not only makes it difficult to adapt OCR models to their diverse layouts and typefaces but also makes it impractical to present any OCR output other than a single-best transcript. Existing methods for automatic OCR postcorrection are mostly supervised methods that correct recognition errors in a single OCR output (Kolak and Resnik, 2002; Kolak et al., 2003; Yamazoe et al., 2011). Those systems are not scalable since human annotations are expensive to acquire, and they are not capable of utilizing complementary sources of information. Another line of work is ensemble methods (Lund et al., 2013, 2014) combining OCR results from multiple scans of the same document. Most of these ensemble methods, however, require aligning multiple OCR outputs (Lund and Ringger, 2009; Lund et al., 2011), which is intractable in general and might introduce noise into the later correction stage. Furthermore, voting-based ensemble methods (Lund and Ringger, 2009; Wemhoener et al., 2013; Xu and Smith, 2017) only work where the correct output exists in one of the inputs, while classification methods (Boschetti et al., 2009; Lund et al., 2011; Al Azawi et al., 2015) are also trained on human annotations. To address these challenges, we propose an unsupervised OCR post-correction framework both to correct single input text sequences and also to exploit multiple candidate texts by simultaneously aligning, correcting, and voting among input sequences. 
Our proposed method is based on the observation that a significant number of duplicate and near-duplicate documents exist in many corpora (Xu and Smith, 2017), resulting in OCR output containing repeated texts of varying quality. As shown by the example in Table 1, different errors (characters in red) are introduced when the OCR system scans the same text in multiple editions, each with its own layout, fonts, etc. For example, "in" is recognized as "m" in the first output and "a" is recognized as "u" in the third output, while the second output is correctly recognized. Therefore, duplicated texts with diverse errors can serve as complementary information sources for each other.

OCR output:     eor**y that I have been slam in battle, for 1
                sorry that I have been slain in battle, for I
                sorry tha' I have been s uin in battle, f r I
Original text:  sorry that I have been slain in battle, for I

Table 1: Example duplicate texts in OCR'd digital corpora.

In this paper, we aim to train an unsupervised correction model by utilizing the duplication in OCR output. We propose to map each erroneous OCR'd text unit to either its high-quality duplicate or a consensus correction among its duplicates, bootstrapping from a uniform error model. The baseline correction system is a sequence-to-sequence model with attention (Bahdanau et al., 2015), which has been shown to be effective in text correction tasks (Chollampatt et al., 2016; Xie et al., 2016). We also seek to improve correction performance for duplicated texts by integrating multiple inputs. Previous work on combining multiple inputs in neural translation deals with data from different domains, e.g., multilingual (Zoph and Knight, 2016) or multimodal (Libovický and Helcl, 2017) data. Therefore, those models need to be trained on multiple inputs to learn parameters for combining inputs from each domain. Given that the inputs of our task are all from the same domain, our model is trained on a single input and introduces multi-input attention to generate a consensus result merely for decoding. It does not require learning extra parameters for attention combination and is thus more efficient to train. Furthermore, average attention combination, a simple multi-input attention mechanism, is proposed to improve both the effectiveness and efficiency of multi-input combination on the OCR post-correction task.

We experiment with both supervised and unsupervised training and with single- and multi-input decoding on data from two manually transcribed collections in English with diverse typefaces, genres, and time periods: newspaper articles from the Richmond (Virginia) Daily Dispatch (RDD) from 1860–1865 and books from 1500–1800 from the Text Creation Partnership (TCP). For both collections, which were manually transcribed by other researchers and are in the public domain, we aligned the one-best output of an OCR system to the manual transcripts. We also aligned the OCR in the training and evaluation sets to other public-domain newspaper issues (from the Library of Congress) and books (from the Internet Archive) to find multiple duplicates as "witnesses", where available, for each line. Experimental results on both datasets show that our proposed average attention combination mechanism is more effective than existing methods at integrating multiple inputs. Moreover, our noisy error correction model achieves performance comparable to the supervised model via multi-input decoding on duplicated texts.
In summary, our contributions are: (1) a scalable framework needing no supervision from human annotations to train the correction model; (2) a multi-input attention mechanism incorporating aligning, correcting, and voting on multiple sequences simultaneously for consensus decoding, which is more efficient and effective than existing ensemble methods; and (3) a method that corrects text either with or without duplicated versions, while most existing methods can only deal with one of these cases. 2 Data Collection We perform experiments on one-best OCR output from two sources: two million issues from the Chronicling America collection of historic U.S. newspapers, which is the largest public-domain full-text collection in the Library of Congress;1 and three million public-domain books in the Internet Archive.2 For supervised training and for evaluation, we aligned manually transcribed texts to these onebest OCR transcripts: 1384 issues of the Richmond (Virginia) Daily Dispatch from 1860–1865 (RDD)3 and 934 books from 1500–1800 from the 1chroniclingamerica.loc.gov: Historical newspapers also constitute the largest digitized text collections in the Australian National Library (Trove) and the Europeana consortium. 2https://archive.org/details/texts. Google Books and the Hathi Trust consortium also hold many in-copyright books and require licensing agreements to access public-domain materials. 3dlxs.richmond.edu/d/ddr/: the transcription from the University of Richmond includes all articles but only some advertisements. 2365 Text Creation Partnership (TCP).4 Both of these manually transcribed collections, which were produced independently from the current authors, are in the public domain and in English, although both Chronicling America and the Internet Archive also contain much non-English text. To get more evidence for the correct reading of an OCR’d line, we aligned each OCR’d RDD issue to other issues of the RDD and other newspapers from Chronicling America and aligned each OCR’d TCP page to other pre-1800 books in the Internet Archive. To perform these alignments between noisy OCR transcripts efficiently, we used methods from our earlier work on text-reuse analysis (Smith et al., 2014; Wilkerson et al., 2015). An inverted index of hashes of word 5-grams was produced, and then all pairs from different pages in the same posting list were extracted. Pairs of pages with more than five shared hashed 5-grams were aligned with the Smith-Waterman algorithm with equal costs for insertion, deletion, and substitution, which returns a maximally aligned subsequence in each pair of pages (Smith and Waterman, 1981). Aligned passages that were at least five lines long in the target RDD or TCP text were output. For each target OCR line—i.e., each line in the training or test set—there are thus, in addition to the ground-truth manual transcript, zero or more witnesses from similar texts, to use the term from textual criticism. In our experiments on OCR correction, each training and test example is a line of text following the layout of the scanned image documents5. The average number of characters per line is 42.4 for the RDD newspapers and 53.2 for the TCP books. Table 2 lists statistics for the number of OCR’d text lines with manual transcriptions and additional witnesses. 43% of the manually transcribed lines have witnesses in the RDD newspapers, and 64% of them have witnesses in the TCP books. In the full Chronicling America data, 44% of lines align to at least one other witness. 
Although not all OCR collections will have this level of repetition, it is notable that these collections, which are some of the largest public-domain digital libraries, do exhibit this kind of reprinting. Similarly, at least 25% of the pages in Google's web crawls are duplicates (Henzinger, 2006). Although we exploit text reuse, where available, to improve decoding and unsupervised training, we also show (Table 5) significant improvements to OCR accuracy with only a single transcript.

4 www.textcreationpartnership.org
5 The datasets can be downloaded from http://www.ccs.neu.edu/home/dongrui/ocr.html

Table 2: Statistics for the number of OCR'd lines in millions (M) from the Richmond Dispatch and TCP Books with manual transcriptions (Column 1) or with both transcriptions and multiple witnesses (Column 2).
Dataset   # Lines w/manual   # Lines w/manual & witnesses
RDD       2.2M               0.95M (43%)
TCP       8.6M               5.5M (64%)

3 Methods

In this section, we first define our problem in §3.1, followed by the model description. In general, we train an OCR error correction model via an attention-based RNN encoder-decoder, which takes a single erroneous OCR'd line as input and outputs the corrected text (§3.2). At decoding time, multi-input attention combination strategies are introduced to allow the decoder to integrate information from multiple inputs (§3.3). Finally, we discuss several unsupervised settings for training the correction model in §3.4.

3.1 Problem Definition

Given a line of OCR'd text x, comprising the sequence of characters $[x_1, \cdots, x_{T_S}]$, our goal is to map it to an error-free text $y = [y_1, \cdots, y_{T_T}]$ by modeling $p(y|x)$. Given $p(y|x)$, we also seek to model $p(y|X)$ to search for consensus among duplicated texts $X$, where $X = [x_1, \cdots, x_N]$ are duplicated lines of OCR'd text.

3.2 Attention-based Seq2Seq Model

Similar to previous work (Bahdanau et al., 2015), the encoder is a bidirectional RNN (Schuster and Paliwal, 1997) that converts the source sequence $x = [x_1, \cdots, x_{T_S}]$ into a sequence of RNN states $h = [h_1, \cdots, h_{T_S}]$, where $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$ is a concatenation of the forward and backward hidden states at time step $i$ ($1 \le i \le T_S$). We have

$\overrightarrow{h}_i = f(x_i, \overrightarrow{h}_{i-1}); \quad \overleftarrow{h}_i = f(x_i, \overleftarrow{h}_{i+1}),$  (1)

where $f$ is the dynamic function, e.g., LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Cho et al., 2014). The decoder RNN predicts the output sequence $y = [y_1, \cdots, y_{T_T}]$ through the following dynamics and prediction model:

$s_t = f(y_{t-1}, s_{t-1}, c_t);$  (2)

$p(y_t | y_{<t}, x) = g(y_{t-1}, s_t, c_t),$  (3)

where $s_t$ is the RNN state and $c_t$ is the context vector at time $t$. $y_t$ is the symbol predicted from the target vocabulary at time $t$ via the prediction function $g(\cdot)$. The context vector is given as a linear combination of the encoder hidden states:

$c_t = \sum_{i=1}^{T_S} \alpha_{t,i} h_i; \quad \alpha_{t,i} = \frac{e^{\eta(s_{t-1}, h_i)}}{\sum_{\tau} e^{\eta(s_{t-1}, h_\tau)}},$  (4)

where $\alpha_{t,i}$ is the weight for each hidden state $h_i$ and $\eta$ is the function that computes the strength of each encoder hidden state according to the current decoder hidden state. The loss function is the cross-entropy loss per time step summed over the output sequence $y$:

$L(x, y) = -\sum_{t=1}^{T_T} \log p(y_t | x, y_{<t}).$  (5)

3.3 Multi-input Attention

Given a trained Seq2Seq model $p(y|x)$, our goal is to combine multiple input sequences $X$ to generate the target sequence $y$, i.e., to utilize information from multiple sources at decoding time. Assume that $N$ relevant source sequences $X = [x_1, \cdots, x_N]$ are observed, where each sequence $x_l = [x_{l,1}, \cdots, x_{l,T_l}]$ ($1 \le l \le N$) and $T_l$ is the length of the $l$-th sequence. Then, a sequence of hidden states $h_l = [h_{l,1}, \cdots, h_{l,T_l}]$ is generated by the encoder network for each input sequence $x_l$. At each decoding time step $t$, the decoder searches through the encoder hidden states $H = [h_1, \cdots, h_N]$ to compute a global context vector $c_t$. Different strategies to combine attention from multiple encoders are described as follows.

Flat Attention Combination. Flat attention combination assigns a weight $\alpha_{t,l,i}$ to each encoder hidden state $h_{l,i}$ of each input sequence $x_l$ as:

$\alpha_{t,l,i} = \frac{e^{\eta(s_{t-1}, h_{l,i})}}{\sum_{l'=1}^{N} \sum_{\tau=1}^{T_{l'}} e^{\eta(s_{t-1}, h_{l',\tau})}}.$  (6)

Therefore, the global context vector is given by

$c_t = \sum_{l=1}^{N} \sum_{i=1}^{T_l} \alpha_{t,l,i} h_{l,i}.$  (7)

Flat attention combination is similar to single-input decoding in that it concatenates all inputs into one long sequence, except that the encoder hidden states are computed independently for each input.

Hierarchical Attention Combination. The structure of hierarchical attention combination is presented in Figure 1. We first compute a context vector for each input as:

$c_{t,l} = \sum_{i=1}^{T_l} \alpha_{t,l,i} h_{l,i}; \quad \alpha_{t,l,i} = \frac{e^{\eta(s_{t-1}, h_{l,i})}}{\sum_{\tau=1}^{T_l} e^{\eta(s_{t-1}, h_{l,\tau})}}.$  (8)

Then a global context vector $c_t$ is computed as a weighted sum of all the context vectors:

$c_t = \sum_{l=1}^{N} \beta_{t,l} c_{t,l},$  (9)

where $\beta_{t,l}$ is the weight assigned to each context vector $c_{t,l}$ and is computed in one of the following ways:

(a) Weighted Attention Combination. In weighted attention combination, the weight for each context vector is given by its dot product with the decoder state in the transformed common space:

$\beta_{t,l} = \frac{e^{\eta(s_{t-1}, c_{t,l})}}{\sum_{l'=1}^{N} e^{\eta(s_{t-1}, c_{t,l'})}}.$  (10)

(b) Average Attention Combination. In average attention combination, each input sequence is treated as equally weighted, i.e., $\beta_{t,l} = \frac{1}{N}$ for each input sequence $x_l$. It is more efficient than weighted attention combination in that it does not need to compute a weight for each input.

Figure 1: Hierarchical attention combination.

These attention-combination methods do not have parameters trained on multiple inputs and are only introduced at decoding time. In contrast, Libovický and Helcl (2017) and Zoph and Knight (2016) introduce parameters for each type of input and require training and decoding with the same number of inputs.

3.4 Training Settings

In this section, we introduce different settings for training our correction model, a single-input attention-based Seq2Seq model (§3.2), which transforms each OCR'd text line into a corrected version generated via different mechanisms.

Supervised Training. In this setting, the correction model is trained to map each OCR'd line to the corresponding manual transcription, i.e., the human annotation. We call the correction model trained in this setting Seq2Seq-Super.

Unsupervised Training. In the absence of ground-truth transcriptions, we can use different methods to generate a noisy corrected version for each OCR'd line.

(a) Noisy Training. In this setting, the correction model is trained to transform each OCR'd text line into a selected high-quality witness. The quality of the witnesses is measured by a 5-gram character language model built on the New York Times Corpus (Sandhaus, 2008) with the KenLM toolkit (Heafield, 2011). For each OCR'd line with multiple witnesses, a score is assigned to each witness by the language model and divided by the number of characters in the witness to reduce the effect of its length. The witness with the highest score is then chosen as the noisy ground truth for the line. Lines for which all witnesses have low scores are removed.
We call the correction model trained in this setting Seq2Seq-Noisy. (b) Synthetic Training. In this setting, the error correction model is trained to recover a manually corrupted out-of-domain corpus. We construct the synthetic dataset by injecting uniformly distributed insertion, deletion and substitution errors into the New York Times corpus. Firstly, the news articles are split into lines with random length between [1, 70] following a Gaussian distribution N(45, 5), which is similar to that of the real world dataset. Then, a certain number of lines are randomly selected and injected with equal number of insertion, deletion and substitution errors. The correction model is then trained to recover the original line from each corrupted line. We call this model Seq2Seq-Syn. (c) Synthetic Training with Bootstrapping. In this setting, we propose to further improve the performance of synthetic training via bootstrapping. The correction model trained on synthetic dataset does not perform well when correcting a given input from real world dataset, due to their difference in error distributions. But it achieves comparable performance with the supervised model when decoding lines with multiple witnesses, since the model could further benefit from jointly aligning and voting among multiple inputs. Thus, with the multi-input attention mechanism introduced in §3.3, we first generate a high-quality consensus correction for each OCR’d line with witnesses via the correction model trained on synthetic data. Then, the a bootstrapped model is trained to transform those lines into their consensus correction results. We call the correction model trained in this setting Seq2Seq-Bootstrap. 4 Experiments In this section, we first introduce the details of our experimental setup (§4.1). Then, the results of preliminary experiments comparing the performance of different options for the single-input Seq2Seq model and the multi-input attention combination strategies are presented in §4.2. The main experimental results for evaluating the correction model trained in different training settings and decoded with/without multi-input attention are reported and explained in §4.3. Further discussions of our model are described in §4.4. 4.1 Experimental Setup We begin by describing the data split, training details, baseline systems, and evaluation metrics. 4.1.1 Training Details For both RDD newspapers and TCP books, we randomly split the OCR’d lines into 80% training and 20% test either by the date of the newspaper or by the name of the books. For the RDD newspapers, we have 1.7M training lines and 0.44M test lines. For the TCP books, 2.8M lines are randomly sampled from the whole training set for different training settings to conduct a fair comparison with noisy training, and about 1.6M lines are used for testing. Both the encoder and decoder of our model has 3 layers with 400 hidden units for each layer, where GRU is applied as the dynamic function. Adam optimizer with a learning rate of 0.0003 and default decay rates is used to train the correction model . We train up to 40 epochs with a minibatch size of 128 and select the model with the lowest perplexity on the development set. The decoder implements beam search with a beam width of 100. 2368 4.1.2 Baselines and Comparisons In preliminary experiments, we first compare the neural translation model (§3.2) with a commonly used Seq2Seq model, pruned conditional random fields (PCRF) (Schnober et al., 2016) on the single-input correction task. 
CRF models have been shown to be very competitve on tasks such as OCR post-correction, spelling correction, and lemmatization. After that, we compare the different multi-input attention strategies introduced in §3.3 on multi-input correction task to choose the best strategy for the main experiments. In the main experiment, we compare the performance of correction models trained in different training settings and decode with and without multiple witnesses. Two ensembles methods, language model ranking (LMR) and majority vote (Xu and Smith, 2017), are also considered as unsupervised baseline methods. LMR chooses a single high-quality witness for each OCR’d line by a language model as the correction for that line. Majority vote first aligns multiple input sequences using a greedy pairwise algorithm (since multiple sequence alignment is intractable) and then votes on each position in the alignment, with a slight advantage given to the original OCR output in case of ties. We also tried to use an exact unsupervised method for consensus decoding based on dual decomposition (Paul and Eisner, 2012). Their implementation, unfortunately, turned out not to return a certificate of completion on most lines in our data even after thousands of iterations. 4.1.3 Evaluation Metrics Word error rate (WER) and character error rate (CER) are used to compare the performance of each method. Case is ignored. Lattice word error rate (LWER) and lattice character error rate (LCER) are also computed as the oracle performance for each method, which could reveal the capability of each model to be applied to downstream tasks taking lattices as input, e.g., reranking or retrieval of the correction hypotheses (Taghva et al., 1996; Lam-Adesina and Jones, 2006). We compute the macro average for each type of error rate, which allows us to use a paired permutation significance test. 4.2 Preliminary Experiments In this section, we conduct two preliminary experiments to study different options for both the single-input correction models and the multi-input attention combination strategies. 4.2.1 Single Input Correction Model Model CER WER None 0.18133 0.41780 PCRF(order=5,w=4) 0.11403 0.25116 PCRF(order=5,w=6) 0.11535 0.25617 Attn 0.11028* 0.23405* Table 3: CER and WER on single-input correction for PCRF and Attn-Seq2Seq on RDD newspapers. Results from Attn-Seq2Seq that are significantly better than the PCRF are highlighted with *(p < 0.05, paired permutation test). The best result for each column is in bold. We first compare the attention-based Seq2Seq (Attn-Seq2Seq) model, with a traditional Seq2Seq model, PCRF, on single input correction task. As the PCRF implementation of Schnober et al. (2016) is highly memory and time consuming for training on long sequences, we compare it with Attn-Seq2Seq model on a smaller dataset with 100K lines randomly sampled from RDD newspapers training set. The trained correction model is then applied to correct the full test set. CER and WER of the correction results from both models are listed in Table 3. We can find that the Attn-Seq2Seq neural translation model works significantly better than the PCRF when trained on a dataset of the same size. The performance of the Attn-Seq2seq model could be further improved by including more training data or by multi-input decoding for duplicated texts, while the PCRF could only be trained on limited data and is not able to work on multiple inputs. Thus, we choose AttnSeq2Seq as our error correction model. 
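Before comparing the combination strategies empirically, note that the hierarchical attention combination of §3.3 amounts to a thin layer over per-witness context vectors at decoding time. The sketch below assumes the per-input contexts c_{t,l} have already been produced by the single-input model and approximates η with a plain dot product; it is an illustration, not the released implementation.

```python
import torch
import torch.nn.functional as F

def combine_contexts(contexts, decoder_state, mode="average"):
    """Hierarchical attention combination over N witnesses (Eqs. 8-10).
    contexts:      (N, hidden) per-input context vectors c_{t,l}
    decoder_state: (hidden,)   previous decoder state s_{t-1}
    Returns the global context vector c_t with shape (hidden,)."""
    if mode == "average":
        # Average attention combination: beta_{t,l} = 1/N for every witness.
        return contexts.mean(dim=0)
    if mode == "weighted":
        # Weighted attention combination: beta_{t,l} from a softmax over
        # eta(s_{t-1}, c_{t,l}), here approximated by a dot product.
        beta = F.softmax(contexts @ decoder_state, dim=0)   # (N,)
        return beta @ contexts                              # (hidden,)
    raise ValueError(f"unknown combination mode: {mode}")
```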
4.2.2 Multi-input Attention Combination We also compare different attention combination strategies on a multi-input decoding task. The results from Table 4 reveal that average attention combination performs best among all the decoding strategies on RDD newspapers and TCP books datasets. It reduces the CER of single input decoding by 41.5% for OCR’d lines in RDD newspapers and 9.76% for TCP books. The comparison between two hierarchical attention combination strategies shows that averaging evidence from each input works better than a weighted summation mechanism. Flat attention combination, which merges all the inputs into a long sequence when computing the strength of each encoder hidden state, obtains the worst performance in terms 2369 Decode RDD Newspapers TCP Books CER LCER WER LWER CER LCER WER LWER None 0.15149 0.04717 0.37111 0.13799 0.10590 0.07666 0.30549 0.23495 Single 0.07199 0.03300 0.14906 0.06948 0.04508 0.01407 0.11283 0.03392 Flat 0.07238 0.02904* 0.15818 0.06241* 0.05554 0.01727 0.13487 0.04079 Weighted 0.06882* 0.02145* 0.15221 0.05375 0.05516 0.01392* 0.1330 0.03669 Average 0.04210* 0.01399 * 0.09397 0.02863* 0.04072* 0.01021* 0.09786* 0.02092* Table 4: Results of correcting lines in the RDD newspapers and TCP books with multiple witnesses when decoding with different strategies using the same supervised model. Attention combination strategies that statistically significantly outperform single-input decoding are highlighted with * (p < 0.05, paired-permutation test). Best result for each column is in bold. of both CER and WER. 4.3 Main Results We now present results on the full training and test sets for the Richmond Daily Dispatch newspapers and Text Creation Partnership books. All results are on the same test set. The multi-input decoding experiments have access to additional witnesses for each line, where available, but fall back to single-input decoding when no additional witnesses are present for a given line. Table 5 presents the results for our model trained in different training settings as well as the baseline language model reranking (LMR) and majority vote methods. Multiple input decoding performs better than single input decoding for every training setting, and the model trained in supervised mode with multi-input decoding achieves the best performance. The majority vote baseline, which works only on more than two inputs, performs worst on both the TCP books and RDD newspapers. Our proposed unsupervised framework Seq2Seq-Noisy and Seq2SeqBoots achieves performance comparable with the supervised model via multi-input decoding on the RDD newspaper dataset. The performance of Seq2Seq-Noisy is worse on the TCP Books than the RDD newspapers, since those old books contain the character long s 6, which is formerly used where s occurred in the middle or at the beginning of a word. These characters are recognized as f in all the witnesses because of similar shape. Thus, the model trained on noisy data are unable to correct them into s. Nonetheless, by removing the factor of long s, i.e., replacing the long s in the ground truth with f, Seq2Seq-Noisy could achieve a CER of 0.062 for single-input decoding and 0.058 for multi-input decoding on the TCP books. Both Seq2Seq-Syn and Seq2Seq-Boots work better on the RDD newspapers than the TCP books 6https://en.wikipedia.org/wiki/Long_s dataset. We conjecture that it is because the synthetic dataset is trained on (modern) newspapers, which are more similar to the nineteenth-century RDD newspapers. 
The long s problem also makes it more difficult for the model trained on synthetic data to work on the TCP books. 4.4 Discussion In this section, we provide further analysis on different aspects of our method. Does Corruption Rate Affect Synthetic Training? We first examine how the corruption rate of the synthetic dataset would affect the performance of the correction model. Figure 2 presents the results of single-input correction and multi-input correction tasks on the RDD newspapers and TCP books when trained on synthetic data corrupted with different error rate: 0.9, 0.12, 0.15. For both tasks, the character error rate increases a little bit when the correction model is trained to recover the synthetic date with higher corruption rate. However, the performance is more stable on the RDD newspapers than the TCP books when more errors are introduced. (a) RDD Newspapers (b) TCP Books Figure 2: Performance of Seq2Seq-Syn trained on synthetic data with different corruption rates. Does Number of Witnesses Matter for Multiple-Input Decoding? Here we want to study the impact of the number of witnesses on 2370 Decode Model RDD Newspapers TCP Books CER LCER WER LWER CER LCER WER LWER None 0.18133 0.13552 0.41780 0.31544 0.10670 0.08800 0.31734 0.27227 Single Seq2Seq-Super 0.09044 0.04469 0.17812 0.09063 0.04944 0.01498 0.12186 0.03500 Seq2Seq-Noisy 0.10524 0.05565 0.20600 0.11416 0.08704 0.05889 0.25994 0.15725 Seq2Seq-Syn 0.16136 0.11986 0.35802 0.26547 0.09551 0.06160 0.27845 0.18221 Seq2Seq-Boots 0.11037 0.06149 0.22750 0.13123 0.07196 0.03684 0.21711 0.11233 Multi LMR 0.15507 0.13552 0.34653 0.31544 0.10862 0.08800 0.33983 0.27227 Majority Vote 0.16285 0.13552 0.40063 0.31544 0.11096 0.08800 0.34151 0.27227 Seq2Seq-Super 0.07731 0.03634 0.15393 0.07269 0.04668 0.01252 0.11236 0.02667 Seq2Seq-Noisy 0.09203* 0.04554* 0.17940 0.09269 0.08317 0.05588 0.24824 0.14885 Seq2Seq-Syn 0.12948 0.09112 0.28901 0.19977 0.08506 0.05002 0.24942 0.15169 Seq2Seq-Boots 0.09435 0.04976 0.19681 0.10604 0.06824* 0.03343* 0.20325* 0.09995* Table 5: Results from model trained under different settings on single-input decoding and multiple-input decoding for both the RDD newspapers and TCP books. All training is unsupervised except for supervised results in italics. Unsupervised training settings with multi-input decoding that are significantly better than other unsupervised counterparts are highlighted with * (p < 0.05, paired-permutation test). Best result among unsupervised training in each column is in bold. (a) RDD Newspapers (b) TCP Books Figure 3: Performance of different models on multiple decoding of lines with different number of witnesses. the performance of multiple-input decoding. The test set is divided into subgroups with varying size according to their number of witnesses. Figure 3 presents the performance of multi-input correction on subgroups with different number of witnesses. We can see that supervised training achieves the best performance on each subgroup for both datasets. On the RDD newspapers, the performance of each training setting is significantly improved when the number of witnesses increases from 0 to 2, then the error rate tends to be flat when more witnesses are observed. For the TCP books, the character error rate for both Seq2Seq-Syn and Seq2Seq-Boots decreases with small fluctuation when the number of witnesses increases. Seq2Seq-Noisy performs the worst almost on all subgroups on the TCP books since all the witnesses suffers from the long s problem. 
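For reference, the CER and WER figures reported throughout this section are macro-averaged, case-insensitive normalized edit distances (§4.1.3). The sketch below is one possible way to compute them; the `editdistance` package is used here only as a convenient Levenshtein implementation and is not necessarily what the authors used.

```python
import editdistance  # pip install editdistance; any Levenshtein routine works

def error_rate(hypothesis, reference, unit="char"):
    """Normalized edit distance for one line: CER if unit='char', WER if 'word'."""
    hyp = list(hypothesis.lower()) if unit == "char" else hypothesis.lower().split()
    ref = list(reference.lower()) if unit == "char" else reference.lower().split()
    return editdistance.eval(hyp, ref) / max(len(ref), 1)

def macro_average(hypotheses, references, unit="char"):
    """Macro average over lines, which also permits paired permutation tests."""
    rates = [error_rate(h, r, unit) for h, r in zip(hypotheses, references)]
    return sum(rates) / len(rates)
```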
Can More Training Data Benefit Learning? Figure 4 shows the test results for our correction model trained on datasets of different size. As the size of the training set increases, the CER of our model decreases consistently for both single and multiple input correction on the RDD newspapers. However, the performance curve of correction model on TCP books dataset is flatter since it is larger overall than RDD newspapers. (a) RDD Newspapers (b) TCP Books Figure 4: Performance of the supervised correction model trained on different proportions of the RDD newspapers and TCP books dataset. 2371 5 Related Work Multi-Input OCR Correction. Ensemble methods have been shown to be effective in OCR postcorrection by combining OCR output from multiple scans of the same document (Lopresti and Zhou, 1997; Klein and Kopel, 2002; Cecotti and Bela¨ıd, 2005; Lund et al., 2013). Existing methods aim at generating consensus results by aligning multiple inputs, followed by supervised methods such as classification (Boschetti et al., 2009; Lund et al., 2011; Al Azawi et al., 2015), or unsupervised methods such as dictionary-based selection (Lund and Ringger, 2009) and voting (Wemhoener et al., 2013; Xu and Smith, 2017). While supervised ensemble methods require human annotation for training, unsupervised selection methods work only when the correct word or character exists in one of the inputs. Furthermore, those methods could not correct single inputs. Multi-Input Attention. Multi-input attention has already been explored in tasks such as machine translation (Zoph and Knight, 2016; Libovick´y and Helcl, 2017) and summarization (Wang and Ling, 2016). Wang and Ling (2016) propose to concatenate multiple inputs to generate a summary; this flat attention combination model might be affected by the order of input sequences. Zoph and Knight (2016) aims at developing a multisource translation model on a trilingual corpus where the encoder for each language is combined to pass to the decoder; however, it requires the same number of inputs at training and decoding time since the parameters depend on the number of inputs. Libovick´y and Helcl (2017) explore different attention combination strategies for multiple information sources such as image and text. In contrast, our method does not require multiple inputs for training, and the attention combination strategies are used to integrate multiple inputs when decoding. 6 Conclusions We have proposed an unsupervised framework for OCR error correction, which can handle both single-input and multi-input correction tasks. An attention-based sequence-to-sequence model is applied for single-input correction, based on which a strategy of multi-input attention combination is designed to correct multiple input sequences simultaneously. The proposed strategy naturally incorporates aligning, correcting, and voting among multiple sequences, and is thus effective in improving the correction performance for corpora containing duplicated text. We propose two ways of training the correction model without human annotation by exploiting the duplication in the corpus. Experimental results on historical books and newspapers show that these unsupervised approaches significantly improve OCR accuracy and, when multiple inputs are available, achieve performance comparable to supervised methods. Acknowledgements This work was supported by NIH grant 2R01DC009834-06A1, the Andrew W. Mellon Foundation’s Scholarly Communications and Information Technology program, and a Google Faculty Research Award. 
Any views, findings, conclusions, or recommendations expressed do not necessarily reflect those of the NIH, Mellon, or Google. We would like to thank the anonymous reviewers for their valuable comments. References Mayce Al Azawi, Marcus Liwicki, and Thomas M. Breuel. 2015. Combination of multiple aligned recognition outputs using WFST and LSTM. In ICDAR, pages 31–35. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Federico Boschetti, Matteo Romanello, Alison Babeu, David Bamman, and Gregory Crane. 2009. Improving OCR accuracy for classical critical editions. In JCDL, pages 156–167. Hubert Cecotti and Abdel Bela¨ıd. 2005. Hybrid OCR combination approach complemented by a specialized ICR applied on ancient documents. In ICDAR, pages 1045–1049. Guillaume Chiron, Antoine Doucet, Mickael Coustaty, Muriel Visani, and Jean-Philippe Moreux. 2017. Impact of OCR errors on the use of digital libraries: Towards a better access to information. In JCDL. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP. Shamil Chollampatt, Kaveh Taghipour, and Hwee Tou Ng. 2016. Neural network translation models for grammatical error correction. In IJCAI. 2372 Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proc. Workshop on Statistical Machine Translation, pages 187–197. Monika Henzinger. 2006. Finding near-duplicate web pages: A large-scale evaluation of algorithms. In SIGIR, pages 284–291. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Rose Holley. 2009. Many hands make light work: Public collaborative OCR text correction in Australian historic newspapers. Technical report, National Library of Australia. Shmuel T. Klein and Miri Kopel. 2002. A voting system for automatic OCR correction. In Proc. SIGIR Workshop on Information Retrieval and OCR. Okan Kolak, William Byrne, and Philip Resnik. 2003. A generative probabilistic OCR model for NLP applications. In HLT-NAACL, pages 55–62. Okan Kolak and Philip Resnik. 2002. OCR error correction using a noisy channel model. In HLT, pages 257–262. Adenike M. Lam-Adesina and Gareth J. F. Jones. 2006. Examining and improving the effectiveness of relevance feedback for retrieval of scanned text documents. Information Processing & Management, 42(3):633–649. Jindˇrich Libovick´y and Jindˇrich Helcl. 2017. Attention strategies for multi-source sequence-to-sequence learning. In ACL. Daniel Lopresti and Jiangying Zhou. 1997. Using consensus sequence voting to correct OCR errors. Computer Vision and Image Understanding, 67(1):39– 47. William B. Lund, Douglas J. Kennard, and Eric K. Ringger. 2013. Combining multiple thresholding binarization values to improve OCR output. In Proc. Document Recognition and Retrieval (DRR). William B. Lund and Eric K. Ringger. 2009. Improving optical character recognition through efficient multiple system alignment. In JCDL, pages 231– 240. William B Lund, Eric K Ringger, and Daniel D Walker. 2014. How well does multiple OCR error correction generalize? In Proc. Document Recognition and Retrieval (DRR). William B. Lund, Daniel D. Walker, and Eric K. Ringger. 2011. Progressive alignment and discriminative error correction for multiple OCR engines. In ICDAR, pages 764–768. Michael J. 
Paul and Jason Eisner. 2012. Implicitly intersecting weighted automata using dual decomposition. In NAACL, pages 232–242. Evan Sandhaus. 2008. The New York Times annotated corpus. Linguistic Data Consortium, 6(12):e26752. Carsten Schnober, Steffen Eger, Erik-Lˆan Do Dinh, and Iryna Gurevych. 2016. Still not there? comparing traditional sequence-to-sequence models to encoderdecoder neural networks on monotone string translation tasks. In COLING. Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. David A. Smith, Ryan Cordell, Elizabeth Maddock Dillon, Nick Stramp, and John Wilkerson. 2014. Detecting and modeling local text reuse. In JCDL. T. F. Smith and M. S. Waterman. 1981. Identification of common molecular subsequences. Journal of Molecular Biology, 147(1):195–197. Kazem Taghva, Julie Borsack, and Allen Condit. 1996. Effects of ocr errors on ranking and feedback using the vector space model. Information Processing & Management, 32(3):317–327. Lu Wang and Wang Ling. 2016. Neural network-based abstract generation for opinions and arguments. In NAACL, pages 47–57. David Wemhoener, Ismet Zeki Yalniz, and R. Manmatha. 2013. Creating an improved version using noisy OCR from multiple editions. In ICDAR, pages 160–164. John Wilkerson, David A. Smith, and Nick Stramp. 2015. Tracing the flow of policy ideas on legislatures: A text reuse approach. American Journal of Political Science. Ziang Xie, Anand Avati, Naveen Arivazhagan, Dan Jurafsky, and Andrew Y Ng. 2016. Neural language correction with character-based attention. arXiv preprint arXiv:1603.09727. Shaobin Xu and David A. Smith. 2017. Retrieving and combining repeated passages to improve OCR. In JCDL. Takafumi Yamazoe, Minoru Etoh, Takeshi Yoshimura, and Kousuke Tsujino. 2011. Hypothesis preservation approach to scene text recognition with weighted finite-state transducer. In ICDAR, pages 359–363. Barret Zoph and Kevin Knight. 2016. Multi-source neural translation. In NAACL.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2373–2383 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2373 Building Language Models for Text with Named Entities Md Rizwan Parvez University of California Los Angeles [email protected] Baishakhi Ray Columbia University [email protected] Saikat Chakraborty University of Virginia [email protected] Kai-Wei Chang University of California Los Angeles [email protected] Abstract Text in many domains involves a significant amount of named entities. Predicting the entity names is often challenging for a language model as they appear less frequent on the training corpus. In this paper, we propose a novel and effective approach to building a discriminative language model which can learn the entity names by leveraging their entity type information. We also introduce two benchmark datasets based on recipes and Java programming codes, on which we evaluate the proposed model. Experimental results show that our model achieves 52.2% better perplexity in recipe generation and 22.06% on code generation than the stateof-the-art language models. 1 Introduction Language model is a fundamental component in Natural Language Processing (NLP) and it supports various applications, including document generation (Wiseman et al., 2017), text autocompletion (Arnold et al., 2017), spelling correction (Brill and Moore, 2000), and many others. Recently, language models are also successfully used to generate software source code written in programming languages like Java, C, etc. (Hindle et al., 2016; Yin and Neubig, 2017; Hellendoorn and Devanbu, 2017; Rabinovich et al., 2017). These models have improved the language generation tasks to a great extent, e.g., (Mikolov et al., 2010; Galley et al., 2015). However, while generating text or code with a large number of named entities (e.g., different variable names in source code), these models often fail to predict the entity names properly due to their wide variations. For instance, consider building a language model for generating recipes. There are numerous similar, yet slightly different cooking ingredients (e.g., olive oil, canola oil, grape oil, etc.—all are different varieties of oil). Such diverse vocabularies of the ingredient names hinder the language model from predicting them properly. To address this problem, we propose a novel language model for texts with many entity names. Our model learns the probability distribution over all the candidate words by leveraging the entity type information. For example, oil is the type for named entities like olive oil, canola oil, grape oil, etc.1 Such type information is even more prevalent for source code corpus written in statically typed programming languages (Bruce, 1993), since all the variables are by construct associated with types like integer, float, string, etc. Our model exploits such deterministic type information of the named entities and learns the probability distribution over the candidate words by decomposing it into two sub-components: (i) Type Model. Instead of distinguishing the individual names of the same type of entities, we first consider all of them equal and represent them by their type information. This reduces the vocab size to a great extent and enables to predict the type of each entity more accurately. (ii) Entity Composite Model. 
Using the entity type as a prior, we learn the conditional probability distribution of the actual entity names at inference time. We depict our model in Fig. 1. To evaluate our model, we create two benchmark datasets that involve many named entities. One is a cooking recipe corpus2 where each recipe contains a number of ingredients which are cate1Entity type information is often referred as category information or group information. In many applications, such information can be easily obtained by an ontology or by a pre-constructed entity table. 2 Data is crawled from http://www.ffts.com/ recipes.htm. 2374 place proteins in center of a dish with vegetables on each side . place chicken in center of a dish with broccoli on each side . entity name w P(w|proteins) P(w) q chicken 0.43 0.35 x 0.43 q beef 0.19 0.35 x 0.19 q .. .. .. Language Model (type model) Language Model (entity composite type model) type P(type) q proteins 0.35 q vegetables 0.11 q .. .. type P(type) q vegetables 0.52 q fruits 0.22 q .. .. entity name w P(w|vegetables) P(w) q broccoli 0.26 0.52 x 0.26 q potatoes 0.21 0.52 x 0.21 q .. .. .. Figure 1: An example illustrates the proposed model. For a given context (i.e., types of context words as input), the type model (in bottom red block) generates the type of the next word (i.e., the probability of the type of the next word as output). Further, for a given context and type of each candidate (i.e., context words, corresponding types of the context words, and type of the next word generated by the type model as input), the entity composite model (in upper green block) predicts the next word (actual entity name) by estimating the conditional probability of the next word as output. The proposed approach conducts joint inference over both models to leverage type information for generating text. gorized into 8 super-ingredients (i.e., type); e.g., “proteins”, “vegetables”, “fruits”, “seasonings”, “grains”, etc. Our second dataset comprises a source code corpus of 500 open-source Android projects collected from GitHub. We use an Abstract Syntax Tree (AST) (Parsons, 1992) based approach to collect the type information of the code identifiers. Our experiments show that although state-ofthe-art language models are, in general, good to learn the frequent words with enough training instances, they perform poorly on the entity names. A simple addition of type information as an extra feature to a neural network does not guarantee to improve the performance because more features may overfit or need more model parameters on the same data. In contrast, our proposed method significantly outperforms state-of-the-art neural network based language models and also the models with type information added as an extra feature. Overall, followings are our contributions: • We analyze two benchmark language corpora where each consists of a reasonable number of entity names. While we leverage an existing corpus for recipe, we curated the code corpus. For both datasets, we created auxiliary corpora with entity type information. All the code and datasets are released.3 • We design a language model for text consisting of many entity names. The model learns to mention entities names by leveraging the entity type information. • We evaluate our model on our benchmark datasets and establish a new baseline performance which significantly outperforms stateof-the-art language models. 2 Related Work and Background Class Based Language Models. 
Building language models by leveraging the deterministic or probabilistic class properties of the words (a.k.a, class-based language models) is an old idea (Brown et al., 1992; Goodman, 2001). However, the objective of our model is different from the existing class-based language models. The key differences are two-folds: 1) Most existing class-based language models (Brown et al., 1992; Pereira et al., 1993; Niesler et al., 1998; Baker and McCallum, 1998; Goodman, 2001; Maltese et al., 2001) are generative n-gram models whereas ours is a discriminative language model based on neural networks. The modeling principle and assumptions are very different. For example, we cannot calculate the conditional probability by statistical occurrence counting as these papers did. 2) Our approaches consider building two models and perform joint inference which makes our framework general and easy to extend. In Section 4, we demonstrate that our model can be easily incorporated with the state-of-art language model. The closest work in this line is hierarchical neural language models (Morin and Bengio, 2005), which model language with word clusters. However, their approaches do not focus on dealing with named entities as our model does. A recent work (Ji et al., 2017) studied the problem of building up a dynamic representation of named entity by updating the representation for every contextualized mention of that entity. Nonetheless, their approach does not deal with the sparsity issue and their goal is different from ours. Language Models for Named Entities. In some generation tasks, recently developed language models address the problem of predict3https://github.com/uclanlp/NamedEntityLanguageModel 2375 ing entity names by copying/matching the entity names from the reference corpus. For example, Vinyals et al. (2015) calculates the conditional probability of discrete output token sequence corresponding to positions in an input sequence. Gu et al. (2016) develops a seq2seq alignment mechanism which directly copies entity names or long phrases from the input sequence. Wiseman et al. (2017) generates document from structured table like basketball statistics using copy and reconstruction method as well. Another related code generation model (Yin and Neubig, 2017) parses natural language descriptions into source code considering the grammar and syntax in the target programming language (e.g., Python). Kiddon et al. (2016) generates recipe for a given goal, and agenda by making use of items on the agenda. While generating the recipe it continuously monitors the agenda coverage and focus on increasing it. All of them are sequence-to-sequence learning or end-to-end systems which differ from our general purpose (free form) language generation task (e.g., text auto-completion, spelling correction). Code Generation. The way developers write codes is not only just writing a bunch of instructions to run a machine, but also a form of communication to convey their thought. As observed by Donald E. Knuth (Knuth, 1992), “The practitioner of literate programming can be regarded as an essayist, whose main concern is exposition and excellence of style. 
Such an author, with thesaurus in hand, chooses the names of variables carefully and explains what such variable means.” Such comprehensible software corpora show surprising regularity (Ray et al., 2015; Gabel and Su, 2010) that is quite similar to the statistical properties of natural language corpora and thus, amenable to large-scale statistical analysis (Hindle et al., 2012). (Allamanis et al., 2017) presented a detailed survey. Although similar, source code has some unique properties that differentiate it from natural language. For example, source code often shows more regularities in local context due to common development practices like copy-pasting (Gharehyazie et al., 2017; Kim et al., 2005). This property is successfully captured by cache based language models (Hellendoorn and Devanbu, 2017; Tu et al., 2014). Code is also less ambiguous than natural language so that it can be interpreted by a compiler. The constraints for generating correct code is implemented by combining language model and program analysis technique (Raychev et al., 2014). Moreover, code contains open vocabulary—developers can coin new variable names without changing the semantics of the programs. Our model aims to addresses this property by leveraging variable types and scope. LSTM Language Model. In this paper, we use LSTM language model as a running example to describe our approach. Our language model uses the LSTM cells to generate latent states for a given context which captures the necessary features from the text. At the output layer of our model, we use Softmax probability distribution to predict the next word based on the latent state. Merity et al. (2017) is a LSTM-based language model which achieves the state-of-the-art performance on Penn Treebank (PTB) and WikiText2 (WT2) datasets. To build our recipe language model we use this as a blackbox and for our code generation task we use the simple LSTM model both in forward and backward direction. A forward directional LSTM starts from the beginning of a sentence and goes from left to right sequentially until the sentence ends, and vice versa. However, our approach is general and can be applied with other types of language models. 3 A Probabilistic Model for Text with Named Entities In this section, we present our approach to build a language model for text with name entities. Given previous context ¯w = {w1, w2, .., wt−1}, the goal of a language model is to predict the probability of next word P(wt| ¯w) at time step t, where wt ∈V text and V text is a fixed vocabulary set. Because the size of vocabulary for named entities is large and named entities often occur less frequently in the training corpus, the language model cannot generate these named entities accurately. For example, in our recipe test corpus the word “apple” occurs only 720 times whereas any kind of “fruits” occur 27,726 times. Existing approaches often either only generate common named entities or omit entities when generating text (Jozefowicz et al., 2016). To overcome this challenge, we propose to leverage the entity type information when modeling text with many entities. We assume each entity is associated with an entity type in a finite set of categories S = {s1, s2, .., si, .., sk}. Given a 2376 word w, s(w) reflects its entity type. If the word is a named entity, then we denote s(w) ∈S; otherwise the type function returns the words itself (i.e, s(w) = w). To simplify the notations, we use s(w) ̸∈S to represent the case where the word is not an entity. 
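To make this notation concrete, the type function s(·) can be viewed as a table lookup that falls back to the word itself for non-entities. The following is a minimal sketch; the entity table shown is a small hypothetical fragment of the annotation, not the released resource.

```python
# Hypothetical fragment of the entity-type table; S = {"fruits", "proteins", ...}
ENTITY_TYPE = {
    "apple": "fruits",
    "chicken": "proteins",
    "broccoli": "vegetables",
}

def s(word):
    """Return the entity type s(w) if the word is a named entity;
    otherwise return the word itself, i.e., s(w) is not in S."""
    return ENTITY_TYPE.get(word, word)

sentence = ["place", "chicken", "in", "center", "of", "a", "dish"]
print([s(w) for w in sentence])
# ['place', 'proteins', 'in', 'center', 'of', 'a', 'dish']
```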
The entity type information given by s(w) is an auxiliary information that we can use to improve the language model. We use s( ¯w) to represent the entity type information of all the words in context ¯w and use w to represent the current word wt. Below, we show that a language model for text with typed information can be decomposed into the following two models: 1) a type model θt that predicts the entity type of the next word and 2) an entity composite model θv that predicts the next word based on a given entity type. Our goal is to model the probability of next word w given previous context ¯w: P (w| ¯w; θt, θv) , (1) where θt and θv are the parameters of the two aforementioned models. As we assume the typed information is given on the data, Eq. (1) is equivalent to P (w, s(w)| ¯w, s( ¯w); θt, θv) . (2) A word can be either a named entity or not; therefore, we consider the following two cases. Case 1: next word is a named entity. In this case, Eq. (2) can be rewritten as P (s(w) = s| ¯w, s( ¯w); θt, θv) × P (w| ¯w, s( ¯w), s(w) = s; θv, θt) (3) based on the rules of conditional probability. We assume the type of the next token s(w) can be predicted by a model θt using information of s( ¯w), and we can approximate the first term in Eq. (3) P(s(w)| ¯w, s( ¯w); θt, θv) ≈P(s(w)|s( ¯w), θt) (4) Similarly, we can make a modeling assumption to simplify the second term as P(w| ¯w, s( ¯w), s(w), θv, θt) ≈P(w| ¯w, s( ¯w), s(w), θv). (5) Case 2: next word is not a named entity. In this case, we can rewrite Eq. (2) to be P (s(w) ̸∈S| ¯w, s( ¯w), θt) × P (w| ¯w, s( ¯w), s(w) ̸∈S, θv) . (6) The first term in Eq. (6) can be modeled by 1 − X s∈S P(s(w) = s|s( ¯w), θt), which can be computed by the type model4. The second term can be again approximated by (5) and further estimated by an entity composition model. Typed Language Model. Combine the aforementioned equations, the proposed language model estimates P(w| ¯w; θt, θv) by P(w| ¯w, s( ¯w), s(w), θv)× ( P(s(w)|s( ¯w), θt) if s(w) ∈S (1−P s∈S P(s(w)=s|s( ¯w), θt)) if s(w) ̸∈S (7) The first term can be estimated by an entity composite model and the second term can be estimated by a type model as discussed below. 3.1 Type model The type model θt estimates the probability of P(s(w)|s( ¯w), θt). It can be viewed as a language model builds on a corpus with all entities replaced by their type. That is, assume the training corpus consists of x = {w1, w2, .., wn}. Using the type information provided in the auxiliary source, we can replace each word w with their corresponding type s(w) and generate a corpus of T = {s(wi), s(w2), .., s(wn)}. Note that if wi is not an named entity (i.e., s(w) ̸∈S), s(w) = w and the vocabulary on T is V text ∪S.5 Any language modeling technique can be used in modeling the type model on the modified corpus T . In this paper, we use the state-of-the-art model for each individual task. The details will be discussed in the experiment section. 3.2 Entity Composite Model The entity composite model predicts the next word based on modeling the conditional probability P(w| ¯w, s( ¯w), s(w), θv), which can be derived by P(w| ¯w, s( ¯w); θv) P ws∈Ω(s(w)) P(ws| ¯w, s( ¯w); θv), (8) 4Empirically for the non-entity words, P s∈S P(s(w) = s|s( ¯w) ≈0 5In a preliminary experiment, we consider putting all words with s(w) ̸∈S in a category “N/A”. However, because most words on the training corpus are not named entities, the type “N/A” dominates others and hinder the type model to make accurate predictions. 
2377 where Ω(s(w)) is the set of words of the same type with w. To model the types of context word s( ¯w) in P(w| ¯w, s( ¯w); θv), we consider learning a type embedding along with the word embedding by augmenting each word vector with a type vector when learning the underlying word representation. Specifically, we represent each word w as a vector of [vw(w)T ; vt(s(w))T ]T , where vw(·) and vt(·) are the word vectors and type vectors learned by the model from the training corpus, respectively. Finally, to estimate Eq. (8) using θv, when computing the Softmax layer, we normalize over only words in Ω(s(w)). In this way, the conditional probability P(w| ¯w, s( ¯w), s(w), θv) can be derived. 3.3 Training and Inference Strategies We learn model parameters θt and θv independently by training two language models type model and entity composite model respectively. Given the context of type, type model predicts the type of the next word. Given the context and the type information of the all candidate words, entity composite model predicts the conditional actual word (e.g., entity name) as depicted in Fig 1. At inference time the generated probabilities from these two models are combined according to conditional probability (i.e., Eq. (7)) which gives the final probability distribution over all candidate words6. Our proposed model is flexible to any language model, training strategy, and optimization. As per our experiments, we use ADAM stochastic minibatch optimization (Kingma and Ba, 2014). In Algorithm 1, we summarize the language generation procedure. 4 Experiments We evaluate our proposed model on two different language generation tasks where there exist a lot of entity names in the text. In this paper, we release all the codes and datasets. The first task is recipe generation. For this task, we analyze a cooking recipe corpus. Each instance in this corpus is an individual recipe and consists of many ingredi6While calculating the final probability distribution over all candidate words, with our joint inference schema, a strong state-of-art language model, without the type information, itself can work sufficiently well and replace the entity composite model. Our experiments using (Merity et al., 2017) in Section 4.1 validate this claim. Algorithm 1: Language Generation Input: Language corpus X = {w1, w2, .., wn}, type s(w) of the words, integer number m. Output: θt, θv, {W1, W2, .., Wm} 1 Training Phase: 2 Generate T = { s(w1), s(w2), .., s(wn)} 3 Train type model θt on T 4 Train entity composite model θv on X using [wi; s(wi)] as input 5 Test Phase (Generation Phase): 6 for i = 1 to m do 7 for w ∈Vtext do 8 Compute P(s(w)|s( ¯w), θt) 9 Compute P(w| ¯w, s( ¯w), s(w), θv) 10 Compute P(w| ¯w; θt, θv) using Eq.(7) 11 end 12 Wi ←argmaxwP(w| ¯w; θt, θv) 13 end ents’. Our second task is code generation. We construct a Java code corpus where each instance is a Java method (i.e., function). These tasks are challenging because they have the abundance of entity names and state-of-the-art language models fail to predict them properly as a result of insufficient training observations. Although in this paper, we manually annotate the types of the recipe ingredients, in other applications it can be acquired automatically. For example: in our second task of code generation, the types are found using Eclipse JDT framework. 
In general, using DBpedia ontology (e.g., “Berlin” has an ontology “Location”), Wordnet hierarchy (e.g., “Dog” is an “Animal”), role in sports (e.g., “Messi” plays in “Forward”; also available in DBpedia7), Thesaurus (e.g., “renal cortex”, “renal pelvis”, “renal vein”, all are related to “kidney”), Medscape (e.g., “Advil” and “Motrin” are actually “Ibuprofen”), we can get the necessary type information. As for the applications where the entity types cannot be extracted automatically by these frameworks (e.g., recipe ingredients), although there is no exact strategy, any reasonable design can work. Heuristically, while annotating manually in our first task, we choose the total number of types in such a way that each type has somewhat balanced (similar) size. We use the same dimensional word embedding 7 http://dbpedia.org/page/Lionel Messi 2378 (400 for recipe corpus, 300 for code corpus) to represent both of the entity name (e.g., “apple”) and their entity type (e.g., “fruits”) in all the models. Note that in our approach, the type model only replaces named entities with entity type when it generates next word. If next word is not a named entity, it will behave like a regular language model. Therefore, we set both models with the same dimensionality. Accordingly, for the entity composite model which takes the concatenation of the entity name and the entity type, the concatenated input dimension is 800 and 600 respectively for recipe and code corpora. 4.1 Recipe Generation Recipe Corpus Pre-processing: Our recipe corpus collection is inspired by (Kiddon et al., 2016). We crawl the recipes from “Now Youre Cooking! Recipe Software” 8. Among more than 150,000 recipes in this dataset, we select similarly structured/formatted (e.g, title, blank line then ingredient lists followed by a recipe) 95,786 recipes. We remove all the irrelevant information (e.g., author’s name, data source) and keep only two information: ingredients and recipes. We set aside the randomly selected 20% of the recipes for testing and from the rest, we keep randomly selected 80% for the training and 20% for the development. Similar to (Kiddon et al., 2016), we preprocess the dataset and filter out the numerical values, special tokens, punctuation, and symbols.9 Quantitatively, the data we filter out is negligible; in terms of words, we keep 9,994,365 words out of 10,231,106 and the number of filter out words is around ∼2%. We release both of the raw and cleaned data for future challenges. As the ingredients are the entity names in our dataset, we process it separately to get the type information. Retrieving Ingredient Type: As per our type model, for each word w, we require its type s(w). We only consider ingredient type for our experiment. First, we tokenize the ingredients and consider each word as an ingredient. We manually classify the ingredients into 8 super-ingredients: “fruits”, “proteins”, “sides”, “seasonings”, “vegetables”, “dairy”, “drinks”, and “grains”. Some8http://www.ffts.com/recipes.htm 9For example, in our crawled raw dataset, we find that some recipes have lines like “===MMMMM===” which are totally irrelevant to our task. For the words with numerical values like “100 ml”, we only remove the “100” and keep the “ml” since our focus is not to predict the exact number. times, ingredients are expressed using multiple words; for such ingredient phrase, we classify each word in the same group (e.g., for “boneless beef” both “boneless” and “beef” are classified as “proteins”). 
We classify the most frequent 1,224 unique ingredients, 10 which cover 944,753 out of 1,241,195 mentions (top 76%) in terms of frequency of the ingredients. In our experiments, we omit the remainder 14,881 unique ingredients which are less frequent and include some misspelled words. The number of unique ingredients in the 8 super ingredients is 110, 316, 140, 180, 156, 80, 84, and 158 respectively. We prepare the modified type corpus by replacing each actual ingredient’s name w in the original recipe corpus by the type (i.e., super ingredients s(w)) to train the type model. Recipe Statistics: In our corpus, the total number of distinct words in vocabulary is 52,468; number of unique ingredients (considering splitting phrasal ingredients also) is 16,105; number of tokens is 8,716,664. In number of instances train/dev/test splits are 61,302/15,326/19,158. The average instance size of a meaningful recipe is 91 on the corpus. Configuration: We consider the state-of-the art LSTM-based language model proposed in (Merity et al., 2017) as the basic component for building the type model, and entity composite model. We use 400 dimensional word embedding as described in Section 4. We train the embedding for our dataset. We use a minibatch of 20 instances while training and back-propagation through time value is set to 70. Inside of this (Merity et al., 2017) language model, it uses 3 layered LSTM architecture where the hidden layers are 1150 dimensional and has its own optimization and regularization mechanism. All the experiments are done using PyTorch and Python 3.5. Baselines: Our first baseline is ASGD WeightDropped LSTM (AWD LSTM) (Merity et al., 2017), which we also use to train our models (see ’Configuration’ in 4.1). This model achieves the state-of-the-art performance on benchmark Penn Treebank (PTB), and WikiText-2 (WT2) language corpus. Our second baseline is the same language model (AWD LSTM) with the type information added as an additional feature (i.e., same as entity composite model). 10We consider both singular and plural forms. The number of singular formed annotated ingredients are 797. 2379 Model Dataset Vocabulary Perplexity (Recipe Corpus) Size AWD LSTM original 52,472 20.23 AWD LSTM modified type 51,675 17.62 type model AWD LSTM original 52,472 18.23 with type feature our model original 52,472 9.67 Table 1: Comparing the performance of recipe generation task. All the results are on the test set of the corresponding corpus. AWD LSTM (type model) is our type model implemented with the baseline language model AWD LSTM (Merity et al., 2017). Our second baseline is the same language model (AWD LSTM) with the type information added as an additional feature for each word. Results of Recipe Generation. We compare our model with the baselines using perplexity metric—lower perplexity means the better prediction. Table 1 summarizes the result. The 3rd row shows that adding type as a simple feature does not guarantee a significant performance improvement while our proposed method significantly outperforms both baselines and achieves 52.2% improvement with respect to baseline in terms of perplexity. To illustrate more, we provide an example snippet of our test corpus: “place onion and ginger inside chicken . allow chicken to marinate for hour .”. Here, for the last mention of the word “chicken”, the standard language model assigns probability 0.23 to this word, while ours assigns probability 0.81. 4.2 Code Generation Code Corpus Pre-processing. 
We crawl 500 Android open source projects from GitHub11. GitHub is the largest open source software forge where anyone can contribute (Ray et al., 2014). Thus, GitHub also contains trivial projects like student projects, etc. In our case, we want to study the coding practices of practitioners so that our model can learn to generate quality code. To ensure this, we choose only those Android projects from GitHub that are also present in Google Play Store12. We download the source code of these projects from GitHub using an off the shelf tool GitcProc (Casalnuovo et al., 2017). Since real software continuously evolves to cater new requirements or bug fixes, to make our modeling task more realistic, we further study dif11https://github.com 12https://play.google.com/store?hl=en ferent project versions. We partition the codebase of a project into multiple versions based on the code commit history retrieved from GitHub; each version is taken at an interval of 6 months. For example, anything committed within the first six months of a project will be in the first version, and so on. We then build our code suggestion task mimicking how a developer develops code in an evolving software—based on the past project history, developers add new code. To implement that we train our language model on past project versions and test it on the most recent version, at method granularity. However, it is quite difficult for any language model to generate a method from the scratch if the method is so new that even the method signature (i.e., method declaration statement consisting of method name and parameters) is not known. Thus, during testing, we only focus on the methods that the model has seen before but some new tokens are added to it. This is similar to the task when a developer edits a method to implement a new feature or bug-fix. Since we focus on generating the code for every method, we train/test the code prediction task at method level—each method is similar to a sentence and each token in the method is equivalent to a word. Thus, we ignore the code outside the method scope like global variables, class declarations, etc. We further clean our dataset by removing user-defined “String” tokens as they increase the diversity of the vocabularies significantly, although having the same type. For example, the word sequences “Hello World!” and “Good wishes for ACL2018!!” have the same type java.lang.String.VAR. Retrieving Token Type: For every token w in a method, we extract its type information s(w). A token type can be Java built-in data types (e.g., int, double, float, boolean etc.,) or user or framework defined classes (e.g., java.lang.String, io.segment.android.flush.FlushThread etc.). We extract such type information for each token by parsing the Abstract Syntax Tree (AST) of the source code13. We extract the AST type information of each token using Eclipse JDT framework14. Note that, language keywords like for, if, etc. are not associated with any type. Next, we prepare the type corpus by replacing the 13AST represents source code as a tree by capturing its abstract syntactic structure, where each node represents a construct in the source code. 14https://www.eclipse.org/jdt/ 2380 variable names with corresponding type information. For instance, if variable var is of type java.lang.Integer, in the type corpus we replace var by java.lang.Integer. Since multiple packages might contain classes of the same name, we retain the fully qualified name for each type15. 
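The substitution step itself is straightforward once a per-method token-to-type map is available; the following is a minimal Python sketch (assuming such a map, which in our pipeline comes from the Eclipse JDT pass described above):

```python
# Minimal sketch (not the actual tooling): build the type-corpus version of a
# tokenized Java method from a per-method map of token -> resolved AST type.
def to_type_corpus(method_tokens, type_map):
    """Replace each typed token by its fully qualified type; language keywords
    and other untyped tokens (for, if, ...) are kept as-is."""
    return [type_map.get(tok, tok) for tok in method_tokens]

tokens = ["if", "(", "var", ">", "limit", ")", "return", "var", ";"]
types = {"var": "java.lang.Integer", "limit": "java.lang.Integer"}
print(to_type_corpus(tokens, types))
# ['if', '(', 'java.lang.Integer', '>', 'java.lang.Integer', ')',
#  'return', 'java.lang.Integer', ';']
```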
Code Corpus Statistics: In our corpus, the total number of distinct words in vocabulary is 38,297; the number of unique AST type (including all user-defined classes) is 14,177; the number of tokens is 1,440,993. The number of instances used for train and testing is 26,600 and 3,546. Among these 38,297 vocabulary words, 37,411 are seen at training time while the rests are new. Configuration: To train both type model and entity composite model, we use forward and backward LSTM (See Section 2) and combine them at the inference/generation time. We train 300dimensional word embedding for each token as described in Section 4 initialized by GLOVE (Pennington et al., 2014). Our LSTM is single layered and the hidden size is 300. We implement our model on using PyTorch and Python 3.5. Our training corpus size 26,600 and we do not split it further into smaller train and development set; rather we use them all to train for one single epoch and record the result on the test set. Baselines: Our first baseline is standard LSTM language model which we also use to train our modules (see ‘Configuration’ in 4.2). Similar to our second baseline for recipe generation we also consider LSTM with the type information added as more features16 as our another baseline. We further compare our model with state-of-the-art token-based language model for source code SLPCore (Hellendoorn and Devanbu, 2017). Results of Code Generation: Table 2 shows that adding type as simple features does not guarantee a significant performance improvement while our proposed method significantly outperforms both forward and backward LSTM baselines. Our approach with backward LSTM has 40.3% better perplexity than original backward LSTM and forward has 63.14% lower (i.e., better) perplexity than original forward LSTM. With respect to SLP-Core performance, our model is 22.06% better in perplexity. We compare our model with SLP-Core details in case study-2. 15Also the AST type of a very same variable may differ in two different methods. Hence, the context is limited to each method. 16LSTM with type is same as entity composite model. Model Dataset Vocabulary Perplexity (Code Corpus) Size SLP-Core original 38,297 3.40 fLSTM original 38,297 21.97 fLSTM [type model] modified type 14,177 7.94 fLSTM with type feature original 38,297 20.05 our model (fLSTM) original 38,297 12.52 bLSTM original 38,297 7.19 bLSTM [type model] modified type 14,177 2.58 bLSTM with type feature original 38,297 6.11 our model (bLSTM) original 38,297 2.65 Table 2: Comparing the performance of code generation task. All the results are on the test set of the corresponding corpus. fLSTM, bLSTM denotes forward and backward LSTM respectively. SLP-Core refers to (Hellendoorn and Devanbu, 2017). 5 Quantitative Error Analysis To understand the generation performance of our model and interpret the meaning of the numbers in Table 1 and 2, we further perform the following case studies. 5.1 Case Study-1: Recipe Generation As the reduction of the perplexity does not necessarily mean the improvement of the accuracy, we design a “fill in the blank task” task to evaluate our model. A blank place in this task will contain an ingredient and we check whether our model can predict it correctly. In particular, we choose six ingredients from different frequency range (low, mid, high) based on how many times they have appeared in the training corpus. Following Table shows two examples with four blanks (underlined with the true answer). Example fill in the blank task 1. 
Sprinkle chicken pieces lightly with salt.
2. Mix egg and milk and pour over bread.

We further evaluate our model with a multiple-choice question (MCQ) strategy, where the fill-in-the-blank problem remains the same but the candidate answers are restricted to the six chosen ingredients. The intuition behind this case study is to check whether our model actually learns an ingredient when one is present. If so, we quantify that learning with the standard accuracy metric and compare against the state-of-the-art model to measure how much the performance improves. We also measure how much the accuracy improvement depends on training frequency. Table 3 shows the results. Our model outperforms the baseline on the fill in the blank task in both settings, i.e., without any options (free-form) and with MCQ.

Ingredient | Train Freq. | #Blanks | Free-Form (AWD LSTM) | Free-Form (Our) | MCQ (AWD LSTM) | MCQ (Our)
Milk | 14,136 | 4,001 | 26.94 | 59.34 | 80.83 | 94.90
Salt | 33,906 | 9,888 | 37.12 | 62.47 | 89.29 | 95.75
Apple | 7,205 | 720 | 1.94 | 30.28 | 37.65 | 89.86
Bread | 11,673 | 3,074 | 32.43 | 52.64 | 78.85 | 94.53
Tomato | 12,866 | 1,815 | 2.20 | 35.76 | 43.53 | 88.76
Chicken | 19,875 | 6,072 | 22.50 | 45.24 | 77.70 | 94.63
Table 3: Performance (accuracy) of the fill in the blank task.

Note that the percentage of improvement is inversely proportional to the training frequency of the ingredients—less frequent ingredients achieve a higher accuracy improvement (e.g., "Apple" and "Tomato"). This validates our intuition of first predicting the type, which is learned more accurately over the reduced vocabulary, and then using the conditional probability with the type as a prior to predict the actual entity.

5.2 Case Study-2: Code Generation
Programming language source code shows regularities in both local and global context (e.g., variables or methods used in one source file can also be created or referenced from another library file). SLP-Core (Hellendoorn and Devanbu, 2017) is a state-of-the-art code generation model that captures this global and local information using a nested cache-based n-gram language model. They further show that, by taking such code structure into account, a simple n-gram based SLP-Core outperforms vanilla deep learning based models like RNNs and LSTMs. In our case, since each example instance is a Java method, we only have the local context. Therefore, to evaluate the efficiency of our proposed model, we further ask: by exploiting only the type information, are we learning any global code pattern at all? If yes, how much, in comparison to the baseline (SLP-Core)? To investigate these questions, we provide the full project information corresponding to our train set to SLP-Core (Hellendoorn and Devanbu, 2017). At test time, to establish a fair comparison, we compute the perplexity metric on the same methods. SLP-Core achieves a perplexity of 3.40, whereas our backward LSTM achieves 2.65. This result shows that appropriate type information can capture many inherent attributes that can be exploited to build a good language model for programming languages.

6 Conclusion
Language models often perform poorly at predicting entity names correctly, so applications with many named entities obviously suffer. In this work, we propose to leverage the type information of such named entities to build an effective language model. Since similar entities share the same type, the vocabulary size of a type-based language model reduces significantly, and the prediction accuracy of the type model increases significantly with this reduced vocabulary.
Then, using the entity type information as prior we build another language model which predicts the true entity name according to the conditional probability distribution. Our evaluation and case studies confirm that the type information of the named entities captures inherent text features too which leads to learn intrinsic text pattern and improve the performance of overall language model. Acknowledgments We thank the anonymous reviewers for their insightful comments. We also thank Wasi Uddin Ahmad, Peter Kim, Shou-De Lin, and Paul Mineiro for helping us implement, annotate, and design the experiments. This work was supported in part by National Science Foundation Grants IIS1760523, CCF-16-19123, CNS-16-18771 and an NVIDIA hardware grant. References Miltiadis Allamanis, Earl T Barr, Premkumar Devanbu, and Charles Sutton. 2017. A survey of machine learning for big code and naturalness. arXiv preprint arXiv:1709.06182 . Kenneth C. Arnold, Kai-Wei Chang, and Adam Kalai. 2017. Counterfactual language model adaptation for suggesting phrases. In Proceedings of the Eighth International Joint Conference on Natural Language Processing, IJCNLP 2017. pages 49–54. L Douglas Baker and Andrew Kachites McCallum. 1998. Distributional clustering of words for text classification. In Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval. ACM, pages 96–103. Eric Brill and Robert C Moore. 2000. An improved error model for noisy channel spelling correction. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, pages 286–293. 2382 Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. Computational linguistics 18(4):467–479. Kim B Bruce. 1993. Safe type checking in a statically-typed object-oriented programming language. In Proceedings of the 20th ACM SIGPLANSIGACT symposium on Principles of programming languages. ACM, pages 285–298. Casey Casalnuovo, Yagnik Suchak, Baishakhi Ray, and Cindy Rubio-Gonz´alez. 2017. Gitcproc: a tool for processing and classifying github commits. ACM, pages 396–399. Mark Gabel and Zhendong Su. 2010. A study of the uniqueness of source code. In Proceedings of the eighteenth ACM SIGSOFT international symposium on Foundations of software engineering. ACM, pages 147–156. Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltableu: A discriminative metric for generation tasks with intrinsically diverse targets. arXiv preprint arXiv:1506.06863 . Mohammad Gharehyazie, Baishakhi Ray, and Vladimir Filkov. 2017. Some from here, some from there: cross-project code reuse in github. In Proceedings of the 14th International Conference on Mining Software Repositories. IEEE Press, pages 291–301. Joshua Goodman. 2001. Classes for fast maximum entropy training. CoRR cs.CL/0108006. http://arxiv.org/abs/cs.CL/0108006. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. CoRR abs/1603.06393. http://arxiv.org/abs/1603.06393. Vincent J. Hellendoorn and Premkumar Devanbu. 2017. Are deep neural networks the best choice for modeling source code? In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering. ACM, New York, NY, USA, ESEC/FSE 2017, pages 763–773. 
https://doi.org/10.1145/3106237.3106290. Abram Hindle, Earl T. Barr, Mark Gabel, Zhendong Su, and Premkumar Devanbu. 2016. On the naturalness of software. Commun. ACM 59(5):122–131. https://doi.org/10.1145/2902362. Abram Hindle, Earl T Barr, Zhendong Su, Mark Gabel, and Premkumar Devanbu. 2012. On the naturalness of software. In Software Engineering (ICSE), 2012 34th International Conference on. IEEE, pages 837– 847. Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A Smith. 2017. Dynamic entity representations in neural language models. arXiv preprint arXiv:1708.00781 . Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410 . Chlo´e Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 329–339. Miryung Kim, Vibha Sazawal, David Notkin, and Gail Murphy. 2005. An empirical study of code clone genealogies. In ACM SIGSOFT Software Engineering Notes. ACM, volume 30, pages 187–196. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. http://arxiv.org/abs/1412.6980. Donald E Knuth. 1992. Literate programming. CSLI Lecture Notes, Stanford, CA: Center for the Study of Language and Information (CSLI), 1992 . Giulio Maltese, P Bravetti, Hubert Cr´epy, BJ Grainger, M Herzog, and Francisco Palou. 2001. Combining word-and class-based language models: A comparative study in several languages using automatic and manual word-clustering techniques. In Seventh European Conference on Speech Communication and Technology. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and Optimizing LSTM Language Models. arXiv preprint arXiv:1708.02182 . Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association. Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In Aistats. Citeseer, volume 5, pages 246–252. Thomas R Niesler, Edward WD Whittaker, and Philip C Woodland. 1998. Comparison of partof-speech and automatically derived category-based language models for speech recognition. In Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998 IEEE International Conference on. IEEE, volume 1, pages 177–180. Thomas W Parsons. 1992. Introduction to compiler construction. Computer Science Press New York. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 2383 Conference on Empirical Methods in Natural Language Processing. pages 1532–1543. Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of english words. In Proceedings of the 31st annual meeting on Association for Computational Linguistics. Association for Computational Linguistics, pages 183–190. Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code generation and semantic parsing. CoRR abs/1704.07535. http://arxiv.org/abs/1704.07535. Baishakhi Ray, Meiyappan Nagappan, Christian Bird, Nachiappan Nagappan, and Thomas Zimmermann. 2015. The uniqueness of changes: Characteristics and applications. ACM, MSR ’15. 
Baishakhi Ray, Daryl Posnett, Vladimir Filkov, and Premkumar Devanbu. 2014. A large scale study of programming languages and code quality in github. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering. ACM, pages 155–165. Veselin Raychev, Martin Vechev, and Eran Yahav. 2014. Code completion with statistical language models. In Acm Sigplan Notices. ACM, volume 49, pages 419–428. Zhaopeng Tu, Zhendong Su, and Premkumar Devanbu. 2014. On the localness of software. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering. ACM, pages 269–280. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, Curran Associates, Inc., pages 2692–2700. http://papers.nips.cc/paper/5866pointer-networks.pdf. Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2017. Challenges in data-todocument generation. CoRR abs/1707.08052. http://arxiv.org/abs/1707.08052. Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. CoRR abs/1704.01696. http://arxiv.org/abs/1704.01696.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2384–2394 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2384 hyperdoc2vec: Distributed Representations of Hypertext Documents Jialong Han♠, Yan Song♠, Wayne Xin Zhao♦, Shuming Shi♠, Haisong Zhang♠ ♠Tencent AI Lab ♦School of Information, Renmin University of China {jialonghan,batmanfly}@gmail.com,{clksong,shumingshi,hansonzhang}@tencent.com Abstract Hypertext documents, such as web pages and academic papers, are of great importance in delivering information in our daily life. Although being effective on plain documents, conventional text embedding methods suffer from information loss if directly adapted to hyper-documents. In this paper, we propose a general embedding approach for hyper-documents, namely, hyperdoc2vec, along with four criteria characterizing necessary information that hyper-document embedding models should preserve. Systematic comparisons are conducted between hyperdoc2vec and several competitors on two tasks, i.e., paper classification and citation recommendation, in the academic paper domain. Analyses and experiments both validate the superiority of hyperdoc2vec to other models w.r.t. the four criteria. 1 Introduction The ubiquitous World Wide Web has boosted research interests on hypertext documents, e.g., personal webpages (Lu and Getoor, 2003), Wikipedia pages (Gabrilovich and Markovitch, 2007), as well as academic papers (Sugiyama and Kan, 2010). Unlike independent plain documents, a hypertext document (hyper-doc for short) links to another hyper-doc by a hyperlink or citation mark in its textual content. Given this essential distinction, hyperlinks or citations are worth specific modeling in many tasks such as link-based classification (Lu and Getoor, 2003), web retrieval (Page et al., 1999), entity linking (Cucerzan, 2007), and citation recommendation (He et al., 2010). To model hypertext documents, various efforts (Cohn and Hofmann, 2000; Kataria et al., 2010; Perozzi et al., 2014; Zwicklbauer et al., 2016; Wang et al., 2016) have been made to depict networks of hyper-docs as well as their content. Among potential techniques, distributed representation (Mikolov et al., 2013; Le and Mikolov, 2014) tends to be promising since its validity and effectiveness are proven for plain documents on many natural language processing (NLP) tasks. Conventional attempts on utilizing embedding techniques in hyper-doc-related tasks generally fall into two types. The first type (Berger et al., 2017; Zwicklbauer et al., 2016) simply downcasts hyper-docs to plain documents and feeds them into word2vec (Mikolov et al., 2013) (w2v for short) or doc2vec (Le and Mikolov, 2014) (d2v for short). These approaches involve downgrading hyperlinks and inevitably omit certain information in hyper-docs. However, no previous work investigates the information loss, and how it affects the performance of such downcasting-based adaptations. The second type designs sophisticated embedding models to fulfill certain tasks, e.g., citation recommendation (Huang et al., 2015b), paper classification (Wang et al., 2016), and entity linking (Yamada et al., 2016), etc. These models are limited to specific tasks, and it is yet unknown whether embeddings learned for those particular tasks can generalize to others. 
Based on the above facts, we are interested in two questions: • What information should hyper-doc embedding models preserve, and what nice property should they possess? • Is there a general approach to learning taskindependent embeddings of hyper-docs? To answer the two questions, we formalize the hyper-doc embedding task, and propose four criteria, i.e., content awareness, context awareness, newcomer friendliness, and context intent aware2385 ness, to assess different models. Then we discuss simple downcasting-based adaptations of existing approaches w.r.t. the above criteria, and demonstrate that none of them satisfy all four. To this end, we propose hyperdoc2vec (h-d2v for short), a general embedding approach for hyperdocs. Different from most existing approaches, h-d2v learns two vectors for each hyper-doc to characterize its roles of citing others and being cited. Owning to this, h-d2v is able to directly model hyperlinks or citations without downgrading them. To evaluate the learned embeddings, we employ two tasks in the academic paper domain1, i.e., paper classification and citation recommendation. Experimental results demonstrate the superiority of h-d2v. Comparative studies and controlled experiments also confirm that h-d2v benefits from satisfying the above four criteria. We summarize our contributions as follows: • We propose four criteria to assess different hyper-document embedding models. • We propose hyperdoc2vec, a general embedding approach for hyper-documents. • We systematically conduct comparisons with competing approaches, validating the superiority of h-d2v in terms of the four criteria. 2 Related Work Network representation learning is a related topic to ours since a collection of hyper-docs resemble a network. To embed nodes in a network, Perozzi et al. (2014) propose DeepWalk, where nodes and random walks are treated as pseudo words and texts, and fed to w2v for node vectors. Tang et al. (2015b) explicitly embed second-order proximity via the number of common neighbors of nodes. Grover and Leskovec (2016) extend DeepWalk with second-order Markovian walks. To improve classification tasks, Tu et al. (2016) explore a semi-supervised setting that accesses partial labels. Compared with these models, h-d2v learns from both documents’ connections and contents while they mainly focus on network structures. Document embedding for classification is another focused area to apply document embeddings. 1Although limited in tasks and domains, we expect that our embedding approach can be potentially generalized to, or serve as basis to more sophisticated methods for, similar tasks in the entity domain, e.g., Wikipedia page classification and entity linking. We leave them for future work. Le and Mikolov (2014) employ learned d2v vectors to build different text classifiers. Tang et al. (2015a) apply the method in (Tang et al., 2015b) on word co-occurrence graphs for word embeddings, and average them for document vectors. For hyper-docs, Ganguly and Pudi (2017) and Wang et al. (2016) target paper classification in unsupervised and semi-supervised settings, respectively. However, unlike h-d2v, they do not explicitly model citation contexts. Yang et al. (2015)’s approach also addresses embedding hyper-docs, but involves matrix factorization and does not scale. Citation recommendation is a direct downstream task to evaluate embeddings learned for a certain kind of hyper-docs, i.e., academic papers. In this paper we concentrate on context-aware citation recommendation (He et al., 2010). 
Some previous studies adopt neural models for this task. Huang et al. (2015b) propose Neural Probabilistic Model (NPM) to tackle this problem with embeddings. Their model outperforms non-embedding ones (Kataria et al., 2010; Tang and Zhang, 2009; Huang et al., 2012). Ebesu and Fang (2017) also exploit neural networks for citation recommendation, but require author information as additional input. Compared with h-d2v, these models are limited in a task-specific setting. Embedding-based entity linking is another topic that exploits embeddings to model certain hyperdocs, i.e., Wikipedia (Huang et al., 2015a; Yamada et al., 2016; Sun et al., 2015; Fang et al., 2016; He et al., 2013; Zwicklbauer et al., 2016), for entity linking (Shen et al., 2015). It resembles citation recommendation in the sense that linked entities highly depend on the contexts. Meanwhile, it requires extra steps like candidate generation, and can benefit from sophisticated techniques such as collective linking (Cucerzan, 2007). 3 Preliminaries We introduce notations and definitions, then formally define the embedding problem. We also propose four criteria for hyper-doc embedding models w.r.t their appropriateness and informativeness. 3.1 Notations and Definitions Let w ∈W be a word from a vocabulary W, and d ∈D be a document id (e.g., web page URLs and paper DOIs) from an id collection D. After filtering out non-textual content, a hyper-document H is reorganized as a sequence of words and doc ids, 2386 (Koehn et al., 2007) (Zhao and Gildea, 2010) (Papineni et al., 2002) Original Source doc ݀௦ Context words ܥ Target doc ݀௧ … We also evaluate our model by computing the machine translation BLEU score (Papineni et al., 2002) using the Moses system (Koehn et al., 2007) … … … (a) Hyper-documents. Citation as word BLEU evaluate (Papineni et al., 2012) “Word” Vectors … w2v …We also evaluate our model by computing the machine translation BLEU score (Papineni et al., 2002) using the Moses system (Koehn et al., 2007)… … … (b) Citation as word. Context as content BLEU evaluate (Zhao and Gildea, 2010) … … Word Vectors Doc Vectors (Papineni et al., 2002) d2v (Koehn et al., 2007) (Zhao and Gildea, 2010) (Papineni et al., 2002) …We also evaluate our model by computing the machine translation BLEU score using the Moses system … … machine translation BLEU score … … Moses system … (c) Context as content. Figure 1: An example of Zhao and Gildea (2010) citing Papineni et al. (2002) and existing approaches. i.e., W ∪D. For example, web pages could be simplified as streams of words and URLs, and papers are actually sequences of words and cited DOIs. If a document id dt with some surrounding words C appear in the hyper-doc of ds, i.e., Hds, we stipulate that a hyper-link ⟨ds, C, dt⟩is formed. Herein ds, dt ∈D are ids of the source and target documents, respectively; C ⊆W are context words. Figure 1(a) exemplifies a hyperlink. 3.2 Problem Statement Given a corpus of hyper-docs {Hd}d∈D with D and W, we want to learn document and word embedding matrices D ∈Rk×|D| and W ∈Rk×|W| simultaneously. The i-th column di of D is a kdimensional embedding vector for the i-th hyperdoc with id di. Similarly, wj, the j-th column of W, is the vector for word wj. Once embeddings for hyper-docs and words are learned, they can facilitate applications like hyper-doc classification and citation recommendation. 3.3 Criteria for Embedding Models A reasonable model should learn how contents and hyperlinks in hyper-docs impact both D and W. 
We propose the following criteria for models: • Content aware. Content words of a hyperdoc play the main role in describing it, so the document representation should depend on its own content. For example, the words in Zhao and Gildea (2010) should affect and contribute to its embedding. • Context aware. Hyperlink contexts usually provide a summary for the target document. Therefore, the target document’s vector should be impacted by words that others use to summarize it, e.g., paper Papineni et al. (2002) and the word “BLEU” in Figure 1(a). • Newcomer friendly. In a hyper-document network, it is inevitable that some documents are not referred to by any hyperlink in other hyper-docs. If such “newcomers” do not get embedded properly, downstream tasks involving them are infeasible or deteriorated. • Context intent aware. Words around a hyperlink, e.g., “evaluate ...by” in Figure 1(a), normally indicate why the source hyper-doc makes the reference, e.g., for general reference or to follow/oppose the target hyperdoc’s opinion or practice. Vectors of those context words should be influenced by both documents to characterize such semantics or intents between the two documents. We note that the first three criteria are for hyperdocs, while the last one is desired for word vectors. 4 Representing Hypertext Documents In this section, we first give the background of two prevailing techniques, word2vec and doc2vec. Then we present two conversion approaches for hyper-documents so that w2v and d2v can be applied. Finally, we address their weaknesses w.r.t. the aforementioned four criteria, and propose our hyperdoc2vec model. In the remainder of this paper, when the context is clear, we mix the use of terms hyper-doc/hyperlink with paper/citation. 4.1 word2vec and doc2vec w2v (Mikolov et al., 2013) has proven effective for many NLP tasks. It integrates two models, i.e., cbow and skip-gram, both of which learn two types of word vectors, i.e., IN and OUT vectors. cbow sums up IN vectors of context words and make it predictive of the current word’s OUT vector. skip-gram uses the IN vector of the current word to predict its context words’ OUT vectors. As a straightforward extension to w2v, d2v also has two variants: pv-dm and pv-dbow. pv-dm works in a similar manner as cbow, except that the IN vector of the current document 2387 Desired Property Impacts Task? Addressed by Approach? Classification Citation Recommendation w2v d2v-nc d2v-cac h-d2v Context aware ✓ ✓ ✓ × ✓ ✓ Content aware ✓ ✓ × ✓ ✓ ✓ Newcomer friendly ✓ ✓ × ✓ ✓ ✓ Context intent aware × ✓ × × × ✓ Table 1: Analysis of tasks and approaches w.r.t. desired properties. Model Output DI DO WI WO w2v ✓ ✓ ✓ ✓ d2v (pv-dm) ✓ × ✓ ✓ d2v (pv-dbow) ✓ × × ✓ h-d2v ✓ ✓ ✓ ✓ Table 2: Output of models. is regarded as a special context vector to average. Analogously, pv-dbow uses IN document vector to predict its words’ OUT vectors, following the same structure of skip-gram. Therefore in pv-dbow, words’ IN vectors are omitted. 4.2 Adaptation of Existing Approaches To represent hyper-docs, a straightforward strategy is to convert them into plain documents in a certain way and apply w2v and d2v. Two conversions following this strategy are illustrated below. Citation as word. This approach is adopted by Berger et al. (2017).2 As Figure 1(b) shows, document ids D are treated as a collection of special words. Each citation is regarded as an occurrence of the target document’s special word. 
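Concretely, this conversion can be sketched as follows (a minimal illustration, assuming pre-tokenized hyper-docs and gensim 4.x, where the dimension keyword is vector_size; the DOC_ prefix is an assumed naming convention, not part of the actual pipeline):

```python
from gensim.models import Word2Vec  # assumes gensim 4.x

# Each hyper-doc is a token stream in which every citation has been replaced by
# a special token carrying the target doc id, e.g. "DOC_papineni2002".
corpus = [
    ["we", "evaluate", "with", "the", "bleu", "score", "DOC_papineni2002",
     "using", "the", "moses", "system", "DOC_koehn2007"],
    # ... more converted hyper-docs
]

# cbow with a wide (half-)window, mirroring the settings described in Section 5.1.
model = Word2Vec(sentences=corpus, vector_size=100, window=50, sg=0, min_count=1)
doc_vector = model.wv["DOC_papineni2002"]  # the "word" vector doubles as the doc vector
```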
After applying standard word embedding methods, e.g., w2v, we obtain embeddings for both ordinary words and special “words”, i.e., documents. In doing so, this approach allows target documents to interact with context words, thus producing context-aware embeddings for them.

Context as content. It is often observed in academic papers that, when citing others’ work, an author briefly summarizes the cited paper in its citation context. Inspired by this, we propose a context-as-content approach as in Figure 1(c). To start, we remove all citations. Then all citation contexts of a target document dt are copied into dt as additional content to make up for the lost information. Finally, d2v is applied to the augmented documents to generate document embeddings. With this approach, the generated document embeddings are both context- and content-aware.

4.3 hyperdoc2vec
Besides citation-as-word with w2v and context-as-content with d2v (denoted by d2v-cac for short), there is also an alternative that runs d2v on documents with citations removed (d2v-nc for short). We compare these approaches in Table 1 in terms of the four criteria stated in Section 3.3. It is observed that none of them satisfies all criteria, for the following reasons.

First, w2v is not content aware. Following our examples in the academic paper domain, consider the paper (hyper-doc) Zhao and Gildea (2010) in Figure 1(a): from w2v’s perspective in Figure 1(b), “...computing the machine translation BLEU . . . ” and other text no longer have any association with Zhao and Gildea (2010), and thus do not contribute to its embedding. In addition, papers that have just been published and have not yet obtained citations will not appear as special “words” in any text. This makes w2v newcomer-unfriendly, i.e., unable to produce embeddings for them. Second, being trained on a corpus without citations, d2v-nc is obviously not context aware. Finally, in both w2v and d2v-cac, context words interact with the target documents without treating the source documents as backgrounds, which forces the IN vectors of words with context intents, e.g., “evaluate” and “by” in Figure 1(a), to simply remember the target documents rather than capture the semantics of the citations.

The above limitations are caused by conversions of hyper-docs in which certain information in citations is lost. For a citation ⟨ds, C, dt⟩, citation-as-word only keeps the co-occurrence information between C and dt. Context-as-content, on the other hand, mixes C with the original content of dt. Both approaches implicitly downgrade citations ⟨ds, C, dt⟩ to ⟨C, dt⟩ for adaptation purposes.

To learn hyper-doc embeddings without such limitations, we propose hyperdoc2vec. In this model, two vectors of a hyper-doc d, i.e., IN and OUT vectors, are adopted to represent the document in its two roles. The IN vector dI characterizes d as a source document. The OUT vector dO encodes its role as a target document. We note that learning these two types of vectors is advantageous: it enables us to model citations and contents simultaneously without sacrificing information on either side. Next, we describe the details of h-d2v in modeling citations and contents.

Figure 2: The hyperdoc2vec model.

2 It is designed for document visualization purposes.
To model citations, we adopt the architecture in Figure 2. It is similar to pv-dm, except that documents rather than words are predicted at the output layer. For a citation ⟨ds, C, dt⟩, to allow the context words C to interact with both vectors, we average the IN vector of ds with the word vectors of C, and make the resulting vector predictive of the OUT vector of dt. Formally, for the set of all citations C = {⟨ds, C, dt⟩}, we aim to optimize the following average log probability objective:

\max_{D^I, D^O, W^I} \; \frac{1}{|\mathcal{C}|} \sum_{\langle d_s, C, d_t \rangle \in \mathcal{C}} \log P(d_t \mid d_s, C) \quad (1)

To model the probability P(dt | ds, C) that dt is cited in ds with context C, we average their IN vectors,

x = \frac{1}{1 + |C|} \Big( d_s^I + \sum_{w \in C} w^I \Big) \quad (2)

and use x to compose a multi-class softmax classifier over all OUT document vectors:

P(d_t \mid d_s, C) = \frac{\exp(x^\top d_t^O)}{\sum_{d \in D} \exp(x^\top d^O)} \quad (3)

To model the impact of contents on document vectors, we simply consider an additional objective that is identical to pv-dm, i.e., we enumerate words and their contexts, and use the same input architecture as in Figure 2 to predict the OUT vector of the current word. Such convenience owes to the fact that using two vectors makes the model parameters compatible with those of pv-dm. Note that combining the citation and content objectives leads to a joint learning framework. To facilitate easier and faster training, we instead adopt a pre-training/fine-tuning or retrofitting framework (Faruqui et al., 2015): we initialize with a predefined number of pv-dm iterations, and then optimize Eq. (1) based on that initialization.

Dataset | Split | Docs | Citations | Years
NIPS | Train | 1,590 | 512 | Up to 1998
NIPS | Test | 150 | 89 | 1999
NIPS | Total | 1,740 | 601 | Up to 1999
ACL | Train | 18,845 | 91,792 | Up to 2012
ACL | Test | 1,563 | 16,937 | 2013
ACL | Total | 20,408 | 108,729 | Up to 2013
DBLP | Train | 593,378 | 2,565,625 | Up to 2009
DBLP | Test | 55,736 | 308,678 | From 2010
DBLP | Total | 649,114 | 2,874,303 | All years
Table 3: The statistics of three datasets.

Finally, similar to w2v (Mikolov et al., 2013) and d2v (Le and Mikolov, 2014), to make training efficient we adopt negative sampling,

\log \sigma(x^\top d_t^O) + \sum_{i=1}^{n} \mathbb{E}_{d_i \sim P_N(d)} \log \sigma(-x^\top d_i^O) \quad (4)

and use it to replace every log P(dt | ds, C). Following Huang et al. (2015b), we adopt a uniform distribution over D as the noise distribution PN(d).

Unlike the other models in Table 1, h-d2v satisfies all four criteria. We refer to the example in Figure 2 to make the points clear. First, when optimizing Eq. (1) with the instance in Figure 2, the update to the OUT vector of Papineni et al. (2002) depends on the IN vectors of context words such as “BLEU”. Second, we pre-train the IN document vectors with contents, which makes the document embeddings content aware. Third, newcomers can rely on their contents for their IN vectors, and their OUT vectors are updated whenever they are sampled in Eq. (4) (see footnote 3). Finally, the optimization of Eq. (1) enables mutual enhancement between the vectors of hyper-docs and of context intent words, e.g., “evaluate by”. Against the background of a machine translation paper, Zhao and Gildea (2010), these two words help point the citation to the BLEU paper (Papineni et al., 2002), thus updating its OUT vector. The intent “adopting tools/algorithms” of “evaluate by” is also better captured by iterating over many document pairs with these words in between.

5 Experiments
In this section, we first introduce the datasets and basic settings used to learn embeddings. We then discuss additional settings and present experimental results of the two tasks, i.e., document classification and citation recommendation, respectively.

3 Given a relatively large n.
2389 Model Original w/ DeepWalk Macro Micro Macro Micro DeepWalk 61.67 69.89 61.67 69.89 w2v (I) 10.83 41.84 31.06 50.93 w2v (I+O) 9.36 41.26 25.92 49.56 d2v-nc 70.62 77.86 70.64 78.06 d2v-cac 71.83 78.09 71.57 78.59 h-d2v (I) 68.81 76.33 73.96 79.93 h-d2v (I+O) 72.89 78.99 73.24 79.55 Table 4: F1 scores on DBLP. Model Content Aware/ Original w/ DeepWalk Newcomer Friendly Macro Micro Macro Micro DeepWalk 66.57 76.56 66.57 76.56 w2v (I) × / × 19.77 47.32 59.80 72.90 w2v (I+O) × / × 15.97 45.66 50.77 70.08 d2v-nc ✓/ ✓ 61.54 73.73 69.37 78.22 d2v-cac ✓/ ✓ 65.23 75.93 70.43 78.75 h-d2v (I) ✓/ ✓ 58.59 69.79 66.99 75.63 h-d2v (I+O) ✓/ ✓ 66.64 75.19 68.96 76.61 Table 5: F1 on DBLP when newcomers are discarded. 5.1 Datasets and Experimental Settings We use three datasets from the academic paper domain, i.e., NIPS4, ACL anthology5 and DBLP6, as shown in Table 3. They all contain full text of papers, and are of small, medium, and large size, respectively. We apply ParsCit7 (Councill et al., 2008) to parse the citations and bibliography sections. Each identified citation string referring to a paper in the same dataset, e.g., [1] or (Author et al., 2018), is replaced by a global paper id. Consecutive citations like [1, 2] are regarded as multiple ground truths occupying one position. Following He et al. (2010), we take 50 words before and after a citation as the citation context. Gensim ( ˇReh˚uˇrek and Sojka, 2010) is used to implement all w2v and d2v baselines as well as h-d2v. We use cbow for w2v and pv-dbow for d2v, unless otherwise noted. For all three baselines, we set the (half) context window length to 50. For w2v, d2v, and the pv-dm-based initialization of h-d2v, we run 5 epochs following Gensim’s default setting. For h-d2v, its iteration is set to 100 epochs with 1000 negative samples. The dimension size k of all approaches is 100. All other parameters in Gensim are kept as default. 5.2 Document Classification In this task, we classify the research fields of papers given their vectors learned on DBLP. To obtain labels, we use Cora8, a small dataset of Computer Science papers and their field categories. We keep the first levels of the original categories, 4https://cs.nyu.edu/ roweis/data.html 5http://clair.eecs.umich.edu/aan/index.php (2013 release) 6http://zhou142.myweb.cs.uwindsor.ca/academicpaper.html This page has been unavailable recently. They provide a larger CiteSeer dataset and a collection of DBLP paper ids. To better interpret results from the Computer Science perspective, we intersect them and obtain the DBLP dataset. 7https://github.com/knmnyn/ParsCit 8http://people.cs.umass.edu/˜mccallum/data.html e.g., “Artificial Intelligence” of “Artificial Intelligence - Natural Language Processing”, leading to 10 unique classes. We then intersect the dataset with DBLP, and obtain 5,975 labeled papers. For w2v and h-d2v outputing both IN and OUT document vectors, we use IN vectors or concatenations of both vectors as features. For newcomer papers without w2v vectors, we use zero vectors instead. To enrich the features with network structure information, we also try concatenating them with the output of DeepWalk (Perozzi et al., 2014), a representative network embedding model. The model is trained on the citation network of DBLP with an existing implementation9 and default parameters. An SVM classifier with RBF kernel is used. We perform 5-fold cross validation, and report Macro- and Micro-F1 scores. 5.2.1 Classification Performance In Table 4, we demonstrate the classification results. 
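Before turning to the numbers, the feature construction and evaluation loop just described can be summarized by the following sketch (variable names and the zero-vector fallback are illustrative, not the exact implementation):

```python
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC

def build_features(papers, in_vecs, out_vecs=None, dw_vecs=None, dim=100):
    """Per paper: IN vector, optionally concatenated with the OUT vector and/or
    a DeepWalk vector; newcomers missing a vector fall back to zeros."""
    rows = []
    for p in papers:
        parts = [in_vecs.get(p, np.zeros(dim))]
        if out_vecs is not None:
            parts.append(out_vecs.get(p, np.zeros(dim)))
        if dw_vecs is not None:
            parts.append(dw_vecs.get(p, np.zeros(dim)))
        rows.append(np.concatenate(parts))
    return np.vstack(rows)

# X = build_features(labeled_papers, in_vecs, out_vecs, dw_vecs); y = labels
# scores = cross_validate(SVC(kernel="rbf"), X, y, cv=5,
#                         scoring=["f1_macro", "f1_micro"])
```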
We have the following observations. First, adding DeepWalk information almost always leads to better classification performance, except for Macro-F1 of the d2v-cac approach. Second, owning to different context awareness, d2v-cac consistently outperforms d2v-nc in terms of all metrics and settings. Third, w2v has the worst performance. The reason may be that w2v is neither content aware nor newcomer friendly. We will elaborate more on the impacts of the two properties in Section 5.2.2. Finally, no matter whether DeepWalk vectors are used, h-d2v achieves the best F1 scores. However, when OUT vectors are involved, h-d2v with DeepWalk has slightly worse performance. A possible explanation is that, when h-d2v IN and DeepWalk vectors have enough information to train the SVM classifiers, adding another 100 features (OUT vectors) only increase the parameter 9https://github.com/phanein/deepwalk 2390 Model NIPS ACL Anthology DBLP Rec MAP MRR nDCG Rec MAP MRR nDCG Rec MAP MRR nDCG w2v (cbow, I4I) 5.06 1.29 1.29 2.07 12.28 5.35 5.35 6.96 3.01 1.00 1.00 1.44 w2v (cbow, I4O) 12.92 6.97 6.97 8.34 15.68 8.54 8.55 10.23 13.26 7.29 7.33 8.58 d2v-nc (pv-dbow, cosine) 14.04 3.39 3.39 5.82 21.09 9.65 9.67 12.29 7.66 3.25 3.25 4.23 d2v-cac (same as d2v-nc) 14.61 4.94 4.94 7.14 28.01 11.82 11.84 15.59 15.67 7.34 7.36 9.16 NPM (Huang et al., 2015b) 7.87 2.73 3.13 4.03 12.86 5.98 5.98 7.59 6.87 3.28 3.28 4.07 h-d2v (random init, I4O) 3.93 0.78 0.78 1.49 30.98 16.76 16.77 20.12 17.22 8.82 8.87 10.65 h-d2v (pv-dm retrofitting, I4O) 15.73 6.68 6.68 8.80 31.93 17.33 17.34 20.76 21.32 10.83 10.88 13.14 Table 6: Top-10 citation recommendation results (dimension size k = 100). space of the classifiers and the training variance. For w2v with or without DeepWalk, it is also the case. This may be because information in w2v’s IN and OUT vectors is fairly redundant. 5.2.2 Impacts of Content Awareness and Newcomer Friendliness Because content awareness and newcomer friendliness are highly correlated in Table 1, to isolate and study their impacts, we decouple them as follows. In the 5,975 labeled papers, we keep 2,052 with at least one citation, and redo experiments in Table 4. By carrying out such controlled experiments, we expect to remove the impact of newcomers, and compare all approaches only with respect to different content awareness. In Table 5, we provide the new scores obtained. By comparing Tables 4 and 5, we observe that w2v benefits from removing newcomers with zero vectors, while all newcomer friendly approaches get lower scores because of fewer training examples. Even though the change, w2v still cannot outperform the other approaches, which reflects the positive impact of content awareness on the classification task. It is also interesting that DeepWalk becomes very competitive. This implies that structure-based methods favor networks with better connectivity. Finally, we note that Table 5 is based on controlled experiments with intentionally skewed data. The results are not intended for comparison among approaches in practical scenarios. 5.3 Citation Recommendation When writing papers, it is desirable to recommend proper citations for a given context. This could be achieved by comparing the vectors of the context and previous papers. We use all three datasets for this task. Embeddings are trained on papers before 1998, 2012, and 2009, respectively. The remaining papers in each dataset are used for testing. 
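The ranking step shared by the embedding-based recommenders can be sketched as follows (a minimal illustration; the per-model scoring choices, e.g., dot product vs. cosine, are detailed below):

```python
import numpy as np

def recommend(context_words, word_vecs, doc_vecs, doc_ids, k=10):
    """Average the vectors of the citation-context words, then rank candidate
    papers by similarity of their document vectors to that context vector
    (dot product shown here; cosine is used for the d2v-based variants)."""
    known = [word_vecs[w] for w in context_words if w in word_vecs]
    if not known:
        return []
    ctx = np.mean(known, axis=0)
    scores = doc_vecs @ ctx              # doc_vecs: |D| x dim matrix
    top = np.argsort(-scores)[:k]
    return [doc_ids[i] for i in top]
```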
We compare h-d2v with all approaches in Section 4.2, as well as NPM10 (Huang et al., 2015b) mentioned in Section 2, the first embedding-based approach for the citation recommendation task. Note that the inference stage involves interactions between word and document vectors and is nontrivial. We describe our choices as below. First, for w2v vectors, Nalisnick et al. (2016) suggest that the IN-IN similarity favors word pairs with similar functions (e.g., “red” and “blue”), while the IN-OUT similarity characterizes word co-occurrence or compatibility (e.g., “red” and “bull”). For citation recommendation that relies on the compatibility between context words and cited papers, we hypothesize that the IN-for-OUT (or I4O for short) approach will achieve better results. Therefore, for w2v-based approaches, we average IN vectors of context words, then score and and rank OUT document vectors by dot product. Second, for d2v-based approaches, we use the learned model to infer a document vector d for the context words, and use d to rank IN document vectors by cosine similarity. Among multiple attempts, we find this choice to be optimal. Third, for h-d2v, we adopt the same scoring and ranking configurations as for w2v. Finally, for NPM, we adopt the same ranking strategy as in Huang et al. (2015b). Following them, we focus on top-10 results and report the Recall, MAP, MRR, and nDCG scores. 5.3.1 Recommendation Performance In Table 6, we report the citation recommendation results. Our observations are as follows. First, among all datasets, all methods perform relatively well on the medium-sized ACL dataset. This is because the smallest NIPS dataset provides 10Note that the authors used n = 1000 for negative sampling, and did not report the number of training epoches. After many trials, we find that setting the number of both the negative samples and epoches at 100 to be relatively effective and affordable w.r.t. training time. 2391 100 200 300 400 500 Dimension 5 10 15 20 25 30 Rec@10 (%) w2v (I4O) d2v-nc d2v-cac NPM h-d2v Figure 3: Varying k on DBLP. The scores of w2v keeps increasing to 26.63 at k = 1000, and then begins to drop. Although at the cost of a larger model and longer training/inference time, it still cannot outperform h-d2v of 30.37 at k = 400. too few citation contexts to train a good model. Moreover, DBLP requires a larger dimension size k to store more information in the embedding vectors. We increase k and report the Rec@10 scores in Figure 3. We see that all approaches have better performance when k increases to 200, though d2v-based ones start to drop beyond this point. Second, the I4I variant of w2v has the worst performance among all approaches. This observation validates our hypothesis in Section 5.3. Third, the d2v-cac approach outperforms its variant d2v-nc in terms of all datasets and metrics. This indicates that context awareness matters in the citation recommendation task. Fourth, the performance of NPM is sandwiched between those of w2v’s two variants. We have tried our best to reproduce it. Our explanation is that NPM is citation-as-word-based, and only depends on citation contexts for training. Therefore, it is only context aware but neither content aware nor newcomer friendly, and behaves like w2v. Finally, when retrofitting pv-dm, h-d2v generally has the best performance. When we substitute pv-dm with random initialization, the performance is deteriorated by varying degrees on different datasets. 
This implies that content awareness is also important, if not so important than context awareness, on the citation recommendation task. 5.3.2 Impact of Newcomer Friendliness Table 7 analyzes the impact of newcomer friendliness. Opposite from what is done in Section 5.2.2, we only evaluate on testing examples where at least a ground-truth paper is a newcomer. Please note that newcomer unfriendly approaches do not Model Newcomer Friendly Rec MAP MRR nDCG w2v (I4O) × 3.64 3.23 3.41 2.73 NPM × 1.37 1.13 1.15 0.92 d2v-nc ✓ 6.48 3.52 3.54 3.96 d2v-cac ✓ 8.16 5.13 5.24 5.21 h-d2v ✓ 6.41 4.95 5.21 4.49 Table 7: DBLP results evaluated on 63,342 citation contexts with newcomer ground-truth. Category Description Weak Weakness of cited approach CoCoGM Contrast/Comparison in Goals/Methods (neutral) CoCoWork stated to be superior to cited work CoCoR0 Contrast/Comparison in Results (neutral) CoCoXY Contrast between 2 cited methods PBas Author uses cited work as basis or starting point PUse Author uses tools/algorithms/data/definitions PModi Author adapts or modifies tools/algorithms/data PMot This citation is positive about approach used or problem addressed (used to motivate work in current paper) PSim Author’s work and cited work are similar PSup Author’s work and cited work are compatible/provide support for each other Neut Neutral description of cited work, or not enough textual evidence for above categories, or unlisted citation function Table 8: Annotation scheme of citation functions in Teufel et al. (2006). necessarily get zero scores. The table shows that newcomer friendly approaches are superior to unfriendly ones. Note that, like Table 5, this table is also based on controlled experiments and not intended for comparing approaches. 5.3.3 Impact of Context Intent Awareness In this section, we analyze the impact of context intent awareness. We use Teufel et al. (2006)’s 2,824 citation contexts11 with annotated citation functions, e.g., emphasizing weakness (Weak) or using tools/algorithms (PBas) of the cited papers. Table 8 from Teufel et al. (2006) describes the full annotating scheme. Teufel et al. (2006) also use manual features to evaluate citation function classification. To test all models on capturing context intents, we average all context words’ IN vectors (trained on DBLP) as features. Noticing that pv-dbow does not output IN word vectors, and OUT vectors do not provide reasonable results, we use pv-dm here instead. We use SVM with RBF 11The number is 2,829 in the original paper. The inconsistency may be due to different regular expressions we used. 2392 Query and Ground Truth Result Ranking of w2v Result Ranking of d2v-cac Result Ranking of h-d2v . . . We also evaluate our model by computing the machine translation BLEU score (Papineni et al., 2002) using the Moses system (Koehn et al., 2007)... (Papineni et al., 2002) BLEU: a Method for Automatic Evaluation of Machine Translation (Koehn et al., 2007) Moses: Open Source Toolkit for Statistical Machine Translation 1. HMM-Based Word Alignment in Statistical Translation 2. Indirect-HMM-based Hypothesis Alignment for Combining Outputs from Machine Translation Systems 3. The Alignment Template Approach to Statistical Machine Translation ... 9. Moses: Open Source Toolkit for Statistical Machine Translation 57. BLEU: a Method for Automatic Evaluation of Machine Translation 1. Discriminative Reranking for Machine Translation 2. Learning Phrase-Based Head Transduction Models for Translation of Spoken Utterances 3. 
Cognates Can Improve Statistical Translation Models . . . 6. BLEU: a Method for Automatic Evaluation of Machine Translation 29. Moses: Open Source Toolkit for Statistical Machine Translation 1. BLEU: a Method for Automatic Evaluation of Machine Translation 2. Statistical Phrase-Based Translation 3. Improved Statistical Alignment Models 4. HMM-Based Word Alignment in Statistical Translation 5. Moses: Open Source Toolkit for Statistical Machine Translation Table 9: Papers recommended by different approaches for a citation context in Zhao and Gildea (2010). Weak CoCoGM CoCoR0 CoCoCoCoXY PBas PUse PModi PMot PSim PSup Neut 0 20 40 60 80 100 F1 (%) EMNLP'06 (Macro: 57.00 Micro: 77.00) w2v (Macro: 44.86 Micro: 74.43) d2v-cac (Macro: 24.19 Micro: 70.64) h-d2v (Macro: 54.37 Micro: 75.39) Figure 4: F1 of citation function classification. kernels and default parameters. Following Teufel et al. (2006), we use 10-fold cross validation. Figure 4 depicts the F1 scores. Scores of Teufel et al. (2006)’s approach are from the original paper. We omit d2v-nc because it is very inferior to d2v-cac. We have the following observations. First, Teufel et al. (2006)’s feature-engineeringbased approach has the best performance. Note that we cannot obtain their original cross validation split, so the comparison may not be fair and is only for consideration in terms of numbers. Second, among all embedding-based methods, h-d2v has the best citation function classification results, which is close to Teufel et al. (2006)’s. Finally, the d2v-cac vectors are only good at Neutral, the largest class. On the other classes and global F1, they are outperformed by w2v vectors. To study how citation function affects citation recommendation, we combine the 2,824 labeled citation contexts and another 1,075 labeled contexts the authors published later to train an SVM, and apply it to the DBLP testing set to get citation functions. We evaluate citation recommendation performance of w2v (I4O), d2v-cac, and h-d2v on a per-citation-function basis. In Figure 5, we break down Rec@10 scores on citation functions. On the six largest classes (marked by solid dots), h-d2v outperforms all competitors. 100 102 104 106 Count Weak CoCoGM CoCoR0 CoCoCoCoXY PBas PUse PModi PMot PSim PSup Neut 10 20 30 40 Rec@10 (%) w2v d2v-cac h-d2v Category Size Figure 5: Rec@10 w.r.t. citation functions. To better investigate the impact of context intent awareness, Table 9 shows recommended papers of the running example of this paper. Here, Zhao and Gildea (2010) cited the BLEU metric (Papineni et al., 2002) and Moses tools (Koehn et al., 2007) of machine translation. However, the additional words “machine translation” lead both w2v and d2v-cac to recommend many machine translation papers. Only our h-d2v manages to recognize the citation function “using tools/algorithms (PBas)”, and concentrates on the citation intent to return the right papers in top-5 results. 6 Conclusion We focus on the hyper-doc embedding problem. We propose that hyper-doc embedding algorithms should be content aware, context aware, newcomer friendly, and context intent aware. To meet all four criteria, we propose a general approach, hyperdoc2vec, which assigns two vectors to each hyper-doc and models citations in a straightforward manner. In doing so, the learned embeddings satisfy all criteria, which no existing model is able to. For evaluation, paper classification and citation recommendation are conducted on three academic paper datasets. 
Results confirm the effectiveness of our approach. Further analyses also demonstrate that possessing the four properties helps h-d2v outperform other models. 2393 References Matthew Berger, Katherine McDonough, and Lee M. Seversky. 2017. cite2vec: Citation-driven document exploration via word embeddings. IEEE Trans. Vis. Comput. Graph. 23(1):691–700. David A. Cohn and Thomas Hofmann. 2000. The missing link - A probabilistic model of document content and hypertext connectivity. In Advances in Neural Information Processing Systems 13, Papers from Neural Information Processing Systems (NIPS) 2000. pages 430–436. Isaac G. Councill, C. Lee Giles, and Min-Yen Kan. 2008. Parscit: an open-source CRF reference string parsing package. In Proceedings of the International Conference on Language Resources and Evaluation, LREC 2008. Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on wikipedia data. In EMNLPCoNLL 2007, Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. pages 708–716. Travis Ebesu and Yi Fang. 2017. Neural citation network for context-aware citation recommendation. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. pages 1093–1096. Wei Fang, Jianwen Zhang, Dilin Wang, Zheng Chen, and Ming Li. 2016. Entity disambiguation by knowledge and text jointly embedding. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016. pages 260–269. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard H. Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 1606–1615. Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using wikipediabased explicit semantic analysis. In IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence. pages 1606–1611. Soumyajit Ganguly and Vikram Pudi. 2017. Paper2vec: Combining graph and text information for scientific paper representation. In Advances in Information Retrieval - 39th European Conference on IR Research, ECIR 2017. pages 383–395. Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pages 855–864. Qi He, Jian Pei, Daniel Kifer, Prasenjit Mitra, and C. Lee Giles. 2010. Context-aware citation recommendation. In Proceedings of the 19th International Conference on World Wide Web, WWW 2010. pages 421–430. Zhengyan He, Shujie Liu, Mu Li, Ming Zhou, Longkai Zhang, and Houfeng Wang. 2013. Learning entity representation for entity disambiguation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, Volume 2: Short Papers. pages 30–34. Hongzhao Huang, Larry P. Heck, and Heng Ji. 2015a. Leveraging deep neural networks and knowledge graphs for entity disambiguation. CoRR abs/1504.07678. Wenyi Huang, Saurabh Kataria, Cornelia Caragea, Prasenjit Mitra, C. Lee Giles, and Lior Rokach. 2012. Recommending citations: translating papers into references. In 21st ACM International Conference on Information and Knowledge Management, CIKM’12. pages 1910–1914. Wenyi Huang, Zhaohui Wu, Liang Chen, Prasenjit Mitra, and C. Lee Giles. 2015b. 
A neural probabilistic model for context based citation recommendation. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence. pages 2404–2410. Saurabh Kataria, Prasenjit Mitra, and Sumit Bhatia. 2010. Utilizing context in generative bayesian models for linked corpus. In Proceedings of the TwentyFourth AAAI Conference on Artificial Intelligence, AAAI 2010. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL 2007, Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014. pages 1188–1196. Qing Lu and Lise Getoor. 2003. Link-based classification. In Machine Learning, Proceedings of the Twentieth International Conference (ICML 2003). pages 496–503. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013.. pages 3111–3119. 2394 Eric T. Nalisnick, Bhaskar Mitra, Nick Craswell, and Rich Caruana. 2016. Improving document ranking with dual word embeddings. In Proceedings of the 25th International Conference on World Wide Web, WWW 2016, Companion Volume. pages 83–84. Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation ranking: Bringing order to the web. . Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. pages 311–318. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: online learning of social representations. In The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14. pages 701–710. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks. ELRA, Valletta, Malta, pages 45–50. http://is.muni.cz/ publication/884893/en. Wei Shen, Jianyong Wang, and Jiawei Han. 2015. Entity linking with a knowledge base: Issues, techniques, and solutions. IEEE Trans. Knowl. Data Eng. 27(2):443–460. Kazunari Sugiyama and Min-Yen Kan. 2010. Scholarly paper recommendation via user’s recent research interests. In Proceedings of the 2010 Joint International Conference on Digital Libraries, JCDL 2010. pages 29–38. Yaming Sun, Lei Lin, Duyu Tang, Nan Yang, Zhenzhou Ji, and Xiaolong Wang. 2015. Modeling mention, context and entity with neural networks for entity disambiguation. In Proceedings of the TwentyFourth International Joint Conference on Artificial Intelligence, IJCAI 2015. pages 1333–1339. Jian Tang, Meng Qu, and Qiaozhu Mei. 2015a. PTE: predictive text embedding through large-scale heterogeneous text networks. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pages 1165–1174. Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015b. 
LINE: large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, WWW 2015. pages 1067–1077. Jie Tang and Jing Zhang. 2009. A discriminative approach to topic-based citation recommendation. In Advances in Knowledge Discovery and Data Mining, 13th Pacific-Asia Conference, PAKDD 2009. pages 572–579. Simone Teufel, Advaith Siddharthan, and Dan Tidhar. 2006. Automatic classification of citation function. In EMNLP 2007, Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing. pages 103–110. Cunchao Tu, Weicheng Zhang, Zhiyuan Liu, and Maosong Sun. 2016. Max-margin deepwalk: Discriminative learning of network representation. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016. pages 3889–3895. Suhang Wang, Jiliang Tang, Charu C. Aggarwal, and Huan Liu. 2016. Linked document embedding for classification. In Proceedings of the 25th ACM International Conference on Information and Knowledge Management, CIKM 2016. pages 115–124. Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016. pages 250–259. Cheng Yang, Zhiyuan Liu, Deli Zhao, Maosong Sun, and Edward Y. Chang. 2015. Network representation learning with rich text information. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015. pages 2111–2117. Shaojun Zhao and Daniel Gildea. 2010. A fast fertility hidden markov model for word alignment using MCMC. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP 2010. pages 596–605. Stefan Zwicklbauer, Christin Seifert, and Michael Granitzer. 2016. Robust and collective entity disambiguation through semantic embeddings. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, SIGIR 2016. pages 425–434.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2395–2405 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2395 Entity-Duet Neural Ranking: Understanding the Role of Knowledge Graph Semantics in Neural Information Retrieval Zhenghao Liu1 Chenyan Xiong2 Maosong Sun1 ∗ Zhiyuan Liu1 1State Key Laboratory of Intelligent Technology and Systems Beijing National Research Center for Information Science and Technology Department of Computer Science and Technology, Tsinghua University, Beijing, China 2Language Technologies Institute, Carnegie Mellon University Abstract This paper presents the Entity-Duet Neural Ranking Model (EDRM), which introduces knowledge graphs to neural search systems. EDRM represents queries and documents by their words and entity annotations. The semantics from knowledge graphs are integrated in the distributed representations of their entities, while the ranking is conducted by interaction-based neural ranking networks. The two components are learned end-to-end, making EDRM a natural combination of entityoriented search and neural information retrieval. Our experiments on a commercial search log demonstrate the effectiveness of EDRM. Our analyses reveal that knowledge graph semantics significantly improve the generalization ability of neural ranking models. 1 Introduction The emergence of large scale knowledge graphs has motivated the development of entity-oriented search, which utilizes knowledge graphs to improve search engines. The recent progresses in entity-oriented search include better text representations with entity annotations (Xiong et al., 2016; Raviv et al., 2016), richer ranking features (Dalton et al., 2014), entity-based connections between query and documents (Liu and Fang, 2015; Xiong and Callan, 2015), and soft-match query and documents through knowledge graph relations or embeddings (Xiong et al., 2017c; Ensan and Bagheri, 2017). These approaches bring in entities and semantics from knowledge graphs and have greatly improved the effectiveness of feature-based search systems. ∗Corresponding author: M. Sun ([email protected]) Another frontier of information retrieval is the development of neural ranking models (neuralIR). Deep learning techniques have been used to learn distributed representations of queries and documents that capture their relevance relations (representation-based) (Shen et al., 2014), or to model the query-document relevancy directly from their word-level interactions (interactionbased) (Guo et al., 2016a; Xiong et al., 2017b; Dai et al., 2018). Neural-IR approaches, especially the interaction-based ones, have greatly improved the ranking accuracy when large scale training data are available (Dai et al., 2018). Entity-oriented search and neural-IR push the boundary of search engines from two different aspects. Entity-oriented search incorporates human knowledge from entities and knowledge graph semantics. It has shown promising results on feature-based ranking systems. On the other hand, neural-IR leverages distributed representations and neural networks to learn more sophisticated ranking models form large-scale training data. However, it remains unclear how these two approaches interact with each other and whether the entity-oriented search has the same advantage in neural-IR methods as in feature-based systems. This paper explores the role of entities and semantics in neural-IR. 
We present an EntityDuet Neural Ranking Model (EDRM) that incorporates entities in interaction-based neural ranking models. EDRM first learns the distributed representations of entities using their semantics from knowledge graphs: descriptions and types. Then it follows a recent state-of-the-art entity-oriented search framework, the word-entity duet (Xiong et al., 2017a), and matches documents to queries with both bag-of-words and bag-of-entities. Instead of manual features, EDRM uses interactionbased neural models (Dai et al., 2018) to match query and documents with word-entity duet rep2396 resentations. As a result, EDRM combines entityoriented search and the interaction based neuralIR; it brings the knowledge graph semantics to neural-IR and enhances entity-oriented search with neural networks. One advantage of being neural is that EDRM can be learned end-to-end. Given a large amount of user feedback from a commercial search log, the integration of knowledge graph semantics to neural ranker, is learned jointly with the modeling of query-document relevance in EDRM. It provides a convenient data-driven way to leverage external semantics in neural-IR. Our experiments on a Sogou query log and CNDBpedia demonstrate the effectiveness of entities and semantics in neural models. EDRM significantly outperforms the word-interaction-based neural ranking model, K-NRM (Xiong et al., 2017a), confirming the advantage of entities in enriching word-based ranking. The comparison with Conv-KNRM (Dai et al., 2018), the recent stateof-the-art neural ranker that models phrase level interactions, provides a more interesting observation: Conv-KNRM predicts user clicks reasonably well, but integrating knowledge graphs using EDRM significantly improves the neural model’s generalization ability on more difficult scenarios. Our analyses further revealed the source of EDRM’s generalization ability: the knowledge graph semantics. If only treating entities as ids and ignoring their semantics from the knowledge graph, the entity annotations are only a cleaner version of phrases. In neural-IR systems, the embeddings and convolutional neural networks have already done a decent job in modeling phraselevel matches. However, the knowledge graph semantics brought by EDRM can not yet be captured solely by neural networks; incorporating those human knowledge greatly improves the generalization ability of neural ranking systems. 2 Related Work Current neural ranking models can be categorized into two groups: representation based and interaction based (Guo et al., 2016b). The earlier works mainly focus on representation based models. They learn good representations and match them in the learned representation space of query and documents. DSSM (Huang et al., 2013) and its convolutional version CDSSM (Shen et al., 2014) get representations by hashing letter-tri-grams to a low dimension vector. A more recent work uses pseudo-labeling as a weak supervised signal to train the representation based ranking model (Dehghani et al., 2017). The interaction based models learn word-level interaction patterns from query-document pairs. ARC-II (Hu et al., 2014) and MatchPyramind (Pang et al., 2016) utilize Convolutional Neural Network (CNN) to capture complicated patterns from word-level interactions. The Deep Relevance Matching Model (DRMM) (Guo et al., 2016b) uses pyramid pooling (histogram) to summarize the word-level similarities into ranking models. 
K-NRM and Conv-KNRM use kernels to summarize wordlevel interactions with word embeddings and provide soft match signals for learning to rank. There are also some works establishing positiondependent interactions for ranking models (Pang et al., 2017; Hui et al., 2017). Interaction based models and representation based models can also be combined for further improvements (Mitra et al., 2017). Recently, large scale knowledge graphs such as DBpedia (Auer et al., 2007), Yago (Suchanek et al., 2007) and Freebase (Bollacker et al., 2008) have emerged. Knowledge graphs contain human knowledge about real-word entities and become an opportunity for search system to better understand queries and documents. There are many works focusing on exploring their potential for ad-hoc retrieval. They utilize knowledge as a kind of pseudo relevance feedback corpus (Cao et al., 2008) or weight words to better represent query according to well-formed entity descriptions. Entity query feature expansion (Dietz and Verga, 2014) uses related entity attributes as ranking features. Another way to utilize knowledge graphs in information retrieval is to build the additional connections from query to documents through related entities. Latent Entity Space (LES) builds an unsupervised model using latent entities’ descriptions (Liu and Fang, 2015). EsdRank uses related entities as a latent space, and performs learning to rank with various information retrieval features (Xiong and Callan, 2015). AttR-Duet develops a four-way interaction to involve cross matches between entity and word representations to catch more semantic relevance patterns (Xiong et al., 2017a). There are many other attempts to integrate 2397 knowledge graphs in neural models in related tasks (Miller et al., 2016; Gupta et al., 2017; Ghazvininejad et al., 2018). Our work shares a similar spirit and focuses on exploring the effectiveness of knowledge graph semantics in neuralIR. 3 Entity-Duet Neural Ranking Model This section first describes the standard architecture in current interaction based neural ranking models. Then it presents our Entity-Duet Neural Ranking Model, including the semantic entity representation which integrates the knowledge graph semantics, and then the entity-duet ranking framework. The overall architecture of EDRM is shown in Figure 1. 3.1 Interaction based Ranking Models Given a query q and a document d, interaction based models first build the word-level translation matrix between q and d (Berger and Lafferty, 1999). The translation matrix describes word pairs similarities using word correlations, which are captured by word embedding similarities in interaction based models. Typically, interaction based ranking models first map each word t in q and d to an L-dimensional embedding ⃗vt with an embedding layer Embw: ⃗vt = Embw(t). (1) It then constructs the interaction matrix M based on query and document embeddings. Each element Mij in the matrix, compares the ith word in q and the jth word in d, e.g. using the cosine similarity of word embeddings: M ij = cos(⃗vtq i ,⃗vtd j ). (2) With the translation matrix describing the term level matches between query and documents, the next step is to calculate the final ranking score from the matrix. Many approaches have been developed in interaction base neural ranking models, but in general, that would include a feature extractor φ() on M and then one or several ranking layers to combine the features to the ranking score. 
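To make the generic pipeline of Section 3.1 concrete, the following is a minimal PyTorch sketch that embeds query and document terms (Eq. 1), builds the cosine-similarity translation matrix M (Eq. 2), and applies one common choice of the feature extractor φ, Gaussian kernel pooling of the kind used by K-NRM (formally introduced in Section 4), followed by a single ranking layer. This is an illustrative sketch rather than the authors' implementation: the vocabulary size, embedding dimension, kernel means and widths, and all tensor names are assumptions.

```python
# Minimal sketch of the interaction-based ranking pipeline of Section 3.1:
# embed query/document terms, build the cosine-similarity translation matrix M,
# and extract soft-match features with Gaussian kernel pooling (one common phi).
# Vocabulary size, embedding dimension, and kernel parameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InteractionRanker(nn.Module):
    def __init__(self, vocab_size=50000, dim=300, n_kernels=11):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)            # Emb_w in Eq. (1)
        # Kernel means spread over [-1, 1]; widths are a common heuristic choice.
        self.register_buffer("mu", torch.linspace(-0.9, 1.0, n_kernels))
        self.register_buffer("sigma", torch.full((n_kernels,), 0.1))
        self.rank = nn.Linear(n_kernels, 1)                   # final ranking layer

    def translation_matrix(self, q_ids, d_ids):
        q = F.normalize(self.embed(q_ids), dim=-1)            # (m, L)
        d = F.normalize(self.embed(d_ids), dim=-1)            # (n, L)
        return q @ d.t()                                      # M_ij = cos(v_qi, v_dj), Eq. (2)

    def kernel_pooling(self, M):
        # phi(M): soft-TF counts per kernel, summed over document terms,
        # then log-summed over query terms, as in kernel-based rankers.
        M = M.unsqueeze(-1)                                   # (m, n, 1)
        K = torch.exp(-((M - self.mu) ** 2) / (2 * self.sigma ** 2))
        soft_tf = K.sum(dim=1)                                # (m, n_kernels)
        return torch.log(soft_tf.clamp(min=1e-10)).sum(dim=0)  # (n_kernels,)

    def forward(self, q_ids, d_ids):
        M = self.translation_matrix(q_ids, d_ids)
        phi = self.kernel_pooling(M)
        return torch.tanh(self.rank(phi))                     # ranking score


# Toy usage: score one query-document pair of word ids.
ranker = InteractionRanker()
score = ranker(torch.tensor([3, 17, 256]), torch.tensor([3, 42, 7, 99, 256]))
```

EDRM's contribution, described next, is to replace the word-only representations feeding this translation matrix with knowledge-enriched word and entity representations.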
3.2 Semantic Entity Representation EDRM incorporates the semantic information about an entity from the knowledge graphs into its representation. The representation includes three embeddings: entity embedding, description embedding, and type embedding, all in L dimension and are combined to generate the semantic representation of the entity. Entity Embedding uses an L-dimensional embedding layer Embe to get an entity embedding ⃗vemb e for e: ⃗vemb e = Embe(e). (3) Description Embedding encodes an entity description which contains m words and explains the entity. EDRM first employs the word embedding layer Embw to embed the description word w to ⃗vw. Then it combines all embeddings in text to an embedding matrix ⃗Vw. Next, it leverages convolutional filters to slide over the text and compose the h length n-gram as ⃗gj e: ⃗gj e = ReLu(WCNN · ⃗V j:j+h w +⃗bCNN), (4) where WCNN and ⃗bCNN are two parameters of the covolutional filter. Then we use max pooling after the convolution layer to generate the description embedding ⃗vdes e : ⃗vdes e = max(⃗g1 e, ...,⃗gj e, ...,⃗gm e ). (5) Type Embedding encodes the categories of entities. Each entity e has n kinds of types Fe = {f1, ..., fj, ..., fn}. EDRM first gets the fj embedding ⃗vfj through the type embedding layer Embtp: ⃗vemb fj = Embtp(e). (6) Then EDRM utilizes an attention mechanism to combine entity types to the type embedding ⃗vtype e : ⃗vtype e = n X j aj⃗vfj, (7) where aj is the attention score, calculated as: aj = exp(Pj) Pn l exp(Pl), (8) Pj = ( X i Wbow⃗vti) · ⃗vfj. (9) Pj is the dot product of the query or document representation and type embedding fj. We leverage bag-of-words for query or document encoding. Wbow is a parameter matrix. Combination. The three embeddings are combined by an linear layer to generate the semantic representation of the entity: ⃗vsem e = ⃗vemb e + We(⃗vdes e ⊕⃗vtype e )T +⃗be. (10) We is an L×2L matrix and⃗be is an L-dimensional vector. 2398 Query Document ... ... Enriched-entity Embedding N-gram Embedding Interaction Matrix CNN Attention Kernel Pooling ... ... ... ... Soft Match Feature Final Ranking Score Obama family tree ... ... Unigrams Bigrams Trigrams CNN Obama Description Type Family Tree Description Type Enriched-entity Embedding Figure 1: The architecture of EDRM. 3.3 Neural Entity-Duet Framework Word-entity duet (Xiong et al., 2017a) is a recently developed framework in entity-oriented search. It utilizes the duet representation of bag-of-words and bag-of-entities to match q-d with hand crafted features. This work introduces it to neural-IR. We first construct bag-of-entities qe and de with entity annotation as well as bag-of-words qw and dw for q and d. The duet utilizes a four-way interaction: query words to document words (qw-dw), query words to documents entities (qw-de), query entities to document words (qe-dw) and query entities to document entities (qe-de). Instead of features, EDRM uses a translation layer that calculates similarity between a pair of query-document terms: (⃗vi wq or ⃗vi eq) and (⃗vj wd or ⃗vj ed). It constructs the interaction matrix M = {Mww, Mwe, Mew, Mee}. And Mww, Mwe, Mew, Mee denote interactions of qwdw, qw-de, qe-dw, qe-de respectively. And elements in them are the cosine similarities of corresponding terms: M ij ww = cos(⃗vi wq,⃗vj wd); M ij ee = cos(⃗vi eq,⃗vj ed) M ij ew = cos(⃗vi eq,⃗vj wd); M ij we = cos(⃗vi wq,⃗vj ed). 
(11) The final ranking feature Φ(M) is a concatenation (⊕) of four cross matches (φ(M)): Φ(M) = φ(Mww) ⊕φ(Mwe) ⊕φ(Mew) ⊕φ(Mee), (12) where the φ can be any function used in interaction based neural ranking models. The entity-duet presents an effective way to cross match query and document in entity and word spaces. In EDRM, it introduces the knowledge graph semantics representations into neuralIR models. 4 Integration with Kernel based Neural Ranking Models The duet translation matrices provided by EDRM can be plugged into any standard interaction based neural ranking models. This section expounds special cases where it is integrated with K-NRM (Xiong et al., 2017b) and Conv-KNRM (Dai et al., 2018), two recent stateof-the-arts. K-NRM uses K Gaussian kernels to extract the matching feature φ(M) from the translation matrix M. Each kernel Kk summarizes the translation scores as soft-TF counts, generating a K-dimensional feature vector φ(M) = {K1(M), ..., KK(M)}: Kk(M) = X j exp(−M ij −µk 2δ2 k ). (13) µk and δk are the mean and width for the kth kernel. Conv-KNRM extend K-NRM incorporating hgram compositions ⃗gi h from text embedding ⃗VT using CNN: ⃗gi h = relu(Wh · ⃗V i:i+h T + ⃗vh). (14) Then a translation matrix Mhq,hd is constructed. Its elements are the similarity scores of h-gram 2399 pairs between query and document: Mhq,hd = cos(⃗gi hq,⃗gj hd). (15) We also extend word n-gram cross matches to word entity duet matches: Φ(M) = φ(M1,1)⊕...⊕φ(Mhq,hd)⊕...⊕φ(Mee). (16) Each ranking feature φ(Mhq,hd) contains three parts: query hq-gram and document hd-gram match feature (φ(Mwwhq,hd)), query entity and document hd-gram match feature (φ(Mew1,hd)), and query hq-gram and document entity match feature (φ(Mwwhq,1)): φ(Mhq,hd) = φ(Mwwhq,hd )⊕φ(Mew1,hd )⊕φ(Mwehq,1). (17) We then use learning to rank to combine ranking feature Φ(M) to produce the final ranking score: f(q, d) = tanh(ωT r Φ(M) + br). (18) ωr and br are the ranking parameters. tanh is the activation function. We use standard pairwise loss to train the model: l = X q X d+,d−∈D+,− q max(0, 1 −f(q, d+) + f(q, d−)), (19) where the d+ is a document ranks higher than d−. With sufficient training data, the whole model is optimized end-to-end with back-propagation. During the process, the integration of the knowledge graph semantics, entity embedding, description embeddings, type embeddings, and matching with entities-are learned jointly with the ranking neural network. 5 Experimental Methodology This section describes the dataset, evaluation metrics, knowledge graph, baselines, and implementation details of our experiments. Dataset. Our experiments use a query log from Sogou.com, a major Chinese searching engine (Luo et al., 2017). The exact same dataset and training-testing splits in the previous research (Xiong et al., 2017b; Dai et al., 2018) are used. They defined the ad-hoc ranking task in this dataset as re-ranking the candidate documents provided by the search engine. All Chinese texts are segmented by ICTCLAS (Zhang et al., 2003), after that they are treated the same as English. (a) Statistic of queries (b) Statistic of documents Figure 2: Query and document distributions. Queries and documents are grouped by the number of entities. Prior research leverages clicks to model user behaviors and infer reliable relevance signals using click models (Chuklin et al., 2015). 
DCTR and TACM are two click models: DCTR calculates the relevance scores of a query-document pair based on their click through rates (CTR); TACM (Wang et al., 2013) is a more sophisticated model that uses both clicks and dwell times. Following previous research (Xiong et al., 2017b), both DCTR and TACM are used to infer labels. DCTR inferred relevance labels are used in training. Three testing scenarios are used: Testing-SAME, Testing-DIFF and Testing-RAW. Testing-SAME uses DCTR labels, the same as in training. Testing-DIFF evaluates models performance based on TACM inferred relevance labels. Testing-RAW evaluates ranking models through user clicks, which tests ranking performance for the most satisfying document. Testing-DIFF and Testing-RAW are harder scenarios that challenge the generalization ability of all models, because their training labels and testing labels are generated differently (Xiong et al., 2017b). Evaluation Metrics. NDCG@1 and NDCG@10 are used in Testing-SAME and Testing-DIFF. MRR is used for Testing-Raw. Statistic significances are tested by permutation test with P< 0.05. All are the same as in previous research (Xiong et al., 2017b). Knowledge Graph. We use CN-DBpedia (Xu et al., 2017), a large scale Chinese knowledge graph based on Baidu Baike, Hudong Baike, and Chinese Wikipedia. CN-DBpedia contains 10,341,196 entities and 88,454,264 relations. The query and document entities are annotated by CMNS, the commonness (popularity) based entity linker (Hasibi et al., 2017). CN-DBpedia and CMNS provide good coverage on our queries and 2400 Table 1: Ranking accuracy of EDRM-KNRM, EDRM-CKNRM and baseline methods. Relative performances compared with K-NRM are in percentages. †, ‡, §, ¶, ∗indicate statistically significant improvements over DRMM†, CDSSM‡, MP§, K-NRM¶ and Conv-KNRM∗respectively. Testing-SAME Testing-DIFF Testing-RAW Method NDCG@1 NDCG@10 NDCG@1 NDCG@10 MRR BM25 0.1422 −46.24% 0.2868 −31.67% 0.1631 −45.63% 0.3254 −23.04% 0.2280 −33.86% RankSVM 0.1457 −44.91% 0.3087 −26.45% 0.1700 −43.33% 0.3519 −16.77% 0.2241 −34.99% Coor-Ascent 0.1594 −39.74% 0.3547 −15.49% 0.2089 −30.37% 0.3775 −10.71% 0.2415 −29.94% DRMM 0.1367 −48.34% 0.3134 −25.34% 0.2126‡ −29.14% 0.3592§ −15.05% 0.2335 −32.26% CDSSM 0.1441 −45.53% 0.3329 −20.69% 0.1834 −38.86% 0.3534 −16.41% 0.2310 −33.00% MP 0.2184†‡ −17.44% 0.3792†‡ −9.67% 0.1969 −34.37% 0.3450 −18.40% 0.2404 −30.27% K-NRM 0.2645 – 0.4197 – 0.3000 – 0.4228 – 0.3447 – Conv-KNRM 0.3357†‡§¶ +26.90% 0.4810†‡§¶ +14.59% 0.3384†‡§¶ +12.81% 0.4318†‡§ +2.14% 0.3582†‡§ +3.91% EDRM-KNRM 0.3096†‡§¶ +17.04% 0.4547†‡§¶ +8.32% 0.3327†‡§¶ +10.92% 0.4341†‡§¶ +2.68% 0.3616†‡§¶ +4.90% EDRM-CKNRM 0.3397†‡§¶ +28.42% 0.4821†‡§¶ +14.86% 0.3708†‡§¶∗ +23.60% 0.4513†‡§¶∗ +6.74% 0.3892†‡§¶∗ +12.90% documents. As shown in Figure 2, the majority of queries have at least one entity annotation; the average number of entity annotated per document title is about four. Baselines. The baselines include feature-based ranking models and neural ranking models. Most of the baselines are borrowed from previous research (Xiong et al., 2017b; Dai et al., 2018). Feature-based baselines include two learning to rank systems, RankSVM (Joachims, 2002) and coordinate ascent (Coor-Accent) (Metzler and Croft, 2006). The standard word-based unsupervised retrieval model, BM25, is also compared. Neural baselines include CDSSM (Shen et al., 2014), MatchPyramid (MP) (Pang et al., 2016), DRMM (Grauman and Darrell, 2005), K-NRM (Xiong et al., 2017b) and Conv-KNRM (Dai et al., 2018). 
CDSSM is representation based. It uses CNN to build query and document representations on word letter-tri-grams (or Chinese characters). MP and DRMM are both interaction based models. They use CNNs or histogram pooling to extract features from embedding based translation matrix. Our main baselines are K-NRM and Conv-KNRM, the recent state-of-the-art neural models on the Sogou-Log dataset. The goal of our experiments is to explore the effectiveness of knowledge graphs in these state-of-the-art interaction based neural models. Implementation Details. The dimension of word embedding, entity embedding and type embedding are 300. Vocabulary size of entities and words are 44,930 and 165,877. Conv-KNRM uses one layer CNN with 128 filter size for the ngram composition. Entity description encoder is a one layer CNN with 128 and 300 filter size for Conv-KNRM and K-NRM respectively. All models are implemented with PyTorch. Adam is utilized to optimize all parameters with learning rate = 0.001, ϵ = 1e −5 and early stopping with the practice of 5 epochs. There are two versions of EDRM: EDRM-KNRM and EDRM-CKNRM, integrating with K-NRM and Conv-KNRM respectively. The first one (K-NRM) enriches the word based neural ranking model with entities and knowledge graph semantics; the second one (Conv-KNRM) enriches the n-gram based neural ranking model. 6 Evaluation Results Four experiments are conducted to study the effectiveness of EDRM: the overall performance, the contributions of matching kernels, the ablation study, and the influence of entities in different scenarios. We also do case studies to show effect of EDRM on document ranking. 6.1 Ranking Accuracy The ranking accuracies of the ranking methods are shown in Table 1. K-NRM and Conv-KNRM outperform other baselines in all testing scenarios by large margins as shown in previous research. EDRM-KNRM out performs K-NRM by over 10% improvement in Testing-SAME and Testing-DIFF. EDRM-CKNRM has almost same performance on Testing-SAME with Conv-KNRM. A possible reason is that, entity annotations provide effective phrase matches, but Conv-KNRM is also able to learn phrases matches automatically from data. However, EDRM-CKNRM has significant improvement on Testing-DIFF and Testing-RAW. Those demonstrate that EDRM has strong ability to overcome domain differences from different labels. 2401 Table 2: Ranking accuracy of adding diverse semantics based on K-NRM and Conv-KNRM. Relative performances compared are in percentages. †, ‡, §, ¶, ∗, ∗∗indicate statistically significant improvements over K-NRM† (or Conv-KNRM†), +Embed‡, +Type§, +Description¶, +Embed+Type∗and +Embed+Description∗∗respectively. 
Testing-SAME Testing-DIFF Testing-RAW Method NDCG@1 NDCG@10 NDCG@1 NDCG@10 MRR K-NRM 0.2645 – 0.4197 – 0.3000 – 0.4228 – 0.3447 – +Embed 0.2743 +3.68% 0.4296 +2.35% 0.3134 +4.48% 0.4306 +1.86% 0.3641† +5.62% +Type 0.2709 +2.41% 0.4395† +4.71% 0.3126 +4.20% 0.4373† +3.43% 0.3531 +2.43% +Description 0.2827 +6.86% 0.4364† +3.97% 0.3181 +6.04% 0.4306 +1.86% 0.3691†§∗ +7.06% +Embed+Type 0.2924† +10.52% 0.4533†‡§¶ +8.00% 0.3034 +1.13% 0.4297 +1.65% 0.3544 +2.79% +Embed+Description 0.2891 +9.29% 0.4443†‡ +5.85% 0.3197 +6.57% 0.4304 +1.80% 0.3564 +3.38% Full Model 0.3096†‡§ +17.04% 0.4547†‡§¶ +8.32% 0.3327†∗ +10.92% 0.4341† +2.68% 0.3616† +4.90% Conv-KNRM 0.3357 – 0.4810 – 0.3384 – 0.4318 – 0.3582 – +Embed 0.3382 +0.74% 0.4831 +0.44% 0.3450 +1.94% 0.4413 +2.20% 0.3758† +4.91% +Type 0.3370 +0.38% 0.4762 −0.99% 0.3422 +1.12% 0.4423† +2.42% 0.3798† +6.02% +Description 0.3396 +1.15% 0.4807 −0.05% 0.3533 +4.41% 0.4468† +3.47% 0.3819† +6.61% +Embed+Type 0.3420 +1.88% 0.4828 +0.39% 0.3546 +4.79% 0.4491† +4.00% 0.3805† +6.22% +Embed+Description 0.3382 +0.73% 0.4805 −0.09% 0.3608 +6.60% 0.4494† +4.08% 0.3868† +7.99% Full Model 0.3397 +1.19% 0.4821 +0.24% 0.3708†‡§ +9.57% 0.4513†‡ +4.51% 0.3892†‡ +8.65% (a) Kernel weight distribution for EDRM-KNRM. (b) Kernel weight distribution for EDRM-CKNRM. Figure 3: Ranking contribution for EDRM. Three scenarios are presented: Exact VS. Soft compares the weights of exact match kernel and others; Solo Word VS. Others shows the proportion of only text based matches; In-space VS. Cross-space compares in-space matches and cross-space matches. These results show the effectiveness and the generalization ability of EDRM. In the following experiments, we study the source of this generalization ability. 6.2 Contributions of Matching Kernels This experiment studies the contribution of knowledge graph semantics by investigating the weights learned on the different types of matching kernels. As shown in Figure 3(a), most of the weight in EDRM-KNRM goes to soft match (Exact VS. Soft); entity related matches play an as important role as word based matches (Solo Word VS. Others); cross-space matches are more important than in-space matches (In-space VS. Crossspace). As shown in Figure 3(b), the percentages of word based matches and cross-space matches are more important in EDRM-CKNRM compared to in EDRM-KNRM. The contribution of each individual match type in EDRM-CKNRM is shown in Figure 4. The weight of unigram, bigram, trigram, and entity is almost uniformly distributed, indicating the effectiveness of entities and all components are important in EDRM-CKNRM. 6.3 Ablation Study This experiment studies which part of the knowledge graph semantics leads to the effectiveness and generalization ability of EDRM. There are three types of embeddings incorporating different aspects of knowledge graph information: entity embedding (Embed), description embedding (Description) and type embedding (Type). This experiment starts with the word-only K-NRM and Conv-KNRM, and adds these three types of embedding individually or two-by-two (Embed+Type and Embed+Description). The performances of EDRM with different groups of embeddings are shown in Table 2. The description embeddings show the greatest improvement among the three embeddings. Entity 2402 Figure 4: Individual kernel weight for EDRMCKNRM. X-axis and y-axis denote document and query respectively. type plays an important role only combined with other embeddings. Entity embedding improves K-NRM while has little effect on Conv-KNRM. 
This result further confirms that the signal from entity names are captured by the n-gram CNNs in Conv-KNRM. Incorporating all of three embeddings usually gets the best ranking performance. This experiments shows that knowledge graph semantics are crucial to EDRM’s effectiveness. Conv-KNRM learns good phrase matches that overlap with the entity embedding signals. However, the knowledge graph semantics (descriptions and types) is hard to be learned just from user clicks. 6.4 Performance on Different Scenarios This experiment analyzes the influence of knowledge graphs in two different scenarios: multiple difficulty degrees and multiple length degrees. Query Difficulty Experiment studies EDRM’s performance on testing queries at different difficulty, partitioned by Conv-KNRM’s MRR value: Hard (MRR < 0.167), Ordinary (MRR ∈ [0.167, 0.382], and Easy (MRR > 0.382). As shown in Figure 5, EDRM performs the best on hard queries. Query Length Experiment evaluates EDRM’s effectiveness on Short (1 words), Medium (2-3 words) and Long (4 or more words) queries. As shown in Figure 6, EDRM has more win cases and achieves the greatest improvement on short queries. Knowledge embeddings are more crucial when limited information is available from the original query text. (a) K-NRM VS. EDRM (b) Conv-KNRM VS. EDRM Figure 5: Performance VS. Query Difficulty. The x-axises mark three query difficulty levels. The yaxises are the Win/Tie/Loss (left) and MRR (right) in the corresponding group. (a) K-NRM VS. EDRM (b) Conv-KNRM VS. EDRM Figure 6: Performance VS. Query Length. The xaxises mark three query length levels, and y-axises are the Win/Tie/Loss (left) and MRR (right) in the corresponding group. These two experiments reveal that the effectiveness of EDRM is more observed on harder or shorter queries, whereas the word-based neural models either find it difficult or do not have sufficient information to leverage. 6.5 Case Study Table 3 provide examples reflecting two possible ways, in which the knowledge graph semantics could help the document ranking. First, the entity descriptions explain the meaning of entities and connect them through the word space. Meituxiuxiu web version and Meilishuo are two websites providing image processing and shopping services respectively. Their descriptions provide extra ranking signals to promote the related documents. Second, entity types establish underlying relevance patterns between query and documents. The underlying patterns can be captured by crossspace matches. For example, the types of the query entity Crayon Shin-chan and GINTAMA overlaps with the bag-of-words in the relevant documents. They can also be captured by the entity-based matches through their type overlaps, 2403 Table 3: Examples of entity semantics connecting query and title. All the examples are correctly ranked by EDRM-CKNRM. Table 3a shows query-document pairs. Table 3b lists the related entity semantics that include useful information to match the query-document pair. The examples and related semantics are picked by manually examining the ranking changes between different variances of EDRM-CKNRM. (a) Query and document examples. Entities are emphasized. 
Query Document Meituxiuxiu web version Meituxiuxiu web version: An online picture processing tools Home page of Meilishuo Home page of Meilishuo - Only the correct popular fashion Master Lu Master Lu official website: System optimization, hardware test, phone evaluation Crayon Shin-chan: The movie Crayon Shin-chan: The movie online - Anime GINTAMA GINTAMA: The movie online - Anime - Full HD online watch (b) Semantics of related entities. The first two rows and last two rows show entity descriptions and entity types respectively. Entity Content Meituxiuxiu web version Description: Meituxiuxiu is the most popular Chinese image processing software, launched by the Meitu company Meilishuo Description: Meilishuo, the largest women’s fashion e-commerce platform, dedicates to provide the most popular fashion shopping experience Crayon Shin-chan, GINTAMA Type: Anime; Cartoon characters; Comic Master Lu, System Optimization Type: Hardware test; Software; System tool for example, between the query entity Master Lu and the document entity System Optimization. 7 Conclusions This paper presents EDRM, the Entity-Duet Neural Ranking Model that incorporating knowledge graph semantics into neural ranking systems. EDRM inherits entity-oriented search to match query and documents with bag-of-words and bag-of-entities in neural ranking models. The knowledge graph semantics are integrated as distributed representations of entities. The neural model leverages these semantics to help document ranking. Using user clicks from search logs, the whole model—the integration of knowledge graph semantics and the neural ranking networks– is trained end-to-end. It leads to a data-driven combination of entity-oriented search and neural information retrieval. Our experiments on the Sogou search log and CN-DBpedia demonstrate EDRM’s effectiveness and generalization ability over two state-of-theart neural ranking models. Our further analyses reveal that the generalization ability comes from the integration of knowledge graph semantics. The neural ranking models can effectively model n-gram matches between query and document, which overlaps with part of the ranking signals from entity-based matches: Solely adding the entity names may not improve the ranking accuracy much. However, the knowledge graph semantics, introduced by the description and type embeddings, provide novel ranking signals that greatly improve the generalization ability of neural rankers in difficult scenarios. This paper preliminarily explores the role of structured semantics in deep learning models. Though mainly fouced on search, we hope our findings shed some lights on a potential path towards more intelligent neural systems and will motivate more explorations in this direction. Acknowledgments This work1 is supported by the Major Project of the National Social Science Foundation of China (No.13&ZD190) as well as the China-Singapore Joint Research Project of the National Natural Science Foundation of China (No. 61661146007) under the umbrella of the NexT Joint Research Center of Tsinghua University and National University of Singapore. Chenyan Xiong is supported by National Science Foundation (NSF) grant IIS1422676. We thank Sogou for providing access to the search log. 1Source codes of this work are available at http://github.com/thunlp/ EntityDuetNeuralRanking 2404 References S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. DBpedia: A nucleus for a web of open data. Springer. 
Adam Berger and John Lafferty. 1999. Information retrieval as statistical translation. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 1999). ACM, pages 222–229. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data (SIGMOD 2008). ACM, pages 1247–1250. Guihong Cao, Jian-Yun Nie, Jianfeng Gao, and Stephen Robertson. 2008. Selecting good expansion terms for pseudo-relevance feedback. In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2008). ACM, pages 243– 250. Aleksandr Chuklin, Ilya Markov, and Maarten de Rijke. 2015. Click models for web search. Synthesis Lectures on Information Concepts, Retrieval, and Services 7(3):1–115. Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. 2018. Convolutional neural networks for soft-matching n-grams in ad-hoc search. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining (WSDM 2018). ACM, pages 126–134. Jeffrey Dalton, Laura Dietz, and James Allan. 2014. Entity query feature expansion using knowledge base links. In Proceedings of the 37th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2014). ACM, pages 365–374. Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W. Bruce Croft. 2017. Neural ranking models with weak supervision. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2017). ACM, pages 65–74. Laura Dietz and Patrick Verga. 2014. Umass at TREC 2014: Entity query feature expansion using knowledge base links. In Proceedings of The 23st Text Retrieval Conference (TREC 2014). NIST. Faezeh Ensan and Ebrahim Bagheri. 2017. Document retrieval model through semantic linking. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining (WSDM 2017). ACM, pages 181–190. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Scott Wen-tau Yih, and Michel Galley. 2018. A knowledgegrounded neural conversation model. In The ThirtySecond AAAI Conference on Artificial Intelligence (AAAI 2018). Kristen Grauman and Trevor Darrell. 2005. The pyramid match kernel: Discriminative classification with sets of image features. In Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1. IEEE, volume 2, pages 1458–1465. Jiafeng Guo, Yixing Fan, Qingyao Ai, and W. Bruce Croft. 2016a. Semantic matching by non-linear word transportation for information retrieval. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management (CIKM 2016). ACM, pages 701–710. Jiafeng Guo, Yixing Fan, Qingyao Ai, and W.Bruce Croft. 2016b. A deep relevance matching model for ad-hoc retrieval. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management (CIKM 2016). ACM, pages 55–64. Nitish Gupta, Sameer Singh, and Dan Roth. 2017. Entity linking via joint encoding of types, descriptions, and context. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017). pages 2681–2690. Faegheh Hasibi, Krisztian Balog, and Svein Erik Bratsberg. 2017. Entity linking in queries: Efficiency vs. 
effectiveness. In European Conference on Information Retrieval. Springer, pages 40–53. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2 (NIPS 2014). MIT Press, pages 2042–2050. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management (CIKM 2013). ACM, pages 2333–2338. Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2017. Pacrr: A position-aware neural ir model for relevance matching. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017). pages 1060– 1069. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2002). ACM, pages 133–142. 2405 Xitong Liu and Hui Fang. 2015. Latent entity space: A novel retrieval approach for entity-bearing queries. Information Retrieval Journal 18(6):473–503. Cheng Luo, Yukun Zheng, Yiqun Liu, Xiaochuan Wang, Jingfang Xu, Min Zhang, and Shaoping Ma. 2017. Sogout-16: A new web corpus to embrace ir research. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2017). ACM, pages 1233–1236. Donald Metzler and W. Bruce Croft. 2006. Linear feature-based models for information retrieval. Information Retrieval 10(3):257–274. Alexander H. Miller, Adam Fisch, Jesse Dodge, AmirHossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016). pages 1400– 1409. Bhaskar Mitra, Fernando Diaz, and Nick Craswell. 2017. Learning to match using local and distributed representations of text for web search. In Proceedings of the 26th International Conference on World Wide Web (WWW 2017). ACM, pages 1291–1299. Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, and Xueqi Cheng. 2016. Text matching as image recognition. In In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI 2016). pages 2793–2799. Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Jingfang Xu, and Xueqi Cheng. 2017. Deeprank: A new deep architecture for relevance ranking in information retrieval. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (CIKM 2017). ACM, pages 257–266. Hadas Raviv, Oren Kurland, and David Carmel. 2016. Document retrieval using entity-based language models. In Proceedings of the 39th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2016). ACM, pages 65–74. Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gr´egoire Mesnil. 2014. A latent semantic model with convolutional-pooling structure for information retrieval. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management (CIKM 2014). ACM, pages 101–110. Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web (WWW 2007). ACM, pages 697–706. 
Hongning Wang, ChengXiang Zhai, Anlei Dong, and Yi Chang. 2013. Content-aware click modeling. In Proceedings of the 22Nd International Conference on World Wide Web (WWW 2013). ACM, pages 1365–1376. Chenyan Xiong and Jamie Callan. 2015. EsdRank: Connecting query and documents through external semi-structured data. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management (CIKM 2015). ACM, pages 951–960. Chenyan Xiong, Jamie Callan, and Tie-Yan Liu. 2016. Bag-of-entities representation for ranking. In Proceedings of the sixth ACM International Conference on the Theory of Information Retrieval (ICTIR 2016). ACM, pages 181–184. Chenyan Xiong, Jamie Callan, and Tie-Yan Liu. 2017a. Word-entity duet representations for document ranking. In Proceedings of the 40th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2017). ACM, pages 763–772. Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. 2017b. End-to-end neural ad-hoc ranking with kernel pooling. In Proceedings of the 40th annual international ACM SIGIR conference on Research and Development in Information Retrieval (SIGIR 2017). ACM, pages 55–64. Chenyan Xiong, Russell Power, and Jamie Callan. 2017c. Explicit semantic ranking for academic search via knowledge graph embedding. In Proceedings of the 26th International Conference on World Wide Web (WWW 2017). ACM, pages 1271– 1279. Bo Xu, Yong Xu, Jiaqing Liang, Chenhao Xie, Bin Liang, Wanyun Cui, and Yanghua Xiao. 2017. Cndbpedia: A never-ending chinese knowledge extraction system. In International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems. Springer, pages 428–438. Hua Ping Zhang, Hong Kui Yu, De Yi Xiong, and Qun Liu. 2003. Hhmm-based chinese lexical analyzer ictclas. In Sighan Workshop on Chinese Language Processing. pages 758–759.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2406–2417 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2406 Neural Natural Language Inference Models Enhanced with External Knowledge Qian Chen University of Science and Technology of China [email protected] Xiaodan Zhu ECE, Queen’s University [email protected] Zhen-Hua Ling University of Science and Technology of China [email protected] Diana Inkpen University of Ottawa [email protected] Si Wei iFLYTEK Research [email protected] Abstract Modeling natural language inference is a very challenging task. With the availability of large annotated data, it has recently become feasible to train complex models such as neural-network-based inference models, which have shown to achieve the state-of-the-art performance. Although there exist relatively large annotated data, can machines learn all knowledge needed to perform natural language inference (NLI) from these data? If not, how can neural-network-based NLI models benefit from external knowledge and how to build NLI models to leverage it? In this paper, we enrich the state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models improve neural NLI models to achieve the state-of-the-art performance on the SNLI and MultiNLI datasets. 1 Introduction Reasoning and inference are central to both human and artificial intelligence. Natural language inference (NLI), also known as recognizing textual entailment (RTE), is an important NLP problem concerned with determining inferential relationship (e.g., entailment, contradiction, or neutral) between a premise p and a hypothesis h. In general, modeling informal inference in language is a very challenging and basic problem towards achieving true natural language understanding. In the last several years, larger annotated datasets were made available, e.g., the SNLI (Bowman et al., 2015) and MultiNLI datasets (Williams et al., 2017), which made it feasible to train rather complicated neuralnetwork-based models that fit a large set of parameters to better model NLI. Such models have shown to achieve the state-of-the-art performance (Bowman et al., 2015, 2016; Yu and Munkhdalai, 2017b; Parikh et al., 2016; Sha et al., 2016; Chen et al., 2017a,b; Tay et al., 2018). While neural networks have been shown to be very effective in modeling NLI with large training data, they have often focused on end-to-end training by assuming that all inference knowledge is learnable from the provided training data. In this paper, we relax this assumption and explore whether external knowledge can further help NLI. Consider an example: • p: A lady standing in a wheat field. • h: A person standing in a corn field. In this simplified example, when computers are asked to predict the relation between these two sentences and if training data do not provide the knowledge of relationship between “wheat” and “corn” (e.g., if one of the two words does not appear in the training data or they are not paired in any premise-hypothesis pairs), it will be hard for computers to correctly recognize that the premise contradicts the hypothesis. In general, although in many tasks learning tabula rasa achieved state-of-the-art performance, we believe complicated NLP problems such as NLI 2407 could benefit from leveraging knowledge accumulated by humans, particularly in a foreseeable future when machines are unable to learn it by themselves. 
In this paper we enrich neural-network-based NLI models with external knowledge in coattention, local inference collection, and inference composition components. We show the proposed model improves the state-of-the-art NLI models to achieve better performances on the SNLI and MultiNLI datasets. The advantage of using external knowledge is more significant when the size of training data is restricted, suggesting that if more knowledge can be obtained, it may bring more benefit. In addition to attaining the state-of-theart performance, we are also interested in understanding how external knowledge contributes to the major components of typical neural-networkbased NLI models. 2 Related Work Early research on natural language inference and recognizing textual entailment has been performed on relatively small datasets (refer to MacCartney (2009) for a good literature survey), which includes a large bulk of contributions made under the name of RTE, such as (Dagan et al., 2005; Iftene and Balahur-Dobrescu, 2007), among many others. More recently the availability of much larger annotated data, e.g., SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2017), has made it possible to train more complex models. These models mainly fall into two types of approaches: sentence-encoding-based models and models using also inter-sentence attention. Sentence-encoding-based models use Siamese architecture (Bromley et al., 1993). The parametertied neural networks are applied to encode both the premise and the hypothesis. Then a neural network classifier is applied to decide relationship between the two sentences. Different neural networks have been utilized for sentence encoding, such as LSTM (Bowman et al., 2015), GRU (Vendrov et al., 2015), CNN (Mou et al., 2016), BiLSTM and its variants (Liu et al., 2016c; Lin et al., 2017; Chen et al., 2017b; Nie and Bansal, 2017), self-attention network (Shen et al., 2017, 2018), and more complicated neural networks (Bowman et al., 2016; Yu and Munkhdalai, 2017a,b; Choi et al., 2017). Sentence-encoding-based models transform sentences into fixed-length vector representations, which may help a wide range of tasks (Conneau et al., 2017). The second set of models use inter-sentence attention (Rockt¨aschel et al., 2015; Wang and Jiang, 2016; Cheng et al., 2016; Parikh et al., 2016; Chen et al., 2017a). Among them, Rockt¨aschel et al. (2015) were among the first to propose neural attention-based models for NLI. Chen et al. (2017a) proposed an enhanced sequential inference model (ESIM), which is one of the best models so far and is used as one of our baselines in this paper. In this paper we enrich neural-network-based NLI models with external knowledge. Unlike early work on NLI (Jijkoun and de Rijke, 2005; MacCartney et al., 2008; MacCartney, 2009) that explores external knowledge in conventional NLI models on relatively small NLI datasets, we aim to merge the advantage of powerful modeling ability of neural networks with extra external inference knowledge. We show that the proposed model improves the state-of-the-art neural NLI models to achieve better performances on the SNLI and MultiNLI datasets. The advantage of using external knowledge is more significant when the size of training data is restricted, suggesting that if more knowledge can be obtained, it may have more benefit. In addition to attaining the state-of-the-art performance, we are also interested in understanding how external knowledge affect major components of neural-network-based NLI models. 
In general, external knowledge has shown to be effective in neural networks for other NLP tasks, including word embedding (Chen et al., 2015; Faruqui et al., 2015; Liu et al., 2015; Wieting et al., 2015; Mrksic et al., 2017), machine translation (Shi et al., 2016; Zhang et al., 2017b), language modeling (Ahn et al., 2016), and dialogue systems (Chen et al., 2016b). 3 Neural-Network-Based NLI Models with External Knowledge In this section we propose neural-network-based NLI models to incorporate external inference knowledge, which, as we will show later in Section 5, achieve the state-of-the-art performance. In addition to attaining the leading performance we are also interested in investigating the effects of external knowledge on major components of neural-network-based NLI modeling. 2408 Figure 1 shows a high-level general view of the proposed framework. While specific NLI systems vary in their implementation, typical state-of-theart NLI models contain the main components (or equivalents) of representing premise and hypothesis sentences, collecting local (e.g., lexical) inference information, and aggregating and composing local information to make the global decision at the sentence level. We incorporate and investigate external knowledge accordingly in these major NLI components: computing co-attention, collecting local inference information, and composing inference to make final decision. 3.1 External Knowledge As discussed above, although there exist relatively large annotated data for NLI, can machines learn all inference knowledge needed to perform NLI from the data? If not, how can neural networkbased NLI models benefit from external knowledge and how to build NLI models to leverage it? We study the incorporation of external, inference-related knowledge in major components of neural networks for natural language inference. For example, intuitively knowledge about synonymy, antonymy, hypernymy and hyponymy between given words may help model soft-alignment between premises and hypotheses; knowledge about hypernymy and hyponymy may help capture entailment; knowledge about antonymy and co-hyponyms (words sharing the same hypernym) may benefit the modeling of contradiction. In this section, we discuss the incorporation of basic, lexical-level semantic knowledge into neural NLI components. Specifically, we consider external lexical-level inference knowledge between word wi and wj, which is represented as a vector rij and is incorporated into three specific components shown in Figure 1. We will discuss the details of how rij is constructed later in the experiment setup section (Section 4) but instead focus on the proposed model in this section. Note that while we study lexical-level inference knowledge in the paper, if inference knowledge about larger pieces of text pairs (e.g., inference relations between phrases) are available, the proposed model can be easily extended to handle that. In this paper, we instead let the NLI models to compose lexicallevel knowledge to obtain inference relations between larger pieces of texts. 3.2 Encoding Premise and Hypothesis Same as much previous work (Chen et al., 2017a,b), we encode the premise and the hypothesis with bidirectional LSTMs (BiLSTMs). The premise is represented as a = (a1, . . . , am) and the hypothesis is b = (b1, . . . , bn), where m and n are the lengths of the sentences. Then a and b are embedded into de-dimensional vectors [E(a1), . . . , E(am)] and [E(b1), . . . 
, E(bn)] using the embedding matrix $E \in \mathbb{R}^{d_e \times |V|}$, where $|V|$ is the vocabulary size and $E$ can be initialized with pre-trained word embeddings. To represent words in their context, the premise and the hypothesis are fed into BiLSTM encoders (Hochreiter and Schmidhuber, 1997) to obtain context-dependent hidden states $a^s$ and $b^s$:
$$a^s_i = \mathrm{Encoder}(E(a), i), \quad (1)$$
$$b^s_j = \mathrm{Encoder}(E(b), j), \quad (2)$$
where $i$ and $j$ indicate the $i$-th word in the premise and the $j$-th word in the hypothesis, respectively.
3.3 Knowledge-Enriched Co-Attention
As discussed above, soft-alignment of word pairs between the premise and the hypothesis may benefit from a knowledge-enriched co-attention mechanism. Given the relation features $r_{ij} \in \mathbb{R}^{d_r}$ between the premise's $i$-th word and the hypothesis's $j$-th word derived from the external knowledge, the co-attention is calculated as:
$$e_{ij} = (a^s_i)^\top b^s_j + F(r_{ij}). \quad (3)$$
The function $F$ can be any linear or non-linear function. In this paper, we use $F(r_{ij}) = \lambda \mathbf{1}(r_{ij})$, where $\lambda$ is a hyper-parameter tuned on the development set and $\mathbf{1}$ is the indicator function:
$$\mathbf{1}(r_{ij}) = \begin{cases} 1 & \text{if } r_{ij} \text{ is not a zero vector}; \\ 0 & \text{if } r_{ij} \text{ is a zero vector}. \end{cases} \quad (4)$$
Intuitively, word pairs with a semantic relationship, e.g., synonymy, antonymy, hypernymy, hyponymy, and co-hyponyms, are likely to be aligned together. We will discuss how we construct the external knowledge later in Section 4. We also tried a two-layer MLP as a universal function approximator for $F$ to learn the underlying combination function, but did not observe further improvement over the best performance obtained on the development datasets.
[Figure 1: A high-level view of neural-network-based NLI models enriched with external knowledge in co-attention, local inference collection, and inference composition.]
Soft-alignment is determined by the co-attention matrix $e \in \mathbb{R}^{m \times n}$ computed in Equation (3), which is used to obtain the local relevance between the premise and the hypothesis. For the hidden state of the $i$-th word in the premise, $a^s_i$ (already encoding the word itself and its context), the relevant semantics in the hypothesis is identified and composed into a context vector $a^c_i$ using $e_{ij}$, more specifically with Equation (5):
$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{n} \exp(e_{ik})}, \qquad a^c_i = \sum_{j=1}^{n} \alpha_{ij} b^s_j, \quad (5)$$
$$\beta_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{m} \exp(e_{kj})}, \qquad b^c_j = \sum_{i=1}^{m} \beta_{ij} a^s_i, \quad (6)$$
where $\alpha \in \mathbb{R}^{m \times n}$ and $\beta \in \mathbb{R}^{m \times n}$ are the attention weight matrices normalized over the 2-axis and the 1-axis, respectively. The same calculation is performed for each word in the hypothesis, $b^s_j$, with Equation (6) to obtain the context vector $b^c_j$.
3.4 Local Inference Collection with External Knowledge
By comparing the inference-related semantic relation between $a^s_i$ (the individual word representation in the premise) and $a^c_i$ (the context representation from the hypothesis that is aligned to $a^s_i$), we can model local (i.e., word-level) inference between aligned word pairs. Intuitively, for example, knowledge about hypernymy or hyponymy may help model entailment, and knowledge about antonymy and co-hyponyms may help model contradiction. By comparing $a^s_i$ and $a^c_i$, in addition to their relation from external knowledge, we obtain word-level inference information for each word. The same calculation is performed for $b^s_j$ and $b^c_j$.
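To make Equations (3)–(6) concrete, the following is a minimal sketch of the knowledge-enriched co-attention for one premise–hypothesis pair, written with PyTorch tensors. The function and argument names (e.g., `knowledge_coattention`, `lam`) are illustrative and not taken from the authors' released code; BiLSTM states and relation features are assumed to be precomputed.

```python
import torch

def knowledge_coattention(a_s, b_s, r, lam):
    """Sketch of knowledge-enriched co-attention (Eq. 3-6).

    a_s: (m, d) BiLSTM hidden states of the premise
    b_s: (n, d) BiLSTM hidden states of the hypothesis
    r:   (m, n, d_r) relation features r_ij from external knowledge
    lam: the scalar hyper-parameter lambda in Eq. (3)
    """
    # F(r_ij) = lambda * 1(r_ij is not a zero vector)              (Eq. 4)
    indicator = (r.abs().sum(dim=-1) > 0).float()                   # (m, n)
    e = a_s @ b_s.t() + lam * indicator                             # (m, n), Eq. 3

    alpha = torch.softmax(e, dim=1)  # normalize over hypothesis words (Eq. 5)
    beta = torch.softmax(e, dim=0)   # normalize over premise words    (Eq. 6)

    a_c = alpha @ b_s                # (m, d) context vectors for premise words
    b_c = beta.t() @ a_s             # (n, d) context vectors for hypothesis words
    return e, alpha, beta, a_c, b_c
```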
Thus, we collect knowledge-enriched local inference information:
$$a^m_i = G\big(\big[a^s_i; a^c_i; a^s_i - a^c_i; a^s_i \circ a^c_i; \textstyle\sum_{j=1}^{n} \alpha_{ij} r_{ij}\big]\big), \quad (7)$$
$$b^m_j = G\big(\big[b^s_j; b^c_j; b^s_j - b^c_j; b^s_j \circ b^c_j; \textstyle\sum_{i=1}^{m} \beta_{ij} r_{ji}\big]\big), \quad (8)$$
where a heuristic matching trick with difference and element-wise product is used (Mou et al., 2016; Chen et al., 2017a). The last terms in Equations (7) and (8) are used to obtain word-level inference information from external knowledge. Taking Equation (7) as an example, $r_{ij}$ is the relation feature between the $i$-th word in the premise and the $j$-th word in the hypothesis, but we care more about the semantic relation between aligned word pairs across the premise and the hypothesis. Thus, we use a soft-aligned version through the soft-alignment weights $\alpha_{ij}$. For the $i$-th word in the premise, the last term in Equation (7) is word-level inference information, based on external knowledge, between that word and its aligned word. The same calculation for the hypothesis is performed in Equation (8). $G$ is a non-linear mapping function used to reduce dimensionality. Specifically, we use a 1-layer feed-forward neural network with the ReLU activation function and a shortcut connection, i.e., we concatenate the hidden states after ReLU with the input $\sum_{j=1}^{n} \alpha_{ij} r_{ij}$ (or $\sum_{i=1}^{m} \beta_{ij} r_{ji}$) as the output $a^m_i$ (or $b^m_j$).
3.5 Knowledge-Enhanced Inference Composition
In this component, we introduce knowledge-enriched inference composition. To determine the overall inference relationship between the premise and the hypothesis, we use a composition layer to compose the local inference vectors ($a^m$ and $b^m$) collected above:
$$a^v_i = \mathrm{Composition}(a^m, i), \quad (9)$$
$$b^v_j = \mathrm{Composition}(b^m, j). \quad (10)$$
Here, we also use BiLSTMs as building blocks for the composition layer, but the role of the BiLSTMs in the inference composition layer is completely different from that in the input encoding layer. The BiLSTMs here read the local inference vectors ($a^m$ and $b^m$) and learn to judge the types of local inference relationship and to distinguish crucial local inference vectors for the overall sentence-level inference relationship. Intuitively, the final prediction is likely to depend on word pairs appearing in external knowledge that have some semantic relation. Our inference model converts the output hidden vectors of the BiLSTMs to a fixed-length vector with pooling operations and feeds it into the final classifier to determine the overall inference class. In particular, in addition to using mean pooling and max pooling as in ESIM (Chen et al., 2017a), we propose to use weighted pooling based on external knowledge to obtain a fixed-length vector, as in Equations (11) and (12):
$$a^w = \sum_{i=1}^{m} \frac{\exp\big(H(\sum_{j=1}^{n} \alpha_{ij} r_{ij})\big)}{\sum_{i=1}^{m} \exp\big(H(\sum_{j=1}^{n} \alpha_{ij} r_{ij})\big)} \, a^v_i, \quad (11)$$
$$b^w = \sum_{j=1}^{n} \frac{\exp\big(H(\sum_{i=1}^{m} \beta_{ij} r_{ji})\big)}{\sum_{j=1}^{n} \exp\big(H(\sum_{i=1}^{m} \beta_{ij} r_{ji})\big)} \, b^v_j. \quad (12)$$
In our experiments, we regard the function $H$ as a 1-layer feed-forward neural network with the ReLU activation function. We concatenate all pooling vectors, i.e., mean, max, and weighted pooling, into a fixed-length vector and then feed this vector into the final multilayer perceptron (MLP) classifier. The MLP has one hidden layer with tanh activation and a softmax output layer. The entire model is trained end-to-end by minimizing the cross-entropy loss.
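For illustration, the sketch below shows one way Equations (7)–(8) and the knowledge-weighted pooling of Equations (11)–(12) could be written down, assuming PyTorch and the co-attention quantities from the previous sketch; the module and dimension names are illustrative rather than taken from the authors' implementation.

```python
import torch
import torch.nn as nn

class KnowledgeLocalInference(nn.Module):
    """Sketch of knowledge-enriched local inference collection (Eq. 7-8)
    and knowledge-based weighted pooling (Eq. 11-12)."""

    def __init__(self, d, d_r, d_m):
        super().__init__()
        self.G = nn.Sequential(nn.Linear(4 * d + d_r, d_m), nn.ReLU())  # Eq. 7-8
        self.H = nn.Sequential(nn.Linear(d_r, 1), nn.ReLU())            # Eq. 11-12

    def collect(self, a_s, a_c, alpha, r):
        # Soft-aligned relation feature  sum_j alpha_ij * r_ij  -> (m, d_r)
        r_soft = torch.einsum('ij,ijk->ik', alpha, r)
        feats = torch.cat([a_s, a_c, a_s - a_c, a_s * a_c, r_soft], dim=-1)
        # Shortcut connection: concatenate G's ReLU output with r_soft
        a_m = torch.cat([self.G(feats), r_soft], dim=-1)
        return a_m, r_soft

    def weighted_pool(self, a_v, r_soft):
        # Knowledge-driven weights over words, then a weighted sum (Eq. 11)
        w = torch.softmax(self.H(r_soft), dim=0)   # (m, 1)
        return (w * a_v).sum(dim=0)                # fixed-length vector
```

The same modules would be applied symmetrically on the hypothesis side using the weights beta and the relation features r_ji.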
4 Experiment Set-Up 4.1 Representation of External Knowledge Lexical Semantic Relations As described in Section 3.1, to incorporate external knowledge (as a knowledge vector rij) to the state-of-theart neural network-based NLI models, we first explore semantic relations in WordNet (Miller, 1995), motivated by MacCartney (2009). Specifically, the relations of lexical pairs are derived as described in (1)-(4) below. Instead of using JiangConrath WordNet distance metric (Jiang and Conrath, 1997), which does not improve the performance of our models on the development sets, we add a new feature, i.e., co-hyponyms, which consistently benefit our models. (1) Synonymy: It takes the value 1 if the words in the pair are synonyms in WordNet (i.e., belong to the same synset), and 0 otherwise. For example, [felicitous, good] = 1, [dog, wolf] = 0. (2) Antonymy: It takes the value 1 if the words in the pair are antonyms in WordNet, and 0 otherwise. For example, [wet, dry] = 1. (3) Hypernymy: It takes the value 1 −n/8 if one word is a (direct or indirect) hypernym of the other word in WordNet, where n is the number of edges between the two words in hierarchies, and 0 otherwise. Note that we ignore pairs in the hierarchy which have more than 8 edges in between. For example, [dog, canid] = 0.875, [wolf, canid] = 0.875, [dog, carnivore] = 0.75, [canid, dog] = 0 (4) Hyponymy: It is simply the inverse of the hypernymy feature. For example, [canid, dog] = 0.875, [dog, canid] = 0. (5) Co-hyponyms: It takes the value 1 if the two words have the same hypernym but they do not belong to the same synset, and 0 otherwise. For example, [dog, wolf] = 1. As discussed above, we expect features like synonymy, antonymy, hypernymy, hyponymy and cohyponyms would help model co-attention alignment between the premise and the hypothesis. Knowledge of hypernymy and hyponymy may help capture entailment; knowledge of antonymy and co-hyponyms may help model contradiction. Their final contributions will be learned in end-to-end model training. We regard the vector r ∈Rdr as 2411 the relation feature derived from external knowledge, where dr is 5 here. In addition, Table 1 reports some key statistics of these features. Feature #Words #Pairs Synonymy 84,487 237,937 Antonymy 6,161 6,617 Hypernymy 57,475 753,086 Hyponymy 57,475 753,086 Co-hyponyms 53,281 3,674,700 Table 1: Statistics of lexical relation features. In addition to the above relations, we also use more relation features in WordNet, including instance, instance of, same instance, entailment, member meronym, member holonym, substance meronym, substance holonym, part meronym, part holonym, summing up to 15 features, but these additional features do not bring further improvement on the development dataset, as also discussed in Section 5. Relation Embeddings In the most recent years graph embedding has been widely employed to learn representation for vertexes and their relations in a graph. In our work here, we also capture the relation between any two words in WordNet through relation embedding. Specifically, we employed TransE (Bordes et al., 2013), a widely used graph embedding methods, to capture relation embedding between any two words. We used two typical approaches to obtaining the relation embedding. The first directly uses 18 relation embeddings pretrained on the WN18 dataset (Bordes et al., 2013). Specifically, if a word pair has a certain type relation, we take the corresponding relation embedding. 
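As a concrete illustration of the five lexical relation features (1)–(5) described above, the snippet below sketches how the 5-dimensional vector r_ij could be computed with NLTK's WordNet interface. This is only a rough approximation: the paper lemmatizes with Stanford CoreNLP and precomputes features for word pairs, and the exact synset-handling details of the original feature extraction are not reproduced here.

```python
from nltk.corpus import wordnet as wn

def relation_features(x, y, max_edges=8):
    """Rough sketch of the 5-dim relation vector
    [synonymy, antonymy, hypernymy, hyponymy, co-hyponyms] for a word pair."""
    sx, sy = wn.synsets(x), wn.synsets(y)

    syn = float(any(s in sy for s in sx))                 # (1) shared synset
    ant = float(any(l2.name() == y                        # (2) antonyms
                    for s in sx for l1 in s.lemmas() for l2 in l1.antonyms()))

    def hypernym_score(src, dst):
        # (3) 1 - n/8 if some synset in dst is a (possibly indirect)
        # hypernym of some synset in src, within 8 edges
        best = 0.0
        for s in src:
            for t in dst:
                if t in s.closure(lambda z: z.hypernyms(), depth=max_edges):
                    n = s.shortest_path_distance(t)
                    if n is not None and n <= max_edges:
                        best = max(best, 1.0 - n / 8.0)
        return best

    hyper = hypernym_score(sx, sy)        # y is a hypernym of x
    hypo = hypernym_score(sy, sx)         # (4) the inverse direction
    cohypo = float(any(s1 != s2 and set(s1.hypernyms()) & set(s2.hypernyms())
                       for s1 in sx for s2 in sy)) * (1.0 - syn)  # (5)
    return [syn, ant, hyper, hypo, cohypo]

# e.g. relation_features("dog", "canid") should give a high hypernymy score.
```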
Sometimes, if a word pair has multiple relations among the 18 types; we take an average of the relation embedding. The second approach uses TransE’s word embedding (trained on WordNet) to obtain relation embedding, through the objective function used in TransE, i.e., l ≈ t −h, where l indicates relation embedding, t indicates tail entity embedding, and h indicates head entity embedding. Note that in addition to relation embedding trained on WordNet, other relational embedding resources exist; e.g., that trained on Freebase (WikiData) (Bollacker et al., 2007), but such knowledge resources are mainly about facts (e.g., relationship between Bill Gates and Microsoft) and are less for commonsense knowledge used in general natural language inference (e.g., the color yellow potentially contradicts red). 4.2 NLI Datasets In our experiments, we use Stanford Natural Language Inference (SNLI) dataset (Bowman et al., 2015) and Multi-Genre Natural Language Inference (MultiNLI) (Williams et al., 2017) dataset, which focus on three basic relations between a premise and a potential hypothesis: the premise entails the hypothesis (entailment), they contradict each other (contradiction), or they are not related (neutral). We use the same data split as in previous work (Bowman et al., 2015; Williams et al., 2017) and classification accuracy as the evaluation metric. In addition, we test our models (trained on the SNLI training set) on a new test set (Glockner et al., 2018), which assesses the lexical inference abilities of NLI systems and consists of 8,193 samples. WordNet 3.0 (Miller, 1995) is used to extract semantic relation features between words. The words are lemmatized using Stanford CoreNLP 3.7.0 (Manning et al., 2014). The premise and the hypothesis sentences fed into the input encoding layer are tokenized. 4.3 Training Details For duplicability, we release our code1. All our models were strictly selected on the development set of the SNLI data and the in-domain development set of MultiNLI and were then tested on the corresponding test set. The main training details are as follows: the dimension of the hidden states of LSTMs and word embeddings are 300. The word embeddings are initialized by 300D GloVe 840B (Pennington et al., 2014), and out-of-vocabulary words among them are initialized randomly. All word embeddings are updated during training. Adam (Kingma and Ba, 2014) is used for optimization with an initial learning rate of 0.0004. The mini-batch size is set to 32. Note that the above hyperparameter settings are same as those used in the baseline ESIM (Chen et al., 2017a) model. ESIM is a strong NLI baseline framework with the source code made available at https://github.com/lukecq1231/nli (the ESIM core code has also been adapted to summarization (Chen et al., 2016a) and questionanswering tasks (Zhang et al., 2017a)). The trade-off λ for calculating co1https://github.com/lukecq1231/kim 2412 attention in Equation (3) is selected in [0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50] based on the development set. When training TransE for WordNet, relations are represented with vectors of 20 dimension. 5 Experimental Results 5.1 Overall Performance Table 2 shows the results of state-of-the-art models on the SNLI dataset. Among them, ESIM (Chen et al., 2017a) is one of the previous state-of-the-art systems with an 88.0% test-set accuracy. 
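Returning briefly to the relation embeddings of Section 4.1, the sketch below illustrates the two ways described above of obtaining a TransE-based relation embedding for a word pair: averaging the pre-trained WN18 relation embeddings, and deriving a vector from entity embeddings via the TransE objective l ≈ t − h. The lookup structures (`pair_relations`, `relation_emb`, `entity_emb`) are hypothetical stand-ins for the pre-trained TransE resources, not artifacts released with the paper.

```python
import numpy as np

def relation_embedding_wn18(pair_relations, relation_emb, x, y):
    """Approach 1: average the pre-trained WN18 relation embeddings of all
    relation types that hold between the word pair (x, y)."""
    rels = pair_relations.get((x, y), [])
    if not rels:
        return None
    return np.mean([relation_emb[r] for r in rels], axis=0)

def relation_embedding_transe(entity_emb, head, tail):
    """Approach 2: use the TransE objective l ~= t - h, i.e., derive the
    relation vector from the (WordNet-trained) entity embeddings."""
    return entity_emb[tail] - entity_emb[head]
```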
The proposed model, namely Knowledge-based Inference Model (KIM), which enriches ESIM with external knowledge, obtains an accuracy of 88.6%, the best single-model performance reported on the SNLI dataset. The difference between ESIM and KIM is statistically significant under the one-tailed paired t-test at the 99% significance level. Note that the KIM model reported here uses five semantic relations described in Section 4. In addition to that, we also use 15 semantic relation features, which does not bring additional gains in performance. These results highlight the effectiveness of the five semantic relations described in Section 4. To further investigate external knowledge, we add TransE relation embedding, and again no further improvement is observed on both the development and test sets when TransE relation embedding is used (concatenated) with the semantic relation vectors. We consider this is due to the fact that TransE embedding is not specifically sensitive to inference information; e.g., it does not model co-hyponyms features, and its potential benefit has already been covered by the semantic relation features used. Table 3 shows the performance of models on the MultiNLI dataset. The baseline ESIM achieves 76.8% and 75.8% on in-domain and cross-domain test set, respectively. If we extend the ESIM with external knowledge, we achieve significant gains to 77.2% and 76.4% respectively. Again, the gains are consistent on SNLI and MultiNLI, and we expect they would be orthogonal to other factors when external knowledge is added into other stateof-the-art models. 5.2 Ablation Results Figure 2 displays the ablation analysis of different components when using the external knowledge. To compare the effects of external knowledge under different training data scales, we ranModel Test LSTM Att. (Rockt¨aschel et al., 2015) 83.5 DF-LSTMs (Liu et al., 2016a) 84.6 TC-LSTMs (Liu et al., 2016b) 85.1 Match-LSTM (Wang and Jiang, 2016) 86.1 LSTMN (Cheng et al., 2016) 86.3 Decomposable Att. (Parikh et al., 2016) 86.8 NTI (Yu and Munkhdalai, 2017b) 87.3 Re-read LSTM (Sha et al., 2016) 87.5 BiMPM (Wang et al., 2017) 87.5 DIIN (Gong et al., 2017) 88.0 BCN + CoVe (McCann et al., 2017) 88.1 CAFE (Tay et al., 2018) 88.5 ESIM (Chen et al., 2017a) 88.0 KIM (This paper) 88.6 Table 2: Accuracies of models on SNLI. Model In Cross CBOW (Williams et al., 2017) 64.8 64.5 BiLSTM (Williams et al., 2017) 66.9 66.9 DiSAN (Shen et al., 2017) 71.0 71.4 Gated BiLSTM (Chen et al., 2017b) 73.5 73.6 SS BiLSTM (Nie and Bansal, 2017) 74.6 73.6 DIIN * (Gong et al., 2017) 77.8 78.8 CAFE (Tay et al., 2018) 78.7 77.9 ESIM (Chen et al., 2017a) 76.8 75.8 KIM (This paper) 77.2 76.4 Table 3: Accuracies of models on MultiNLI. * indicates models using extra SNLI training set. domly sample different ratios of the entire training set, i.e., 0.8%, 4%, 20% and 100%. “A” indicates adding external knowledge in calculating the coattention matrix as in Equation (3), “I” indicates adding external knowledge in collecting local inference information as in Equation (7)(8), and “C” indicates adding external knowledge in composing inference as in Equation (11)(12). When we only have restricted training data, i.e., 0.8% training set (about 4,000 samples), the baseline ESIM has a poor accuracy of 62.4%. When we only add external knowledge in calculating co-attention (“A”), the accuracy increases to 66.6% (+ absolute 4.2%). 
When we only utilize external knowledge in collecting local inference information (“I”), the accuracy has a significant gain, to 70.3% (+ absolute 7.9%). When we only add external knowledge in inference composition (“C”), the accuracy gets a smaller gain to 63.4% (+ absolute 1.0%). The comparison indicates that “I” plays the most important role among the three components in using external knowledge. Moreover, when we com2413 pose the three components (“A,I,C”), we obtain the best result of 72.6% (+ absolute 10.2%). When we use more training data, i.e., 4%, 20%, 100% of the training set, only “I” achieves a significant gain, but “A” or “C” does not bring any significant improvement. The results indicate that external semantic knowledge only helps co-attention and composition when limited training data is limited, but always helps in collecting local inference information. Meanwhile, for less training data, λ is usually set to a larger value. For example, the optimal λ on the development set is 20 for 0.8% training set, 2 for the 4% training set, 1 for the 20% training set and 0.2 for the 100% training set. Figure 3 displays the results of using different ratios of external knowledge (randomly keep different percentages of whole lexical semantic relations) under different sizes of training data. Note that here we only use external knowledge in collecting local inference information as it always works well for different scale of the training set. Better accuracies are achieved when using more external knowledge. Especially under the condition of restricted training data (0.8%), the model obtains a large gain when using more than half of external knowledge. Figure 2: Accuracies of models of incorporating external knowledge into different NLI components, under different sizes of training data (0.8%, 4%, 20%, and the entire training data). 5.3 Analysis on the (Glockner et al., 2018) Test Set In addition, Table 4 shows the results on a newly published test set (Glockner et al., 2018). Compared with the performance on the SNLI test Figure 3: Accuracies of models under different sizes of external knowledge. More external knowledge corresponds to higher accuracies. Model SNLI Glockner’s(∆) (Parikh et al., 2016)* 84.7 51.9 (-32.8) (Nie and Bansal, 2017)* 86.0 62.2 (-23.8) ESIM * 87.9 65.6 (-22.3) KIM (This paper) 88.6 83.5 ( -5.1) Table 4: Accuracies of models on the SNLI and (Glockner et al., 2018) test set. * indicates the results taken from (Glockner et al., 2018). set, the performance of the three baseline models dropped substantially on the (Glockner et al., 2018) test set, with the differences ranging from 22.3% to 32.8% in accuracy. Instead, the proposed KIM achieves 83.5% on this test set (with only a 5.1% drop in performance), which demonstrates its better ability of utilizing lexical level inference and hence better generalizability. Figure 5 displays the accuracy of ESIM and KIM in each replacement-word category of the (Glockner et al., 2018) test set. KIM outperforms ESIM in 13 out of 14 categories, and only performs worse on synonyms. 5.4 Analysis by Inference Categories We perform more analysis (Table 6) using the supplementary annotations provided by the MultiNLI dataset (Williams et al., 2017), which have 495 samples (about 1/20 of the entire development set) for both in-domain and out-domain set. We compare against the model outputs of the ESIM model across 13 categories of inference. Table 6 reports the results. 
We can see that KIM outperforms ESIM on overall accuracies on both in-domain and 2414 Category Instance ESIM KIM Antonyms 1,147 70.4 86.5 Cardinals 759 75.5 93.4 Nationalities 755 35.9 73.5 Drinks 731 63.7 96.6 Antonyms WordNet 706 74.6 78.8 Colors 699 96.1 98.3 Ordinals 663 21.0 56.6 Countries 613 25.4 70.8 Rooms 595 69.4 77.6 Materials 397 89.7 98.7 Vegetables 109 31.2 79.8 Instruments 65 90.8 96.9 Planets 60 3.3 5.0 Synonyms 894 99.7 92.1 Overall 8,193 65.6 83.5 Table 5: The number of instances and accuracy per category achieved by ESIM and KIM on the (Glockner et al., 2018) test set. Category In-domain Cross-domain ESIM KIM ESIM KIM Active/Passive 93.3 93.3 100.0 100.0 Antonym 76.5 76.5 70.0 75.0 Belief 72.7 75.8 75.9 79.3 Conditional 65.2 65.2 61.5 69.2 Coreference 80.0 76.7 75.9 75.9 Long sentence 82.8 78.8 69.7 73.4 Modal 80.6 79.9 77.0 80.2 Negation 76.7 79.8 73.1 71.2 Paraphrase 84.0 72.0 86.5 89.2 Quantity/Time 66.7 66.7 56.4 59.0 Quantifier 79.2 78.4 73.6 77.1 Tense 74.5 78.4 72.2 66.7 Word overlap 89.3 85.7 83.8 81.1 Overall 77.1 77.9 76.7 77.4 Table 6: Detailed Analysis on MultiNLI. cross-domain subset of development set. KIM outperforms or equals ESIM in 10 out of 13 categories on the cross-domain setting, while only 7 out of 13 categories on in-domain setting. It indicates that external knowledge helps more in crossdomain setting. Especially, for antonym category in cross-domain set, KIM outperform ESIM significantly (+ absolute 5.0%) as expected, because antonym feature captured by external knowledge would help unseen cross-domain samples. 5.5 Case Study Table 7 includes some examples from the SNLI test set, where KIM successfully predicts the inference relation and ESIM fails. In the first examP/G Sentences e/c p: An African person standing in a wheat field. h: A person standing in a corn field. e/c p: Little girl is flipping an omelet in the kitchen. h: A young girl cooks pancakes. c/e p: A middle eastern marketplace. h: A middle easten store. c/e p: Two boys are swimming with boogie boards. h: Two boys are swimming with their floats. Table 7: Examples. Word in bold are key words in making final prediction. P indicates a predicted label and G indicates gold-standard label. e and c denote entailment and contradiction, respectively. ple, the premise is “An African person standing in a wheat field” and the hypothesis “A person standing in a corn field”. As the KIM model knows that “wheat” and “corn” are both a kind of cereal, i.e, the co-hyponyms relationship in our relation features, KIM therefore predicts the premise contradicts the hypothesis. However, the baseline ESIM cannot learn the relationship between “wheat” and “corn” effectively due to lack of enough samples in the training sets. With the help of external knowledge, i.e., “wheat” and “corn” having the same hypernym “cereal”, KIM predicts contradiction correctly. 6 Conclusions Our neural-network-based model for natural language inference with external knowledge, namely KIM, achieves the state-of-the-art accuracies. The model is equipped with external knowledge in its main components, specifically, in calculating coattention, collecting local inference, and composing inference. We provide detailed analyses on our model and results. The proposed model of infusing neural networks with external knowledge may also help shed some light on tasks other than NLI. Acknowledgments We thank Yibo Sun and Bing Qin for early helpful discussion. 2415 References Sungjin Ahn, Heeyoul Choi, Tanel P¨arnamaa, and Yoshua Bengio. 2016. 
A neural knowledge language model. CoRR, abs/1608.00318. Kurt D. Bollacker, Robert P. Cook, and Patrick Tufts. 2007. Freebase: A shared database of structured general human knowledge. In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence, July 22-26, 2007, Vancouver, British Columbia, Canada, pages 1962–1963. Antoine Bordes, Nicolas Usunier, Alberto Garc´ıaDur´an, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States., pages 2787– 2795. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 1721, 2015, pages 632–642. Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard S¨ackinger, and Roopak Shah. 1993. Signature verification using a siamese time delay neural network. In Advances in Neural Information Processing Systems 6, [7th NIPS Conference, Denver, Colorado, USA, 1993], pages 737–744. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, and Hui Jiang. 2016a. Distraction-based neural networks for modeling document. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pages 2754–2760. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017a. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1657–1668. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017b. Recurrent neural network-based sentence encoder with gated attention for natural language inference. In Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP, RepEval@EMNLP 2017, Copenhagen, Denmark, September 8, 2017, pages 36–40. Yun-Nung Chen, Dilek Z. Hakkani-T¨ur, G¨okhan T¨ur, Asli C¸ elikyilmaz, Jianfeng Gao, and Li Deng. 2016b. Knowledge as a teacher: Knowledgeguided structural attention networks. CoRR, abs/1609.03286. Zhigang Chen, Wei Lin, Qian Chen, Xiaoping Chen, Si Wei, Hui Jiang, and Xiaodan Zhu. 2015. Revisiting word embedding for contrasting meaning. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 106– 115. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 551–561. Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2017. 
Unsupervised learning of task-specific tree structures with tree-lstms. CoRR, abs/1707.02786. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 670– 680. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges, Evaluating Predictive Uncertainty, Visual Object Classification and Recognizing Textual Entailment, First PASCAL Machine Learning Challenges Workshop, MLCW 2005, Southampton, UK, April 11-13, 2005, Revised Selected Papers, pages 177–190. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard H. Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 - June 5, 2015, pages 1606–1615. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking nli systems with sentences that require simple lexical inferences. In The 56th Annual Meeting of the Association for Computational Linguistics (ACL), Melbourne, Australia. 2416 Yichen Gong, Heng Luo, and Jian Zhang. 2017. Natural language inference over interaction space. CoRR, abs/1709.04348. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Adrian Iftene and Alexandra Balahur-Dobrescu. 2007. Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, chapter Hypothesis Transformation and Semantic Variability Rules Used in Recognizing Textual Entailment. Association for Computational Linguistics. Jay J. Jiang and David W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of the 10th Research on Computational Linguistics International Conference, ROCLING 1997, Taipei, Taiwan, August 1997, pages 19–33. Valentin Jijkoun and Maarten de Rijke. 2005. Recognizing textual entailment using lexical similarity. In Proceedings of the PASCAL Challenges Workshop on Recognising Textual Entailment. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Zhouhan Lin, Minwei Feng, C´ıcero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. CoRR, abs/1703.03130. Pengfei Liu, Xipeng Qiu, Jifan Chen, and Xuanjing Huang. 2016a. Deep fusion lstms for text semantic matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Pengfei Liu, Xipeng Qiu, Yaqian Zhou, Jifan Chen, and Xuanjing Huang. 2016b. Modelling interaction of sentence pair with coupled-lstms. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1703–1712. Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015. Learning semantic word embeddings based on ordinal knowledge constraints. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 1501– 1511. Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. 2016c. Learning natural language inference using bidirectional LSTM model and inner-attention. CoRR, abs/1605.09090. Bill MacCartney. 2009. Natural Language Inference. Ph.D. thesis, Stanford University. Bill MacCartney, Michel Galley, and Christopher D. Manning. 2008. A phrase-based alignment model for natural language inference. In 2008 Conference on Empirical Methods in Natural Language Processing, EMNLP 2008, Proceedings of the Conference, 25-27 October 2008, Honolulu, Hawaii, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 802–811. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, System Demonstrations, pages 55–60. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 6297–6308. George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39–41. Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 2: Short Papers. Nikola Mrksic, Ivan Vulic, Diarmuid ´O S´eaghdha, Ira Leviant, Roi Reichart, Milica Gasic, Anna Korhonen, and Steve J. Young. 2017. Semantic specialisation of distributional word vector spaces using monolingual and cross-lingual constraints. CoRR, abs/1706.00374. Yixin Nie and Mohit Bansal. 2017. Shortcutstacked sentence encoders for multi-domain inference. In Proceedings of the 2nd Workshop on Evaluating Vector Space Representations for NLP, RepEval@EMNLP 2017, Copenhagen, Denmark, September 8, 2017, pages 41–45. Ankur P. Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2249–2255. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2417 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532–1543. Tim Rockt¨aschel, Edward Grefenstette, Karl Moritz Hermann, Tom´as Kocisk´y, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. CoRR, abs/1509.06664. Lei Sha, Baobao Chang, Zhifang Sui, and Sujian Li. 2016. Reading and thinking: Re-read LSTM unit for textual entailment recognition. 
In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 2870–2879. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2017. Disan: Directional self-attention network for rnn/cnn-free language understanding. CoRR, abs/1709.04696. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, and Chengqi Zhang. 2018. Reinforced selfattention network: a hybrid of hard and soft attention for sequence modeling. CoRR, abs/1801.10296. Chen Shi, Shujie Liu, Shuo Ren, Shi Feng, Mu Li, Ming Zhou, Xu Sun, and Houfeng Wang. 2016. Knowledge-based semantic embedding for machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2018. A compare-propagate architecture with alignment factorization for natural language inference. CoRR, abs/1801.00102. Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2015. Order-embeddings of images and language. CoRR, abs/1511.06361. Shuohang Wang and Jing Jiang. 2016. Learning natural language inference with LSTM. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 1442– 1451. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 4144–4150. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. TACL, 3:345– 358. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. CoRR, abs/1704.05426. Hong Yu and Tsendsuren Munkhdalai. 2017a. Neural semantic encoders. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 397–407. Hong Yu and Tsendsuren Munkhdalai. 2017b. Neural tree indexers for text understanding. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 11–21. Junbei Zhang, Xiaodan Zhu, Qian Chen, Lirong Dai, Si Wei, and Hui Jiang. 2017a. Exploring question understanding and adaptation in neural-network-based question answering. CoRR, abs/arXiv:1703.04617v2. Shiyue Zhang, Gulnigar Mahmut, Dong Wang, and Askar Hamdulla. 2017b. Memory-augmented chinese-uyghur neural machine translation. In 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2017, Kuala Lumpur, Malaysia, December 1215, 2017, pages 1092–1096.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2418–2428 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2418 AdvEntuRe: Adversarial Training for Textual Entailment with Knowledge-Guided Examples Dongyeop Kang1 Tushar Khot2 Ashish Sabharwal2 Eduard Hovy1 1School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA 2Allen Institute for Artificial Intelligence, Seattle, WA, USA fdongyeok,[email protected] ftushark,[email protected] Abstract We consider the problem of learning textual entailment models with limited supervision (5K-10K training examples), and present two complementary approaches for it. First, we propose knowledge-guided adversarial example generators for incorporating large lexical resources in entailment models via only a handful of rule templates. Second, to make the entailment model—a discriminator—more robust, we propose the first GAN-style approach for training it using a natural language example generator that iteratively adjusts based on the discriminator’s performance. We demonstrate effectiveness using two entailment datasets, where the proposed methods increase accuracy by 4.7% on SciTail and by 2.8% on a 1% training sub-sample of SNLI. Notably, even a single hand-written rule, negate, improves the accuracy on the negation examples in SNLI by 6.1%. 1 Introduction The impressive success of machine learning models on large natural language datasets often does not carry over to moderate training data regimes, where models often struggle with infrequently observed patterns and simple adversarial variations. A prominent example of this phenomenon is textual entailment, the fundamental task of deciding whether a premise text entails (⊨) a hypothesis text. On certain datasets, recent deep learning entailment systems (Parikh et al., 2016; Wang et al., 2017; Gong et al., 2018) have achieved close to human level performance. Nevertheless, the problem is far from solved, as evidenced by how easy it is to generate minor adversarial exTable 1: Failure examples from the SNLI dataset: negation (Top) and re-ordering (Bottom). P is premise, H is hypothesis, and S is prediction made by an entailment system (Parikh et al., 2016). P: The dog did not eat all of the chickens. H: The dog ate all of the chickens. S: entails (score 56:5%) P: The red box is in the blue box. H: The blue box is in the red box. S: entails (score 92:1%) amples that break even the best systems. As Table 1 illustrates, a state-of-the-art neural system for this task, namely the Decomposable Attention Model (Parikh et al., 2016), fails when faced with simple linguistic phenomena such as negation, or a re-ordering of words. This is not unique to a particular model or task. Minor adversarial examples have also been found to easily break neural systems on other linguistic tasks such as reading comprehension (Jia and Liang, 2017). A key contributor to this brittleness is the use of specific datasets such as SNLI (Bowman et al., 2015) and SQuAD (Rajpurkar et al., 2016) to drive model development. While large and challenging, these datasets also tend to be homogeneous. E.g., SNLI was created by asking crowd-source workers to generate entailing sentences, which then tend to have limited linguistic variations and annotation artifacts (Gururangan et al., 2018). Consequently, models overfit to sufficiently repetitive patterns—and sometimes idiosyncrasies—in the datasets they are trained on. 
They fail to cover long-tail and rare patterns in the training distribution, or linguistic phenomena such as negation that would be obvious to a layperson. To address this challenge, we propose to train textual entailment models more robustly using ad2419 versarial examples generated in two ways: (a) by incorporating knowledge from large linguistic resources, and (b) using a sequence-to-sequence neural model in a GAN-style framework. The motivation stems from the following observation. While deep-learning based textual entailment models lead the pack, they generally do not incorporate intuitive rules such as negation, and ignore large-scale linguistic resources such as PPDB (Ganitkevitch et al., 2013) and WordNet (Miller, 1995). These resources could help them generalize beyond specific words observed during training. For instance, while the SNLI dataset contains the pattern two men ⊨people, it does not contain the analogous pattern two dogs ⊨ animals found easily in WordNet. Effectively integrating simple rules or linguistic resources in a deep learning model, however, is challenging. Doing so directly by substantially adapting the model architecture (Sha et al., 2016; Chen et al., 2018) can be cumbersome and limiting. Incorporating such knowledge indirectly via modified word embeddings (Faruqui et al., 2015; Mrkˇsi´c et al., 2016), as we show, can have little positive impact and can even be detrimental. Our proposed method, which is task-specific but model-independent, is inspired by dataaugmentation techniques. We generate new training examples by applying knowledge-guided rules, via only a handful of rule templates, to the original training examples. Simultaneously, we also use a sequence-to-sequence or seq2seq model for each entailment class to generate new hypotheses from a given premise, adaptively creating new adversarial examples. These can be used with any entailment model without constraining model architecture. We also introduce the first approach to train a robust entailment model using a Generative Adversarial Network or GAN (Goodfellow et al., 2014) style framework. We iteratively improve both the entailment system (the discriminator) and the differentiable part of the data-augmenter (specifically the neural generator), by training the generator based on the discriminator’s performance on the generated examples. Importantly, unlike the typical use of GANs to create a strong generator, we use it as a mechanism to create a strong and robust discriminator. Our new entailment system, called AdvEntuRe, demonstrates that in the moderate data regime, adversarial iterative data-augmentation via only a handful of linguistic rule templates can be surprisingly powerful. Specifically, we observe 4.7% accuracy improvement on the challenging SciTail dataset (Khot et al., 2018) and a 2.8% improvement on 10K-50K training subsets of SNLI. An evaluation of our algorithm on the negation examples in the test set of SNLI reveals a 6.1% improvement from just a single rule. 2 Related Work Adversarial example generation has recently received much attention in NLP. For example, Jia and Liang (2017) generate adversarial examples using manually defined templates for the SQuAD reading comprehension task. Glockner et al. (2018) create an adversarial dataset from SNLI by using WordNet knowledge. Automatic methods (Iyyer et al., 2018) have also been proposed to generate adversarial examples through paraphrasing. 
These works reveal how neural network systems trained on a large corpus can easily break when faced with carefully designed unseen adversarial patterns at test time. Our motivation is different. We use adversarial examples at training time, in a data augmentation setting, to train a more robust entailment discriminator. The generator uses explicit knowledge or hand written rules, and is trained in a end-to-end fashion along with the discriminator. Incorporating external rules or linguistic resources in a deep learning model generally requires substantially adapting the model architecture (Sha et al., 2016; Liang et al., 2017; Kang et al., 2017). This is a model-dependent approach, which can be cumbersome and constraining. Similarly non-neural textual entailment models have been developed that incorporate knowledge bases. However, these also require model-specific engineering (Raina et al., 2005; Haghighi et al., 2005; Silva et al., 2018). An alternative is the modeland taskindependent route of incorporating linguistic resources via word embeddings that are retro-fitted (Faruqui et al., 2015) or counterfitted (Mrkˇsi´c et al., 2016) to such resources. We demonstrate, however, that this has little positive impact in our setting and can even be detrimental. Further, it is unclear how to incorporate knowledge sources into advanced representations such as contextual embeddings (McCann et al., 2420 2017; Peters et al., 2018). We thus focus on a task-specific but model-independent approach. Logical rules have also been defined to label existing examples based on external resources (Hu et al., 2016). Our focus here is on generating new training examples. Our use of the GAN framework to create a better discriminator is related to CatGANs (Wang and Zhang, 2017) and TripleGANs (Chongxuan et al., 2017) where the discriminator is trained to classify the original training image classes as well as a new ‘fake’ image class. We, on the other hand, generate examples belonging to the same classes as the training examples. Further, unlike the earlier focus on the vision domain, this is the first approach to train a discriminator using GANs for a natural language task with discrete outputs. 3 Adversarial Example Generation We present three different techniques to create adversarial examples for textual entailment. Specifically, we show how external knowledge resources, hand-authored rules, and neural language generation models can be used to generate such examples. Before describing these generators in detail, we introduce the notation used henceforth. We use lower-case letters for single instances (e.g., x; p; h), upper-case letters for sets of instances (e.g., X; P; H), blackboard bold for models (e.g., D), and calligraphic symbols for discrete spaces of possible values (e.g., class labels C). For the textual entailment task, we assume each example is represented as a triple (p, h, c), where p is a premise (a natural language sentence), h is a hypothesis, and c is an entailment label: (a) entails (v) if h is true whenever p is true; (b) contradicts (⋏) if h is false whenever p is true; or (c) neutral (#) if the truth value of h cannot be concluded from p being true.1 We will introduce various example generators in the rest of this section. Each such generator, G, is defined by a partial function f and a label g. If a sentence s has a certain property required by f (e.g., contains a particular string), f transforms it into another sentence s0 and g provides an entailment label from s to s0. 
Applied to a sentence s, G thus either “fails” (if the pre-requisite isn’t met) or generates a new entailment example triple, s; f(s); g  . For instance, consider the generator 1The symbols are based on Natural Logic (Lakoff, 1970) and use the notation of MacCartney and Manning (2012). Source ρ f(s) g Knowledge Base, GKB WordNet hyper(x; y) v anto(x, y) ⋏ syno(x, y) Replace x with y in s v PPDB x  y v SICK c(x; y) c Hand-authored, GH Domain knowledge neg negate(s) ⋏ Neural Model, Gs2s Training data (s2s, c) Gs2s c (s) c Table 2: Various generators G characterized by their source, (partial) transformation function f as applied to a sentence s, and entailment label g for :=hypernym(car, vehicle) with the (partial) transformation function f:=“Replace car with vehicle” and the label g:=entails. f would fail when applied to a sentence not containing the word “car”. Applying f to the sentence s=“A man is driving the car” would generate s’=“A man is driving the vehicle”, creating the example (s; s0; entails). The seven generators we use for experimentation are summarized in Table 2 and discussed in more detail subsequently. While these particular generators are simplistic and one can easily imagine more advanced ones, we show that training using adversarial examples created using even these simple generators leads to substantial accuracy improvement on two datasets. 3.1 Knowledge-Guided Generators Large knowledge-bases such as WordNet and PPDB contain lexical equivalences and other relationships highly relevant for entailment models. However, even large datasets such as SNLI generally do not contain most of these relationships in the training data. E.g., that two dogs entails animals isn’t captured in the SNLI data. We define simple generators based on lexical resources to create adversarial examples that capture the underlying knowledge. This allows models trained on these examples to learn these relationships. As discussed earlier, there are different ways of incorporating such symbolic knowledge into neural models. Unlike task-agnostic ways of approaching this goal from a word embedding perspective (Faruqui et al., 2015; Mrkˇsi´c et al., 2016) 2421 or the model-specific approach (Sha et al., 2016; Chen et al., 2018), we use this knowledge to generate task-specific examples. This allows any entailment model to learn how to use these relationships in the context of the entailment task, helping them outperform the above task-agnostic alternative. Our knowledge-guided example generators, GKB  , use lexical relations available in a knowledge-base:  := r(x; y) where the relation r (such as synonym, hypernym, etc.) may differ across knowledge bases. We use a simple (partial) transformation function, f(s):=“Replace x in s with y”, as described in an earlier example. In some cases, when part-of-speech (POS) tags are available, the partial function requires the tags for x in s and in r(x; y) to match. The entailment label g for the resulting examples is also defined based on the relation r, as summarized in Table 2. This idea is similar to Natural Logic Inference or NLI (Lakoff, 1970; Sommers, 1982; Angeli and Manning, 2014) where words in a sentence can be replaced by their hypernym/hyponym to produce entailing/neutral sentences, depending on their context. We propose a context-agnostic use of lexical resources that, despite its simplicity, already results in significant gains. 
We use three sources for generators: WordNet (Miller, 1995) is a large, handcurated, semantic lexicon with synonymous words grouped into synsets. Synsets are connected by many semantic relations, from which we use hyponym and synonym relations to generate entailing sentences, and antonym relations to generate contradicting sentences2. Given a relation r(x; y), the (partial) transformation function f is the POS-tag matched replacement of x in s with y, and requires the POS tag to be noun or verb. NLI provides a more robust way of using these relations based on context, which we leave for future work. PPDB (Ganitkevitch et al., 2013) is a large resource of lexical, phrasal, and syntactic paraphrases. We use 24,273 lexical paraphrases in their smallest set, PPDB-S (Pavlick et al., 2015), as equivalence relations, x  y. The (partial) transformation function f for this generator is POS-tagged matched replacement of x in s with y, and the label g is entails. 2A similar approach was used in a parallel work to generate an adversarial dataset from SNLI (Glockner et al., 2018). SICK (Marelli et al., 2014) is dataset with entailment examples of the form (p; h; c), created to evaluate an entailment model’s ability to capture compositional knowledge via hand-authored rules. We use the 12,508 patterns of the form c(x; y) extracted by Beltagy et al. (2016) by comparing sentences in this dataset, with the property that for each SICK example (p; h; c), replacing (when applicable) x with y in p produces h. For simplicity, we ignore positional information in these patterns. The (partial) transformation function f is replacement of x in s with y, and the label g is c. 3.2 Hand-Defined Generators Even very large entailment datasets have no or very few examples of certain otherwise common linguistic constructs such as negation,3 causing models trained on them to struggle with these constructs. A simple model-agnostic way to alleviate this issue is via a negation example generator whose transformation function f(s) is negate(s), described below, and the label g is contradicts. negate(s): If s contains a ‘be’ verb (e.g., is, was), add a “not” after the verb. If not, also add a “did” or “do” in front based on its tense. E.g., change “A person is crossing” to “A person is not crossing” and “A person crossed” to “A person did not cross.” While many other rules could be added, we found that this single rule covered a majority of the cases. Verb tenses are also considered4 and changed accordingly. Other functions such as dropping adverbial clauses or changing tenses could be defined in a similar manner. Both the knowledge-guided and hand-defined generators make local changes to the sentences based on simple rules. It should be possible to extend the hand-defined rules to cover the long tail (as long as they are procedurally definable). However, a more scalable approach would be to extend our generators to trainable models that can cover a wider range of phenomena than hand-defined rules. Moreover, the applicability of these rules generally depends on the context which can also be incorporated in such trainable generators. 3.3 Neural Generators For each entailment class c, we use a trainable sequence-to-sequence neural model (Sutskever 3Only 211 examples (2.11%) in the SNLI training set contain negation triggers such as not, ’nt, etc. 4https://www.nodebox.net/code/index.php/Linguistics 2422 et al., 2014; Luong et al., 2015) to generate an entailment example (s; s0; c) from an input sentence s. 
The seq2seq model, trained on examples labeled c, itself acts as the transformation function f of the corresponding generator G^{s2s}_c. The label g is set to c. The joint probability of the seq2seq model is:

G^{s2s}_c(X_c; \theta_c) = G^{s2s}_c(H_c, P_c; \theta_c)   (1)
                         = \prod_i P(h_{i,c} \mid p_{i,c}; \theta_c) \, P(h_i)   (2)

The loss function for training the seq2seq model is:

\hat{\theta}_c = \arg\min_{\theta_c} \mathcal{L}(H_c, G^{s2s}_c(X_c; \theta_c))   (3)

where \mathcal{L} is the cross-entropy loss between the original hypothesis H_c and the predicted hypothesis. Cross-entropy is computed for each predicted word w_i against the corresponding word in H_c, given the sequence of previous words in H_c. \hat{\theta}_c are the optimal parameters in G^{s2s}_c that minimize the loss for class c. We use the single most likely output to generate sentences in order to reduce decoding time. 3.4 Example Generation The generators described above are used to create new entailment examples from the training data. For each example (p, h, c) in the data, we can create two new examples: (p, f(p), g) and (h, f(h), g). The examples generated this way using GKB and GH can, however, be relatively easy, as the premise and hypothesis would differ by only a word or so. We therefore compose such simple ("first-order") generated examples with the original input example to create more challenging "second-order" examples. We can create second-order examples by composing the original example (p, h, c) with a generated sentence from the hypothesis, f(h), or from the premise, f(p). Figure 1 depicts how these two kinds of examples are generated from an input example (p, h, c).

[Figure 1: Generating first-order (blue) and second-order (red) examples from an input entailment example and its generated sentences.]

Table 3: Entailment label composition functions L (left) and N (right) for creating second-order examples. c and g are the original and generated labels, resp. v: entails, ⋏: contradicts, #: neutral, ?: undefined

  p ⇒ h   h ⇒ h'   p ⇒ h'  |  p ⇒ h   p ⇒ p'   p' ⇒ h
    c        g     L(c, g) |    c        g     N(g, c)
    v        v        v    |    v        v        ?
    v        ⋏        ⋏    |    v        ⋏        ?
    v        #        #    |    v        #        #
    ⋏        v        ?    |    ⋏        v        ?
    ⋏        ⋏        ?    |    ⋏        ⋏        ?
    ⋏        #        #    |    ⋏        #        #
    #        v        #    |    #        v        #
    #        ⋏        #    |    #        ⋏        #
    #        #        #    |    #        #        #

First, we consider the second-order example between the original premise and the transformed hypothesis: (p, f(h), L(c, g)), where L, defined in the left half of Table 3, composes the input example label c (connecting p and h) and the generated example label g to produce a new label. For instance, if p entails h and h entails f(h), then p entails f(h); in other words, L(v, v) = v. For example, composing ("A man is playing soccer", "A man is playing a game", v) with a generated hypothesis f(h), "A person is playing a game", gives a new second-order entailment example: ("A man is playing soccer", "A person is playing a game", v). Second, we create an example from the generated premise to the original hypothesis: (f(p), h, N(g, c)). The composition function here, denoted N and defined in the right half of Table 3, is often undetermined. For example, if p entails f(p) and p entails h, the relation between f(p) and h is undetermined, i.e., N(v, v) = ?. While this particular composition N often leads to undetermined or neutral relations, we use it here for completeness.
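The label composition in Section 3.4 can be read as a simple table lookup. The sketch below encodes the left half of Table 3 (the function L) as a Python dictionary and builds a second-order example from a generated hypothesis; the string encoding of labels and the function names are our own, and N would be handled analogously with the right half of the table.

```python
# Second-order example construction via the composition function L (Table 3, left).
# "v" = entails, "^" = contradicts, "#" = neutral, "?" = undefined (our encoding).
L = {
    ("v", "v"): "v", ("v", "^"): "^", ("v", "#"): "#",
    ("^", "v"): "?", ("^", "^"): "?", ("^", "#"): "#",
    ("#", "v"): "#", ("#", "^"): "#", ("#", "#"): "#",
}

def second_order_from_hypothesis(p, h, c, f_h, g):
    """Compose (p, h, c) with a generated example (h, f(h), g) into (p, f(h), L(c, g))."""
    return (p, f_h, L[(c, g)])

print(second_order_from_hypothesis(
    p="A man is playing soccer",
    h="A man is playing a game",
    c="v",
    f_h="A person is playing a game",
    g="v",
))
# ('A man is playing soccer', 'A person is playing a game', 'v')
```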
For example, composing the previous example with a generated neutral premise, f(p): “A person is wearing a cap” would generate an example (“A person is wearing a cap”, “A man is playing a game”, #) The composition function L is the same as the “join” operation in natural logic reasoning (Icard III and Moss, 2014), except for two differences: (a) relations that do not belong to our 2423 three entailment classes are mapped to ‘?’, and (b) the exclusivity/alternation relation is mapped to contradicts. The composition function N, on the other hand, does not map to the join operation. 3.5 Implementation Details Given the original training examples X, we generate the examples from each premise and hypothesis in a batch using GKB and GH. We also generate new hypothesis per class for each premise using Gs2s c . Using all the generated examples to train the model would, however, overwhelm the original training set. For examples, our knowledge-guided generators GKB can be applied in 17,258,314 different ways. To avoid this, we sub-sample our synthetic examples to ensure that they are proportional to the input examples X, specifically they are bounded to ˛jXj where ˛ is tuned for each dataset. Also, as seen in Table 3, our knowledge-guided generators are more likely to generate neutral examples than any other class. To make sure that the labels are not skewed, we also sub-sample the examples to ensure that our generated examples have the same class distribution as the input batch. The SciTail dataset only contains two classes: entails mapped to v and neutral mapped to ⋏. As a result, generated examples that do not belong to these two classes are ignored. The sub-sampling, however, has a negative sideeffect where our generated examples end up using a small number of lexical relations from the large knowledge bases. On moderate datasets, this would cause the entailment model to potentially just memorize these few lexical relations. Hence, we generate new entailment examples for each mini-batch and update the model parameters based on the training+generated examples in this batch. The overall example generation procedure goes as follows: For each mini-batch X (1) randomly choose 3 applicable rules per source and sentence (e.g., replacing men with people based on PPDB in premise is one rule), (2) produce examples Zall using GKB, GH and Gs2s, (3) randomly sub-select examples Z from Zall to ensure the balance between classes and jZj= ˛jXj. 4 AdvEntuRe Figure 2 shows the complete architecture of our model, AdvEntuRe (ADVersarial training for textual ENTailment Using Rule-based Examples.). The entailment model D is shown with the white box and two proposed generators are shown using black boxes. We combine the two symbolic untrained generators, GKB and GH into a single Grule model. We combine the generated adversarial examples Z with the original training examples X to train the discriminator. Next, we describe how the individual models are trained and finally present our new approach to train the generator based on the discriminator’s performance. 4.1 Discriminator Training We use one of the state-of-the-art entailment models (at the time of its publication) on SNLI, decomposable attention model (Parikh et al., 2016) with intra-sentence attention as our discriminator D. The model attends each word in hypothesis with each word in the premise, compares each pair of the attentions, and then aggregates them as a final representation. 
This discriminator model can be easily replaced with any other entailment model without any other change to the AdvEntuRe architecture. We pre-train our discriminator D on the original dataset X = (P, H, C) using:

D(X; \theta) = \arg\max_{\hat{C}} D(\hat{C} \mid P, H; \theta)   (4)
\hat{\theta} = \arg\min_{\theta} \mathcal{L}(C, D(X; \theta))   (5)

where \mathcal{L} is the cross-entropy loss function between the true labels C and the predicted classes, and \hat{\theta} are the learned parameters. 4.2 Generator Training Our knowledge-guided and hand-defined generators are symbolic, parameter-less methods which are not currently trained. For simplicity, we will refer to the set of symbolic rule-based generators as G_rule := G_KB ∪ G_H. The neural generator G^{s2s}, on the other hand, can be trained as described earlier. We leave the training of the symbolic models for future work. 4.3 Adversarial Training We now present our approach to iteratively train the discriminator and generator in a GAN-style framework. Unlike a traditional GAN (Goodfellow et al., 2014) for image/text generation, which aims to obtain better generators, our goal is to build a robust discriminator regularized by the generators (G^{s2s} and G_rule). The discriminator and generator are iteratively trained against each other to achieve better discrimination on the augmented data from the generator and better example generation against the learned discriminator.

[Figure 2: Overview of AdvEntuRe, our model for knowledge-guided textual entailment.]

Algorithm 1 Training procedure for AdvEntuRe.
 1: pretrain discriminator D(\hat{\theta}_D) on X
 2: pretrain generators G^{s2s}_c(\hat{\theta}_G) on X
 3: for number of training iterations do
 4:   for mini-batch B ⊆ X do
 5:     generate examples from G:
 6:       Z_G ← G(B; \theta_G)
 7:     balance X and Z_G s.t. |Z_G| ≤ α|X|
 8:     optimize discriminator:
 9:       \hat{\theta}_D = \arg\min \mathcal{L}_D(X + Z_G; \theta_D)
10:     optimize generator:
11:       \hat{\theta}_G = \arg\min \mathcal{L}_{G^{s2s}}(Z_G; \mathcal{L}_D; \theta_G)
12:     update \hat{\theta}_D, \hat{\theta}_G

Algorithm 1 shows our training procedure. First, we pre-train the discriminator D and the seq2seq generators G^{s2s} on the original data X. We alternate the training of the discriminator and generators over K iterations (set to 30 in our experiments). For each iteration, we take a mini-batch B from our original data X. For each mini-batch, we generate new entailment examples Z_G using our adversarial example generators. Once we collect all the generated examples, we balance the examples based on their source and label (as described in Section 3.5). In each training iteration, we optimize the discriminator against the augmented training data X + Z_G and use the discriminator loss to guide the generator to pick challenging examples. For every mini-batch of examples X + Z_G, we compute the discriminator loss \mathcal{L}(C, D(X + Z_G; \theta)) and apply the negative of this loss to each word of the generated sentence in G^{s2s}. In other words, the discriminator loss value replaces the cross-entropy loss used to train the seq2seq model (similar to a REINFORCE (Williams, 1992) reward). This basic approach uses the loss over the entire batch to update the generator, ignoring whether specific examples were hard or easy for the discriminator.
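The alternating procedure in Algorithm 1 can be summarized with the schematic loop below. Everything here is a placeholder sketch rather than the authors' code: D, G_s2s, and G_rule stand for the discriminator and generators with assumed pretrain/generate/train_step/reinforce_step methods, alpha plays the role of the sub-sampling ratio, and the class balancing is only approximate (the paper matches the class distribution of the input batch).

```python
import random

def adventure_training(X, D, G_s2s, G_rule, num_iters=30, alpha=1.0, batch_size=32):
    """GAN-style alternation: augment each mini-batch, train D, reward G_s2s."""
    D.pretrain(X)
    G_s2s.pretrain(X)
    for _ in range(num_iters):
        for B in minibatches(X, batch_size):
            # Generate candidate examples from the rule-based and neural generators.
            Z_all = G_rule.generate(B) + G_s2s.generate(B)
            # Sub-sample so the generated set stays proportional to the batch.
            Z = balance_by_class(Z_all, max_size=int(alpha * len(B)))
            # Optimize the discriminator on original + generated examples.
            d_loss = D.train_step(B + Z)
            # The negative discriminator loss acts as a REINFORCE-style reward
            # that replaces the seq2seq cross-entropy loss.
            G_s2s.reinforce_step(Z, reward=-d_loss)

def minibatches(X, size):
    data = list(X)
    random.shuffle(data)
    for i in range(0, len(data), size):
        yield data[i:i + size]

def balance_by_class(examples, max_size):
    """Keep at most max_size examples, spread roughly evenly over labels."""
    by_label = {}
    for ex in examples:  # ex is assumed to be a (premise, hypothesis, label) triple
        by_label.setdefault(ex[2], []).append(ex)
    per_label = max(1, max_size // max(1, len(by_label)))
    kept = []
    for exs in by_label.values():
        random.shuffle(exs)
        kept.extend(exs[:per_label])
    return kept[:max_size]
```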
Instead, one could update the generator per example based on the discriminator’s loss on that example. We leave this for future work. 5 Experiments Our empirical assessment focuses on two key questions: (a) Can a handful of rule templates improve a state-of-the-art entailment system, especially with moderate amounts of training data? (b) Can iterative GAN-style training lead to an improved discriminator? To this end, we assess various models on the two entailment datasets mentioned earlier: SNLI (570K examples) and SciTail (27K examples).5 To test our hypothesis that adversarial example based training prevents overfitting in small to moderate training data regimes, we compare model accuracies on the test sets when using 1%, 10%, 50%, and 100% subsamples of the train and dev sets. We consider two baseline models: D, the Decomposable Attention model (Parikh et al., 2016) with intra-sentence attention using pre-trained word embeddings (Pennington et al., 2014); and Dretro which extends D with word embeddings initialized by retrofitted vectors (Faruqui et al., 2015). The vectors are retrofitted on PPDB, Word5SNLI has a 96.4%/1.7%/1.7% split and SciTail has a 87.3%/4.8%/7.8% split on train, valid, and test sets, resp. 2425 Table 4: Test accuracies with different subsampling ratios on SNLI (top) and SciTail (bottom). SNLI 1% 10% 50% 100% D 57.68 75.03 82.77 84.52 Dretro 57.04 73.45 81.18 84.14 AdvEntuRe ⌞D + Gs2s 58.35 75.66 82.91 84.68 ⌞D + Grule 60.45 77.11 83.51 84.40 ⌞D + Grule + Gs2s 59.33 76.03 83.02 83.25 SciTail 1% 10% 50% 100% D 56.60 60.84 73.24 74.29 Dretro 59.75 67.99 69.05 72.63 AdvEntuRe ⌞D + Gs2s 65.78 70.77 74.68 76.92 ⌞D + Grule 61.74 66.53 73.99 79.03 ⌞D + Grule + Gs2s 63.28 66.78 74.77 78.60 Net, FrameNet, and all of these, with the best results for each dataset reported here. Our proposed model, AdvEntuRe, is evaluated in three flavors: D augmented with examples generated by Grule, Gs2s, or both, where Grule = GKB[GH. In the first two cases, we create new examples for each batch in every epoch using a fixed generator (cf. Section 3.5). In the third case (D + Grule + Gs2s), we use the GAN-style training. We uses grid search to find the best hyperparameters for D based on the validation set: hidden size 200 for LSTM layer, embedding size 300, dropout ratio 0.2, and fine-tuned embeddings. The ratio between the number of generated vs. original examples, ˛ is empirically chosen to be 1.0 for SNLI and 0.5 for SciTail, based on validation set performance. Generally, very few generated examples (small ˛) has little impact, while too many of them overwhelm the original dataset resulting in worse scores (cf. Appendix for more details). 5.1 Main Results Table 4 summarizes the test set accuracies of the different models using various subsampling ratios for SNLI and SciTail training data. We make a few observations. First, Dretro is ineffective or even detrimental in most cases, except on SciTail when 1% (235 examples) or 10% (2.3K examples) of the training data is used. The gain in these two cases is likely because retrofitted lexical rules are helpful with extremely less data training while not as data size increases. On the other hand, our method always achieves Table 5: Test accuracies across various rules R and classes C. 
Since SciTail has two classes, we only report results on two classes of Gs2s R/C SNLI (5%) SciTail (10%) D +Grule D 69.18 60.84 + PPDB 72.81 (+3.6%) 65.52 (+4.6%) + SICK 71.32 (+2.1%) 67.49 (+6.5%) + WordNet 71.54 (+2.3%) 64.67 (+3.8%) + HAND 71.15 (+1.9%) 69.05 (+8.2%) + all 71.31 (+2.1%) 64.16 (+3.3%) D +Gs2s D 69.18 60.84 + positive 71.21 (+2.0%) 67.49 (+6.6%) + negative 71.76 (+2.6%) 68.95 (+8.1%) + neutral 71.72 (+2.5%) + all 72.28 (+3.1%) 70.77 (+9.9%) the best result compared to the baselines (D and Dretro). Especially, significant improvements are made in less data setting: +2.77% in SNLI (1%) and 9.18% in SciTail (1%). Moreover, D + Grule’s accuracy on SciTail (100%) also outperforms the previous state-of-the-art model (DGEM (Khot et al., 2018), which achieves 77.3%) for that dataset by 1.7%. Among the three different generators combined with D, both Grule and Gs2s are useful in SciTail, while Grule is much more useful than Gs2s on SNLI. We hypothesize that seq2seq model trained on large training sets such as SNLI will be able to reproduce the input sentences. Adversarial examples from such a model are not useful since the entailment model uses the same training examples. However, on smaller sets, the seq2seq model would introduce noise that can improve the robustness of the model. 5.2 Ablation Study To evaluate the impact of each generator, we perform ablation tests against each symbolic generator in D + Grule and the generator Gs2s c for each entailment class c. We use a 5% sample of SNLI and a 10% sample of SciTail. The results are summarized in Table 5. Interestingly, while PPDB (phrasal paraphrases) helps the most (+3.6%) on SNLI, simple negation rules help significantly (+8.2%) on SciTail dataset. Since most entailment examples in SNLI are minor rewrites by Turkers, PPDB often contains these simple paraphrases. For SciTail, the sentences are authored independently with limited gains from simple paraphrasing. However, a model trained on only 10% of the dataset (2.3K 2426 Table 6: Given a premise P (underlined), examples of hypothesis sentences H’ generated by seq2seq generators Gs2s, and premise sentences P’ generated by rule based generators Grule, on the full SNLI data. Replaced words or phrases are shown in bold. This illustrates that even simple, easy-to-define rules can generate useful adversarial examples. P a person on a horse jumps over a broken down airplane H’: Gs2s c=v a person is on a horse jumps over a rail, a person jumping over a plane H’: Gs2s c=⋏ a person is riding a horse in a field with a dog in a red coat H’: Gs2s c=# a person is in a blue dog is in a park P (or H) a dirt bike rider catches some air going offa large hill P’: GKB(PPDB) =;g=v a dirt motorcycle rider catches some air going offa large hill P’: GKB(SICK) =c;g=# a dirt bike man on yellow bike catches some air going offa large hill P’: GKB(WordNet) =syno;g=v a dirt bike rider catches some atmosphere going offa large hill P’: GHand =neg;g=⋏ a dirt bike rider do not catch some air going offa large hill examples) would end up learning a model relying on purely word overlap. We believe that the simple negation examples introduce neutral examples with high lexical overlap, forcing the model to find a more informative signal. On the other hand, using all classes for Gs2s results in the best performance, supporting the effectiveness of the GAN framework for penalizing or rewarding generated sentences based on D’s loss. 
Preferential selection of rules within the GAN framework remains a promising direction. 5.3 Qualitative Results Table 6 shows examples generated by various methods in AdvEntuRe. As shown, both seq2seq and rule based generators produce reasonable sentences according to classes and rules. As expected, seq2seq models trained on very few examples generate noisy sentences. The quality of our knowledge-guided generators, on the other hand, does not depend on the training set size and they still produce reliable sentences. 5.4 Case Study: Negation For further analysis of the negation-based generator in Table 1, we collect only the negation examples in test set of SNLI, henceforth referred to as nega-SNLI. Specifically, we extract examples where either the premise or the hypothesis contains “not”, “no”, “never”, or a word that ends with “n’t’. These do not cover more subtle ways of expressing negation such as “seldom” and the use of antonyms. nega-SNLI contains 201 examples with the following label distribution: 51 (25.4%) neutral, 42 (20.9%) entails, 108 (53.7%) contradicts. Table 7 shows examples in each category. Table 7: Negation examples in nega-SNLI v P: several women are playing volleyball. H: this doesn’t look like soccer. # P: a man with no shirt on is performing with a baton. H: a man is trying his best at the national championship of baton. ⋏ P: island native fishermen reeling in their nets after a long day’s work. H: the men did not go to work today but instead played bridge. While D achieves an accuracy of only 76.64%6 on nega-SNLI, D + GH with negate is substantially more successful (+6.1%) at handling negation, achieving an accuracy of 82.74%. 6 Conclusion We introduced an adversarial training architecture for textual entailment. Our seq2seq and knowledge-guided example generators, trained in an end-to-end fashion, can be used to make any base entailment model more robust. The effectiveness of this approach is demonstrated by the significant improvement it achieves on both SNLI and SciTail, especially in the low to medium data regimes. Our rule-based generators can be expanded to cover more patterns and phenomena, and the seq2seq generator extended to incorporate per-example loss for adversarial training. 6This is much less than the full test accuracy of 84.52%. 2427 References Gabor Angeli and Christopher D Manning. 2014. NaturalLI: Natural logic inference for common sense reasoning. In EMNLP, pages 534–545. Islam Beltagy, Stephen Roller, Pengxiang Cheng, Katrin Erk, and Raymond J. Mooney. 2016. Representing meaning with a combination of logical and distributional models. Computational Linguistics, 42:763–808. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, and Diana Inkpen. 2018. Natural language inference with external knowledge. In ACL. LI Chongxuan, Taufik Xu, Jun Zhu, and Bo Zhang. 2017. Triple generative adversarial nets. In NIPS, pages 4091–4101. Manaal Faruqui, Jesse Dodge, Sujay K Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2015. Retrofitting word vectors to semantic lexicons. NAACL. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In NAACL-HLT, pages 758–764. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking nli systems with sentences that require simple lexical inferences. In ACL. Yichen Gong, Heng Luo, and Jian Zhang. 2018. 
Natural language inference over interaction space. ICLR. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In NIPS, pages 2672–2680. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R Bowman, and Noah A Smith. 2018. Annotation artifacts in natural language inference data. In NAACL. Aria Haghighi, Andrew Ng, and Christopher Manning. 2005. Robust textual inference via graph matching. In EMNLP. Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. 2016. Harnessing deep neural networks with logic rules. ACL. Thomas Icard III and Lawrence Moss. 2014. Recent progress in monotonicity. LiLT (Linguistic Issues in Language Technology), 9. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke S. Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In NAACL. R. Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In EMNLP. Dongyeop Kang, Varun Gangal, Ang Lu, Zheng Chen, and Eduard Hovy. 2017. Detecting and explaining causes from text for a time series event. In EMNLP. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTail: A textual entailment dataset from science question answering. AAAI. George Lakoff. 1970. Linguistics and Natural Logic. Synthese, 22(1-2):151–271. Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. 2017. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. In ACL. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP. Bill MacCartney and Christopher D. Manning. 2012. Natural logic and natural language inference. In Computing Meaning. Text, Speech and Language Technology, volume 47. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In LREC, pages 216–223. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In NIPS. George A Miller. 1995. WordNet: a lexical database for english. Communications of the ACM, 38(11):39–41. Nikola Mrkˇsi´c, Diarmuid O S´eaghdha, Blaise Thomson, Milica Gaˇsi´c, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In HLT-NAACL. Ankur P. Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In EMNLP. Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification. In ACL. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In EMNLP, pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL. 2428 Rajat Raina, Aria Haghighi, Christopher Cox, Jenny Finkel, JeffMichels, Kristina Toutanova, Bill MacCartney, Marie-Catherine de Marneffe, Christopher D Manning, and Andrew Y Ng. 2005. Robust textual inference using diverse knowledge sources. 
In 1st PASCAL Recognition Textual Entailment Challenge Workshop. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP. Lei Sha, Sujian Li, Baobao Chang, and Zhifang Sui. 2016. Recognizing textual entailment via multi-task knowledge assisted lstm. In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data, pages 285–298. Springer. Vivian S Silva, Andr´e Freitas, and Siegfried Handschuh. 2018. Recognizing and justifying text entailment through distributional navigation on definition graphs. In AAAI. Fred Sommers. 1982. The logic of natural language. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS, pages 3104–3112. Shanshan Wang and Lei Zhang. 2017. CatGAN: Coupled adversarial transfer for domain generation. CoRR, abs/1711.08904. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. In IJCAI. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. In Reinforcement Learning, pages 5–32. Springer.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2429–2438 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2429 Subword-level Word Vector Representations for Korean Sungjoon Park 1, Jeongmin Byun 1, Sion Baek 2, Yongseok Cho 3, Alice Oh 1 1 Department of Computing, KAIST, Republic of Korea 2 Program in Cognitive Science, Seoul National University, Republic of Korea 3 Natural Language Processing team, Adecco, Republic of Korea {sungjoon.park, jmbyun}@kaist.ac.kr, [email protected] [email protected], [email protected] Abstract Research on distributed word representations is focused on widely-used languages such as English. Although the same methods can be used for other languages, language-specific knowledge can enhance the accuracy and richness of word vector representations. In this paper, we look at improving distributed word representations for Korean using knowledge about the unique linguistic structure of Korean. Specifically, we decompose Korean words into the jamo level, beyond the characterlevel, allowing a systematic use of subword information. To evaluate the vectors, we develop Korean test sets for word similarity and analogy and make them publicly available. The results show that our simple method outperforms word2vec and character-level Skip-Grams on semantic and syntactic similarity and analogy tasks and contributes positively toward downstream NLP tasks such as sentiment analysis. 1 Introduction Word vector representations built from a large corpus embed useful semantic and syntactic knowledge. They can be used to measure the similarity between words and can be applied to various downstream tasks such as document classification (Yang et al., 2016), conversation modeling (Serban et al., 2016), and machine translation (Neishi et al., 2017). Most previous research for learning the vectors focuses on English (Collobert and Weston, 2008; Mikolov et al., 2013b,a; Pennington et al., 2014; Liu et al., 2015; Cao and Lu, 2017) and thus leads to difficulties and limitations in directly applying those techniques to a language with a different internal structure from that of English. The mismatch is especially significant for morphologically rich languages such as Korean where the morphological richness could be captured by subword level embedding such as character embedding. It has been already shown that decomposing a word into subword units and using them as inputs improves performance for downstream NLP such as text classification (Zhang et al., 2015), language modeling (Kim et al., 2016), and machine translation (Ling et al., 2015; Lee et al., 2017). Despite their effectiveness in capturing syntactic features of diverse languages, decomposing a word into a set of n-grams and learning n-gram vectors does not consider the unique linguistic structures of various languages. Thus, researchers have integrated language-specific structures to learn word vectors, for example subcharacter components of Chinese characters (Yu et al., 2017) and syntactic information (such as prefixes or post-fixes) derived from external sources for English (Cao and Lu, 2017). For Korean, integrating Korean linguistic structure at the level of jamo, the consonants and vowels that are much more rigidly defined than English, is shown to be effective for sentence parsing (Stratos, 2017). 
Previous work has looked at improving the vector representations of Korean using the character-level decomposition (Choi et al., 2017), but there is room for further investigation because Korean characters can be decomposed to jamos which are smaller units than the characters. In this paper, we propose a method to integrate Korean-specific subword information to learn Korean word vectors and show improvements over previous baselines methods for word similarity, analogy, and sentiment analysis. Our first contri2430 bution is the method to decompose the words into both character-level units and jamo-level units and train the subword vectors through the Skip-Gram model. Our second major contribution is the Korean evaluation datasets for word similarity and analogy tasks, a translation of the WS-353 with annotations by 14 Korean native speakers, and 10,000 items for semantic and syntactic analogies, developed with Korean linguistic expertise. Using those datasets, we show that our model improves performance over other baseline methods without relying on external resources for word decomposition. 2 Related Work 2.1 Language-specific features for NLP Recent studies in NLP field flourish with development of various word vector models. Although such studies aim for universal usage, distinct characteristics of individual languages still remain as a barrier for a unified model. The aforementioned issue is even more prominent when it comes to languages that have rich morphology but lack resources for research (Berardi et al., 2015). Accordingly, various studies dealing with language specific NLP technique proposed considering linguistics traits in models. A large portion of these papers was dedicated to Chinese. Since Chinese is a logosyllabic language, (Yu et al., 2017) relevant studies focused on incorporation of different subword level features on word embedding, such as word internal structure (Wang et al., 2017), subcharacter component,(Yu et al., 2017), syllable (Assylbekov et al., 2017), radicals (Yin et al., 2016), and sememe (Niu et al., 2017). The Korean language is a member of the agglutinative languages (Song, 2006), so previous studies have tried fusing the complex internal structure into the model. For example, a grammatical composition called ’Josa’ in combination with word embedding is utilized in semantic role labeling (Nam and Kim, 2016) and exploiting jamo to handle morphological variation (Stratos, 2017). Also considered in prior work to obtain the word vector presentations for Korean is the syllable (Choi et al., 2017). 2.2 Subword features for NLP Applying subword features to various NLP tasks has become popular in the NLP field. Typically, character-level information is useful when combined with the neural network based models. (Vania and Lopez, 2017; Assylbekov et al., 2017; Cao and Lu, 2017) Previous papers showed performance enhancement in various tasks including language modeling (Bojanowski et al., 2017, 2015), machine translation (Ling et al., 2015), text classification (Zhang et al., 2015; Ling et al., 2015) and parsing (Yu and Vu, 2017). In addition, the character n-gram fused model was suggested as a solution for a small dataset due to its robustness against data sparsity (Cao and Lu, 2017). 3 Model We introduce our model training Korean word vector representations based on a subword-level information Skip-Gram. 
First, we briefly explain the hierarchical composition structure of Korean words to show how we decompose a Korean word into a sequence of subword components (jamo). Then, we extract character and jamo n-grams from the decomposed sequence to compute word vectors as a mean of the extracted n-grams. We train the vectors by widely-used Skip-Gram model. 3.1 Decomposition of Korean Words Korean words are formed by an explicit hierarchical structure which can be exploited for better modeling. Every word can be decomposed into a sequence of characters, which in turn can be decomposed into jamos, the smallest lexicographic units representing the consonants and vowels of the language. Unlike English which has a more flexible sequences of consonants and vowels making up syllables (e.g., ”straight”), a Korean ”character” which is similar to a syllable in English has a rigid structure of three jamos. They have names that reflect the position in a character: 1) chosung (syllable onset), 2) joongsung (syllable nucleus), and 3) jongsung (syllable coda). The prefix cho in chosung means ”first”, joong in joongsung means ”middle”, and jong in jongsung means ”end” of a character. Each component indicates how the character should be pronounced. With the exception of empty consonants, chosung and jongsung are consonants while joongsung are vowels. The jamos are written with the chosung on top, with joongsung on the right of or below chosung, and jongsung on the bottom (see Fig. 1). As shown in the top of Fig. 1, some characters such as ‘해Sun’ lack jongsung. In this case, we add 2431 (a) chosung (b) joongsung (c) jongsung Figure 1: Example of the composition of a Korean character. Each character is comprised of 3 parts as shown in example of ’달Moon’. On the other hand, as in the top case ’해Sun’, some characters lack the last component, ’jongsung’. an empty jongsung symbol e such that a character always has three (jamos). Thus, the character ‘달Moon’ is decomposed into {ㄷ, ㅏ, ㄹ}, and ‘해Sun’ into {ㅎ, ㅐ, e}. When decomposing a word, we keep the order of the characters and the order of jamos (chosung, joongsung, and jongsung) within the character. By following this rule, we ensure that a Korean word with N characters will have 3N jamos in order. Lastly, the symbols for start of a word < and end of a word > are added to the sequence. For example, the word ‘강아지puppy’ will be decomposed to a sequence of jamos: {<, ㄱ, ㅏ, ㅇ, ㅇ, ㅏ, e, ㅈ, ㅣ, e, >}. 3.2 Extracting N-grams from jamo Sequence We extract the following jamo-level and characterlevel n-grams from the decomposed Korean words: 1) character-level n-grams, and 2) intercharacter jamo-level n-grams. These two levels of subword features can be successfully integrated into jamo-level n-grams by ensuring a character has three jamos, adding empty jongsung symbol to the sequence. For better understanding, we start with the word ‘먹었다ate’. Character-level n-grams. Since we add the empty jongsung symbol e when decomposing characters, we can find jamo-level trigrams representing a single character in the decomposed jamo sequence of a word. For example, there are three character-level unigrams in the word ‘먹었다ate’: {ㅁ, ㅓ, ㄱ}, {ㅇ, ㅓ, ㅆ}, {ㄷ, ㅏ, e} Next, we find character-level n-grams by using the extracted unigrams. Adjacent unigrams are attached to construct n-grams. 
There are two character-level bigrams and one trigram in the example:

{ㅁ, ㅓ, ㄱ, ㅇ, ㅓ, ㅆ}, {ㅇ, ㅓ, ㅆ, ㄷ, ㅏ, e}
{ㅁ, ㅓ, ㄱ, ㅇ, ㅓ, ㅆ, ㅇ, ㅓ, ㅆ, ㄷ, ㅏ, e}

Lastly, we add the full jamo sequence of the word, including < and >, to the set of extracted character-level n-grams. Inter-character jamo-level n-grams. Since Korean is an agglutinative language, a syntactic character is attached to the semantic part of a word, and this generates many variations. These variations are often determined by jamo-level information. For example, the choice between the subject case markers '이' and '가' is determined by the existence of jongsung in the previous character. In order to learn these regularities, we consider jamo-level n-grams across adjacent characters as well. For instance, there are 6 inter-character jamo-level trigrams in the example:

{<, ㅁ, ㅓ}, {ㅓ, ㄱ, ㅇ}, {ㄱ, ㅇ, ㅓ}, {ㅆ, ㄷ, ㅏ}, {ㅓ, ㅆ, ㄷ}, {ㅏ, e, >}

3.3 Subword Information Skip-Gram Suppose the training corpus contains a sequence of words {..., w_{t-2}, w_{t-1}, w_t, w_{t+1}, w_{t+2}, ...}. The Skip-Gram model maximizes the average log probability of a context word w_{t+j} given a target word w_t:

\frac{1}{T} \sum_{t=1}^{T} \sum_{-c \le j \le c,\, j \ne 0} \log p(w_{t+j} \mid w_t)   (1)

where c is the size of the context window and T is the total number of words in the corpus. The original Skip-Gram model uses softmax outputs for \log p(w_{t+j} \mid w_t) in Eq. 1; however, this requires a large computational cost. To avoid computing the softmax exactly, we approximately maximize the log probability with Noise Contrastive Estimation, which can be simplified to negative sampling using the binary logistic loss:

\log\left(1 + e^{-s(w_{t+j}, w_t)}\right) + \sum_{n=1}^{n_c} \log\left(1 + e^{s(w_{t+j}, w_n)}\right)   (2)

where n_c is the number of negative samples and s(w_{t+j}, w_t) is a scoring function. The function computes the dot product between the input vector of the target word w_t and the output vector of the context word w_{t+j}. In Skip-Gram (Mikolov et al., 2013a), an input of a word w_t is uniquely assigned over the training corpus; however, the vector in the Subword Information Skip-Gram model (Bojanowski et al., 2017) is the mean vector of the set of n-grams extracted from the word. Formally, the scoring function s(w_t, w_{t+j}) is:

\frac{1}{|G_t|} \sum_{g_t \in G_t} z_{g_t}^{\top} v_{t+j}   (3)

where G_t is the set of n-grams extracted from w_t, g_t are its elements, and |G_t| is the total number of elements of G_t. In general, n-grams with 3 ≤ n ≤ 6 are extracted from a word, regardless of the subword level or compositionality of the word. Similarly, we construct a vector representation of a Korean word by using the two types of extracted n-grams. We compute the sum of the jamo-level n-gram vectors and the sum of the character-level n-gram vectors, and take the mean of these vectors. Let G_{ct} denote the character-level n-grams of w_t and G_{jt} the inter-character jamo-level n-grams; then the scoring function s(w_t, w_{t+j}) is:

\frac{1}{N} \left( \sum_{g_{ct} \in G_{ct}} z_{g_{ct}}^{\top} v_{t+j} + \sum_{g_{jt} \in G_{jt}} z_{g_{jt}}^{\top} v_{t+j} \right)   (4)

where z_{g_{jt}} is the vector representation of the jamo-level n-gram g_{jt}, and z_{g_{ct}} is that of the character-level n-gram g_{ct}. N is the sum of the number of character-level n-grams and the number of inter-character jamo-level n-grams, |G_{ct}| + |G_{jt}|. 4 Experiments 4.1 Corpus We collect a corpus of Korean documents from various sources to cover a wide range of word usages. The corpus used to train the models includes: 1) Korean Wikipedia, 2) online news articles, and 3) the Sejong Corpus. The corpus contains 0.12 billion tokens with 638,708 unique words. We discard words that occur fewer than ten times in the entire corpus. Details of the corpus are shown in Table 1.
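As a concrete illustration of Sections 3.1 and 3.2, the following sketch decomposes a word into jamos (via Unicode NFD normalization, padding open syllables with an empty-jongsung symbol) and extracts the two kinds of n-grams. The function names, the 'e' / '<' / '>' symbols as plain strings, and the exact filter used to keep only n-grams that cross character boundaries are our reading of the text, not the released code; the sketch also assumes Hangul-only input.

```python
import unicodedata

def decompose_word(word):
    """Word -> jamo sequence with '<' / '>' boundaries and 'e' for empty jongsung."""
    jamos = ["<"]
    for ch in word:
        parts = list(unicodedata.normalize("NFD", ch))  # 2 or 3 conjoining jamos
        if len(parts) == 2:
            parts.append("e")        # character without jongsung
        jamos.extend(parts)          # every character contributes exactly 3 jamos
    jamos.append(">")
    return jamos

def char_ngrams(jamos, n_max=4):
    """Character-level n-grams: blocks of 3 jamos, plus the full jamo sequence."""
    body = jamos[1:-1]
    chars = [tuple(body[i:i + 3]) for i in range(0, len(body), 3)]
    grams = [sum(chars[i:i + n], ())
             for n in range(1, n_max + 1)
             for i in range(len(chars) - n + 1)]
    grams.append(tuple(jamos))
    return grams

def jamo_ngrams(jamos, n_min=3, n_max=5):
    """Jamo-level n-grams that cross character boundaries (character-aligned
    blocks are skipped, since char_ngrams already covers them)."""
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(jamos) - n + 1):
            aligned = (n % 3 == 0 and i >= 1 and (i - 1) % 3 == 0
                       and i + n <= len(jamos) - 1)
            if not aligned:
                grams.append(tuple(jamos[i:i + n]))
    return grams

seq = decompose_word("먹었다")   # '<' + 9 conjoining jamos (one 'e' pad) + '>'
print(char_ngrams(seq)[:3])      # the three character-level unigrams
print(jamo_ngrams(seq, 3, 3))    # the six inter-character trigrams
```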
Korean Wikipedia. First, we choose Korean Wikipedia articles1 for training word vector representations. The corpus contains 0.4M articles, 3.3M sentences and 43.4M words. Online News Articles. We collect online news articles of 5 major press from following sections: 1) society, 2) politics, 3) economics, 4) foreign, 5) culture, 6) digital. The articles were published from September to November, 2017. The corpus contains 3.2M sentences and 47.1M words. 1https://dumps.wikimedia.org/kowiki/20171103/ # of words # of sentences # of unique words Wikipedia 43.4M 3.3M 299,528 Online News 47.1M 3.2M 282,955 Sejong Corpus 31.4M 2.2M 231,332 Total 121.9M 8.8M 638,708 Table 1: Number of tokens, sentences and unique words of corpus used to train the word vector representations. We aggregate three sources to make the corpus containing 0.12 billions word tokens with 0.6M unique words. Sejong Corpus. This data is a publicly available corpus2 which is collected under a national research project named the “21st century Sejong Project”. The corpus was developed from 1998 to 2007, and contains formal text (newpapers, dictionaries, novels, etc) and informal text (transcriptions of TV shows and radio programs, etc). Thus, the corpus covers topics and context of language usage which could not be dealt with Wikipedia or news articles. We exclude some documents containing unnatural sentences such as POS-tagged sentences. 4.2 Evaluation Tasks and Datasets We evaluate the performance of word vectors through word similarity task and word analogy task. However, to best of our knowledge, there is no Korean evaluation dataset for either task. Thus we first develop the evaluation datasets. We also test the word vectors for sentiment analysis. 4.2.1 Word Similarity Evaluation Dataset Translating the test set. We develop a Korean version of the word similarity evaluation set. Two graduate students who speak Korean as native language translated the English word pairs in WS-353 (Finkelstein et al., 2001). Then, 14 Korean native speakers annotated the similarity between pairs by giving scores from 0 to 10 for the translated pairs, following written instructions. The original English instructions were translated into Korean as well. Among the 14 scores for each pair, we exclude the minimum and maximum scores and compute the mean of the rest of the scores. The correlation between the original scores and the annotated scores of the translated pairs is .82, which 2https://ithub.korean.go.kr/user/main.do 2433 indicates that the translations are sufficiently reliable. We attribute the difference to the linguistic and cultural differences. We make the Korean version of WS-353 publicly available.3 4.2.2 Word Analogy Evaluation Dataset We develop the word analogy test items to evaluate the performance of word vectors. The evaluation dataset consists of 10,000 items and includes 5,000 items for evaluating the semantic features and 5,000 for the syntactic features. We also release our word analogy evaluation dataset for future research. Semantic Feature Evaluation To evaluate the semantic features of word vectors, we refer to the English version of the word analogy test sets. (Mikolov et al., 2013a; Gladkova et al., 2016). We cover the features in both sets and translated items into Korean. The items are clustered to five categories including miscellaneous items. Each category consists of 1,000 items. • Capital-Country (Capt.) 
includes two word pairs representing the relation between the country name and its capital: 아테네Athens : 그리스Greece = 바그다드Baghdad : 이라크Iraq • Male-Female (Gend.) evaluates the relation between male and female: 왕자prince:공주princess = 신사gentlemen:숙녀ladies • Name-Nationality (Name) evaluates the relation between the name of celebrities or stars and their nationality: 간디Gandhi : 인도India = 링컨Lincoln : 미국USA • Country-Language (Lang.) evaluates the relation between the country name and its official language: 아르헨티나Argentina : 스페인어Spanish = 미국USA : 영어English • Miscellaneous (Mics.) includes various semantic features, such as pairs of a young animals, sound of animals, and Korean-specific color-words or regions, etc.. 개구리Frog : 올챙이tadpole = 말horse : 망아지pony 닭chicken:꼬꼬댁cackling=호랑이tiger:으르렁growl 파란blue:새파란bluish=노란yellow:샛노란yellowish 부산Busan : 경상남도South Gyeongsang Province = 대구Daegu : 경상북도North Gyeongsang Province Syntactic Feature Evaluation We define five representative syntactic categories and develop 3https://github.com/SungjoonPark/KoreanWordVectors Korean-specific test items, rather than trying to cover the existing categories in the original sets (Mikolov et al., 2013a; Gladkova et al., 2016). This is because most of the syntactic features in these sets are not available in Korean. We develop the test set with linguistic expert knowledge of Korean. The following case is a good example. In Korean, the subject marker is attached to the back of a word, and other case markers are also explicit at the word level. Here, word level refers to ‘a phrase delimited by two whitespaces around it’. Unlike Korean, in English, subjects are determined by the position in a sentence (i.e., subject comes before the verb), so the case is not explicitly marked in the word. Similarly, there are other important and unique syntactic features of the Korean language, of which we choose the following five categories to evaluate the word vectors: • Case contains various case markers attached to common nouns. This evaluates a case in Korean which is represented within a wordlevel: 교수Professor : 교수가Professor+case가 = 축구soccer : 축구가soccer+case가 • Tense includes a verb variation of two tenses, one of which is a present tense and a past tense for the other: 싸우다fight : 싸웠다fought = 오다come : 왔다came • Voice has a pair of verb voice, one for an active voice and a passive voice for the other. It evaluates the voice which is represented by a verbal suffix: 팔았다sold : 팔렸다be sold = 평가했다evaluated : 평가됐다was evaluated • Verb ending form includes various verb ending forms. The various forms are part of verbal inflection in Korean: 가다go : 가고go+form고 = 쓰다write : 쓰고write+form고 • Honorific (Honr.) evaluates a morphological variation for verbs in Korean. An honorific expression is one of the most distinctive feature in Korean compared to other languages. This test set introduces the honorific morpheme ‘-시-’ which is used in verbs: 도왔다helped : 도우셨다helped+honorific시 = 됐다done : 되셨다done+honorific시 4.2.3 Sentiment Analysis We perform a binary sentiment classification task for evaluation of word vectors. Given a sequence 2434 of words, the trained classifier should predict the sentiment from the inputs while maintaining the input word vectors fixed. Dataset We choose Naver Sentiment Movie Corpus4. Scraped from Korean portal site Naver, the dataset contains 200K movie reviews. Each review is no longer than 140 characters and contain binary label according to its sentiment (1 for positive and 0 for negative). 
The number of samples in both sentiments is equal with 100K of positives and 100K of negatives in sum. We sample from the dataset for training (100K), validation (25K), and test set (25K). Again, each set’s ratio of sentiment class is balanced. Although we apply simple preprocessing of stripping out punctuation and emoticon, the dataset is still noisy with typos, segmentation errors and abnormal word usage since its original source is raw comments from portal site. Classifier In order to build sentiment classifier, we adopt single layer LSTM with 300 hidden units and 0.5 dropout rates. Given the final state of LSTM unit, sigmoid activation function is applied for output prediction. We use cross-entropy loss and optimize parameters through Adam optimizer (Kingma and Ba, 2014) with learning rate of 0.001. 4.3 Comparison Models We compare performance of our model to comparison models including word-level, character-level, and jamo-level Skip-Gram models trained by negative sampling. Hyperparameters of each models are tuned over word similarity task. We fix the number of training epochs 5. Skip-Gram (SG) We first compare the performance with word-level Skip-Gram model (Mikolov et al., 2013a) where a unique vector is assigned for every unique words in the corpus. We set the number of dimensions as 300, number of negative samples to 5, and window size to 5. Character-level Skip-Gram (SISG(ch)) splits words to character-level n-grams based on subword information skip-gram. (Bojanowski et al., 2017). We set the number of dimensions as 300, number of negative samples to 5, and window size to 5. The n was set to 2-4. Jamo-level Skip-Gram with Empty Jongsung Symbol (SISG(jm)) splits words to jamo-level ngrams based on subword information skip-gram. 4https://github.com/e9t/nsmc 0.599 0.658 0.671 0.677 0.677 0.550 0.580 0.610 0.640 0.670 0.700 SG SISG (ch) SISG (jm) SISG (ch4+jm) SISG (ch6+jm) Word Similarity Figure 2: Spearman’s correlation coefficient of word similarity task for each models. The results show higher consistency to human word similarity judgment on our method. (Bojanowski et al., 2017). In addition, if a character lacks jongsung, the symbol e is added. We set the number of dimensions as 300, number of negative samples to 5, and window size to 5. The n was set to 3-6. Note that setting n=3-6 and adding the jongsung symbol makes this model as a specific case of our model, containing jamo-level ngrams (n=3-6) and character-level n-grams (n=12) as well. 4.4 Optimization In order to train our model, we apply stochastic gradient descent with linearly scheduled learning rate decay. Initial learning rate is set to .025. To speed up the training, we train the vectors in parallel with shared parameters, and they are updated asynchronously. For our model, we set n of character n-grams to 1-4 or 1-6, and n of inter-character jamolevel n-grams to 3-5. We name both model as SISG(ch4+jm) and SISG(ch6+jm), respectively. The number of dimension is set to 300, window size to 5, and negative samples to 5. We train our model 5 epochs over training corpus. 5 Results Word Similarity. We report Spearman’s correlation coefficient between the human judgment and model’s cosine similarity for the similarity of word pairs. Fig. 2 presents the results. For word-level skip-gram, Spearman’s correlation is .599. If we decompose words into characters n-grams in order to construct word vectors (SISG(Ch)), performance is highly improved to .658. 
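For reference, the word-similarity evaluation described above can be computed along the following lines; this is our own sketch, not the authors' evaluation script. It assumes word_vectors maps words to NumPy arrays and folds in the annotation aggregation from Section 4.2.1 (drop one minimum and one maximum score, then average).

```python
import numpy as np
from scipy.stats import spearmanr

def trimmed_mean(scores):
    """Drop one minimum and one maximum annotation, then average the rest."""
    s = sorted(scores)
    return float(np.mean(s[1:-1])) if len(s) > 2 else float(np.mean(s))

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate_word_similarity(pairs, annotations, word_vectors):
    """pairs: list of (w1, w2); annotations: per-pair lists of annotator scores."""
    human, model = [], []
    for (w1, w2), scores in zip(pairs, annotations):
        if w1 in word_vectors and w2 in word_vectors:
            human.append(trimmed_mean(scores))
            model.append(cosine(word_vectors[w1], word_vectors[w2]))
    rho, _ = spearmanr(human, model)
    return rho
```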
It indicates that decomposing words itself is helpful to learn good 2435 Model Analogy Semantic Syntactic Capt Gend Name Lang Misc Case Tense Voice Form Honr SG 0.460 0.551 0.537 0.435 0.574 0.521 0.597 0.594 0.685 0.634 SISG(ch) 0.469 0.584 0.608 0.439 0.614 0.422 0.559 0.550 0.656 0.489 SISG(jm) 0.442 0.515 0.574 0.362 0.565 0.228 0.421 0.434 0.537 0.367 SISG(ch4+jm) 0.431 0.504 0.570 0.361 0.556 0.212 0.415 0.434 0.501 0.364 SISG(ch6+jm) 0.425 0.498 0.561 0.354 0.554 0.210 0.414 0.426 0.507 0.367 Table 2: Performance of our method and comparison models. Average cosine distance for each category in word analogy task are reported. Overall, our model outperforms comparison models, showing close distance between predicted vector a + b −c and the target vector d (a:b=c:d). Specifically, performance is improved more in syntactic analogies. Korean word vectors, which is morphologically rich language. Moreover, if the words are decomposed to deeper level (SISG(jm)), performance is further improved to .671. Next, addition of an empty jongsung symbol e to jamo sequence, which reflects Koreanspecific linguistic regularities, improves the quality of word vectors. SISG(jm), specific case of our model, shows higher correlation coefficient than the other baselines. Lastly, when we extend number of characters to learn in a word to 4 or 6, our models outperform others. Word Analogy. In general, given an item a:b=c:d and corresponding word vectors ua, ub, uc, ud, the vector ua + ub −uc is used to compute cosine distances between the vector and the others. Then the vectors are ranked in terms of the distance by ascending order and if the vector ud is found at the top, the item is counted as correct. Top 1 accuracy or error rate for each category is frequently used metric for this task, however, in this case these rank-based measures may not be an appropriate measure since the total number of unique n-grams (e.g., SISG) or unique words (e.g., SG) over the same corpus largely differ from each other. For fair comparison, we directly report cosine distances between the vector ua + ub −uc and ud of each category, rather than evaluating ranks of the vectors. Formally, given an item a:b=c:d, we compute 3COSADD based metric: 1 −cos(ua + ub −uc, ud) (5) We report the average cosine distance between predicted vector ua + ub −uc and target vector ud of each category. In semantic analogies, decomposing word into character helps little for learning semantic features. However, jamo-level n-grams help representing overall semantic features and our model show higher performance compared to baseline models. One exception is Name-Nationality category since it mainly consists of items including proper nouns, and decomposing these nouns does not help learning the semantic feature of the word. For example, it is obvious that the semantic features of both words ‘간디Ghandi’ and ‘인도India’ could not be derived from that of characters or jamo n-grams comprising those words. On the other hand, decomposing words does help to learn syntactic features for all categories, and decomposing a word to even deeper levels makes learning those features more effectively. Our model outperforms all other baselines, and the amount of decreased cosine distances compared to that of word-level Skip-Gram is larger than semantic categories. Korean language is agglutinative language that character-level syntactic affixes are attached to the root of the word, and the combination of them determines final form the word. 
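The 3COSADD-based distance of Eq. (5) is straightforward to compute; the small sketch below (our own, with illustrative names) averages it over a category of analogy items (a, b, c, d), mirroring the per-category numbers reported in Table 2.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy_distance(u_a, u_b, u_c, u_d):
    """1 - cos(u_a + u_b - u_c, u_d); lower is better."""
    return 1.0 - cosine(u_a + u_b - u_c, u_d)

def category_score(items, word_vectors):
    """Average distance over a category of analogy items (a, b, c, d)."""
    dists = [analogy_distance(*(word_vectors[w] for w in item)) for item in items]
    return float(np.mean(dists))
```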
Also, the form can be reduced with jamo-level transformation. This is the main reason that we can learn syntactic feature of Korean words if we decompose a word into character-level and jamo-level simultanously. We observe similar tendency when using 3COSMUL distance metric. (Levy and Goldberg, 2014) Sentiment Analysis. We report accuracy, loss, precision, recall and f1 score for binary sentiment classification task over test set. Although overall performance is homogeneous, our method which decompose a word to 1-6 character n-grams and 35 jamo n-grams show slightly higher performance over comparison models. In addition, our approach show better results compared to character2436 Model Acc. (%) Prc. Rec. F1 SG 76.15 .746 .792 .768 SISG(ch) 76.26 .774 .741 .757 SISG(jm) 76.53 .790 .722 .754 SISG(ch4+jm) 76.28 .755 .776 .765 SISG(ch6+jm) 76.54 .750 .795 .772 Table 3: Performance of sentiment classification task. 3-5 jamo n-grams and 1-6 chracter n-grams show slightly higher performance in terms of accuracy and f1-score over comparison models. Word Sim. # of chars 4 5 6 all # of jamos 2-4 0.660 0.655 0.659 0.651 3-4 0.660 0.650 0.652 0.660 3-5 0.677 0.672 0.677 0.675 3-6 0.665 0.663 0.664 0.669 Table 4: Spearman’s correlation coefficient of Word similarity task by n-gram of jamos and characters. Performance are improved when the 3-5 gram of jamos and 1-4 or 1-6 gram of characters. level SISG or jamo-level SISG. On the other hand, word-level Skip-Gram show comparable F1-score to our model, and is even higher than other comparison models. This is because the dataset contains significant amount of proper nouns, such as movie or actor names, and these word’s semantic representations are captured better by word-level representations, as shown in word analogy task. Effect of Size n in both n-grams. Table. 4 shows performance of word similarity task for each number of inter-character jamo-level n-grams and character-level n-grams. For the n of jamo-level ngrams, including n=5,6 of n-grams and excluding bigrams show higher performance. Meanwhile, n of character-level n-grams, including all of the character n-grams while decomposing a word does not guarantee performance improvement. Since most of the Korean word consists of no more than 6 characters (97.2% of total corpus), it seems maximum number of n=6 in character n-gram is large enough to learn word vectors. In addition, words with no more than 4 characters takes 82.6% of total corpus, so that n=4 sufficient to learn character n-grams as well. 6 Conclusion and Discussions In this paper, we present how to decompose a Korean character into a sequence of jamos with empty jongsung symbols, then extract characterlevel n-grams and intercharacter jamo-level ngrams from that sequence. Both n-grams construct a word vector representation by computing the average of n-grams, and these vectors are trained by subword-level information Skip-Gram. Prior to evaluating the performance of the vectors, we developed test set for word similarity and word analogy tasks for Korean. We demonstrated the effectiveness of the learned word vectors in capturing the semantic and syntactic information by evaluating these vectors with word similarity and word analogy tasks. Specifically, the vectors using both jamo and character-level information can represent syntactic features more precisely even in an agglutinative language. 
Furthermore, sentiment classification results of our work indicate that the representative power of the vectors positively contributes to downstream NLP task. Decomposing Korean word into jamo-level or character unigram helps capturing syntactic information. For example, Korean words add a character to the root of the word (e.g., ‘-은’ subjective case, ‘-었’ for past tense ‘-시-’ for honorific, ‘-히’ for voice, and ‘-고-’ for verb ending form.) Then composed word can be reduced to have fewer characters by transforming jamos, such as ‘되었 다’ to ‘됐다’. Hence, the inter-character jamolevel n-grams also help capture these features. On the other hand, larger n-grams such as characterlevel trigram will learn unique meaning of that word since those larger component of the word will mostly occur with that word. By leveraging both features, our method produces word vectors reflecting linguistic features effectively, and thus, outperforms previous word-level approaches. Since Korean words are divisible once more into grapheme level, resulting in longer sequence of jamos for a given word, we plan to explore potential applicability of deeper level of subword information in Korean. Meanwhile, we will further train our model over noisy data and investigate how it is dealing with noisy words. Generally, informal Korean text contains intentional typos (‘맛잇다‘delicious’ with typo’), stand-alone jamo as a character, (‘ㅋㅋlol’) and segmentation errors. (‘같 이가다‘go together’ without space’). Since these errors 2437 occur frequently, it is important to apply the vectors in training NLP models over real-word data. We plan to apply these vectors for various neural network based NLP models, such as conversation modeling. Lastly, since our method can capture Korean syntactic features through jamo and character n-grams, we can apply the same idea to other tasks such as POS tagging and parsing. Acknowledgments This work was supported by Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No.2017-0-01778, Development of Explainable Human-level Deep Machine Learning Inference Framework). References Zhenisbek Assylbekov, Rustem Takhanov, Bagdat Myrzakhmetov, and Jonathan N Washington. 2017. Syllable-aware neural language models: A failure to beat character-aware ones. In Proc. of EMNLP. Giacomo Berardi, Andrea Esuli, and Diego Marcheggiani. 2015. Word embeddings go to italy: A comparison of models and training datasets. In IIR. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the ACL . Piotr Bojanowski, Armand Joulin, and Tomas Mikolov. 2015. Alternative structures for character-level rnns. arXiv preprint arXiv:1511.06303 . Shaosheng Cao and Wei Lu. 2017. Improving word embeddings with convolutional feature learning and subword information. In Proc. of AAAI. Sanghyuk Choi, Taeuk Kim, Jinseok Seol, and Sanggoo Lee. 2017. A syllable-based technique for word embeddings of korean words. In Proc. of the First Workshop on Subword and Character Level Models in NLP. pages 36–40. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proc. of ICML. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In Proc. of WWW. Anna Gladkova, Aleksandr Drozd, and Satoshi Matsuoka. 2016. 
Analogy-based detection of morphological and semantic relations with word embeddings: what works and what doesn’t. In Proc. of the NAACL Student Research Workshop. pages 8–15. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In Proc. of AAAI. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 . Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine translation without explicit segmentation. Transactions of the ACL . Omer Levy and Yoav Goldberg. 2014. Linguistic regularities in sparse and explicit word representations. In Proceedings of the Eighteenth CoNLL. Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. 2015. Character-based neural machine translation. In Proc. of ACL. 2438 Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2015. Learning context-sensitive word embeddings with neural tensor skip-gram model. In Proc. of IJCAI. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 . Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proc. of NIPS. Kyeong-Min Nam and Yu-Seop Kim. 2016. A word embedding and a josa vector for korean unsupervised semantic role induction. In AAAI. pages 4240–4241. Masato Neishi, Jin Sakuma, Satoshi Tohda, Shonosuke Ishiwatari, Naoki Yoshinaga, and Masashi Toyoda. 2017. A bag of useful tricks for practical neural machine translation: Embedding layer initialization and large batch size. In Proceedings of the 4th Workshop on Asian Translation (WAT2017). Yilin Niu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Improved word representation learning with sememes. In Proc. of ACL. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proc. of EMNLP. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proc. of AAAI. Jae Jung Song. 2006. The Korean language: Structure, use and context. Routledge. Karl Stratos. 2017. A sub-character architecture for korean language processing. In Proc. of EMNLP. pages 721–726. Clara Vania and Adam Lopez. 2017. From characters to words to in between: Do we capture morphology? In Proc. of ACL. Shaonan Wang, Jiajun Zhang, and Chengqing Zong. 2017. Exploiting word internal structures for generic chinese sentence representation. In Proc. of EMNLP. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proc. of NAACL. Rongchao Yin, Quan Wang, Peng Li, Rui Li, and Bin Wang. 2016. Multi-granularity chinese word embedding. In Proc. of EMNLP. pages 981–986. Jinxing Yu, Xun Jian, Hao Xin, and Yangqiu Song. 2017. Joint embeddings of chinese words, characters, and fine-grained subcharacter components. In Proc. of EMNLP. pages 286–291. Xiang Yu and Ngoc Thang Vu. 2017. Character composition model with convolutional neural networks for dependency parsing on morphologically rich languages. In Proc. of ACL. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Proc. of NIPS.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2439–2449 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2439 Incorporating Chinese Characters of Words for Lexical Sememe Prediction Huiming Jin1∗†, Hao Zhu2†, Zhiyuan Liu2,3‡, Ruobing Xie4, Maosong Sun2,3, Fen Lin4, Leyu Lin4 1 Shenyuan Honors College, Beihang University, Beijing, China 2 Beijing National Research Center for Information Science and Technology, State Key Laboratory of Intelligent Technology and Systems, Department of Computer Science and Technology, Tsinghua University, Beijing, China 3Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou 221009 China 4 Search Product Center, WeChat Search Application Department, Tencent, China Abstract Sememes are minimum semantic units of concepts in human languages, such that each word sense is composed of one or multiple sememes. Words are usually manually annotated with their sememes by linguists, and form linguistic commonsense knowledge bases widely used in various NLP tasks. Recently, the lexical sememe prediction task has been introduced. It consists of automatically recommending sememes for words, which is expected to improve annotation efficiency and consistency. However, existing methods of lexical sememe prediction typically rely on the external context of words to represent the meaning, which usually fails to deal with low-frequency and out-ofvocabulary words. To address this issue for Chinese, we propose a novel framework to take advantage of both internal character information and external context information of words. We experiment on HowNet, a Chinese sememe knowledge base, and demonstrate that our framework outperforms state-of-the-art baselines by a large margin, and maintains a robust performance even for low-frequency words. i 1 Introduction A sememe is an indivisible semantic unit for human languages defined by linguists (Bloomfield, 1926). The semantic meanings of concepts (e.g., words) can be composed by a finite number of sememes. However, the sememe set of a word is ∗Work done while doing internship at Tsinghua University. † Equal contribution. Huiming Jin proposed the overall idea, designed the first experiment, conducted both experiments, and wrote the paper; Hao Zhu made suggestions on ensembling, proposed the second experiment, and spent a lot of time on proofreading the paper and making revisions. All authors helped shape the research, analysis and manuscript. ‡ Corresponding author: Z. Liu ([email protected]) i Code is available at https://github.com/thunlp/Character-enhanced-Sememe-Prediction 职位 (occupation) HostOf define RelateTo sememes sense word domain Word embedding 匠(craftsman) 铁(iron) 金属 (metal) 工 (industrial) 人 (human) ironsmith 铁匠(ironsmith) External information Internal information Figure 1: Sememes of the word “铁匠” (ironsmith) in HowNet, where occupation, human and industrial can be inferred by both external (contexts) and internal (characters) information, while metal is well-captured only by the internal information within the character “ 铁” (iron). not explicit, which is why linguists build knowledge bases (KBs) to annotate words with sememes manually. HowNet is a classical widely-used sememe KB (Dong and Dong, 2006). 
In HowNet, linguists manually define approximately 2, 000 sememes, and annotate more than 100, 000 common words in Chinese and English with their relevant sememes in hierarchical structures. HowNet is well developed and has a wide range of applications in many NLP tasks, such as word sense disambiguation (Duan et al., 2007), sentiment analysis (Fu et al., 2013; Huang et al., 2014) and cross-lingual word similarity (Xia et al., 2011). Since new words and phrases are emerging every day and the semantic meanings of existing concepts keep changing, it is time-consuming and work-intensive for human experts to annotate new 2440 concepts and maintain consistency for large-scale sememe KBs. To address this issue, Xie et al. (2017) propose an automatic sememe prediction framework to assist linguist annotation. They assumed that words which have similar semantic meanings are likely to share similar sememes. Thus, they propose to represent word meanings as embeddings (Pennington et al., 2014; Mikolov et al., 2013) learned from a large-scale text corpus, and they adopt collaborative filtering (Sarwar et al., 2001) and matrix factorization (Koren et al., 2009) for sememe prediction, which are concluded as Sememe Prediction with Word Embeddings (SPWE) and Sememe Prediction with Sememe Embeddings (SPSE) respectively. However, those methods ignore the internal information within words (e.g., the characters in Chinese words), which is also significant for word understanding, especially for words which are of lowfrequency or do not appear in the corpus at all. In this paper, we take Chinese as an example and explore methods of taking full advantage of both external and internal information of words for sememe prediction. In Chinese, words are composed of one or multiple characters, and most characters have corresponding semantic meanings. As shown by Yin (1984), more than 90% of Chinese characters in modern Chinese corpora are morphemes. Chinese words can be divided into single-morpheme words and compound words, where compound words account for a dominant proportion. The meanings of compound words are closely related to their internal characters as shown in Fig. 1. Taking a compound word “铁匠” (ironsmith) for instance, it consists of two Chinese characters: “铁” (iron) and “匠” (craftsman), and the semantic meaning of “铁匠” can be inferred from the combination of its two characters (iron + craftsman →ironsmith). Even for some single-morpheme words, their semantic meanings may also be deduced from their characters. For example, both characters of the single-morpheme word “徘徊” (hover) represent the meaning of “hover” or “linger”. Therefore, it is intuitive to take the internal character information into consideration for sememe prediction. In this paper, we propose a novel framework for Character-enhanced Sememe Prediction (CSP), which leverages both internal character information and external context for sememe prediction. CSP predicts the sememe candidates for a target word from its word embedding and the corresponding character embeddings. Specifically, we follow SPWE and SPSE as introduced by Xie et al. (2017) to model external information and propose Sememe Prediction with Word-to-Character Filtering (SPWCF) and Sememe Prediction with Character and Sememe Embeddings (SPCSE) to model internal character information. In our experiments, we evaluate our models on the task of sememe prediction using HowNet. The results show that CSP achieves state-of-the-art performance and stays robust for low-frequency words. 
To summarize, the key contributions of this work are as follows: (1) To the best of our knowledge, this work is the first to consider the internal information of characters for sememe prediction. (2) We propose a sememe prediction framework considering both external and internal information, and show the effectiveness and robustness of our models on a real-world dataset. 2 Related Work Knowledge Bases. Knowledge Bases (KBs), aiming to organize human knowledge in structural forms, are playing an increasingly important role as infrastructural facilities of artificial intelligence and natural language processing. KBs rely on manual efforts (Bollacker et al., 2008), automatic extraction (Auer et al., 2007), manual evaluation (Suchanek et al., 2007), automatic completion and alignment (Bordes et al., 2013; Toutanova et al., 2015; Zhu et al., 2017) to build, verify and enrich their contents. WordNet (Miller, 1995) and BabelNet (Navigli and Ponzetto, 2012) are the representative of linguist KBs, where words of similar meanings are grouped to form thesaurus (Nastase and Szpakowicz, 2001). Apart from other linguistic KBs, sememe KBs such as HowNet (Dong and Dong, 2006) can play a significant role in understanding the semantic meanings of concepts in human languages and are favorable for various NLP tasks: information structure annotation (Gan and Wong, 2000), word sense disambiguation (Gan et al., 2002), word representation learning (Niu et al., 2017; Faruqui et al., 2015), and sentiment analysis (Fu et al., 2013) inter alia. Hence, lexical sememe prediction is an important task to construct sememe KBs. Automatic Sememe Prediction. Automatic sememe prediction is proposed by Xie et al. (2017). 2441 For this task, they propose SPWE and SPSE, which are inspired by collaborative filtering (Sarwar et al., 2001) and matrix factorization (Koren et al., 2009) respectively. SPWE recommends the sememes of those words that are close to the unlabelled word in the embedding space. SPSE learns sememe embeddings by matrix factorization (Koren et al., 2009) within the same embedding space of words, and it then recommends the most relevant sememes to the unlabelled word in the embedding space. In these methods, word embeddings are learned based on external context information (Pennington et al., 2014; Mikolov et al., 2013) on large-scale text corpus. These methods do not exploit internal information of words, and fail to handle low-frequency words and outof-vocabulary words. In this paper, we propose to incorporate internal information for lexical sememe prediction. Subword and Character Level NLP. Subword and character level NLP models the internal information of words, which is especially useful to address the out-of-vocabulary (OOV) problem. Morphology is a typical research area of subword level NLP. Subword level NLP has also been widely considered in many NLP applications, such as keyword spotting (Narasimhan et al., 2014), parsing (Seeker and C¸ etino˘glu, 2015), machine translation (Dyer et al., 2010), speech recognition (Creutz et al., 2007), and paradigm completion (Sutskever et al., 2014; Bahdanau et al., 2015; Cotterell et al., 2016a; Kann et al., 2017; Jin and Kann, 2017). Incorporating subword information for word embeddings (Bojanowski et al., 2017; Cotterell et al., 2016b; Chen et al., 2015; Wieting et al., 2016; Yin et al., 2016) facilitates modeling rare words and can improve the performance of several NLP tasks to which the embeddings are applied. 
Besides, people also consider character embeddings which have been utilized in Chinese word segmentation (Sun et al., 2014). The success of previous work verifies the feasibility of utilizing internal character information of words. We design our framework for lexical sememe prediction inspired by these methods. 3 Background and Notation In this section, we first introduce the organization of sememes, senses and words in HowNet. Then we offer a formal definition of lexical sememe prediction and develop our notation. 3.1 Sememes, Senses and Words in HowNet HowNet provides sememe annotations for Chinese words, where each word is represented as a hierarchical tree-like sememe structure. Specifically, a word in HowNet may have various senses, which respectively represent the semantic meanings of the word in the real world. Each sense is defined as a hierarchical structure of sememes. For instance, as shown in the right part of Fig. 1, the word “铁匠” (ironsmith) has one sense, namely ironsmith. The sense ironsmith is defined by the sememe “人” (human) which is modified by sememe “职位” (occupation), “金属” (metal) and “工” (industrial). In HowNet, linguists use about 2, 000 sememes to describe more than 100, 000 words and phrases in Chinese with various combinations and hierarchical structures. 3.2 Formalization of the Task In this paper, we focus on the relationships between the words and the sememes. Following the settings of Xie et al. (2017), we simply ignore the senses and the hierarchical structure of sememes, and we regard the sememes of all senses of a word together as the sememe set of the word. We now introduce the notation used in this paper. Let G = (W, S, T) denotes the sememe KB, where W = {w1, w2, . . . , w|W|} is the set of words, S is the set of sememes, and T ⊆W × S is the set of relation pairs between words and sememes. We denote the Chinese character set as C, with each word wi ∈C+. Each word w has its sememe set Sw = {s|(w, s) ∈T}. Take the word “铁匠” (ironsmith) for example, the sememe set S铁匠(ironsmith) consists of “人” (human), “职 位” (occupation), “金属” (metal) and “工” (industrial). Given a word w ∈C+, the task of lexical sememe prediction aims to predict the corresponding P(s|w) of sememes in S to recommend them to w. 4 Methodology In this section, we present our framework for lexical sememe prediction (SP). For each unlabelled word, our framework aims to recommend the most appropriate sememes based on the internal and external information. Because of introducing character information, our framework can work for both high-frequency and low-frequency words. 2442 Our framework is the ensemble of two parts: sememe prediction with internal information (i.e., internal models), and sememe prediction with external information (i.e., external models). Explicitly, we adopt SPWE, SPSE, and their ensemble (Xie et al., 2017) as external models, and we take SPWCF, SPCSE, and their ensemble as internal models. In the following sections, we first introduce SPWE and SPSE. Then, we show the details of SPWCF and SPCSE. Finally, we present the method of model ensembling. 4.1 SP with External Information SPWE and SPSE are introduced by Xie et al. (2017) as the state of the art for sememe prediction. These methods represent word meanings with embeddings learned from external information, and apply the ideas of collaborative filtering and matrix factorization in recommendation systems for sememe predication. 
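Before the individual scoring functions, a minimal sketch (ours) of the shared structures from the notation in Section 3.2: the sememe sets Sw and the binary word-sememe matrix M that every model below scores against. The toy lists are illustrative; only the annotation of "铁匠" (ironsmith) is taken from Fig. 1, and the second word is left unannotated to play the role of an unlabelled word.

```python
import numpy as np

# Minimal sketch (ours): word list W, sememe list S, annotation pairs T,
# per-word sememe sets S_w, and the binary word-sememe matrix M.
W = ["铁匠", "钟表匠"]                                   # toy word list; 钟表匠 is the unlabelled word
S = ["人", "职位", "金属", "工", "时间"]                  # toy sememe list
T = [("铁匠", "人"), ("铁匠", "职位"), ("铁匠", "金属"), ("铁匠", "工")]  # 铁匠 annotation as in Fig. 1

w_idx = {w: i for i, w in enumerate(W)}
s_idx = {s: j for j, s in enumerate(S)}

S_w = {w: set() for w in W}                              # S_w = {s | (w, s) in T}
M = np.zeros((len(W), len(S)))                           # M_ij = 1 iff s_j is annotated on w_i
for w, s in T:
    S_w[w].add(s)
    M[w_idx[w], s_idx[s]] = 1.0
```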
SP with Word Embeddings (SPWE) is based on the assumption that similar words should have similar sememes. In SPWE, the similarity of words are measured by cosine similarity. The score function P(sj|w) of sememe sj given a word w is defined as: P(sj|w) ∼ X wi∈W cos(w, wi) · Mij · cri, (1) where w and wi are pre-trained word embeddings of words w and wi. Mij ∈{0, 1} indicates the annotation of sememe sj on word wi, where Mij = 1 indicates the word sj ∈Swi and otherwise is not. ri is the descend cosine word similarity rank between w and wi, and c ∈(0, 1) is a hyper-parameter. SP with Sememe Embeddings (SPSE) aims to map sememes into the same low-dimensional space of the word embeddings to predict the semantic correlations of the sememes and the words. This method learns two embeddings s and ¯s for each sememe by solving matrix factorization with the loss function defined as: L = X wi∈W,sj∈S wi · (sj + ¯sj) + bi + b′ j −Mij 2 + λ X sj,sk∈S (sj · ¯sk −Cjk)2 , (2) where M is the same matrix used in SPWE. C indicates the correlations between sememes, in which Cjk is defined as the point-wise mutual information PMI(sj, sk). The sememe embeddings are learned by factorizing the word-sememe matrix M and the sememe-sememe matrix C synchronously with fixed word embeddings. bi and b′ j denote the bias of wi and sj, and λ is a hyperparameter. Finally, the score of sememe sj given a word w is defined as: P(sj|w) ∼w · (sj + ¯sj) . (3) 4.2 SP with Internal Information We design two methods for sememe prediction with only internal character information without considering contexts as well as pre-trained word embeddings. 4.2.1 SP with Word-to-Character Filtering (SPWCF) Inspired by collaborative filtering (Sarwar et al., 2001), we propose to recommend sememes for an unlabelled word according to its similar words based on internal information. Instead of using pre-trained word embeddings, we consider words as similar if they contain the same characters at the same positions. In Chinese, the meaning of a character may vary according to its position within a word (Chen et al., 2015). We consider three positions within a word: Begin, Middle, and End. For example, as shown in Fig. 2, the character at the Begin position of the word “火车站” (railway station) is “火” (fire), while “车” (vehicle) and “站” (station) are at the Middle and End position respectively. The character “站” usually means station when it is at the End position, while it usually means stand at the Begin position like in “站立” (stand), “站 岗哨兵” (standing guard) and “站起来” (stand up). 高等教育 Begin End Middle Figure 2: An example of the position of characters in a word. Formally, for a word w = c1c2...c|w|, we define πB(w) = {c1}, πM(w) = {c2, ..., c|w−1|}, πE(w) = {c|w|}, and Pp(sj|c) ∼ P wi∈W∧c∈πp(wi) Mij P wi∈W∧c∈πp(wi) |Swi|, (4) 2443 that represents the score of a sememe sj given a character c and a position p, where πp may be πB, πM, or πE. M is the same matrix used in Eq. (1). Finally, we define the score function P(sj|w) of sememe sj given a word w as: P(sj|w) ∼ X p∈{B,M,E} X c∈πp(w) Pp(sj|c). (5) SPWCF is a simple and efficient method. It performs well because compositional semantics are pervasive in Chinese compound words, which makes it straightforward and effective to find similar words according to common characters. 4.2.2 SP with Character and Sememe Embeddings (SPCSE) The method Sememe Prediction with Word-toCharacter Filtering (SPWCF) can effectively recommend the sememes that have strong correlations with characters. 
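A minimal sketch (ours; variable and function names are not from the released code) of the SPWCF scoring in Eqs. (4) and (5): position-wise character-sememe statistics are accumulated over the training annotations, and an unlabelled word is scored by summing them over its characters.

```python
from collections import defaultdict

def positions(word):
    """(position, character) pairs for pi_B, pi_M, pi_E; single-character words are excluded in the paper."""
    pairs = [("B", word[0])] + [("M", c) for c in word[1:-1]]
    if len(word) > 1:
        pairs.append(("E", word[-1]))
    return pairs

def train_spwcf(train_words, S_w):
    """Accumulate Eq. (4): P_p(s|c) proportional to sum_i M_ij / sum_i |S_wi| over words with c at position p."""
    num = defaultdict(float)   # (p, c, s) -> sum of M_ij
    den = defaultdict(float)   # (p, c)    -> sum of |S_wi|
    for w in train_words:
        for p, c in positions(w):
            den[(p, c)] += len(S_w[w])
            for s in S_w[w]:
                num[(p, c, s)] += 1.0
    return num, den

def spwcf_scores(word, num, den, sememes):
    """Eq. (5): P(s|w) proportional to the sum of P_p(s|c) over the word's characters; returns a ranking."""
    score = {s: 0.0 for s in sememes}
    for p, c in positions(word):
        if den[(p, c)] > 0:
            for s in sememes:
                score[s] += num[(p, c, s)] / den[(p, c)]
    return sorted(score.items(), key=lambda x: -x[1])

# e.g., with the toy W, S, S_w sketched earlier:
#   spwcf_scores("钟表匠", *train_spwcf(["铁匠"], S_w), S)
# recommends 铁匠's sememes because the two words share "匠" at the End position.
```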
However, just like SPWE, it ignores the relations between sememes. Hence, inspired by SPSE, we propose Sememe Prediction with Character and Sememe Embeddings (SPCSE) to take the relations between sememes into account. In SPCSE, we instead learn the sememe embeddings based on internal character information, then compute the semantic distance between sememes and words for prediction. Inspired by GloVe (Pennington et al., 2014) and SPSE, we adopt matrix factorization in SPCSE, by decomposing the word-sememe matrix and the sememe-sememe matrix simultaneously. Instead of using pre-trained word embeddings in SPSE, we use pre-trained character embeddings in SPCSE. Since the ambiguity of characters is stronger than that of words, multiple embeddings are learned for each character (Chen et al., 2015). We select the most representative character and its embedding to represent the word meaning. Because low-frequency characters are much rare than those low-frequency words, and even lowfrequency words are usually composed of common characters, it is feasible to use pre-trained character embeddings to represent rare words. During factorizing the word-sememe matrix, the character embeddings are fixed. We set Ne as the number of embeddings for each character, and each character c has Ne embeddings c1, ..., cNe. Given a word w and a sememe s, we select the embedding of a character of w closest to the sememe embedding by cosine distance as the representation of the word w, 铁(iron) 1 铁(iron) 2 铁(iron) 3 匠(craftsman) 1 匠(craftsman) 2 匠(craftsman) 3 金属(metal) 金属(metal) prediction 铁匠(ironsmith) 0.87 0.47 0.70 0.88 1.15 1.04 Figure 3: An example of adopting multipleprototype character embeddings. The numbers are the cosine distances. The sememe “金属” (metal) is the closest to one embedding of “铁” (iron). as shown in Fig. 3. Specifically, given a word w = c1...c|w| and a sememe sj, we define ˆk, ˆr = arg min k,r  1 −cos(cr k, (s′ j + ¯s′ j))  , (6) where ˆk and ˆr indicate the indices of the character and its embedding closest to the sememe sj in the semantic space. With the same word-sememe matrix M and sememe-sememe correlation matrix C in Eq. (2), we learn the sememe embeddings with the loss function: L = X wi∈W,sj∈S  cˆr ˆk · s′ j + ¯s′ j  + bc ˆk + b′′ j −Mij 2 + λ′ X sj,sq∈S s′ j · ¯s′ q −Cjq 2 , (7) where s′ j and ¯s′ j are the sememe embeddings for sememe sj, and cˆr ˆk is the embedding of the character that is the closest to sememe sj within wi. Note that, as the characters and the words are not embedded into the same semantic space, we learn new sememe embeddings instead of using those learned in SPSE, hence we use different notations for the sake of distinction. bc k and b′′ j denote the biases of ck and sj, and λ′ is the hyper-parameter adjusting the two parts. Finally, the score function of word w = c1...c|w| is defined as: P(sj|w) ∼cˆr ˆk · s′ j + ¯s′ j  . (8) 4.3 Model Ensembling SPWCF / SPCSE and SPWE / SPSE take different sources of information as input, which means that they have different characteristics: SPWCF / SPCSE only have access to internal information, while SPWE / SPSE can only make use of external 2444 information. On the other hand, just like the difference between SPWE and SPSE, SPWCF originates from collaborative filtering, whereas SPCSE uses matrix factorization. All of those methods have in common that they tend to recommend the sememes of similar words, but they diverge in their interpretation of similar. 
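As described in the remainder of this section, the component models are combined by simple weighted addition. The sketch below (ours) shows one way to implement that, assuming each component exposes a callable mapping a word to a {sememe: score} dictionary and that scores are min-max normalized before mixing; the normalization step is our assumption, not something the paper specifies.

```python
def min_max(scores):
    """Normalize a {sememe: score} dict to [0, 1] so component weights are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    return {s: (v - lo) / (hi - lo) if hi > lo else 0.0 for s, v in scores.items()}

def ensemble(word, scorers, weights, sememes):
    """Weighted addition of component models (e.g., SPWCF, SPCSE, SPWE, SPSE for CSP)."""
    total = {s: 0.0 for s in sememes}
    for score_fn, lam in zip(scorers, weights):
        norm = min_max(score_fn(word))
        for s in sememes:
            total[s] += lam * norm.get(s, 0.0)
    return sorted(total.items(), key=lambda x: -x[1])

# CSP for a word with a reliable embedding:
#   ensemble(w, [spwcf, spcse, spwe, spse], [l_wcf, l_cse, l_we, l_se], sememes)
# For extremely low-frequency words, keep only the internal scorers (spwcf, spcse).
```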
SPCSE word high-frequency words low-frequency words Legend SPWCF SPSE SPWE External Internal CSP Figure 4: The illustration of model ensembling. Hence, to obtain better prediction performance, it is necessary to combine these models. We denote the ensemble of SPWCF and SPCSE as the internal model, and we denote the ensemble of SPWE and SPSE as the external model. The ensemble of the internal and the external models is our novel framework CSP. In practice, for words with reliable word embeddings, i.e., highfrequency words, we can use the integration of the internal and the external models; for words with extremely low frequencies (e.g., having no reliable word embeddings), we can just use the internal model and ignore the external model, because the external information is noise in this case. Fig. 4 shows model ensembling in different scenarios. For the sake of comparison, we use the integration of SPWCF, SPCSE, SPWE, and SPSE as CSP in our all experiments. In this paper, two models are integrated by simple weighted addition. 5 Experiments In this section, we evaluate our models on the task of sememe prediction. Additionally, we analyze the performance of different methods for various word frequencies. We also execute an elaborate case study to demonstrate the mechanism of our methods and the advantages of using internal information. 5.1 Dataset We use the human-annotated sememe KB HowNet for sememe prediction. In HowNet, 103, 843 words are annotated with 212, 539 senses, and each sense is defined as a hierarchical structure of sememes. There are about 2, 000 sememes in HowNet. However, the frequencies of some sememes in HowNet are very low, such that we consider them unimportant and remove them. Our final dataset contains 1, 400 sememes. For learning the word and character embeddings, we use the Sogou-T corpusii (Liu et al., 2012), which contains 2.7 billion words. 5.2 Experimental Settings In our experiments, we evaluate SPWCF, SPCSE, and SPWCF + SPCSE which only use internal information, and the ensemble framework CSP which uses both internal and external information for sememe prediction. We use the stateof-the-art models from Xie et al. (2017) as our baselines. Additionally, we use the SPWE model with word embeddings learned by fastText (Bojanowski et al., 2017) that considers both internal and external information as a baseline. For the convenience of comparison, we select 60, 000 high-frequency words in Sogou-T corpus from HowNet. We divide the 60, 000 words into train, dev, and test sets of size 48, 000, 6, 000, and 6, 000, respectively, and we keep them fixed throughout all experiments except for Section 5.4. In Section 5.4, we utilize the same train and dev sets, but use other words from HowNet as the test set to analyze the performance of our methods for different word frequency scenarios. We select the hyper-parameters on the dev set for all models including the baselines and report the evaluation results on the test set. We set the dimensions of the word, sememe, and character embeddings to be 200. The word embeddings are learned by GloVe (Pennington et al., 2014). For the baselines, in SPWE, the hyper-parameter c is set to 0.8, and the model considers no more than K = 100 nearest words. We set the probability of decomposing zero elements in the word-sememe matrix in SPSE to be 0.5%. λ in Eq. (2) is 0.5. The model is trained for 20 epochs, and the initial learning rate is 0.01, which decreases through iterations. 
For fastText, we use skip-gram with hierarchical softmax to learn word embeddings, and we set the minimum length of character n-grams to be 1 and the maximum length ii Sogou-T corpus is provided by Sogou Inc., a Chinese commercial search engine company. https://www. sogou.com/labs/resource/t.php 2445 of character n-grams to be 2. For model ensembling, we use λSPWE λSPSE = 2.1 as the addition weight. For SPCSE, we use Cluster-based Character Embeddings (Chen et al., 2015) to learn pretrained character embeddings, and we set Ne to be 3. We set λ′ in Eq. (7) to be 0.1, and the model is trained for 20 epochs. The initial learning rate is 0.01 and decreases during training as well. Since generally each character can relate to about 15 20 sememes, we set the probability of decomposing zero elements in the word-sememe matrix in SPCSE to be 2.5%. The ensemble weight of SPWCF and SPCSE λSPWCF λSPCSE = 4.0. For better performance of the final ensemble model CSP, we set λ = 0.1 and λSPWE λSPSE = 0.3125, though 0.5 and 2.1 are the best for SPSE and SPWE + SPSE. Finally, we choose λinternal λexternal = 1.0 to integrate the internal and external models. 5.3 Sememe Prediction 5.3.1 Evaluation Protocol The task of sememe prediction aims to recommend appropriate sememes for unlabelled words. We cast this as a multi-label classification task, and adopt mean average precision (MAP) as the evaluation metric. For each unlabelled word in the test set, we rank all sememe candidates with the scores given by our models as well as baselines, and we report the MAP results. The results are reported on the test set, and the hyper-parameters are tuned on the dev set. 5.3.2 Experiment Results The evaluation results are shown in Table 1. We can observe that: Method MAP SPSE 0.411 SPWE 0.565 SPWE+SPSE 0.577 SPWCF 0.467 SPCSE 0.331 SPWCF + SPCSE 0.483 SPWE + fastText 0.531 CSP 0.654 Table 1: Evaluation results on sememe prediction. The result of SPWCF + SPCSE is bold for comparing with other methods (SPWCF and SPCSE) which use only internal information. (1) Considerable improvements are obtained via model ensembling, and the CSP model achieves state-of-the-art performance. CSP combines the internal character information with the external context information, which significantly and consistently improves performance on sememe prediction. Our results confirm the effectiveness of a combination of internal and external information for sememe prediction; since different models focus on different features of the inputs, the ensemble model can absorb the advantages of both methods. (2) The performance of SPWCF + SPCSE is better than that of SPSE, which means using only internal information could already give good results for sememe prediction as well. Moreover, in internal models, SPWCF performs much better than SPCSE, which also implies the strong power of collaborative filtering. (3) The performance of SPWCF + SPCSE is worse than SPWE + SPSE. This indicates that it is still difficult to figure out the semantic meanings of a word without contextual information, due to the ambiguity and meaning vagueness of internal characters. Moreover, some words are not compound words (e.g., single-morpheme words or transliterated words), whose meanings can hardly be inferred directly by their characters. In Chinese, internal character information is just partial knowledge. We present the results of SPWCF and SPCSE merely to show the capability to use the internal information in isolation. 
In our case study, we will demonstrate that internal models are powerful for low-frequency words, and can be used to predict senses that do not appear in the corpus. 5.4 Analysis on Different Word Frequencies To verify the effectiveness of our models on different word frequencies, we incorporate the remaining words in HowNetiii into the test set. Since the remaining words are low-frequency, we mainly focus on words with long-tail distribution. We count the number of occurrences in the corpus for each word in the test set and group them into eight categories by their frequency. The evaluation results are shown in Table 2, from which we can observe that: iii In detail, we do not use the numeral words, punctuations, single-character words, the words do not appear in Sogou-T corpus (because they need to appear at least for one time to get the word embeddings), and foreign abbreviations. 2446 word frequency ⩽50 51– 100 101 – 1,000 1,001 – 5,000 5,001 – 10,000 10,001 – 30,000 >30,000 occurrences 8537 4868 3236 2036 663 753 686 SPWE 0.312 0.437 0.481 0.558 0.549 0.556 0.509 SPSE 0.187 0.273 0.339 0.409 0.407 0.424 0.386 SPWE + SPSE 0.284 0.414 0.478 0.556 0.548 0.554 0.511 SPWCF 0.456 0.414 0.400 0.443 0.462 0.463 0.479 SPCSE 0.309 0.291 0.286 0.312 0.339 0.353 0.342 SPWCF + SPCSE 0.467 0.437 0.418 0.456 0.477 0.477 0.494 SPWE + fastText 0.495 0.472 0.462 0.520 0.508 0.499 0.490 CSP 0.527 0.555 0.555 0.626 0.632 0.641 0.624 Table 2: MAP scores on sememe prediction with different word frequencies. words models Top 5 sememes 钟表匠 (clockmaker) internal 人 人 人(human), 职 职 职位 位 位(occupation), 部件(part), 时 时 时间 间 间(time), 告 告 告诉 诉 诉(tell) external 人 人 人(human), 专(ProperName), 地方(place), 欧洲(Europe), 政(politics) ensemble 人 人 人(human), 职 职 职位 位 位(occupation), 告 告 告诉 诉 诉(tell), 时 时 时间 间 间(time), 用 用 用具 具 具(tool) 奥斯卡 (Oscar) internal 专 专 专(ProperName), 地方(place), 市(city), 人(human), 国都(capital) external 奖 奖 奖励 励 励(reward), 艺 艺 艺(entertainment), 专 专 专(ProperName), 用具(tool), 事 事 事情 情 情(fact) ensemble 专 专 专(ProperName), 奖 奖 奖励 励 励(reward), 艺 艺 艺(entertainment), 著名(famous), 地方(place) Table 3: Examples of sememe prediction. For each word, we present the top 5 sememes predicted by the internal model, external model and the final ensemble model (CSP). Bold sememes are correct. (1) The performances of SPSE, SPWE, and SPWE + SPSE decrease dramatically with low-frequency words compared to those with high-frequency words. On the contrary, the performances of SPWCF, SPCSE, and SPWCF + SPCSE, though weaker than that on highfrequency words, is not strongly influenced in the long-tail scenario. The performance of CSP also drops since CSP also uses external information, which is not sufficient with low-frequency words. These results show that the word frequencies and the quality of word embeddings can influence the performance of sememe prediction methods, especially for external models which mainly concentrate on the word itself. However, the internal models are more robust when encountering longtail distributions. Although words do not need to appear too many times for learning good word embeddings, it is still hard for external models to recommend sememes for low-frequency words. While since internal models do not use external word embeddings, they can still work in such scenario. As for the performance on high-frequency words, since these words are used widely, the ambiguity of high-frequency words is thus much stronger, while the internal models are still stable for high-frequency words. 
(2) The results also indicate that even lowfrequency words in Chinese are mostly composed of common characters, and thus it is possible to utilize internal character information for sememe prediction on words with long-tail distribution (even on those new words that never appear in the corpus). Moreover, the stability of the MAP scores given by our methods on various word frequencies also reflects the reliability and universality of our models in real-world sememe annotations in HowNet. We will give detailed analysis in our case study. 5.5 Case Study The results of our main experiments already show the effectiveness of our models. In this case study, we further investigate the outputs of our models to confirm that character-level knowledge is truly incorporated into sememe prediction. In Table 3, we demonstrate the top 5 sememes for “钟表匠” (clockmaker) and “奥斯卡” (Oscar, i.e., the Academy Awards). “钟表匠” (clockmaker) is a typical compound word, while “奥 斯卡” (Oscar) is a transliterated word. For each word, the top 5 results generated by the internal model (SPWCF + SPCSE), the external model (SPWE + SPSE) and the ensemble model (CSP) are listed. The word “钟表匠” (clockmaker) is composed of three characters: “钟” (bell, clock), “表” (clock, watch) and “匠” (craftsman). Humans can intuitively conclude that clock + craftsman →clockmaker. However, the external model does not per2447 form well for this example. If we investigate the word embedding of “钟表匠” (clockmaker), we can know why this method recommends these unreasonable sememes. The closest 5 words in the train set to “钟表匠” (clockmaker) by cosine similarity of their embeddings are: “瑞士” (Switzerland), “卢梭” (Jean-Jacques Rousseau), “鞋匠” (cobbler), “发明家” (inventor) and “奥地利人” (Austrian). Note that none of these words are directly relevant to bells, clocks or watches. Hence, the sememes “时间” (time), “告诉” (tell), and “用 具” (tool) cannot be inferred by those words, even though the correlations between sememes are introduced by SPSE. In fact, those words are related to clocks in an indirect way: Switzerland is famous for watch industry; Rousseau was born into a family that had a tradition of watchmaking; cobbler and inventor are two kinds of occupations as well. With the above reasons, those words usually co-occur with “钟表匠” (clockmaker), or usually appear in similar contexts as “钟表匠” (clockmaker). It indicates that related word embeddings as used in an external model do not always recommend related sememes. The word “奥斯卡” (Oscar) is created by the pronunciation of Oscar. Therefore, the meaning of each character in “奥斯卡” (Oscar) is unrelated to the meaning of the word. Moreover, the characters “奥”, “斯”, and “卡” are common among transliterated words, thus the internal method recommends “专” (ProperName) and “地方” (place), etc., since many transliterated words are proper nouns or place names. 6 Conclusion and Future Work In this paper, we introduced character-level internal information for lexical sememe prediction in Chinese, in order to alleviate the problems caused by the exclusive use of external information. We proposed a Character-enhanced Sememe Prediction (CSP) framework which integrates both internal and external information for lexical sememe prediction and proposed two methods for utilizing internal information. We evaluated our CSP framework on the classical manually annotated sememe KB HowNet. 
In our experiments, our methods achieved promising results and outperformed the state of the art on sememe prediction, especially for low-frequency words. We will explore the following research directions in the future: (1) Concepts in HowNet are annotated with hierarchical structures of senses and sememes, but those are not considered in this paper. In the future, we will take structured annotations into account. (2) It would be meaningful to take more information into account for blending external and internal information and design more sophisticated methods. (3) Besides Chinese, many other languages have rich subword-level information. In the future, we will explore methods of exploiting internal information in other languages. (4) We believe that sememes are universal for all human languages. We will explore a general framework to recommend and utilize sememes for other NLP tasks. Acknowledgments This research is part of the NExT++ project, supported by the National Research Foundation, Prime Minister’s Office, Singapore under its IRC@Singapore Funding Initiative. This work is also supported by the National Natural Science Foundation of China (NSFC No. 61661146007 and 61572273) and the research fund of Tsinghua University-Tencent Joint Laboratory for Internet Innovation Technology. Hao Zhu is supported by Tsinghua University Initiative Scientific Research Program. We would like to thank Katharina Kann, Shen Jin, and the anonymous reviewers for their helpful comments. References S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In Proceedings of ISWC, pages 722–735. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Leonard Bloomfield. 1926. A set of postulates for the science of language. Language, 2(3):153–164. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL, 5:135–146. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of SIGMOD, pages 1247–1250. 2448 Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Proceedings of NIPS, pages 2787–2795. Xinxiong Chen, Lei Xu, Zhiyuan Liu, Maosong Sun, and Huan-Bo Luan. 2015. Joint learning of character and word embeddings. In Proceedings of IJCAI, pages 1236–1242. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016a. The SIGMORPHON 2016 shared task—morphological reinflection. In Proceedings of SIGMORPHON, pages 10–22. Ryan Cotterell, Hinrich Sch¨utze, and Jason Eisner. 2016b. Morphological smoothing and extrapolation of word embeddings. In Proceedings of ACL, pages 1651–1660. Mathias Creutz, Teemu Hirsim¨aki, Mikko Kurimo, Antti Puurula, Janne Pylkk¨onen, Vesa Siivola, Matti Varjokallio, Ebru Arisoy, Murat Sarac¸lar, and Andreas Stolcke. 2007. Analysis of morph-based speech recognition and the modeling of out-ofvocabulary words across languages. In Processings of HLT-NAACL, pages 380–387. Zhendong Dong and Qiang Dong. 2006. HowNet and the computation of meaning. World Scientific. Xiangyu Duan, Jun Zhao, and Bo Xu. 2007. Word sense disambiguation through sememe labeling. 
In Proceedings of IJCAI, pages 1594–1599. Chris Dyer, Jonathan Weese, Hendra Setiawan, Adam Lopez, Ferhan Ture, Vladimir Eidelman, Juri Ganitkevitch, Phil Blunsom, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In Proceedings of the ACL 2010 System Demonstrations, pages 7–12. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of HLT-NAACL, pages 1606–1615. Xianghua Fu, Liu Guo, Guo Yanyan, and Wang Zhiqiang. 2013. Multi-aspect sentiment analysis for Chinese online social reviews based on topic modeling and HowNet lexicon. Knowledge-Based Systems, 37:186–195. Kok-Wee Gan, Chi-Yung Wang, and Brian Mak. 2002. Knowledge-based sense pruning using the HowNet: an alternative to word sense disambiguation. In Proceedings of ISCSLP. Kok Wee Gan and Ping Wai Wong. 2000. Annotating information structures in Chinese texts using HowNet. In Proceedings of The Second Chinese Language Processing Workshop, pages 85–92. Minlie Huang, Borui Ye, Yichen Wang, Haiqiang Chen, Junjun Cheng, and Xiaoyan Zhu. 2014. New word detection for sentiment analysis. In Proceedings of ACL, pages 531–541. Huiming Jin and Katharina Kann. 2017. Exploring cross-lingual transfer of morphological knowledge in sequence-to-sequence models. In Proceedings of SCLeM, pages 70–75. Katharina Kann, Ryan Cotterell, and Hinrich Sch¨utze. 2017. One-shot neural cross-lingual transfer for paradigm completion. In Proceedings of ACL, pages 1993–2003. Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer, 42(8). Yiqun Liu, Fei Chen, Weize Kong, Huijia Yu, Min Zhang, Shaoping Ma, and Liyun Ru. 2012. Identifying web spam with the wisdom of the crowds. ACM Transactions on the Web, 6(1):2:1–2:30. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111–3119. George A Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39–41. Karthik Narasimhan, Damianos Karakos, Richard Schwartz, Stavros Tsakalidis, and Regina Barzilay. 2014. Morphological segmentation for keyword spotting. In Proceedings of EMNLP, pages 880– 885. Vivi Nastase and Stan Szpakowicz. 2001. Word sense disambiguation in Roget’s thesaurus using WordNet. In Proceedings of the Workshop on WordNet and Other Lexical Resources: Applications, Extensions and Customizations. Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217– 250. Yilin Niu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Improved word representation learning with sememes. In Proceedings of ACL, pages 2049– 2058. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP, pages 1532–1543. Badrul Sarwar, George Karypis, Joseph Konstan, and John Riedl. 2001. Item-based collaborative filtering recommendation algorithms. In Proceedings of WWW, pages 285–295. 2449 Wolfgang Seeker and ¨Ozlem C¸ etino˘glu. 2015. A graph-based lattice dependency parser for joint morphological segmentation and syntactic analysis. TACL, 3:359–373. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. 
Yago: A core of semantic knowledge. In Proceedings of WWW, pages 697–706. Yaming Sun, Lei Lin, Nan Yang, Zhenzhou Ji, and Xiaolong Wang. 2014. Radical-enhanced Chinese character embedding. In Proceedings of ICONIP, pages 279–286. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS, pages 3104–3112. Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of EMNLP, pages 1499–1509. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Charagram: Embedding words and sentences via character n-grams. In Proceedings of EMNLP, pages 1504–1515. Yunqing Xia, Taotao Zhao, Jianmin Yao, and Peng Jin. 2011. Measuring Chinese-English cross-lingual word similarity with HowNet and parallel corpus. In Proceedings of CICLing, pages 221–233. Springer. Ruobing Xie, Xingchi Yuan, Zhiyuan Liu, and Maosong Sun. 2017. Lexical sememe prediction via word embeddings and matrix factorization. In Proceedings of IJCAI, pages 4200–4206. Binyong Yin. 1984. Quantitative research on Chinese morphemes. Studies of the Chinese Language, 5:338–347. Rongchao Yin, Quan Wang, Peng Li, Rui Li, and Bin Wang. 2016. Multi-granularity Chinese word embedding. In Proceedings of EMNLP, pages 981– 986. Hao Zhu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Iterative entity alignment via joint knowledge embeddings. In Proceedings of IJCAI, pages 4258–4264.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2450–2461 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2450 SemAxis: A Lightweight Framework to Characterize Domain-Specific Word Semantics Beyond Sentiment Jisun An† Haewoon Kwak† Yong-Yeol Ahn§ †Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar §Indiana University, Bloomington, IN, USA [email protected] [email protected] [email protected] Abstract Because word semantics can substantially change across communities and contexts, capturing domain-specific word semantics is an important challenge. Here, we propose SEMAXIS, a simple yet powerful framework to characterize word semantics using many semantic axes in wordvector spaces beyond sentiment. We demonstrate that SEMAXIS can capture nuanced semantic representations in multiple online communities. We also show that, when the sentiment axis is examined, SEMAXIS outperforms the state-of-theart approaches in building domain-specific sentiment lexicons. 1 Introduction In lexicon-based text analysis, a common, tacit assumption is that the meaning of each word does not change significantly across contexts. This approximation, however, falls short because context can strongly alter the meaning of words (Fischer, 1958; Eckert and McConnell-Ginet, 2013; Hovy, 2015; Hamilton et al., 2016b). For instance, the word kill may be used much more positively in the context of video games than it would be in a news story; the word soft may be used much more negatively in the context of sports than it is in the context of toy animals (Hamilton et al., 2016a). Thus, lexicon-based analysis exhibits a clear limitation when two groups with strongly dissimilar lexical contexts are compared. Recent breakthroughs in vector-space representation, such as word2vec (Mikolov et al., 2013b), provide new opportunities to tackle this challenge of context-dependence, because in these approaches, the representation of each word is learned from its context. For instance, a recent study shows that a propagation method on the vector space embedding can infer contextdependent sentiment values of words (Hamilton et al., 2016a). Yet, it remains to be seen whether it is possible to generalize this idea to general word semantics other than sentiment. In this work, we propose SEMAXIS, a lightweight framework to characterize domainspecific word semantics beyond sentiment. SEMAXIS characterizes word semantics with respect to many semantic perspectives in a domainspecific word-vector space. To systematically discover the manifold of word semantics, we induce 732 semantic axes based on the antonym pairs from ConceptNet (Speer et al., 2017). We would like to emphasize that, although some of the induced axes can be considered as an extended version of sentiment analysis, such as an axis of ‘respectful’ (positive) and ‘disrespectful’ (negative), some cannot be mapped to a positive and negative relationship, such as ‘exogeneous’ and ‘endogeneous,’ and ‘loose’ and ‘monogamous.’ Based on this rich set of semantic axes, SEMAXIS captures nuanced semantic representations across corpora. The key contributions of this paper are: • We propose a general framework to characterize the manifold of domain-specific word semantics. • We systematically identify semantic axes based on the antonym pairs in ConceptNet. • We demonstrate that SEMAXIS can capture semantic differences between two corpora. 
• We provide a systematic evaluation in comparison to the state-of-the-art, domainspecific sentiment lexicon construction methodologies. Although the idea of defining a semantic axis and assessing the meaning of a word with a vector projection is not new, it has not been demonstrated that this simple method can effectively in2451 duce context-aware semantic lexicons. All of the inferred lexicons along with code for SEMAXIS and all methods evaluated are made available in the SEMAXIS package released with this paper1. 2 Related Work For decades, researchers have been developing computational techniques for text analysis, including: sentiment analysis (Pang and Lee, 2004), stance detection (Biber and Finegan, 1988), point of view (Wiebe, 1994), and opinion mining (Pang and Lee, 2004). The practice of creating and sharing large-scale annotated lexicons also accelerated the research (Stone et al., 1966; Bradley and Lang, 1999; Pennebaker et al., 2001; Dodds et al., 2011; Mohammad et al., 2016, 2017). These approaches can be roughly grouped into two major categories: lexicon-based approach (Turney, 2002; Taboada et al., 2011) and classification-based approach (Pang et al., 2002; Kim, 2014; Socher et al., 2013). Although the recent advancement of neural networks increases the potential of the latter approach, the former has been widely used for its simplicity and transparency. ANEW (Bradley and Lang, 1999), LIWC (Pennebaker et al., 2001), SOCAL (Taboada et al., 2011), SentiWordNet (Esuli and Sebastiani, 2006), and LabMT (Dodds et al., 2011) are well-known lexicons. A clear limitation of the lexicon-based approach is that it overlooks the context-dependent semantic changes. Scholars reported that the meaning of a word can be altered by context, such as communities (Yang and Eisenstein, 2015; Hamilton et al., 2016a), diachronic changes (Hamilton et al., 2016b) or demographic (Rosenthal and McKeown, 2011; Eckert and McConnell-Ginet, 2013; Green, 2002; Hovy, 2015), geography (Trudgill, 1974), political and cultural attitudes (Fischer, 1958) and personal variations (Yang and Eisenstein, 2017). Recently, a few studies have shown the importance of taking such context into accounts. For example, Hovy et al. (2015) showed that, by including author demographics such as age and gender, the accuracy of sentiment analysis and topic classification can be improved. It was also shown that, without domain-specific lexicons, the performance of sentiment analysis can be significantly degraded (Hamilton et al., 2016a). Building domain-specific sentiment lexicons 1https://github.com/ghdi6758/SemAxis through human input (crowdsourcing or experts) requires not only significant resources but also careful control of biases (Dodds et al., 2011; Mohammad and Turney, 2010). The challenge is exacerbated because ‘context’ is difficult to concretely operationalize and there can be numerous contexts of interest. For resource-scarce languages, such problems become even more critical (Hong et al., 2013). Automatically building lexicons from web-scale resources (Velikovich et al., 2010; Tang et al., 2014) may solve this problem but poses a severe risk of unintended biases (Loughran and McDonald, 2011). Inducing domain-specific lexicons from the unlabeled corpora reduces the cost of dictionary building (Hatzivassiloglou and McKeown, 1997; Rothe et al., 2016). 
Although earlier research utilize syntactic (grammatical) structures (Hatzivassiloglou and McKeown, 1997; Widdows and Dorow, 2002), the approach of learning wordvector representations has gained a lot of momentum (Rothe et al., 2016; Hamilton et al., 2016a; Velikovich et al., 2010; Fast et al., 2016). The most relevant work is SENTPROP (Hamilton et al., 2016a), which constructs domainspecific sentiment lexicons using graph propagation techniques (Velikovich et al., 2010; Rao and Ravichandran, 2009). In contrast to SENTPROP’s sentiment-focused approach, we provide a framework to understand the semantics of words with respect to 732 semantic axes based on ConceptNet (Speer et al., 2017). 3 SEMAXIS Framework Our framework, SEMAXIS, involves three steps: constructing a word embedding, defining a semantic axis, and mapping words onto the semantic axis. Although they seem straightforward, the complexities and challenges in each step can add up. In particular, we tackle the issues of treating small corpora and selecting pole words. 3.1 The Basics of SEMAXIS 3.1.1 Building word embeddings The first step in our approach is to obtain word vectors from a given corpus. In principle, any standard method, such as Positive Pointwise Mutual Information (PPMI), Singular-Value Decomposition (SVD), or word2vec, can be used. Here, we use the word2vec model because word2vec is easier to train and is known to be more robust than 2452 competing methods (Levy et al., 2015). 3.1.2 Defining a semantic axis and computing the semantic axis vector A semantic axis is defined by the vector between two sets of ‘pole’ words that are antonymous to each other. For instance, a sentiment axis can be defined by a set of the most positive words on one end and the most negative words on the other end. Similarly, any antonymous word pair can be used to define a semantic axis (e.g., ‘clean’ vs. ‘dirty’ or ‘respectful’ vs. ‘disrespectful’). Once two sets of pole words for the corresponding axis are chosen, we compute the average vector for each pole. Then, by subtracting one vector from the other, we obtain the semantic axis vector that encodes the antonymous relationship between two sets of words. More formally, let S+={v+ 1 , v+ 2 , ..., v+ n } and S−={v− 1 , v− 2 , ..., v− m} be two sets of pole word vectors that have an antonym relationship. Then, the average vectors for each set are computed as V+= 1 n Pn 1 v+ i and V−= 1 m Pm 1 v− j . From the two average vectors, the semantic axis vector, Vaxis (from S−to S+), can be defined as: Vaxis = V+ −V− (1) 3.1.3 Projecting words onto a semantic axis Once the semantic axis vector is obtained, we can compute the cosine similarity between the axis vector and a word vector. The resulting cosine similarity captures how closely the word is aligned to the semantic axis. Given a word vector vw for a word w, the score of the word w along with the given semantic axis, Vaxis, is computed as: score(w)Vaxis = cos(vw, Vaxis) = vw · Vaxis ∥vw ∥∥Vaxis ∥ (2) A higher score means that the word w is more closely aligned to S+ than S−. When Vaxis is a sentiment axis, a higher score is corresponds to more positive sentiment. 3.2 SEMAXIS for Comparative Text Analysis Although aforementioned steps are conceptually simple, there are two practical challenges: 1) dealing with small corpus and 2) finding good pole words for building a semantic axis. 
3.2.1 Semantic relations encoded in word embeddings Since semantic relations are particularly important in our method, we need to ensure that our word embedding maintains general semantic relations. This can be evaluated by analogy tasks. In particular, we use the Google analogy test dataset (Mikolov et al., 2013a), which contains 19,544 questions — 8,869 semantic and 10,675 syntactic questions — in 14 relation types, such as capital-world pairs and opposite relationships. 3.2.2 Dealing with small corpus As in other machine learning tasks, the amount of data critically influences the performance of word embedding methods. However, the corpora of our interest are often too small to facilitate the learning of rich semantic relationships therein. To mitigate this issue, we propose to pre-train a word embedding using a background corpus and update with the target corpora. In doing so, we capture the semantic changes while maintaining general semantic relations offered by the large reference model. The vector-space embedding drifts from the reference model as we train with the target corpus. If trained too much with the smaller target corpus, it will lose the ‘good’ initial embedding from the huge reference corpus. If trained too little, it will not be able to capture context-dependent semantic changes. Our goal is thus to minimize the loss in general semantic relations while maximizing the characteristic semantic relations in the new texts. Consider a corpus of our interest C and a reference corpus R. The model M is pre-trained on R, and then we start training it on C. We use the superscript e to represent the e-th epoch of training. That is, Me C is the model after the e-th epoch trained on C. Then, we evaluate the model regarding two aspects: general semantic relations and context-dependent semantic relations. The former is measured by the overall accuracy of the analogy test (Mikolov et al., 2013a). The latter is measured by tracking the semantic changes of the top k words in the given corpus C. The semantic changes of the words are measured by the changes in their scores, ∆, on a certain axis; for instance, a sentiment axis, between consecutive epochs. We stop learning when two conditions are satisfied: (1) When the accuracy of the analogy test drops by α; and (2) When ∆is lower than β. In principle, the model can be updated with the target corpus as long as the accuracy does not drop. We then use β 2453 to control the epochs. When ∆is low, the gain by updating the model becomes negligible compared to the cost and thus we can stop updating. 3.3 Identifying rich semantic axes for SEMAXIS The primary advantage of SEMAXIS is that it can be easily extended to examine diverse perspectives of texts as it is lightweight. Although the axis can be defined by any pair of words in principle, we propose a systematic way to define the axes. 3.3.1 732 Pre-defined Semantic Axes We begin with a pair of antonyms, called initial pole words. For instance, a sentiment axis, which is a basis of sentiment analyses, can be defined by a pair of sentiment antonyms, such as ‘good’ and ‘bad.’ To build a comprehensive set of initial pole words, we compile a list of antonyms from ConceptNet 5.5, which is the latest release of a knowledge graph among concepts (Speer et al., 2017). We extract all the antonym concepts marked as ‘/r/Antonym’ edges. Then, we filter out nonEnglish concepts and multi-word concepts. In addition, we eliminate duplicated antonyms that involve synonyms. 
For instance, only one of the (empower, prohibit) and (empower, forbid) needs to be kept because ‘prohibit’ and ‘forbid’ are synonyms, marked as ‘/r/Synonym’ in ConceptNet. To further refine the antonym pairs, we create a crowdsourcing task on Figure Eight, formerly known as CrowdFlower. Specifically, we ask crowdworkers: Do these two words have opposite meanings? We include those word pairs that a majority of crowdworkers agree to have an opposite meaning. The word pairs that the majority of crowdsource workers disagree were mostly erroneous antonym pairs, such as ‘5’ and ‘3’, and ‘have’ and ‘has.’ We then filter out the antonyms that are highly similar to each other. For example, (‘advisedly’ and ‘accidentally’) and (‘purposely’ and ‘accidentally’) show the cosine similarity of 0.5148, while ‘advisedly’ and ‘purposely’ are not marked as synonyms in ConceptNet. Although we use the threshold of 0.4 in this work, a different threshold can be chosen depending on the purpose. Finally, we eliminate concepts that do not appear in the pre-trained Google News 100B word embeddings. As a result, we obtain 732 pairs of antonyms. Each pair of antonyms becomes initial pole words to define one of the diverse axes for SEMAXIS. We assess the semantic diversity of the axes by computing cosine similarity between every possible pair of the axes. The absolute mean value of the cosine similarity is 0.062, and the standard deviation is 0.050. These low cosine similarity and standard deviation values indicate that the chosen axes have a variety of directions, covering diverse and distinct semantics. 3.3.2 Augmenting pole words We then expand the two initial pole words to larger sets of pole words, called expanded pole words, to obtain more robust results. If we use only two initial pole words to define the corresponding axis, the result will be sensitive to the choice of those words. Since the initial pole words are not necessarily the best combinations possible, we would like to augment it so that it is more robust to the choice of the initial pole words. To address this issue, we find the l closest words of each initial pole word in the word embedding. We then compute the geometric center (average vector) of l+1 words (including the initial pole word) and regard it as the vector representation of that pole of the axis. For instance, refining an axis representing a ‘good’ and ‘bad’ relation, we first find the l closest words for each of ‘good’ and ‘bad’ and then compute the geometric center of them. The newly computed geometric centers then become both ends of the axis representing a ‘good’ and ‘bad’ relation. We demonstrate how this approach improves the explanatory power of an axis describing a corresponding antonym relation in Section 4.3. 4 SEMAXIS Validation In this section, we quantitatively evaluate our approach using the ground-truth data and by comparing our method against the standard baselines and state-of-the-art approaches. We reproduce the evaluation task introduced by Hamilton et al. (2016a), recreating Standard English and Twitter sentiment lexicons for evaluation. We then compare the accuracy of sentiment classification with three other methods that generate domain-specific sentiment lexicons. It is worth noting that we validate SEMAXIS based on a sentiment axis mainly due to the availability of the well-established ground-truth data and evaluation process. 
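Returning briefly to the pole-word expansion of Section 3.3.2, a rough sketch of the procedure is given below. It assumes a gensim-style `KeyedVectors` object `wv` with a `most_similar` method, and the helper names are illustrative rather than the authors' code.

```python
# Sketch of the pole-word expansion: each initial pole word is replaced by the
# geometric center (average vector) of itself and its l closest neighbors.
import numpy as np

def expand_pole(wv, pole_word, l=10):
    """Average vector of the pole word and its l nearest neighbors."""
    neighbors = [w for w, _ in wv.most_similar(pole_word, topn=l)]
    vectors = [wv[pole_word]] + [wv[w] for w in neighbors]
    return np.mean(vectors, axis=0)

def expanded_axis(wv, pos_word, neg_word, l=10):
    """Axis from the expanded negative pole to the expanded positive pole (cf. Eq. 1)."""
    return expand_pole(wv, pos_word, l) - expand_pole(wv, neg_word, l)

# e.g., good_bad_axis = expanded_axis(wv, "good", "bad", l=10)
```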
Nevertheless, as the sentiment axis in the SEMAXIS framework is not 2454 specifically or manually designed but established based on the corresponding pole words, the validation based on the sentiment axis can be generalized to other axes that are similarly established based on other corresponding pole words. Standard English: We use well-known General Inquirer lexicon (Stone et al., 1966) and continuous valence scores collected by Warriner et al. (2013) to evaluate the performance of SEMAXIS compared to other state-of-the-art methods. We test all of the methods by using the off-the-shelf Google news embedding constructed from 1011 tokens (Google, 2013). Twitter: We evaluate our approach with the test dataset from the 2015 SemEval task 10E competition (Rosenthal et al., 2015) using the embedding constructed by Rothe et al. (2016). Domain Positive pole words Negative pole words Standard good, lovely, excellent, fortunate, pleasant, delightful, perfect, loved, love, happy bad, horrible, poor, unfortunate, unpleasant, disgusting, evil, hated, hate, unhappy Twitter love, loved, loves, awesome, nice, amazing, best, fantastic, correct, happy hate, hated, hates, terrible, nasty, awful, worst, horrible, wrong, sad Table 1: Manually selected pole words used for the evaluation task in (Hamilton et al., 2016a). These pole words are called seed words in (Hamilton et al., 2016a) 4.1 Evaluation Setup We compare our method against state-of-the-art approaches that generate domain-specific sentiment lexicons. State-of-the-art approaches: Our baseline for the standard English is a WordNet-based method, which performs label propagation over a WordNet-derived graph (San Vicente et al., 2014). For Twitter, we use Sentiment140, a distantly supervised approach that uses signals from emoticons (Mohammad and Turney, 2010). Moreover, on both datasets, we compare against two state-ofthe-art sentiment induction methods: DENSIFIER, a method that learns orthogonal transformations of word vectors (Rothe et al., 2016), and SENTPROP, a method with a label propagation approach on word embeddings (Hamilton et al., 2016a). Seed words, which are called pole words in our work, are listed in Table 1. Evaluation metrics: We evaluate the aforementioned approaches according to (i) their binary classification accuracy (positive and negative), (ii) ternary classification performance (positive, neutral, and negative), and (iii) Kendall τ rankcorrelation with continuous human-annotated polarity scores. Since all methods result in sentiment scores of words rather than assigning a class of sentiment, we label words as positive, neutral, or negative using the class-mass normalization method (Zhu et al., 2003). This normalization uses knowledge of the label distribution of a test dataset and simply assigns labels to best match this distribution. For the implementation of other methods, we directly use the source code without any modification or tuning (SocialSent, 2016) used in (Hamilton et al., 2016a). 4.2 Evaluation Results Table 2 summarizes the performance. Surprisingly, SEMAXIS — the simplest approach — outperforms others on both Standard English and Twitter datasets across all measures. Standard English Method AUC Ternary F1 Tau SEMAXIS 92.2 61.0 0.48 DENSIFIER 91.0 58.2 0.46 SENTPROP 88.4 56.1 0.41 WordNet 89.5 58.7 0.34 Twitter Method AUC Ternary F1 Tau SEMAXIS 90.0 59.2 0.57 DENSIFIER 88.5 58.8 0.55 SENTPROP 85.0 58.2 0.50 Sentiment140 86.2 57.7 0.51 Table 2: Evaluation results. 
Our method performs best on both Standard English and Twitter. 4.3 Sensitivity to Pole Words As discussed in Section 3.3.2, because the axes are derived from pole words, the choice of the pole words can significantly affect the performance. We compare the robustness of three methods for selecting pole words: 1) using sentiment lexicons; 2) using two pole words only (initial pole words); and 3) using l closest words on the word2vec model as well as the two initial pole words (expanded pole words). For the first, we choose two sets of pole words that have the highest scores and the lowest scores in two widely used sentiment 2455 lexicons, ANEW (Bradley and Lang, 1999) and LabMT (Dodds et al., 2011). Then, for the two pole words, we match 1-of-10 positive pole words and 1-of-10 negative pole words in Table 1, resulting in 100 pairs of pole words. For these 100 pairs, in addition to the two initial pole words, we then use the l closest words (l = 10) of each of them to evaluate the third method. We compare these three methods by quantifying how well SEMAXIS performs for the evaluation task. The average AUC for the two pole words method is 78.2. We find that one of the 100 pairs — ‘good’ and ‘bad’ — shows the highest AUC (92.4). However, another random pair (‘happy’ and ‘evil’) results in the worst performance with the AUC of 67.2. In other words, an axis defined by only two pole words is highly sensitive to the choice of the word pair. By contrast, when an axis is defined by aggregating l closest words, the average AUC increases to 80.6 (the minimum performance is above 71.2). Finally, using preestablished sentiment lexicons results in the worst performance (the AUC of 77.8 for ANEW and 67.5 for LabMT). These results show that identifying an axis is a crucial step in SEMAXIS, and using l closest words in addition to initial pole words is a more robust method to define the axis. 5 SEMAXIS in the Wild We now demonstrate how SEMAXIS can be used in comparative text analysis to capture nuanced linguistic representations beyond the sentiment. As an example, we use Reddit (Reddit, 2005), one of the most popular online communities. Reddit is known to serve diverse sub-communities with different linguistic styles (Zhang et al., 2017). We focus on a few pairs of subreddits that are known to express different views. We also choose them to capture a wide range of topics from politics to religion, entertainment, and daily life to demonstrate the broad applicability of SEMAXIS. 5.1 Dataset, Pre-processing, Reference model, and Hyper-parameters We use Reddit 2016 comment datasets that are publicly available (/u/Dewarim, 2017). We build a corpus from each subreddit by extracting all the comments posted on that subreddit. When the size of two corpora used for comparison is considerably different, we undersample the bigger corpus for a fair comparison. Every corpus then undergoes the same pre-processing, where we first remove punctuation and stop words, then replace URLs with its domain name. Reference model for Reddit data As we discussed earlier, many datasets of our interest are likely too small to obtain good vector representations. For example, two popular subreddits, /r/The Donald and /r/SandersForPresident2, show only 59.8% and 42.1% in analogy test, respectively.3 Therefore, as we proposed, we first create a pretrained word embedding with a larger background corpus and perform additional training with target subreddits. 
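A sketch of this pre-train-then-update scheme is shown below, using a gensim-style word2vec API (4.x keyword names). The corpus variables, the `analogy_accuracy` helper (standing in for the Google analogy test), and the `mean_score_change` helper (standing in for the per-epoch change Δ of top-word axis scores) are placeholders, and the stopping rule follows Section 3.2.2 only schematically.

```python
# Sketch: pre-train a reference embedding on a large background corpus, then
# continue training on the small target corpus until the stopping criteria
# (analogy-accuracy drop alpha, score-change beta) from Section 3.2.2 are met.
from gensim.models import Word2Vec

def build_reference_model(background_sentences):
    # CBOW with negative sampling and frequent-word down-sampling
    # (cf. Levy et al., 2015); the sub-sampling threshold here is illustrative.
    return Word2Vec(sentences=background_sentences, vector_size=300, window=5,
                    min_count=10, sg=0, negative=5, sample=1e-5)

def update_with_target(model, target_sentences, alpha_drop=0.3, beta=0.001,
                       max_epochs=100, start_lr=0.005):
    base_acc = analogy_accuracy(model)            # placeholder: analogy test score
    model.build_vocab(target_sentences, update=True)
    for epoch in range(max_epochs):
        model.train(target_sentences, total_examples=model.corpus_count,
                    epochs=1, start_alpha=start_lr)
        acc = analogy_accuracy(model)
        delta = mean_score_change(model)          # placeholder: Δ of top-k word scores
        # Stop once general relations have degraded by alpha and the
        # target-specific semantic changes have stabilized (Δ < beta).
        if (base_acc - acc) >= alpha_drop and delta < beta:
            break
    return model
```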
We sample 1 million comments from each of the top 200 subreddits, resulting in 20 million comments. Using this sample, we build a word embedding, denoted as Reddit20M, using the CBOW model with a window size of five, a minimum word count of 10, the negative sampling, and down-sampling of frequent terms as suggested in (Levy et al., 2015). For the subsequent training with the target corpora, we train the model with a small starting learning rate, 0.005; Using different rates, such as 0.01 and 0.001, did not make much difference. We further tune the model with the dimension size of 300 and the number of the epoch of 100 using the analogy test results. Category Reddit20M Google300D World 28.34 70.2 family 94.58 90.06 Gram1-9 70.21 73.40 Total 67.88 77.08 Table 3: Results of analogy tests, comparing 20M sample texts from Reddit vs. Google 100B News. Table 3 shows the results of the analogy test using our Reddit20M in comparison with Google300D, which is the Google News embedding used in previous sections. As one can expect, Reddit20M shows worse performance than Google300D. However, the four categories (capital-common-countries, capital-world, currency, and city-in-state denoted by World), which require some general knowledge on the world, drive the 10% decrease in overall accu2/r/ is a common notation for indicating subreddits. 3For both corpora, continuous bag-of-words (CBOW) model with the dimension size of 300 achieves the highest accuracy in the analogy test. 2456 racy. Other categories show comparable or even better accuracy. For example, Reddit20M outperforms Google300D by 4.52% in the family category. Since Reddit is a US-based online community, the Reddit model may not be able to properly capture semantic relationships in World category. By contrast, for the categories for testing grammar (denoted by Gram1-9), Reddit20M shows comparable performances with Google300D (70.21 vs. 73.4). In this study, we use Reddit20M as a reference model and update it with new target corpora. Figure 1: Changes of word semantics (box plot) and accuracy (line graph) over epoch for the model for /r/SandersForPresident Updating the reference model As we explained in Section 3.2.2, we stop updating the reference model depending on the accuracy of the analogy test and semantic changes of the top 1000 words of the given corpus. In our experiments, we set α = 0.3 and β = 0.001. Figure 1 shows the accuracy of the analogy test over epoch as a line plot and the semantic changes of words as a box plot for the model for /r/SandersForPresident. The model gradually loses general semantic relation over epochs, and the characteristic semantic changes stabilize after about 10 epochs. Given the α and β, we use the embedding after 10 epochs of training with the target subreddit data. We note that the results are consistent when epoch is greater than 10. We choose the number of epoch for other corpora based on the same tactic. 5.2 Confirming well-known language representations Once we have word embeddings for given subreddits by updating the pre-trained model, we can compare the languages of two subreddits. As a case study, we compare supporters of Donald Trump (/r/The Donald) and Bernie Sanders (/r/SandersForPresident)4, and examine the semantic differences in diverse issues, such as gun and minority, based on different axes. This can be easily compared with our educated guess learned from the 2016 U.S. Election. 
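In practice, such a comparison amounts to scoring the same word on the same axis in each community's updated embedding and inspecting the difference, as in the short sketch below (it reuses the illustrative `semantic_axis`/`axis_score` helpers from Section 3.1; all names are ours).

```python
# Sketch of the comparative analysis: score a word on the same semantic axis in
# two community-specific embeddings (e.g., the updated models for /r/The_Donald
# and /r/SandersForPresident) and compare the scores.
def compare_word(word, wv_a, wv_b, pos_poles, neg_poles):
    axis_a = semantic_axis(wv_a, pos_poles, neg_poles)
    axis_b = semantic_axis(wv_b, pos_poles, neg_poles)
    score_a = axis_score(wv_a, word, axis_a)
    score_b = axis_score(wv_b, word, axis_b)
    # Positive difference: the word sits closer to the positive pole in corpus A.
    return score_a, score_a - score_b

# e.g., compare_word("immigration", trump_wv, sanders_wv,
#                    ["good", "lovely"], ["bad", "horrible"])
```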
Starting from a topic word (e.g., ‘gun’) and its closest word (e.g., ‘guns’), we compute the average vector of the two words. We then find the closest word from the computed average vector and repeat this process to collect 30 topical terms in each word embedding. Then, we remove words that have appeared less than n times in both corpora. The higher n leads to less coverage of topical terms but eliminate noise. We set n = 100 in the following experiments. We consider the remaining words as topic words. Figure 2 compares how the minority-related terms are depicted in the two subreddits. Figure 2(a) and 2(b) show how minority issues are perceived in two communities with respect to ‘Sentiment’ and ‘Respect’ . The x-axis is the value for each word on the Sentiment axis for Trump supporters, and the y-axis is the difference between the value for Trump and Sanders supporters. If the y-value is greater than 0, then it means the word is more ‘positive’ among Trump supporters compared to that among Sanders supporters. Some terms perceived more positively (e.g., ‘immigration’ and ‘minorities’) while other terms were perceived more negatively (‘black’, ‘latino’, ‘hispanic’) among Trump supporters (Figure 2(a)). As this positive perception on immigration and minorities is unexpected, we examine the actual comments. Through the manual inspection of relevant comments, we find that Trump supporters often mention that they ‘agree’ with or ‘support’ the idea of banning immigration, resulting in having a term ‘immigration’ as more positive than Sanders supporters. However, when examining those words on the ‘Disrespect’ vs. ‘Respect’ axis (Figure 2(b)), most of the minority groups are considered disrespectful by Trump supporters compared to Sanders supporters, demonstrating a benefit of examining multiple semantic axes that can reflect rich semantics beyond basic sentiments. Then, one would expect that ‘Gun’ would be more positively perceived for Trump supporters compared to for Sanders supporters. Beyond the sentiment, we examine how ‘gun’ is perceived in 4Both subreddits have a policy of banning users who post content critical of the candidate. Thus, we assume most of the users in these subreddits are supporters of the candidate. 2457 (a) Minority (Hate vs Amazing) (b) Minority (Disrespect vs Respect) Figure 2: Trump supporters vs Sanders supporters on Minority issue two communities for ‘Arousal’ and ‘Safety’ axes. We find that Trump supporters are generally positive about gun-related issues, and Sanders supporters associate ‘gun’ with more arousal and danger. We also examine two other subreddits: /r/atheism and /r/Christianity. As their names indicate, the former is the subreddit for atheists and the latter is the subreddit for Christians. We expect that the two groups would have different perspectives regarding ‘god’ and ‘evolution.’ When examining the four words ‘god,’ ‘pray,’ ‘evolution,’ and ‘science’ on the ‘Unholy’ vs. ‘Holy’ axis, ‘god’ and ‘pray’ appear to be more ‘holy’ in /r/Christianity while ‘evolution’ and ‘science’ appear more ‘unholy’ than in /r/Atheism, which fits in our intuition. As another example, we examine the /r/PS4 and /r/NintendoSwitch subreddits. PS4 is a video game console released by Sony and Nintendo Switch is released by Nintendo. 
Although both video game consoles originated from Japan, Nintendo Switch targets more family (children) and casual gamers with more playful and easier games while the games for PS4 target adult and thus tend to be more violent and more difficult to play. We examine three terms (‘Nintendo,’ ‘Mario,’ and ‘Zelda’) from Nintendo Switch and three terms (‘Sony,’ ‘Uncharted,’ and ‘Killzone’) from Sony on the ‘Casual’ vs. ‘Hardcore’ axis.5 We find that ‘Mario’ and ‘Zelda’ are perceived more casual in /r/PS4, and ‘Uncharted’ and ‘Killzone’ are more hardcore in /r/PS4 than /r/NintendoSwitch. Although both ‘Nintendo’ and ‘Sony’ have negative values, ‘Nin5Mario and Zelda are popular Nintendo Switch games, and Uncharted and Killzone are popular PS4 games. tendo’ was considered more casual than ‘Sony’ in /r/PS4. Overall, our method effectively captures context-dependent semantic changes beyond the basic sentiments. 5.3 Comparative Text Analysis with Diverse Axes Let us show how SEMAXIS can find, for a given word, a set of the best axes that describe its semantic. We map the word on our predefined 732 axes, which are explained in Section 3.3.1, and rank the axes based on the projection values on the axes. In other words, the top axes describe the word with the highest strength. Figure 3(a) shows the top 20 axes with the largest projection values for ‘Men’ in /r/AskWomen and /r/AskMen, which are the subreddits where people expect replies from each gender. In /r/AskWomen, compared with /r/AskMen, ‘Men’ seems to be perceived as more vanishing, more established, less systematic, less monogamous, more enthusiastic, less social, more uncluttered, less vulnerable, and more unapologetic. This observed perception of men from women’s perspective seems to concur with the common gender stereotype, demonstrating strong potential of SEMAXIS. Likewise, in Figure 3(b), we examine how a word ‘Mario’ is perceived in two subreddits /r/NintendoSwitch and /r/PS4. In /r/NintendoSwitch, ‘Mario’ is perceived, compared with /r/PS4, as more luxurious, famous, unobjectionable, open, capable, likable, successful, loving, honorable, and controllable. On the other hand, users in /r/PS4 consider ‘Mario’ to be more virtual, creative, durable, 2458 (a) Men in /r/AskWomen and /r/AskMen (b) Mario in /r/NintendoSwitch and /r/PS4 Figure 3: An example of comparative text analysis using SEMAXIS: (a) ‘Men’ in /r/AskWomen and /r/AskMen and (b) ‘Mario’ in /r/NintendoSwitch and /r/PS4 satisfying, popular, undetectable, and unstoppable. ‘Mario’ is perceived more positively in /r/NintendoSwitch than in /r/PS4, as expected. Furthermore, SEMAXIS reveals detailed and nuanced perceptions of different communities. 6 Discussion and Conclusion We have proposed SEMAXIS to examine a nuanced representation of words based on diverse semantic axes. We have shown that SEMAXIS can construct good domain-specific sentiment lexicons by projecting words on the sentiment axis. We have also demonstrated that our approach can reveal nuanced context-dependence of words through the lens of numerous semantic axes. There are two major limitations. First, we performed the quantitative evaluation only with the sentiment axis, even though we supplemented it with more qualitative examples. We used the sentiment axis because it is better studied and more methods exist, but ideally it would be better to perform evaluation across many semantic axes. We hope that SEMAXIS can facilitate research on other semantic axes so that we will have labeled datasets for other axes as well. 
Secondly, Gaffney and Matias (2018) recently reported the Reddit data used in this study is incomplete. The authors suggest using the data with caution, particularly when analyzing user interactions. Although our work examine communities in Reddit, we focus on the difference of the word semantics. Thus, we believe the effect of deleted comment would be marginal in our analyses. Despite these limitations, we identify the following key implications. First, SEMAXIS offers a framework to examine texts on diverse semantic axes beyond the sentiment axis, through the 732 systematically induced semantic axes that capture common antonyms. Our study may facilitate further investigations on context-dependent text analysis techniques and applications. Second, the unsupervised nature of SEMAXIS provides a powerful way to build lexicons of any semantic axis, including the sentiment axis, for non-English languages, particularly the resourcescarce languages. 7 Acknowledgements The authors thank Jaehyuk Park for his helpful comments. This research has been supported in part by Volkswagen Foundation and in part by the Defense Advanced Research Projects Agency (DARPA), W911NF-17-C-0094. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. 2459 References Douglas Biber and Edward Finegan. 1988. Adverbial stance types in english. Discourse processes, 11(1):1–34. Margaret M Bradley and Peter J Lang. 1999. Affective norms for english words (ANEW): Instruction manual and affective ratings. Technical report, Technical report C-1, the center for research in psychophysiology, University of Florida. Peter Sheridan Dodds, Kameron Decker Harris, Isabel M Kloumann, Catherine A Bliss, and Christopher M Danforth. 2011. Temporal patterns of happiness and information in a global social network: Hedonometrics and Twitter. PLOS ONE, 6(12):e26752. Penelope Eckert and Sally McConnell-Ginet. 2013. Language and gender. Cambridge University Press. A. Esuli and F. Sebastiani. 2006. Sentiwordnet: A publicly available lexical resource for opinion mining. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06). European Language Resources Association (ELRA). Ethan Fast, Binbin Chen, and Michael S Bernstein. 2016. Empath: Understanding topic signals in largescale text. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pages 4647–4657. ACM. John L. Fischer. 1958. Social influences on the choice of a linguistic variant. WORD, 14(1):47–56. Devin Gaffney and J Nathan Matias. 2018. Caveat emptor, computational social science: Large-scale missing data in a widely-published reddit corpus. arXiv preprint arXiv:1803.05046. Google. 2013. Google word2vec. https://code. google.com/archive/p/word2vec/. [Online; accessed February 23, 2018]. Lisa J Green. 2002. African American English: a linguistic introduction. Cambridge University Press. William L. Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky. 2016a. Inducing domain-specific sentiment lexicons from unlabeled corpora. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 595–605. Association for Computational Linguistics. William L. 
Hamilton, Jure Leskovec, and Dan Jurafsky. 2016b. Diachronic word embeddings reveal statistical laws of semantic change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489–1501. Association for Computational Linguistics. Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1997. Predicting the semantic orientation of adjectives. In 8th Conference of the European Chapter of the Association for Computational Linguistics. Yoonsung Hong, Haewoon Kwak, Youngmin Baek, and Sue Moon. 2013. Tower of babel: A crowdsourcing game building sentiment lexicons for resource-scarce languages. In Proceedings of the 22nd International Conference on World Wide Web, WWW ’13 Companion, pages 549–556, New York, NY, USA. ACM. Dirk Hovy. 2015. Demographic factors improve classification performance. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 752–762. Association for Computational Linguistics. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751. Association for Computational Linguistics. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225. Tim Loughran and Bill McDonald. 2011. When is a liability not a liability? textual analysis, dictionaries, and 10-ks. The Journal of Finance, 66(1):35–65. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. Semeval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31–41. Association for Computational Linguistics. Saif Mohammad and Peter Turney. 2010. Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 26–34. Association for Computational Linguistics. Saif M. Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko. 2017. Stance and sentiment in tweets. ACM Transactions on Internet Technology (TOIT), 17(3):26:1–26:23. 2460 Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04). Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? sentiment classification using machine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002). James W Pennebaker, Martha E Francis, and Roger J Booth. 2001. Linguistic inquiry and word count: LIWC 2001. Mahway: Lawrence Erlbaum Associates, 71. Delip Rao and Deepak Ravichandran. 2009. Semisupervised polarity lexicon induction. 
In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 675–682. Association for Computational Linguistics. Reddit. 2005. Reddit: the front page of the internet. https://www.reddit.com/. [Online; accessed February 23, 2018]. Sara Rosenthal and Kathleen McKeown. 2011. Age prediction in blogs: A study of style, content, and online behavior in pre- and post-social media generations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 763– 772. Association for Computational Linguistics. Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif Mohammad, Alan Ritter, and Veselin Stoyanov. 2015. Semeval-2015 task 10: Sentiment analysis in twitter. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 451–463. Association for Computational Linguistics. Sascha Rothe, Sebastian Ebert, and Hinrich Sch¨utze. 2016. Ultradense word embeddings by orthogonal transformation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 767–777. Association for Computational Linguistics. I˜naki San Vicente, Rodrigo Agerri, and German Rigau. 2014. Simple, robust and (almost) unsupervised generation of polarity lexicons for multiple languages. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 88–97. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642. Association for Computational Linguistics. SocialSent. 2016. Code and data for inducing domainspecific sentiment lexicons. https://github. com/williamleif/socialsent. [Online; accessed February 23, 2018]. Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In AAAI, pages 4444–4451. Philip J Stone, Dexter C Dunphy, and Marshall S Smith. 1966. The general inquirer: A computer approach to content analysis. MIT press. Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis. Computational Linguistics, 37(2). Duyu Tang, Furu Wei, Bing Qin, Ming Zhou, and Ting Liu. 2014. Building large-scale twitter-specific sentiment lexicon : A representation learning approach. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 172–182. Dublin City University and Association for Computational Linguistics. Peter Trudgill. 1974. Linguistic change and diffusion: Description and explanation in sociolinguistic dialect geography. Language in society, 3(2):215– 246. Peter Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. /u/Dewarim. 2017. Reddit comment dataset. https: //files.pushshift.io/. [Online; downloaded July 23, 2017]. Leonid Velikovich, Sasha Blair-Goldensohn, Kerry Hannan, and Ryan McDonald. 2010. The viability of web-derived polarity lexicons. 
In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 777–785. Association for Computational Linguistics. Amy Beth Warriner, Victor Kuperman, and Marc Brysbaert. 2013. Norms of valence, arousal, and dominance for 13,915 english lemmas. Behavior research methods, 45(4):1191–1207. Dominic Widdows and Beate Dorow. 2002. A graph model for unsupervised lexical acquisition. In COLING 2002: The 19th International Conference on Computational Linguistics. Janyce M. Wiebe. 1994. Tracking point of view in narrative. Computational Linguistics, 20(2). 2461 Yi Yang and Jacob Eisenstein. 2015. Putting things in context: Community-specific embedding projections for sentiment analysis. CoRR, abs/1511.06052. Yi Yang and Jacob Eisenstein. 2017. Overcoming language variation in sentiment analysis with social attention. Transactions of the Association for Computational Linguistics, 5:295–307. Justine Zhang, William L Hamilton, Cristian DanescuNiculescu-Mizil, Dan Jurafsky, and Jure Leskovec. 2017. Community identity and user engagement in a multi-community landscape. In Proceedings of The International Conference on Web and Social Media. Xiaojin Zhu, Zoubin Ghahramani, and John D Lafferty. 2003. Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of the 20th International conference on Machine learning (ICML-03), pages 912–919.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2462–2472 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2462 End-to-End Reinforcement Learning for Automatic Taxonomy Induction Yuning Mao1, Xiang Ren2, Jiaming Shen1, Xiaotao Gu1, Jiawei Han1 1Department of Computer Science, University of Illinois Urbana-Champaign, IL, USA 2Department of Computer Science, University of Southern California, CA, USA 1{yuningm2, js2, xiaotao2, hanj}@illinois.edu [email protected] Abstract We present a novel end-to-end reinforcement learning approach to automatic taxonomy induction from a set of terms. While prior methods treat the problem as a two-phase task (i.e., detecting hypernymy pairs followed by organizing these pairs into a tree-structured hierarchy), we argue that such two-phase methods may suffer from error propagation, and cannot effectively optimize metrics that capture the holistic structure of a taxonomy. In our approach, the representations of term pairs are learned using multiple sources of information and used to determine which term to select and where to place it on the taxonomy via a policy network. All components are trained in an end-to-end manner with cumulative rewards, measured by a holistic tree metric over the training taxonomies. Experiments on two public datasets of different domains show that our approach outperforms prior state-ofthe-art taxonomy induction methods up to 19.6% on ancestor F1. 1 1 Introduction Many tasks in natural language understanding (e.g., information extraction (Demeester et al., 2016), question answering (Yang et al., 2017), and textual entailment (Sammons, 2012)) rely on lexical resources in the form of term taxonomies (cf. rightmost column in Fig. 1). However, most existing taxonomies, such as WordNet (Miller, 1995) and Cyc (Lenat, 1995), are manually curated and thus may have limited coverage or become unavailable in some domains and languages. Therefore, recent efforts have been focusing on automatic taxonomy induction, which aims to organize 1Code and data can be found at https://github. com/morningmoni/TaxoRL a set of terms into a taxonomy based on relevant resources such as text corpora. Prior studies on automatic taxonomy induction (Gupta et al., 2017; Camacho-Collados, 2017) often divide the problem into two sequential subtasks: (1) hypernymy detection (i.e., extracting term pairs of “is-a” relation); and (2) hypernymy organization (i.e., organizing is-a term pairs into a tree-structured hierarchy). Methods developed for hypernymy detection either harvest new terms (Yamada et al., 2009; Kozareva and Hovy, 2010) or presume a vocabulary is given and study term semantics (Snow et al., 2005; Fu et al., 2014; Tuan et al., 2016; Shwartz et al., 2016). The hypernymy pairs extracted in the first subtask form a noisy hypernym graph, which is then transformed into a tree-structured taxonomy in the hypernymy organization subtask, using different graph pruning methods including maximum spanning tree (MST) (Bansal et al., 2014; Zhang et al., 2016), minimum-cost flow (MCF) (Gupta et al., 2017) and other pruning heuristics (Kozareva and Hovy, 2010; Velardi et al., 2013; Faralli et al., 2015; Panchenko et al., 2016). However, these two-phase methods encounter two major limitations. First, most of them ignore the taxonomy structure when estimating the probability that a term pair holds the hypernymy relation. 
They estimate the probability of different term pairs independently and the learned term pair representations are fixed during hypernymy organization. In consequence, there is no feedback from the second phase to the first phase and possibly wrong representations cannot be rectified based on the results of hypernymy organization, which causes the error propagation problem. Secondly, some methods (Bansal et al., 2014; Zhang et al., 2016) do explore the taxonomy space by regarding the induction of taxonomy structure as inferring the conditional distribution of edges. In other words, they use the product of edge proba2463 affenpinscher miniature_pinscher pinscher collie shepherd_dog Appenzeller Sennenhunde working_dog root affenpinscher miniature_pinscher pinscher Appenzeller Sennenhunde working_dog miniature_pinscher pinscher Appenzeller Sennenhunde working_dog t=0 t=5 t=6 t=8 Figure 1: An illustrative example showing the process of taxonomy induction. The input vocabulary V0 is {“working dog”, “pinscher”, “shepherd dog”, ...}, and the initial taxonomy T0 is empty. We use a virtual “root” node to represent T0 at t = 0. At time t = 5, there are 5 terms on the taxonomy T5 and 3 terms left to be attached: Vt = {“shepherd dog”, “collie”, “affenpinscher”}. Suppose the term “affenpinscher” is selected and put under “pinscher”, then the remaining vocabulary Vt+1 at next time step becomes {“shepherd dog”, “collie”}. Finally, after |V0| time steps, all the terms are attached to the taxonomy and V|V0| = V8 = {}. A full taxonomy is then constructed from scratch. bilities to represent the taxonomy quality. However, the edges are treated equally, while in reality, they contribute to the taxonomy differently. For example, a high-level edge is likely to be more important than a bottom-out edge because it has much more influence on its descendants. In addition, these methods cannot explicitly capture the holistic taxonomy structure by optimizing global metrics. To address the above issues, we propose to jointly conduct hypernymy detection and organization by learning term pair representations and constructing the taxonomy simultaneously. Since it is infeasible to estimate the quality of all possible taxonomies, we design an end-to-end reinforcement learning (RL) model to combine the two phases. Specifically, we train an RL agent that employs the term pair representations using multiple sources of information and determines which term to select and where to place it on the taxonomy via a policy network. The feedback from hypernymy organization is propagated back to the hypernymy detection phase, based on which the term pair representations are adjusted. All components are trained in an end-to-end manner with cumulative rewards, measured by a holistic tree metric over the training taxonomies. The probability of a full taxonomy is no longer a simple aggregated probability of its edges. Instead, we assess an edge based on how much it can contribute to the whole quality of the taxonomy. We perform two sets of experiments to evaluate the effectiveness of our proposed approach. First, we test the end-to-end taxonomy induction performance by comparing our approach with the state-of-the-art two-phase methods, and show that our approach outperforms them significantly on the quality of constructed taxonomies. 
Second, we use the same (noisy) hypernym graph as the input of all compared methods, and demonstrate that our RL approach does better hypernymy organization through optimizing metrics that can capture holistic taxonomy structure.

Contributions. In summary, we have made the following contributions: (1) We propose a deep reinforcement learning approach to unify hypernymy detection and organization so as to induce taxonomies in an end-to-end manner. (2) We design a policy network to incorporate semantic information of term pairs and use cumulative rewards to measure the quality of constructed taxonomies holistically. (3) Experiments on two public datasets from different domains demonstrate the superior performance of our approach compared with state-of-the-art methods. We also show that our method can effectively reduce error propagation and capture global taxonomy structure.

2 Automatic Taxonomy Induction

2.1 Problem Definition
We define a taxonomy T = (V, R) as a tree-structured hierarchy with term set V (i.e., vocabulary) and edge set R (which indicates the is-a relationship between terms). A term v ∈ V can be either a unigram or a multi-word phrase. The task of end-to-end taxonomy induction takes a set of training taxonomies and related resources (e.g., background text corpora) as input, and aims to learn a model to construct a full taxonomy T by adding terms from a given vocabulary V0 onto an empty hierarchy T0 one at a time. An illustration of the taxonomy induction process is shown in Fig. 1.

2.2 Modeling Hypernymy Relation
Determining which term to select from V0 and where to place it on the current hierarchy requires understanding of the semantic relationships between the selected term and all the other terms. We consider multiple sources of information (i.e., resources) for learning hypernymy relation representations of term pairs, including dependency path-based contextual embeddings and distributional term embeddings (Shwartz et al., 2016).

Path-based Information. We extract the shortest dependency paths between each co-occurring term pair from sentences in the given background corpora. Each path is represented as a sequence of edges that goes from term x to term y in the dependency tree, and each edge consists of the word lemma, the part-of-speech tag, the dependency label and the edge direction between two contiguous words. The edge is represented by the concatenation of the embeddings of its four components: $V_e = [V_l, V_{pos}, V_{dep}, V_{dir}]$. Instead of treating the entire dependency path as a single feature, we encode the sequence of dependency edges $V_{e_1}, V_{e_2}, \dots, V_{e_k}$ using an LSTM so that the model can focus on learning from parts of the path that are more informative while ignoring others. We denote the final output of the LSTM for path p as $O_p$, and use P(x, y) to represent the set of all dependency paths between term pair (x, y). A single vector representation of the term pair (x, y) is then computed as $P_{P(x,y)}$, the weighted average of all its path representations obtained by average pooling:

$P_{P(x,y)} = \frac{\sum_{p \in P(x,y)} c_{(x,y)}(p) \cdot O_p}{\sum_{p \in P(x,y)} c_{(x,y)}(p)}$,

where $c_{(x,y)}(p)$ denotes the frequency of path p in P(x, y). For those term pairs without dependency paths, we use a randomly initialized empty path to represent them, as in Shwartz et al. (2016).

Distributional Term Embedding. The previous path-based features are only applicable when two terms co-occur in a sentence.
In our experiments, however, we found that only about 17% of term pairs have sentence-level co-occurrences.2 To alleviate the sparse co-occurrence issue, we concatenate the path representation PP(x,y) with the word 2In comparison, more than 70% of term pairs have sentence-level co-occurrences in BLESS (Baroni and Lenci, 2011), a standard hypernymy detection dataset. embeddings of x and y, which capture the distributional semantics of two terms. Surface String Features. In practice, even the embeddings of many terms are missing because the terms in the input vocabulary may be multiword phrases, proper nouns or named entities, which are likely not covered by the external pretrained word embeddings. To address this issue, we utilize several surface features described in previous studies (Yang and Callan, 2009; Bansal et al., 2014; Zhang et al., 2016). Specifically, we employ Capitalization, Ends with, Contains, Suffix match, Longest common substring and Length difference. These features are effective for detecting hypernyms solely based on the term pairs. Frequency and Generality Features. Another feature source that we employ is the hypernym candidates from TAXI3 (Panchenko et al., 2016). These hypernym candidates are extracted by lexico-syntactic patterns and may be noisy. As only term pairs and the co-occurrence frequencies of them (under specific patterns) are available, we cannot recover the dependency paths between these terms. Thus, we design two features that are similar to those used in (Panchenko et al., 2016; Gupta et al., 2017). 4 • Normalized Frequency Diff. For a hyponymhypernym pair (xi, xj) where xi is the hyponym and xj is the hypernym, its normalized frequency is defined as freqn(xi, xj) = freq(xi,xj) maxk freq(xi,xk), where freq(xi, xj) denotes the raw frequency of (xi, xj). The final feature score is defined as freqn(xi, xj) − freqn(xj, xi), which down-ranks synonyms and co-hyponyms. Intuitively, a higher score indicates a higher probability that the term pair holds the hypernymy relation. • Generality Diff. The generality g(x) of a term x is defined as the logarithm of the number of its distinct hyponyms, i.e., g(x) = log(1+|hypo|), where for any hypo ∈hypo, (hypo, x) is a hypernym candidate. A high g(x) of the term x implies that x is general since it has many distinct hyponyms. The generality of a term pair is defined as the difference in generality between xj and xi: g(xj) −g(xi). This feature would 3http://tudarmstadt-lt.github.io/taxi/ 4Since the features use additional resource, we wouldn’t include them unless otherwise specified. 2465 promote term pairs with the right level of generality and penalize term pairs that are either too general or too specific. The surface, frequency, and generality features are binned and their embeddings are concatenated as a part of the term pair representation. In summary, the final term pair representation Rxy has the following form: Rxy = [PP(x,y), Vwx, Vwy, VF(x,y)], where PP(x,y), Vwx, Vwy, VF(x,y) denote the path representation, the word embedding of x and y, and the feature embeddings, respectively. Our approach is general and can be flexibly extended to incorporate different feature representation components introduced by other relation extraction models (Zhang et al., 2017; Lin et al., 2016; Shwartz et al., 2016). We leave in-depth discussion of the design choice of hypernymy relation representation components as future work. 
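To make the shape of R_xy concrete, the sketch below assembles the representation with PyTorch. The original system is implemented in DyNet, so the module structure, dimensions, and feature binning here are illustrative assumptions rather than the authors' exact code.

```python
# Illustrative sketch of R_xy = [P_P(x,y); V_wx; V_wy; V_F(x,y)]: an LSTM encodes
# each dependency path as a sequence of edge vectors [lemma; POS; dep; direction],
# path encodings are averaged weighted by path frequency, and the result is
# concatenated with the two word embeddings and binned-feature embeddings.
import torch
import torch.nn as nn

class TermPairEncoder(nn.Module):
    def __init__(self, n_lemma, n_pos, n_dep, n_dir, n_feat_bins,
                 d_lemma=50, d_pos=4, d_dep=5, d_dir=1, d_feat=8, d_path=60):
        super().__init__()
        self.lemma = nn.Embedding(n_lemma, d_lemma)
        self.pos = nn.Embedding(n_pos, d_pos)
        self.dep = nn.Embedding(n_dep, d_dep)
        self.dir = nn.Embedding(n_dir, d_dir)
        self.feat = nn.Embedding(n_feat_bins, d_feat)   # shared bin table (simplification)
        self.path_lstm = nn.LSTM(d_lemma + d_pos + d_dep + d_dir,
                                 d_path, batch_first=True)

    def encode_path(self, lemma_ids, pos_ids, dep_ids, dir_ids):
        # Each argument: LongTensor of shape (path_len,) for one dependency path.
        edges = torch.cat([self.lemma(lemma_ids), self.pos(pos_ids),
                           self.dep(dep_ids), self.dir(dir_ids)], dim=-1)
        _, (h_n, _) = self.path_lstm(edges.unsqueeze(0))
        return h_n.squeeze(0).squeeze(0)                 # O_p, shape (d_path,)

    def forward(self, paths, counts, v_x, v_y, feat_bin_ids):
        # paths: list of index-tensor tuples; counts: path frequencies c_(x,y)(p).
        encodings = torch.stack([self.encode_path(*p) for p in paths])
        weights = torch.tensor(counts, dtype=torch.float).unsqueeze(1)
        path_repr = (weights * encodings).sum(0) / weights.sum()   # P_P(x,y)
        feat_repr = self.feat(feat_bin_ids).flatten()               # V_F(x,y)
        return torch.cat([path_repr, v_x, v_y, feat_repr])          # R_xy
```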
3 Reinforcement Learning for End-to-End Taxonomy Induction We present the reinforcement learning (RL) approach to taxonomy induction in this section. The RL agent employs the term pair representations described in Section 2.2 as input, and explores how to generate a whole taxonomy by selecting one term at each time step and attaching it to the current taxonomy. We first describe the environment, including the actions, states, and rewards. Then, we introduce how to choose actions via a policy network. 3.1 Actions We regard the process of building a taxonomy as making a sequence of actions. Specifically, we define that an action at at time step t is to (1) select a term x1 from the remaining vocabulary Vt; (2) remove x1 from Vt, and (3) attach x1 as a hyponym of one term x2 that is already on the current taxonomy Tt. Therefore, the size of action space at time step t is |Vt| × |Tt|, where |Vt| is the size of the remaining vocabulary Vt, and |Tt| is the number of terms on the current taxonomy. At the beginning of each episode, the remaining vocabulary V0 is equal to the input vocabulary and the taxonomy T0 is empty. During the taxonomy induction process, the following relations always hold: |Vt| = |Vt−1| −1, |Tt| = |Tt−1| + 1, and |Vt| + |Tt| = |V0|. The episode terminates when all the terms are attached to the taxonomy, which makes the length of one episode equal to |V0|. A remaining issue is how to select the first term when no terms are on the taxonomy. One approach that we tried is to add a virtual node as root and consider it as if a real node. The root embedding is randomly initialized and updated with other parameters. This approach presumes that all taxonomies share a common root representation and expects to find the real root of a taxonomy via the term pair representations between the virtual root and other terms. Another approach that we explored is to postpone the decision of root by initializing T with a random term as current root at the beginning of one episode, and allowing the selection of new root by attaching one term as the hypernym of current root. In this way, it overcomes the lack of prior knowledge when the first term is chosen. The size of action space then becomes |At| = |Vt| × |Tt| + |Vt|, and the length of one episode becomes |V0| −1. We compare the performance of the two approaches in Section 4. 3.2 States The state s at time t comprises the current taxonomy Tt and the remaining vocabulary Vt. At each time step, the environment provides the information of current state, based on which the RL agent takes an action. Once a term pair (x1, x2) is selected, the position of the new term x1 is automatically determined since the other term x2 is already on the taxonomy and we can simply attach x1 by adding an edge between x1 and x2. 3.3 Rewards The agent takes a scalar reward as feedback of its actions to learn its policy. One obvious reward is to wait until the end of taxonomy induction, and then compare the predicted taxonomy with gold taxonomy. However, this reward is delayed and difficult to measure individual actions in our scenario. Instead, we use reward shaping, i.e., giving intermediate rewards at each time step, to accelerate the learning process. Empirically, we set the reward r at time step t to be the difference of Edge-F1 (defined in Section 4.2 and evaluated by comparing the current taxonomy with the gold taxonomy) between current and last time step: rt = F1et −F1et−1. 
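A minimal sketch of one environment step with this shaped reward is given below (plain Python). The `edge_f1` helper is a stand-in for the Edge-F1 metric defined in Section 4.2, and the data structures are ours.

```python
# Sketch of one environment step: apply action (x1, x2), i.e., attach x1 under x2,
# and return the shaped reward r_t = EdgeF1_t - EdgeF1_{t-1}.
def step(taxonomy_edges, remaining_vocab, action, gold_edges, prev_f1):
    x1, x2 = action                      # x1 from V_t, x2 already on the taxonomy
    remaining_vocab.remove(x1)
    taxonomy_edges.add((x1, x2))         # x1 becomes a hyponym of x2
    f1 = edge_f1(taxonomy_edges, gold_edges)   # stand-in for Edge-F1 (Section 4.2)
    reward = f1 - prev_f1                # positive if the attachment helped
    done = len(remaining_vocab) == 0     # episode length = |V_0|
    return reward, f1, done
```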
If current EdgeF1 is better than that at last time step, the reward would be positive, and vice versa. The cumula2466 Figure 2: The architecture of the policy network. The dependency paths are encoded and concatenated with word embeddings and feature embeddings, and then fed into a two-layer feed-forward network. tive reward from current time step to the end of an episode would cancel the intermediate rewards and thus reflect whether current action improves the overall performance or not. As a result, the agent would not focus on the selection of current term pair but have a long-term view that takes following actions into account. For example, suppose there are two actions at the same time step. One action attaches a leaf node to a high-level node, and the other action attaches a non-leaf node to the same high-level node. Both attachments form a wrong edge but the latter one is likely to receive a higher cumulative reward because its following attachments are more likely to be correct. 3.4 Policy Network After we introduce the term pair representations and define the states, actions, and rewards, the problem becomes how to choose an action from the action space, i.e., which term pair (x1, x2) should be selected given the current state? To solve the problem, we parameterize each action a by a policy network π(a | s; WRL). The architecture of our policy network is shown in Fig. 2. For each term pair, its representation is obtained by the path LSTM encoder, the word embeddings of both terms, and the embeddings of features. By stacking the term pair representations, we can obtain an action matrix At with size (|Vt| × |Tt|) × dim(R), where (|Vt| × |Tt|) denotes the number of possible actions (term pairs) at time t and dim(R) denotes the dimension of term pair representation R. At is then fed into a two-layer feed-forward network followed by a softmax layer which outputs the probability distribution of actions.5 Finally, an action at is sampled based on the probability distribution of the action space: Ht = ReLU(W1 RLAT t + b1 RL), π(a | s; WRL) = softmax(W2 RLHt + b2 RL), at ∼π(a | s; WRL). At the time of inference, instead of sampling an action from the probability distribution, we greedily select the term pair with the highest probability. We use REINFORCE (Williams, 1992), one instance of the policy gradient methods as the optimization algorithm. Specifically, for each episode, the weights of the policy network are updated as follows: WRL = WRL + α T ∑ t=1 ∇logπ(at | s; WRL) · vt, where vi = ∑T t=i γt−irt is the culmulative future reward at time i and γ ∈[0, 1] is a discounting factor of future rewards. To reduce variance, 10 rollouts for each training sample are run and the rewards are averaged. Another common strategy for variance reduction is to use a baseline and give the agent the difference between the real reward and the baseline reward instead of feeding the real reward directly. We use a moving average of the reward as the baseline for simplicity. 5We tried to encode induction history by feeding representations of previously selected term pairs into an LSTM, and leveraging the output of the LSTM as history representation (concatenating it with current term pair representations or passing it to a feed-forward network). However, we didn’t observe clear performance change. 2467 3.5 Implementation Details We use pre-trained GloVe word vectors (Pennington et al., 2014) with dimensionality 50 as word embeddings. 
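Before the remaining implementation details, the policy-gradient update of Section 3.4 can be sketched as follows. The policy and environment objects, and the single-rollout bookkeeping, are simplified stand-ins (the paper averages 10 rollouts and is implemented in DyNet); this is not the authors' implementation.

```python
# Sketch of the REINFORCE update: run one episode, then ascend the gradient of
# sum_t log pi(a_t | s_t) * v_t, with v_t the discounted cumulative reward minus
# a moving-average baseline.
import torch
from torch.distributions import Categorical

def run_episode(policy, env, optimizer, gamma=0.4, baseline=0.0):
    log_probs, rewards = [], []
    state, done = env.reset(), False
    while not done:
        scores = policy(state)               # one logit per candidate (x1, x2) pair
        dist = Categorical(logits=scores)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward, done = env.apply(action.item())   # maps index back to a pair
        rewards.append(reward)

    # Discounted cumulative future rewards v_t, computed backwards.
    returns, v = [], 0.0
    for r in reversed(rewards):
        v = r + gamma * v
        returns.insert(0, v)

    loss = -sum(lp * (vt - baseline) for lp, vt in zip(log_probs, returns))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return sum(rewards)
```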
We limit the maximum number of dependency paths between each term pair to 200, because some term pairs containing general terms may have too many dependency paths. We run with different random seeds and hyperparameters and use the validation set to pick the best model. We use an Adam optimizer with an initial learning rate of 10^−3. We set the discounting factor γ to 0.4, since it has been shown that using a smaller discount factor than the one specified by the task can be viewed as a form of regularization (Jiang et al., 2015). Since the parameter updates are performed at the end of each episode, we cache the term pair representations and reuse them when the same term pairs are encountered again in the same episode. As a result, the proposed approach is very time-efficient: each training epoch takes less than 20 minutes on a single-core CPU using DyNet (Neubig et al., 2017).

4 Experiments

We design two experiments to demonstrate the effectiveness of our proposed RL approach for taxonomy induction. First, we compare our end-to-end approach with two-phase methods and show that our approach yields taxonomies of higher quality by reducing error propagation and optimizing towards holistic metrics. Second, we conduct a controlled experiment on hypernymy organization, where the same hypernym graph is used as the input of both our approach and the compared methods. We show that our RL method is more effective at hypernymy organization.

4.1 Experiment Setup

Here we introduce the details of our two experiments, which validate that (1) the proposed approach can effectively reduce error propagation, and (2) our approach yields better taxonomies by optimizing metrics over the holistic taxonomy structure.

Performance Study on End-to-End Taxonomy Induction. In the first experiment, we show that our joint learning approach is superior to two-phase methods. Towards this goal, we compare with TAXI (Panchenko et al., 2016), a typical two-phase approach; with a two-phase HypeNET, implemented as pairwise hypernymy detection followed by hypernymy organization using MST; and with Bansal et al. (2014). The dataset we use in this experiment is from Bansal et al. (2014), a set of medium-sized full-domain taxonomies consisting of bottom-out full subtrees sampled from WordNet. Terms in different taxonomies are from various domains such as animals, general concepts, and daily necessities. Each taxonomy is of height four (i.e., 4 nodes from root to leaf) and contains (10, 50] nodes. The dataset contains 761 non-overlapping taxonomies in total and is partitioned 70/15/15% (533/114/114) into training, validation, and test sets, respectively.

Testing on Hypernymy Organization. In the second experiment, we show that our approach is better at hypernymy organization by leveraging the global taxonomy structure. For a fair comparison, we reuse the hypernym graph as in TAXI (Panchenko et al., 2016) and SubSeq (Gupta et al., 2017), so that the inputs to each model are the same. Specifically, we restrict the action space to be the same as the baselines by considering only term pairs in the hypernym graph, rather than all |V| × |T| possible term pairs. As a result, it is possible that at some point no more hypernym candidates can be found although the remaining vocabulary is still not empty. If the induction terminates at this point, we call it a partial induction. We can also continue the induction by restoring the original action space at that moment, so that all terms in V are eventually attached to the taxonomy. We call this setting a full induction.
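To illustrate the difference between the two settings just defined, the sketch below restricts the candidate attachments to a given hypernym graph and only falls back to the unrestricted action space under full induction; the set-based graph representation and all names are illustrative assumptions, not the actual experiment code.

```python
def candidate_actions(remaining, on_taxonomy, hypernym_graph, full_induction):
    """hypernym_graph: set of (hyponym, hypernym) candidates, e.g. reused from TAXI.
    Returns the allowed (x1, x2) attachments at the current step."""
    restricted = [(x1, x2)
                  for x1 in remaining for x2 in on_taxonomy
                  if (x1, x2) in hypernym_graph]
    if restricted:
        return restricted
    if full_induction:
        # Fall back to all |V_t| x |T_t| pairs so that every term gets attached.
        return [(x1, x2) for x1 in remaining for x2 in on_taxonomy]
    return []  # partial induction: terminate with some terms left unattached
```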
In this experiment, we use the English environment and science taxonomies from the SemEval-2016 task 13 (TExEval-2) (Bordea et al., 2016). Each taxonomy is composed of hundreds of terms, which is much larger than the WordNet taxonomies. The taxonomies are aggregated from existing resources such as WordNet, Eurovoc6, and the Wikipedia Bitaxonomy (Flati et al., 2014). Since this dataset provides no training data, we train our model on the WordNet dataset from the first experiment. To avoid possible overlap between these two sources, we exclude those taxonomies constructed from WordNet.

In both experiments, we combine three public corpora: the latest Wikipedia dump, the UMBC web-based corpus (Han et al., 2013), and the One Billion Word Language Modeling Benchmark (Chelba et al., 2013). Only sentences in which term pairs co-occur are retained, which results in a corpus of 2.6 GB for the WordNet dataset and 810 MB for the TExEval-2 dataset. Dependency paths between term pairs are extracted from the corpus via spaCy7.

6 http://eurovoc.europa.eu/drupal/
7 https://spacy.io/

4.2 Evaluation Metrics

Ancestor-F1. It compares the ancestor ("is-a") pairs on the predicted taxonomy with those on the gold taxonomy. We use Pa, Ra, and F1a to denote the precision, recall, and F1-score, respectively:

Pa = |is-a_sys ∩ is-a_gold| / |is-a_sys|,
Ra = |is-a_sys ∩ is-a_gold| / |is-a_gold|.

Edge-F1. It is stricter than Ancestor-F1 since it only compares predicted edges with gold edges. Analogously, we denote the edge-based metrics as Pe, Re, and F1e. Note that Pe = Re = F1e if the number of predicted edges is the same as the number of gold edges.
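Both metrics can be computed directly from edge sets, as in the following Python sketch; the transitive-closure helper and the (hyponym, hypernym) edge convention are assumptions for illustration, not the official TExEval scorer.

```python
def ancestor_pairs(edges):
    """Transitive closure of (hyponym, hypernym) edges, i.e. all is-a pairs."""
    parents = {}
    for hypo, hyper in edges:
        parents.setdefault(hypo, set()).add(hyper)
    closure = set()
    for hypo in parents:
        stack = list(parents[hypo])
        while stack:
            anc = stack.pop()
            if (hypo, anc) not in closure:
                closure.add((hypo, anc))
                stack.extend(parents.get(anc, ()))
    return closure

def p_r_f1(sys_pairs, gold_pairs):
    correct = len(sys_pairs & gold_pairs)
    p = correct / len(sys_pairs) if sys_pairs else 0.0
    r = correct / len(gold_pairs) if gold_pairs else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

def ancestor_f1(sys_edges, gold_edges):
    return p_r_f1(ancestor_pairs(sys_edges), ancestor_pairs(gold_edges))

def edge_f1_metric(sys_edges, gold_edges):
    return p_r_f1(set(sys_edges), set(gold_edges))
```

As stated above, the edge-based precision and recall coincide whenever the predicted and gold edge sets have the same size.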
4.3 Results

Comparison on End-to-End Taxonomy Induction. Table 1 shows the results of the first experiment.

Model                  Pa    Ra    F1a   Pe    Re    F1e
TAXI                   66.1  13.9  23.0  54.8  18.0  27.1
HypeNET                32.8  26.7  29.4  26.1  17.2  20.7
HypeNET+MST            33.7  41.1  37.0  29.2  29.2  29.2
TaxoRL (RE)            35.8  47.4  40.8  35.4  35.4  35.4
TaxoRL (NR)            41.3  49.2  44.9  35.6  35.6  35.6
Bansal et al. (2014)   48.0  55.2  51.4  –     –     –
TaxoRL (NR) + FG       52.9  58.6  55.6  43.8  43.8  43.8

Table 1: Results of the end-to-end taxonomy induction experiment. Our approach significantly outperforms two-phase methods (Panchenko et al., 2016; Shwartz et al., 2016; Bansal et al., 2014). Bansal et al. (2014) and TaxoRL (NR) + FG are listed separately because they use extra resources.

HypeNET (Shwartz et al., 2016) uses the additional surface features described in Section 2.2. HypeNET+MST extends HypeNET by first constructing a hypernym graph, using HypeNET's output as edge weights, and then finding the MST (Chu, 1965) of this graph. TaxoRL (RE) denotes our RL approach that assumes a common Root Embedding, and TaxoRL (NR) denotes its variant that allows a New Root to be added. We can see that TAXI has the lowest F1a, while HypeNET performs the worst in F1e. Both TAXI's and HypeNET's F1a and F1e are lower than 30. HypeNET+MST outperforms HypeNET in both F1a and F1e because it considers the global taxonomy structure, although the two phases are performed independently. TaxoRL (RE) uses exactly the same input as HypeNET+MST and yet achieves significantly better performance, which demonstrates the superiority of combining the phases of hypernymy detection and hypernymy organization. Also, we found that presuming a shared root embedding for all taxonomies can be inappropriate if they come from different domains, which explains why TaxoRL (NR) performs better than TaxoRL (RE). Finally, after we add the frequency and generality features (TaxoRL (NR) + FG), our approach outperforms Bansal et al. (2014), even though a much smaller corpus is used.8

Analysis on Hypernymy Organization. Table 2 lists the results of the second experiment.

Domain  Model            Pa    Ra    F1a   Pe    Re    F1e
Env     TAXI (DAG)       50.1  32.7  39.6  33.8  26.8  29.9
Env     TAXI (tree)      67.5  30.8  42.3  41.1  23.1  29.6
Env     SubSeq           –     –     –     –     –     22.4
Env     TaxoRL (Partial) 51.6  36.4  42.7  37.5  24.2  29.4
Env     TaxoRL (Full)    47.2  54.6  50.6  32.3  32.3  32.3
Sci     TAXI (DAG)       61.6  41.7  49.7  38.8  34.8  36.7
Sci     TAXI (tree)      76.8  38.3  51.1  44.8  28.8  35.1
Sci     SubSeq           –     –     –     –     –     39.9
Sci     TaxoRL (Partial) 84.6  34.4  48.9  56.9  33.0  41.8
Sci     TaxoRL (Full)    68.3  52.9  59.6  37.9  37.9  37.9

Table 2: Results of the hypernymy organization experiment. Our approach outperforms Panchenko et al. (2016); Gupta et al. (2017) when the same hypernym graph is used as input. The precision of partial induction in both metrics is high. The precision of full induction is relatively lower, but its recall is much higher.

TAXI (DAG) (Panchenko et al., 2016) denotes TAXI's original performance on the TExEval-2 dataset.9 Since we do not allow DAGs in our setting, we convert its results to trees (denoted by TAXI (tree)) by keeping only the first parent of each node. SubSeq (Gupta et al., 2017) also reuses TAXI's hypernym candidates. TaxoRL (Partial) and TaxoRL (Full) denote partial induction and full induction, respectively. Our joint RL approach substantially outperforms the baselines in both domains. TaxoRL (Partial) achieves higher precision in both the ancestor-based and the edge-based metrics, but has relatively lower recall since it discards some terms. In addition, it achieves the best F1e in the science domain. TaxoRL (Full) has the highest recall in both domains and metrics, at the cost of lower precision. Overall, TaxoRL (Full) performs best in both domains in terms of F1a and achieves the best F1e in the environment domain.

8 Bansal et al. (2014) use an unavailable resource (Brants and Franz, 2006) which contains one trillion tokens, while our public corpus contains several billion tokens. The frequency and generality features are sparse because the vocabulary that TAXI (in the TExEval-2 competition) used for focused crawling and hypernymy detection was different.
9 alt.qcri.org/semeval2016/task13/index.php?id=evaluation

5 Ablation Analysis and Case Study

In this section, we conduct an ablation analysis and present a concrete case for better interpreting our model and experimental results. Table 3 shows the ablation study of TaxoRL (NR) on the WordNet dataset. As one may find, the different types of features are complementary to each other. Combining distributional and path-based features performs better than using either of them alone (Shwartz et al., 2016). Adding surface features helps model string-level statistics that are hard to capture by distributional or path-based features. A significant improvement is observed when more data is used, which suggests that standard corpora (such as Wikipedia) might not be enough for complicated taxonomies like WordNet.

Fig. 3 shows the result for the taxonomy about filter. We denote the term pair selected at time step t as (hypo, hyper, t). Initially, the term water filter is randomly chosen as the taxonomy root. Then, a wrong term pair (water filter, air filter, 1) is selected, possibly due to the noise and sparsity of the features, which makes the term air filter become the new root. (air filter, filter, 2) is selected next, and the current root becomes filter, which is identical to the real root.
After that, term pairs such as (fuel filter, filter, 3) and (coffee filter, filter, 4) are selected correctly, mainly because of the substring inclusion intuition. Other term pairs, such as (colander, strainer, 13) and (glass wool, filter, 16), are discovered later, largely through the information encoded in the dependency paths and embeddings. As for the undiscovered relations: (filter tip, air filter) has no dependency path in the corpus; sifter is attached to the taxonomy before its hypernym sieve; and there is no co-occurrence between bacteria bed (or drain basket) and other terms. In addition, it is hard to utilize the surface features for these terms since they "look different" from the other terms. That is also why (bacteria bed, air filter, 17) and (drain basket, air filter, 18) are attached in the end: our approach prefers to select term pairs with high confidence first.

Figure 3: The gold taxonomy in WordNet is on the left. The predicted taxonomy is on the right. The numbers indicate the order of term pair selections. Term pairs with high confidence are selected first.

Model                     Pa    Ra    F1a   F1e
Distributional Info       27.1  24.3  25.6  13.8
Path-based Info           27.8  48.5  33.7  27.4
D + P                     36.6  39.4  37.9  28.3
D + P + Surface Features  41.3  49.2  44.9  35.6
D + P + S + FG            52.9  58.6  55.6  43.8

Table 3: Ablation study on the WordNet dataset (Bansal et al., 2014). Pe and Re are omitted because they are the same as F1e for each model. We can see that our approach benefits from multiple sources of information which are complementary to each other.

6 Related Work

6.1 Hypernymy Detection

Finding high-quality hypernyms is of great importance since it serves as the first step of taxonomy induction. In previous work, there are mainly two categories of approaches for hypernymy detection, namely pattern-based and distributional methods. Pattern-based methods consider lexico-syntactic patterns between the joint occurrences of term pairs for hypernymy detection. They generally achieve high precision but suffer from low recall. Typical methods that leverage patterns for hypernym extraction include (Hearst, 1992; Snow et al., 2005; Kozareva and Hovy, 2010; Panchenko et al., 2016; Nakashole et al., 2012). Distributional methods leverage the contexts of each term separately, so the co-occurrence of term pairs is not required. Some distributional methods are developed in an unsupervised manner; measures such as symmetric similarity (Lin et al., 1998) and those based on the distributional inclusion hypothesis (Weeds et al., 2004; Chang et al., 2017) have been proposed. Supervised methods, on the other hand, usually perform better than unsupervised methods for hypernymy detection. Recent work in this direction includes (Fu et al., 2014; Rimell, 2014; Yu et al., 2015; Tuan et al., 2016; Shwartz et al., 2016).

6.2 Taxonomy Induction

There are many lines of work on taxonomy induction in the prior literature. One line of work (Snow et al., 2005; Yang and Callan, 2009; Shen et al., 2012; Jurgens and Pilehvar, 2015) aims to complete existing taxonomies by attaching new terms in an incremental way. Snow et al. (2005) enrich WordNet by maximizing the probability of an extended taxonomy given evidence of relations from text corpora. Shen et al. (2012) determine whether an entity is on the taxonomy and, based on the result, either attach it to the right category or link it to an existing one. Another line of work (Suchanek et al., 2007; Ponzetto and Strube, 2008; Flati et al., 2014) focuses on the taxonomy induction of existing encyclopedias (e.g., Wikipedia), mainly by exploiting the fact that they are already organized as semi-structured data.
To deal with the issue of incomplete coverage, some works (Liu et al., 2012; Dong et al., 2014; Panchenko et al., 2016; Kozareva and Hovy, 2010) utilize data from domain-specific resources or the Web. Panchenko et al. (2016) extract hypernyms with patterns from general-purpose corpora and from domain-specific corpora bootstrapped from the input vocabulary. Kozareva and Hovy (2010) harvest new terms from the Web by employing Hearst-like lexico-syntactic patterns and validate the learned is-a relations with a web-based concept positioning procedure. Many works (Kozareva and Hovy, 2010; Anh et al., 2014; Velardi et al., 2013; Bansal et al., 2014; Zhang et al., 2016; Panchenko et al., 2016; Gupta et al., 2017) cast the task of hypernymy organization as a graph optimization problem. Kozareva and Hovy (2010) begin with a set of root terms and leaf terms and aim to generate intermediate terms by deriving the longest path from the root to a leaf in a noisy hypernym graph. Velardi et al. (2013) induce a taxonomy from the hypernym graph via optimal branching and a weighting policy. Bansal et al. (2014) regard the induction of a taxonomy as a structured learning problem, build a factor graph to model the relations between edges and siblings, and output the MST found by the Chu-Liu/Edmonds algorithm (Chu, 1965). Zhang et al. (2016) propose a probabilistic Bayesian model which incorporates visual features (images) in addition to text features (words) to improve the performance; the optimal taxonomy is again found via the MST. Gupta et al. (2017) extract hypernym subsequences based on hypernym pairs and regard the task of taxonomy induction as an instance of the minimum-cost flow problem.

7 Conclusion and Future Work

This paper presents a novel end-to-end reinforcement learning approach to automatic taxonomy induction. Unlike previous two-phase methods that treat term pairs independently or equally, our approach learns the representations of term pairs by optimizing a holistic tree metric over the training taxonomies. The error propagation between the two phases is thus effectively reduced, and the global taxonomy structure is better captured. Experiments on two public datasets from different domains show that our approach outperforms state-of-the-art methods significantly. In the future, we will explore more strategies for term pair selection (e.g., allowing the RL agent to remove terms from the taxonomy) and for reward function design. In addition, studying how to effectively encode the induction history will be interesting.

Acknowledgments

Research was sponsored in part by U.S. Army Research Lab. under Cooperative Agreement No. W911NF-09-2-0053 (NSCTA), DARPA under Agreement No.
W911NF-17-C-0099, National Science Foundation IIS 16-18481, IIS 1704532, and IIS-17-41317, and grant 1U54GM114838 awarded by NIGMS through funds provided by the trans-NIH Big Data to Knowledge (BD2K) initiative (www.bd2k.nih.gov). The views and conclusions contained in this document are those of the author(s) and should not be interpreted as representing the official policies of the U.S. Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon. We thank Mohit Bansal, Hao Zhang, and anonymous reviewers for valuable feedback. 2471 References Tuan Luu Anh, Jung-jae Kim, and See Kiong Ng. 2014. Taxonomy construction using syntactic contextual evidence. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 810–819. Mohit Bansal, David Burkett, Gerard De Melo, and Dan Klein. 2014. Structured learning for taxonomy induction with belief propagation. In ACL (1), pages 1041–1051. Marco Baroni and Alessandro Lenci. 2011. How we blessed distributional semantic evaluation. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, pages 1–10. Association for Computational Linguistics. Georgeta Bordea, Els Lefever, and Paul Buitelaar. 2016. Semeval-2016 task 13: Taxonomy extraction evaluation (texeval-2). In SemEval-2016, pages 1081–1091. Association for Computational Linguistics. Thorsten Brants and Alex Franz. 2006. Web 1t 5-gram corpus version 1.1. Google Inc. Jose Camacho-Collados. 2017. Why we have switched from building full-fledged taxonomies to simply detecting hypernymy relations. arXiv preprint arXiv:1703.04178. Haw-Shiuan Chang, ZiYun Wang, Luke Vilnis, and Andrew McCallum. 2017. Unsupervised hypernym detection by distributional inclusion vector embedding. arXiv preprint arXiv:1710.00880. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005. Yoeng-Jin Chu. 1965. On the shortest arborescence of a directed graph. Science Sinica, 14:1396–1400. Thomas Demeester, Tim Rockt¨aschel, and Sebastian Riedel. 2016. Lifted rule injection for relation embeddings. In EMNLP. Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 601–610. ACM. Stefano Faralli, Giovanni Stilo, and Paola Velardi. 2015. Large scale homophily analysis in twitter using a twixonomy. In IJCAI, pages 2334–2340. Tiziano Flati, Daniele Vannella, Tommaso Pasini, and Roberto Navigli. 2014. Two is bigger (and better) than one: the wikipedia bitaxonomy project. In ACL (1), pages 945–955. Ruiji Fu, Jiang Guo, Bing Qin, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning semantic hierarchies via word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1199–1209. Amit Gupta, R´emi Lebret, Hamza Harkous, and Karl Aberer. 2017. Taxonomy induction using hypernym subsequences. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 1329–1338. ACM. 
Lushan Han, Abhay L Kashyap, Tim Finin, James Mayfield, and Jonathan Weese. 2013. Umbc ebiquity-core: Semantic textual similarity systems. In * SEM@ NAACL-HLT, pages 44–52. Marti A Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th conference on Computational linguisticsVolume 2, pages 539–545. Association for Computational Linguistics. Nan Jiang, Alex Kulesza, Satinder Singh, and Richard Lewis. 2015. The dependence of effective planning horizon on model accuracy. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pages 1181–1189. International Foundation for Autonomous Agents and Multiagent Systems. David Jurgens and Mohammad Taher Pilehvar. 2015. Reserating the awesometastic: An automatic extension of the wordnet taxonomy for novel terms. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1459–1465. Zornitsa Kozareva and Eduard Hovy. 2010. A semi-supervised method to learn and construct taxonomies using the web. In Proceedings of the 2010 conference on empirical methods in natural language processing, pages 1110–1118. Association for Computational Linguistics. Douglas B Lenat. 1995. Cyc: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11):33–38. Dekang Lin et al. 1998. An information-theoretic definition of similarity. In Icml, volume 98, pages 296– 304. Citeseer. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In ACL (1). Xueqing Liu, Yangqiu Song, Shixia Liu, and Haixun Wang. 2012. Automatic taxonomy construction from keywords. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1433–1441. ACM. 2472 George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39– 41. Ndapandula Nakashole, Gerhard Weikum, and Fabian Suchanek. 2012. Patty: a taxonomy of relational patterns with semantic types. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1135–1145. Association for Computational Linguistics. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, et al. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980. Alexander Panchenko, Stefano Faralli, Eugen Ruppert, Steffen Remus, Hubert Naets, Cedrick Fairon, Simone Paolo Ponzetto, and Chris Biemann. 2016. Taxi at semeval-2016 task 13: a taxonomy induction method based on lexico-syntactic patterns, substrings and focused crawling. In Proceedings of the 10th International Workshop on Semantic Evaluation, San Diego, CA, USA. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Simone Paolo Ponzetto and Michael Strube. 2008. Wikitaxonomy: A large scale knowledge resource. In ECAI, volume 178, pages 751–752. Laura Rimell. 2014. Distributional lexical entailment by topic coherence. 
In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 511–519. Mark Sammons. 2012. Recognizing textual entailment. Wei Shen, Jianyong Wang, Ping Luo, and Min Wang. 2012. A graph-based approach for ontology population with named entities. In Proceedings of the 21st ACM international conference on Information and knowledge management, pages 345–354. ACM. Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving hypernymy detection with an integrated path-based and distributional method. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2389–2398. Rion Snow, Daniel Jurafsky, and Andrew Y Ng. 2005. Learning syntactic patterns for automatic hypernym discovery. In Advances in neural information processing systems, pages 1297–1304. Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web, pages 697–706. ACM. Luu A Tuan, Yi Tay, Siu C Hui, and See K Ng. 2016. Learning term embeddings for taxonomic relation identification using dynamic weighting neural network. In Proceedings of the EMNLP conference, pages 403–413. Paola Velardi, Stefano Faralli, and Roberto Navigli. 2013. Ontolearn reloaded: A graph-based algorithm for taxonomy induction. Computational Linguistics, 39(3):665–707. Julie Weeds, David Weir, and Diana McCarthy. 2004. Characterising measures of lexical distributional similarity. In Proceedings of the 20th international conference on Computational Linguistics, page 1015. Association for Computational Linguistics. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. Ichiro Yamada, Kentaro Torisawa, Jun’ichi Kazama, Kow Kuroda, Masaki Murata, Stijn De Saeger, Francis Bond, and Asuka Sumida. 2009. Hypernym discovery based on distributional similarity and hierarchical structures. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2-Volume 2, pages 929–937. Association for Computational Linguistics. Hui Yang and Jamie Callan. 2009. A metric-based framework for automatic taxonomy induction. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 271–279. Association for Computational Linguistics. Shuo Yang, Lei Zou, Zhongyuan Wang, Jun Yan, and Ji-Rong Wen. 2017. Efficiently answering technical questions - a knowledge graph approach. In AAAI. Zheng Yu, Haixun Wang, Xuemin Lin, and Min Wang. 2015. Learning term embeddings for hypernymy identification. In IJCAI, pages 1390–1397. Hao Zhang, Zhiting Hu, Yuntian Deng, Mrinmaya Sachan, Zhicheng Yan, and Eric Xing. 2016. Learning concept taxonomies from multi-modal data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1791–1801. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D Manning. 2017. Positionaware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35–45.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 241–251 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 241 Retrieval of the Best Counterargument without Prior Topic Knowledge Henning Wachsmuth Paderborn University Computational Social Science Group [email protected] Shahbaz Syed and Benno Stein Bauhaus-Universität Weimar Faculty of Media, Webis Group <first>.<last>@uni-weimar.de Abstract Given any argument on any controversial topic, how to counter it? This question implies the challenging retrieval task of finding the best counterargument. Since prior knowledge of a topic cannot be expected in general, we hypothesize the best counterargument to invoke the same aspects as the argument while having the opposite stance. To operationalize our hypothesis, we simultaneously model the similarity and dissimilarity of pairs of arguments, based on the words and embeddings of the arguments’ premises and conclusions. A salient property of our model is its independence from the topic at hand, i.e., it applies to arbitrary arguments. We evaluate different model variations on millions of argument pairs derived from the web portal idebate.org. Systematic ranking experiments suggest that our hypothesis is true for many arguments: For 7.6 candidates with opposing stance on average, we rank the best counterargument highest with 60% accuracy. Even among all 2801 test set pairs as candidates, we still find the best one about every third time. 1 Introduction Many controversial topics in real life divide us into opposing camps, such as whether to ban guns, who should become president, or what phone to buy. When being confronted with arguments against our stance, but also when forming own arguments, we need to think about how they could best be countered. Argumentation theory tells us that — aside from ad-hominem attacks — a counterargument denies either an argument’s premises, its conclusion, or the reasoning between them (Walton, 2009). Take the following argument in favor of the right to bear arms from the web portal idebate.org: Argument “Gun ownership is an integral aspect of the right to self defence. (conclusion) Law-abiding citizens deserve the right to protect their families in their own homes, especially if the police are judged incapable of dealing with the threat of attack. [...]” (premise) While the conclusion seems well-reasoned, the web portal directly provides a counter to the argument: Counterargument “Burglary should not be punished by vigilante killings of the offender. No amount of property is worth a human life. Perversely, the danger of attack by homeowners may make it more likely that criminals will carry their own weapons. If a right to self-defence is granted in this way, many accidental deaths are bound to result. [...]” As in this example, we observe that a counterargument often takes on the aspects of the topic invoked by the argument, while adding a new perspective to its conclusion and/or premises, conveying the opposite stance. Research has tackled the stance of argument units (Bar-Haim et al., 2017) as well as the attack relations between arguments (Cabrio and Villata, 2012). However, existing approaches learn the interplay of aspects and topics on training data or infer it from external knowledge bases (details in Section 2). This does not work for topics unseen before. Moreover, to our knowledge, no work so far aims at actual counterarguments. 
This paper studies the task of automatically finding the best counterargument to any argument. In the general case, we cannot expect prior knowledge of an argument's topic. Following the observation above, we thus just hypothesize the best counterargument to invoke the same aspects as the argument while having the opposite stance. Figure 1 sketches how we operationalize the hypothesis. In particular, we simultaneously model the topic similarity and stance dissimilarity of a candidate counterargument to the argument. Both are inferred, in different ways, from the similarities to the argument's conclusion and premises, since it is unclear in advance whether either of these units or the reasoning between them is countered. Thereby, we find the most dissimilar among the most similar arguments.

Figure 1: Modeling the simultaneous similarity and dissimilarity of a counterargument to an argument.

To study counterarguments, we provide a new corpus with 6753 argument-counterargument pairs, taken from 1069 debates on idebate.org, as well as millions of false pairs derived from them. Given the corpus, we define eight retrieval tasks that differ in the types of candidate counterarguments. Based on the words and embeddings of the arguments, we develop similarity functions that realize the outlined model as a ranking approach. In systematic experiments, we evaluate the different building blocks of our model on all defined tasks. The results suggest that our hypothesis is true for many arguments. The best model configuration improves common word and embedding similarity measures by eight to ten points of accuracy in all tasks. Inter alia, we rank 60.3% of the best counterarguments highest when given all arguments with opposite stance (7.6 on average). Even with all 2801 test arguments as candidates, we still achieve 32.4% (and a mean rank of 15), fitting the intuition that off-topic arguments are easier to discard. Our analysis reveals notable gaps across topical themes, though.

Contributions We believe that our findings will be important for applications such as automatic debating technologies (Rinott et al., 2015) and argument search (Wachsmuth et al., 2017b). To summarize, our main contributions are:

• A large corpus for studying multiple counterargument retrieval tasks (Sections 3 and 4).
• A topic-independent approach to find the best counterargument to any argument (Section 5).
• Evidence that many counterarguments can be found without topic knowledge (Section 6).

The corpus as well as the Java source code for reproducing the experiments are available at http://www.arguana.com.

2 Related Work

Counterarguments rebut arguments. In the theoretical model of Toulmin (1958), a rebuttal in fact does not attack the argument; it merely shows exceptions to the argument's reasoning. Govier (2010) suggests to rather speak of counterconsiderations in such cases. Unlike Damer (2009), who investigates how to attack several kinds of fallacies, we are interested in how to identify attacks. We focus on those that target arguments, excluding personal (ad-hominem) attacks (Habernal et al., 2018). Following Walton (2006), an argument can be attacked in two ways: one is to question its validity, which does not mean that its conclusion must be wrong.
The other is to rebut it with a counterargument that entails the opposite conclusion, often by revisiting aspects or introducing new ones. This is the type of attack we study. As Walton (2009) details, rebuttals may target an argument’s premises or conclusion, or they may undercut the reasoning between them. Recently, the computational analysis of natural language argumentation is receiving much attention. Most research focuses on argument mining, ranging from segmenting a text into argument units (Ajjour et al., 2017), over identifying unit types (Rinott et al., 2015) and roles (Niculae et al., 2017), to classifying argument schemes (Feng and Hirst, 2011) and relations (Lawrence and Reed, 2017). Some works detect counterconsiderations in a text (Peldszus and Stede, 2015) or their absence (Stab and Gurevych, 2016). Such considerations make arguments more balanced (see above). In contrast, we seek for arguments that defeat others. Many approaches mine attack relations between arguments. Some use deep learning to find attacks in discussions (Cocarascu and Toni, 2017). Closer to this paper, others determine them in a given set of arguments, using textual entailment (Cabrio and Villata, 2012) or a combination of markov logic and stance classification (Hou and Jochim, 2017). In principle, any attacking argument denotes a counterargument. Unlike previous work, however, we aim for the best counterargument to an argument. Classifying the stance of a text towards a topic (pro or con) generally defines an alternative way of addressing counterarguments. Sobhani et al. (2015) specifically classify health-related arguments using 243 supervised learning, while we do not expect to have prior topic knowledge. Bar-Haim et al. (2017) approach the stance of claims towards open-domain topics. Their approach combines aspect-based sentiment with external relations between aspects and topics from Wikipedia. As such, it is in fact limited to the topics covered there. Our model applies to arbitrary arguments and counterarguments. We need to identify only whether arguments oppose each other, not their actual stance. Similarly, Menini et al. (2017) classify only the disagreement of political texts. Part of their approach is to detect topical key aspects in an unsupervised manner, which seems useful for our purposes. Analogously, Beigman Klebanov et al. (2010) study differences in vocabulary choice for the related task of perspective classification, and Tan et al. (2016) find that the best way to persuade opinion holders in the Change my view forum on reddit.com is to use dissimilar words. As we report later, however, our experiments did not show such results for the argument-counterargument pairs we deal with. The goal of persuasion reveals the association of counterarguments to argumentation quality. Many quality criteria have been assessed for arguments, surveyed in (Wachsmuth et al., 2017a). In the study of Habernal and Gurevych (2016), one reason annotators gave for why an argument was more convincing than another was that it tackled flaws in the opposing view. Zhang et al. (2016) even found that debate winners tend to counter opposing arguments rather than focusing on their own arguments. Argument quality assessment is particularly important in retrieval scenarios. Existing approaches aim to retrieve documents that contain many claims (Roitman et al., 2016) or that provide most support for their claims (Braunstain et al., 2016). In Wachsmuth et al. 
(2017c), we adapt PageRank to argumentative relations, in order to assess argument relevance objectively. While our search engine args for arguments on the web still uses content-based relevance measures in its first version (Wachsmuth et al., 2017b), its long-term idea is to rank the best arguments highest.1 The model present in this work finds the best counterarguments, but it is meant to be integrated into args at some point. Like here, args uses idebate.org arguments. Others take data from that portal for studying support (Boltuži´c and Šnajder, 2014) or for the distant supervision of argument mining (Al-Khatib et al., 1Argument search engine args: http://args.me 2016). Our corpus is not only larger, though, but it is the first to utilize a unique feature of idebate.org: the explicit specification of counterarguments. 3 The ArguAna Counterargs Corpus This section introduces our ArguAna Counterargs corpus with argument-counterargument pairs, created automatically from the structure of idebate.org. The corpus is freely available at http://www. arguana.com/data. We also provide the code to replicate the construction process. 3.1 The Web Portal idebate.org On the portal idebate.org, diverse controversial topics of usually rather general interest are discussed in debates, subsumed under 15 themes, such as “economy” and “health”. Each debate has a title capturing a thesis on a topic, such as “This House would limit the right to bear arms”, followed by an introductory text, a set of mostly elaborated and well-written points that have a pro or a con stance towards the thesis, and a bibliography. A specific feature of idebate.org is that virtually every point comes along with a counter that immediately attacks the point and its stance. Both points and counters can be seen as arguments. While a point consists of a one-sentence claim (the argument’s conclusion) and a few sentences justifying the claim (the premise(s)), the counter’s (opposite) conclusion remains implicit. All arguments on the portal are established by a community with the goal of showing both sides of a topic in a balanced manner. We therefore assume each counter to be the best counterargument available for the respective point, and we use all resulting true argument pairs as the basis of our corpus. Figure 2 illustrates the italicized concepts, showing the structure of idebate.org. An example argument pair has been discussed in Section 1. 3.2 Corpus Construction We crawled all debates from idebate.org that follow the portal’s theme-guided folder structure (last access: January 30, 2018). From each debate, we extracted the thesis, the introductory text, all points and counters, the bibliography, and some metadata. Each was stored separately in one plain text file, and we also created a file with the entire debate in its original order. Only points and counters are used in our experiments in Section 6. The underlying experiment settings are described in Section 4. 244 point true counter other argument pairs in same debate other points with same stance counters to same stance points with opposite stance counters to opposite stance points from other debates counters from other debates other debates from same theme points from other themes counters from other themes debates from other themes argument conclusion argument premise(s) argument premise(s) (conclusion implicit) (a) (b) (c) (e) (d) (f) (h) (g) (i) true argument pair ... ... ... ... Figure 2: Structure of idebate.org for one true argument pair in our corpus. 
Colors denote matching stance; we assume arguments from other debates to have no stance towards a point. Points have a conclusion and premises, counters only premises. (a)–(i) are used in Section 4 to specify the candidates in different tasks. Theme Debates Points Counters Culture 46 278 278 Digital freedoms 48 341 341 Economy 95 590 588 Education 58 382 381 Environment 36 215 215 Free speech debate 43 274 273 Health 57 334 333 International 196 1315 1307 Law 116 732 730 Philosophy 50 320 320 Politics 155 982 978 Religion 30 179 179 Science 41 271 269 Society 75 436 431 Sport 23 130 130 Training set 644 4083 4065 Validation set 211 1290 1287 Test set 214 1406 1401 counterargs-18 1069 6779 6753 Table 1: Distribution of debates, points, and counters over the themes in the counterargs-18 corpus. The bottom rows show the size of the datasets. 3.3 Corpus Statistics Table 1 lists the number of debates crawled for each theme, along with the numbers of points and counters in the debates. The 26 found points without a counter are included in the corpus, but we do not use them in our experiments. In total, the ArguAna Counterargs corpus consists of 1069 debates with 6753 points that have a counter. The mean length of points is 196.3 words, whereas counters span only 129.6 words, largely due to the missing explicit conclusion. To avoid exploiting this corpus bias, no approach in our experiments captures length differences. 3.4 Datasets We split the corpus into a training set, consisting of the first 60% of all debates of each theme (ordered by alphabet), as well as a validation set and a test set, each covering 20%. The dataset sizes are found at the bottom of Table 1. By putting all arguments from a debate into a single dataset, no specific topic knowledge can be gained from the training or validation set. We include all themes in all datasets, because we expect the set of themes to be stable. We checked for duplicates. Among the 13 532 point and counters, 3407 appear twice, 723 three times, 36 four times, and 1 five times. We ensure that no true pair is used as a false pair in our tasks. 4 Counterargument Retrieval Tasks Based on the new corpus, we define the following eight counterargument retrieval tasks of different complexity. All tasks consider all true argumentcounterargument pairs, while differing in terms of what arguments (points and/or counters) from which context (same debate, same theme, or entire portal) are candidates for a given argument. Same Debate: Opposing Counters All counters in the same debate with stance opposite to the given argument are candidates (Figure 2: a, b). The task is to find the best counterargument among all counters to the argument’s stance. Same Debate: Counters All counters in the same debate irrespective of their stance are candidates (Figure 2: a–c). The task is to find the best counterargument among all on-topic arguments phrased as counters. 
245 Training Set Validation Set Test Set Context Candidate Counterarg’s True False Ratio True False Ratio True False Ratio Same debate Opposing counters 4 065 11 672 1:2.9 1 287 3 590 1:2.8 1 401 4 052 1:2.9 Counters 4 065 27 024 1:6.6 1 287 8 348 1:6.5 1 401 9 312 1:6.6 Opposing arguments 4 065 27 026 1:6.6 1 287 8 350 1:6.5 1 401 9 312 1:6.6 Arguments 4 065 54 070 1:13.3 1 287 16 700 1:13.0 1 401 18 630 1:13.3 Same theme Counters 4 065 1 616 000 1:398 1 287 176 266 1:137 1 401 189 870 1:136 Arguments 4 065 3 232 038 1:795 1 287 352 536 1:274 1 401 379 746 1:271 Entire portal Counters 4 065 16 517 994 1:4063 1 287 1 654 878 1:1286 1 401 1 961 182 1:1400 Arguments 4 065 33 038 154 1:8127 1 287 3 309 760 1:2572 1 401 3 922 582 1:2800 Table 2: Number of true and false argument-counterargument pairs as well as their ratio for each evaluated context and type of candidate counterarguments in the three datasets. Each line defines one retrieval task. Same Debate: Opposing Arguments All arguments in the same debate with opposite stance are candidates (Figure 2: a, b, d). The task is to find the best among all on-topic counterarguments. Same Debate: Arguments All arguments in the same debate irrespective of their stance are candidates (Figure 2: a–e). The task is to find the best counterargument among all on-topic arguments. Same Theme: Counters All counters from the same theme are candidates (Figure 2: a–c, f). The task is to find the best counterargument among all on-theme arguments phrased as counters. Same Theme: Arguments All arguments from the same theme are candidates (Figure 2: a–g). The task is to find the best counterargument among all on-theme arguments. Entire Portal: Counters All counters are candidates (Figure 2: a–c, f, h). The task is to find the best counterargument among all arguments phrased as counters. Entire Portal: Arguments All arguments are candidates (Figure 2: a–i). The task is to find the best counterargument among all arguments. Table 2 lists the numbers of true and false pairs for each task. Experiment files containing the file paths of all candidate pairs are provided in our corpus. 5 Retrieval of the Best Counterargument without Prior Topic Knowledge The eight defined tasks indicate the subproblems of retrieving the best counterargument to a given argument: Finding all arguments that address the same topic, filtering those arguments with an opposite stance towards the topic, and identifying the best counter among these arguments. This section presents our approach to solving these problems computationally without prior knowledge of the argument’s topic, based on the simultaneous similarity and dissimilarity of arguments.2 5.1 Topic as Word and Embedding Similarity We do not reinvent the wheel to assess topical relevance, but rather follow common practice. Concretely, we hypothesize a candidate counterargument to be on-topic if it is similar to the argument in terms of its words and its embedding. We capture these two types of similarity as follows. Word Argument Similarity To best represent the words in arguments, we did initial counterargument retrieval tests with token, stem, and lemma n-grams, n ∈{1, 2, 3}. While the differences were not large, stems worked best and stem 1-grams sufficed. Both might be a consequence of the limited data size. In our experiments in Section 6, we determine the stem 1-grams to be considered on the training set of each task. 
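As an illustration of this representation, the following sketch derives a stem 1-gram lexicon from training texts and maps an argument to a count vector over it. The crude suffix-stripping stemmer, the 1% document-frequency threshold, and all names are assumptions for illustration, not the authors' Java implementation.

```python
from collections import Counter

def crude_stem(token):
    """Toy stand-in for a real stemmer (e.g., Porter), for illustration only."""
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def stems(text):
    return [crude_stem(t) for t in text.lower().split()]

def build_lexicon(training_texts, min_fraction=0.01):
    """Keep stems that occur in at least min_fraction of the training texts."""
    doc_freq = Counter()
    for text in training_texts:
        doc_freq.update(set(stems(text)))
    threshold = min_fraction * len(training_texts)
    return sorted(s for s, df in doc_freq.items() if df >= threshold)

def stem_vector(text, lexicon):
    counts = Counter(stems(text))
    return [counts[s] for s in lexicon]
```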
For word similarity computation, we tested four inverse vector-based distance measures: Cosine, Euclidean, Manhattan, and, Jaccard similarity (Cha, 2007). On the validation sets, the Manhattan similarity performed best, closely followed by the Jaccard similarity. Both clearly outperformed Euclidean and especially Cosine similarity. This suggests that the presence and absence of words are equally important and that outliers should not be punished more. For brevity, we report only results for the Manhattan similarity below. 2As indicated above, counters on idebate.org (including all true counterarguments) may also differ linguistically from points (all of which are false). However, we assume this to be a specific corpus bias and hence do not explicitly account for it. Section 6 will show whether having both points and counters as candidates makes counterargument retrieval harder. 246 Embedding Argument Similarity We evaluated five pretrained word embedding models for representing arguments in first tests: GoogleNewsvectors (Mikolov et al., 2013), ConceptNet Numberbatch (Speer et al., 2017), wiki-news-300d-1M, wiki-news-300d-1M-subword, and crawl-300d-2M (Mikolov et al., 2017). The former two were competitive, the others performed notably worse. Since ConceptNet Numberbatch is smaller and supposed to have less bias, we used it in all experiments. To capture argument-level embedding similarity, we compared the four inverse vector-based distance measures above on average word embeddings against the inverse Word Mover’s distance, which quantifies the optimum alignment of two word embedding sequences (Kusner et al., 2015). This Word Mover’s similarity consistently beat the others, so we decided to restrict our view to it. 5.2 Stance as Topic Dissimilarity Stance classification without prior topic knowledge is challenging: While we can compare the topics of any two arguments, it is impossible in general to infer the stance of the specific aspects invoked by one argument to those of the other. As sketched in Section 2, related work employs external knowledge to infer stance relations of aspects and topics (Bar-Haim et al., 2017) or trains classifying attack relations (Cabrio and Villata, 2012). Unfortunately, both does not apply to topics unseen before. For argument pairs invoking similar aspects, a way to go is in principle to assess sentiment polarity; intuitively, two arguments with the same topic but opposite sentiment have opposing stance. However, we tested topic-agnostic sentiment lexicons (Baccianella et al., 2010) and state-of-the-art sentiment classifiers, trained on large-scale multipledomain review data (Prettenhofer and Stein, 2010; Joulin et al., 2017). The correlation between sentiment and stance differences of training arguments was close to zero. A possible explanation is the limited explicitness of sentiment on idebate.org, making the lexicons and classifiers fail there. Other related work suggests that the vocabulary of opposing sides differs (Beigman Klebanov et al., 2010). We thus checked on the training set whether counterarguments are similar in their embeddings but dissimilar in their words. The measures above did not support this hypothesis, i.e., both embedding and word similarity increased the likelihood of a candidate counterargument being the best. Still, there must be a difference between an argument and its counterargument by concept. 
As a solution, we capture dissimilarity with the same similarity functions as above, but we change the granularity level on which we measure similarity. 5.3 Simultaneous Similarity and Dissimilarity The arising question is how to assess similarity and dissimilarity at the same time. We hypothesize the best counterargument to be very similar in overall terms, but very dissimilar in certain respects. To capture this intuition, we rely on expert knowledge from argumentation theory (see Section 2). Word and Embedding Unit Similarities In particular, we follow the notion that a counterargument attacks either the conclusion of an argument, the argument’s premises, or both. As a consequence, we compute two word and two embedding similarities as specified above for each candidate counterargument; once to the argument’s conclusion (called wc and ec for words and embeddings respectively) and once to the argument’s premises (wp and ep). Now, to capture similarity and dissimilarity simultaneously, we need multiple ways to aggregate conclusion and premise similarities. As we do not generally know which argument unit is attacked, we resort to four standard aggregation functions that generalize over the unit similarities. For words, these are the following word unit similarities: w↓:= min{wc, wp} w× := wc · wp w↑:= max{wc, wp} w+ := wc + wp Accordingly, we define four respective embedding unit similarities, e↓, e↑, e×, and e+. As mentioned above, both word similarity and embedding similarity positively affect the likelihood that a candidate is the best counterargument. Therefore, we combine each pair of similarities as w↓+ e↓, w↑+ e↑, w× + e×, and w+ + e+, but we also evaluate their impact in isolation below.3 Counterargument Scoring Model Based on the unit similarities, we finally define a scoring model for a given pair of argument and candidate counterargument. The model includes two unit similarity values, sim and dissim, but dissim is subtracted from sim, such that it actually favors dissimilarity. Thereby, we realize the topic and 3In principle, other unit similarities could be used for words than for embeddings. However, we decided to couple them to maintain interpretability of our experiment results. 247 stance similarity sketched in Figure 1. We weight the two values with a damping factor α: α · sim −(1 −α) · dissim where sim, dissim ∈{w↓+e↓, w↑+e↑, w×+e×, w+ + e+} and sim ̸= dissim. The general idea of the scoring model is that sim rewards one type of similarity, whereas subtracting dissim punishes another type. We seek to thereby find the most dissimilar candidate among the similar candidates. The model is meant to give a higher score to a pair the more likely the candidate is the best counterargument to the argument, so the scores can be used for ranking. What combination of sim and dissim turns out best, is hard to foresee and may depend on the retrieval task at hand. We hence evaluate different combinations empirically below. The same holds for the damping factor α ∈[0, 1]. If our hypothesis on similarity and dissimilarity is true, then the best α should be close to but lower than 1. Conversely, if α = 1.0 achieves the best performance, then only similarity would be captured by our model. 6 Evaluation We now report on systematic ranking experiments with our counterargument scoring model. The goal is to evaluate on all eight retrieval tasks from Section 4 to what extent our hypothesis holds that the best counterargument to an argument invokes the same aspects while having opposing stance. 
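Before turning to the evaluation, the following Python sketch summarizes how the scoring model of Section 5.3 can be used to rank candidate counterarguments (the actual experiments use the Java code referenced below). The default aggregation keys, the argument order, and all names are illustrative assumptions; which (sim, dissim) combination and which α work best is exactly what the experiments determine.

```python
def unit_aggregates(sim_conclusion, sim_premises):
    """Aggregate the two unit similarities of one similarity type (word or embedding)."""
    return {
        "min": min(sim_conclusion, sim_premises),
        "max": max(sim_conclusion, sim_premises),
        "prod": sim_conclusion * sim_premises,
        "sum": sim_conclusion + sim_premises,
    }

def counterargument_score(w_c, w_p, e_c, e_p,
                          sim_key="sum", dissim_key="max", alpha=0.9):
    """alpha * sim - (1 - alpha) * dissim over combined word + embedding aggregates."""
    w, e = unit_aggregates(w_c, w_p), unit_aggregates(e_c, e_p)
    sim = w[sim_key] + e[sim_key]
    dissim = w[dissim_key] + e[dissim_key]
    return alpha * sim - (1 - alpha) * dissim

def rank_candidates(candidates):
    """candidates: list of (candidate_id, w_c, w_p, e_c, e_p); best-scoring first."""
    return sorted(candidates,
                  key=lambda c: counterargument_score(*c[1:]),
                  reverse=True)
```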
The Java source code of the experiments is available at: http://www.arguana.com/software 6.1 Experimental Set-up We evaluated the following set-up of tasks, data, measures, baselines, and approaches. Tasks We tackled each of the eight retrieval tasks as a ranking problem, i.e., we aimed to rank the best counterargument to each argument highest, given all candidates. Accordingly, only one candidate counterargument per argument is correct.4 4One alternative would be to see each argument pair as one instance of a classification problem. However, our preliminary tests confirmed the intuition that identifying the best counterargument is hard without knowing the other candidates, i.e., there is no general (dis)similarity threshold that makes an argument the best counterargument. Rather, how similar or dissimilar a counterargument needs to be depends on the topic and on the other candidates. Another alternative would be to treat all candidates for an argument as one instance, but this makes the experimental set-up very intricated. Data Table 2 has shown the true and false argument pairs in all datasets. We undersampled each training set, resulting in 4065 true and 4065 false training pairs in all tasks.5 Our model does not do any learning-to-rank on these pairs, but we derived lexicons for the word similarities from them (all stems included in at least 1% of all pairs). As detailed below, we then determined the best model configurations on the validation sets and evaluated these configurations on the test sets. Measures As only one candidate is true per argument, we report the accuracy@1 of each approach, i.e., the percentage of arguments for which the true counterargument was ranked highest. Besides, we compute the rounded mean rank of the best counterargument in all rankings, reflecting the average performance of an approach. Exemplarily, we also mention the mean reciprocal rank (MRR), which is more sensitive to outliers. Baselines A trivial way to address the given tasks is to pick any candidate by chance for each argument. This random baseline allows quantifying the impact of other approaches. As counterargument retrieval has not been tackled yet, we do not use any existing baseline.6 Instead, we evaluate the effects of the different building blocks of our scoring model. On one hand, we check the need for distinguishing conclusions and premises by comparing to the word argument similarity (w) and the embedding argument similarity (e). On the other hand, we consider all eight word and embedding unit similarities (w↓, w↑, ..., e+) as baselines, in order to see whether and how to best aggregate them. Approaches After initial tests, we reduced the set of tested values of the damping factor α in our scoring model to {0.8, 0.9, 1.0}. On the validation sets of the first six tasks,7 we then analyzed all possible combinations of w↓+e↓, w↑+e↑, w×+e×, w+ + e+, as well as w + e for sim and dissim. Three configurations of the model turned out best: we := 1.0 · (w× + e×) we↓ := 0.9 · (w× + e×) −0.1 · (w↓+ e↓) we↑ := 0.9 · (w+ + e+) −0.1 · (w↑+ e↑) 5Undersampling was done stratified, such that the same number of false counterarguments was taken from each type, b–i, in Figure 2 that is relevant in the respective task. 6Notice, though, that we tested a number of approaches to identify opposing stance, as discussed in Section 5. 7We did not expect “game-changing” validation set results for the last two tasks and, so, left them out for time reasons. 248 Same Debate Same Theme Entire Portal Opp. 
Ctr.’s Counters Opposing Arguments Counters Arguments Counters Arguments # Baseline / Approach @1 R @1 R @1 R @1 R @1 R @1 R @1 R @1 R w Word argument similarity 65.9 2 48.5 2 42.5 3 30.0 4 44.1 5 28.3 10 39.7 22 21.8 49 e Embedding argument similarity 62.9 2 44.6 2 51.6 2 36.8 4 38.8 7 32.9 10 34.2 39 23.9 55 w↓ Word unit similarity minimum 53.8 2 38.4 3 45.9 3 33.7 5 28.5 22 24.8 42 21.4 206 18.5 403 w↑ Word unit similarity maximum 66.1 2 48.0 2 44.0 3 30.2 4 44.0 5 28.3 9 38.0 21 21.2 44 w× Word unit similarity product 64.9 2 49.5 3 56.1 2 40.7 4 44.3 18 36.8 35 37.8 177 26.8 354 w+ Word unit similarity sum 71.5 1 53.7 2 54.1 2 39.1 4 49.0 4 36.8 7 44.7 17 28.6 33 e↓ Embedding unit sim. minimum 61.6 2 44.9 3 43.4 3 32.1 4 37.8 7 27.4 13 32.5 42 20.7 74 e↑ Embedding unit sim. maximum 63.4 2 44.5 2 47.5 2 33.2 4 39.8 5 29.8 8 32.1 20 20.1 33 e× Embedding unit sim. product 69.7 1 52.0 2 55.4 2 41.0 3 44.3 4 37.1 6 43.2 14 27.8 21 e+ Embedding unit sim. sum 69.7 1 51.8 2 55.4 2 40.5 3 47.5 4 36.8 6 43.0 13 27.6 21 we 1.0·(w×+e×) 72.1 1 55.2 2 ‡60.3 2 †44.9 3 50.4 4 40.9 7 46.0 19 32.2 34 we↓0.9·(w×+e×) −0.1·(w↓+e↓) 72.0 1 55.5 2 59.5 2 44.1 3 51.3 4 †41.0 7 46.3 19 31.7 35 we↑0.9·(w++e+) −0.1·(w↑+e↑)†74.5 1 †57.7 2 59.6 2 44.1 3 ‡54.2 3 40.8 5 ‡50.0 9 ‡32.4 15 r Random baseline 25.7 2 13.1 4 13.1 4 7.0 7 0.7 69 0.4 137 0.1 701 0.0 1401 Table 3: Test set accuracy of ranking the best counterargument highest (@1) and mean rank (R) for 14 baselines and approaches (w, e, w↓, . . . , r) in all eight tasks (given by Context and Candidates). Each best accuracy value (bold) significantly outperforms the best baseline with 99% (†) or 99.9% (‡) confidence. we was best on the validation set of Same Debate: Opposing Arguments (accuracy@1: 62.1) and we↓ on the one of Same Debate: Arguments (49.0). All other tasks were dominated by we↑. Especially, we↑was better than 1.0 · (w+ + e+) in all of them with clear leads of up to 2.2 points. This underlines the importance of modeling dissimilarity for counterargument retrieval. We took we, we↓, and we↑ as our approaches for the test set.8 6.2 Results Table 3 shows the accuracy@1 and the mean rank of all baselines and approaches on each of the eight given retrieval tasks. Overall, the counter-only tasks seem slightly harder within the same debate (comparing Counters to Opposing), i.e., stance is harder to assess than topical relevance. Conversely, the other Counters tasks seem easier, suggesting that topically close but false candidate counterarguments with the same stance as the argument (which are not included in any Counters task) are classified wrongly most often. Besides, these results support that potential differences in the phrasing of counters are not exploited, as desired. The accuracy of the standard similarity measures, w and e, goes from 65.9 and 62.9 respectively in the smallest task down to 21.8 and 23.9 in the largest. 8All validation set results are found in the supplementary material, which we provide at http://www.arguana. com/publications w is stronger when only counters are candidates, e otherwise. This implies that words capture differences between the best and other counters, whereas embeddings rather help discard false candidates with the same stance as the argument. From the eight unit similarity baselines, w+ performs best on five tasks (e× twice, w× once). w+ finds 71.5% true counterarguments among all opposing counters in a debate, and 28.6% among all test arguments from the entire portal. 
In that task, however, the mean ranks of w+ (33) and particularly of w× (354) are much worse than for e× (21), meaning that words are insufficient to robustly find counterarguments. we, we↓, and we↑outperform all baselines in all tasks, improving the accuracy by 8.1 (Same Theme: Arguments) to 10.3 points (Entire Portal: Counters) over w and e, and at least 3.0 over the best baseline in each task. Among all opposing arguments from the same debate (true-to-false ratio 1:6.6), we finds 60.3% of the best counterarguments, 44.9% when all arguments are given (1:13.3). The winner in our evaluation is we↑, though, being best in five of the eight tasks. It found the true among all opposing counters in 74.5% of all cases, and about every third time (32.4) among all 2801 test set arguments; a setting where the random baseline has virtually no chance. Given all arguments from the same theme, we↑puts the best counterargument at a mean rank of 5 (MRR 0.58), and for the entire portal still at 15 (MRR 0.5). 249 Entire Portal: Arguments Accuracy@1 Mean Rank Theme Arguments w+ we↑ w+ we↑ Culture 69 31.9 36.2 12 9 Digital freedoms 61 37.7 44.3 58 20 Economy 125 27.2 25.6 21 10 Education 81 38.3 39.5 36 17 Environment 46 17.4 21.7 22 7 Free speech debate 58 10.3 12.1 130 55 Health 77 28.6 36.4 26 14 International 271 25.8 31.4 31 19 Law 134 38.8 38.1 16 8 Philosophy 85 34.1 38.8 29 14 Politics 202 28.7 33.2 28 11 Religion 45 24.4 33.3 58 8 Science 57 19.3 28.1 6 5 Society 60 16.7 20.0 45 22 Sport 30 43.3 46.7 35 9 All themes 1401 28.6 32.4 33 15 Table 4: Accuracy@1 and mean rank of the best baseline (w+) and approach (we↑) on each theme when all 2801 test set arguments are candidates. Although our scoring model thus does not solve the retrieval tasks, we conclude that it serves as a robust approach to rank the best counterargument high. To test significance, we separately computed the accuracy@1 for the arguments from each theme. The differences between the 15 values of the best approach on each task and those of the best baseline (w+, w×, or e×) were normally distributed. Since the baselines and approaches are dependent, we used a one-tailed dependent t-test with paired samples. As Table 3 specifies, our approaches are consistently better, partly with at least 99% confidence, partly even with 99.9% confidence. In Table 4, we exemplarily detail the comparison of the best approach (we↑) to the best baseline (w+) on Entire Portal: Arguments. The mean ranks across themes underline the robustness of we↑, being in the top 10 for 7 and in the top 20 even for 13 themes. Still, the accuracy@1 of both w+ and we↑ varies notably, in case of we↑from 12.1 for free speech debate to 46.7 for sport. For free speech debates (e.g., “This House would criminalise blasphemy”), we observed that their arguments tend to be overproportionally long, which might lead to deviating similarities. In case of sports, the topical specificity (e.g., “This House would ban boxing”) reduces the probability of mistakenly choosing candidates from other themes. Free speech debate turned out the hardest theme in seven tasks, health in the remaining one. Besides sports, in some tasks the best results were obtained for religion and science, both of which share the characteristic of dealing with very specific topics.9 7 Conclusion This paper has asked how to find the best counterargument to any argument without prior knowledge of the argument’s topic. 
We did not aim to engineer the best approach to this retrieval task, but to study whether we can model the simultaneous similarity and dissimilarity of a counterargument to an argument computationally. For the restricted domain of debate portal arguments, our main result is quite intriguing: The best model (we↑) rewards a high overall similarity to the argument’s conclusion and premises while punishing a too high similarity to either of them. Despite its simplicity, we↑found the best counterargument among 2801 candidates in almost a third of all cases, and ranked it into the top 15 on average. This speaks for our hypothesis that the best counterargument often just addresses the same topical aspects with opposite stance. Of course, our hypothesis is simplifying, i.e., there are counterarguments that will not be found based on aspect and stance similarity only. Apart from some hyperparameters, however, our model is unsupervised and it does not make any assumption about an argument’s topic. Hence, it applies to any argument, given a pool of candidate counterarguments. While the model can be considered open-topic, a next step will be to study counterargument retrieval open-source. We are confident that the modeled intuition generalizes beyond idebate.org. To obtain further insights into the nature of counterarguments, deeper linguistic analysis along with supervised learning may be needed, though. We provide a corpus to train respective approaches, but leave the according research to future work. The intended practical application of our model is to retrieve counterarguments in automatic debating technologies (Rinott et al., 2015) and argument search (Wachsmuth et al., 2017b). While debate portal arguments are often suitable in this regard, in general not always a real counterargument exists to an argument. Still, returning one that addresses similar aspects with opposite stance makes sense then. An alternative would be to generate counterarguments, but we believe that humans are better than machines in writing them — currently. 9The individual results of the best approach and baseline on each theme are also found in the supplementary material. 250 References Yamen Ajjour, Wei-Fan Chen, Johannes Kiesel, Henning Wachsmuth, and Benno Stein. 2017. Unit segmentation of argumentative texts. In Proceedings of the 4th Workshop on Argument Mining, pages 118– 128. Association for Computational Linguistics. Khalid Al-Khatib, Henning Wachsmuth, Matthias Hagen, Jonas Köhler, and Benno Stein. 2016. Crossdomain mining of argumentative text through distant supervision. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1395–1404. Association for Computational Linguistics. Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC’10). European Languages Resources Association (ELRA). Roy Bar-Haim, Indrajit Bhattacharya, Francesco Dinuzzo, Amrita Saha, and Noam Slonim. 2017. Stance classification of context-dependent claims. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 251–261. Association for Computational Linguistics. Beata Beigman Klebanov, Eyal Beigman, and Daniel Diermeier. 2010. 
Vocabulary choice as an indicator of perspective. In Proceedings of the ACL 2010 Conference Short Papers, pages 253–257. Association for Computational Linguistics. Filip Boltuži´c and Jan Šnajder. 2014. Back up your stance: Recognizing arguments in online discussions. In Proceedings of the First Workshop on Argumentation Mining, pages 49–58. Association for Computational Linguistics. Liora Braunstain, Oren Kurland, David Carmel, Idan Szpektor, and Anna Shtok. 2016. Supporting human answers for advice-seeking questions in CQA sites. In Proceedings of the 38th European Conference on IR Research, pages 129–141. Elena Cabrio and Serena Villata. 2012. Combining textual entailment and argumentation theory for supporting online debates interactions. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 208–212. Association for Computational Linguistics. Sung-Hyuk Cha. 2007. Comprehensive Survey on Distance/Similarity Measures between Probability Density Functions. International Journal of Mathematical Models and Methods in Applied Sciences, 1(4):300–307. Oana Cocarascu and Francesca Toni. 2017. Identifying attack and support argumentative relations using deep learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1374–1379. Association for Computational Linguistics. T. Edward Damer. 2009. Attacking Faulty Reasoning: A Practical Guide to Fallacy-Free Arguments, 6th edition. Wadsworth, Cengage Learning, Belmont, CA. Vanessa Wei Feng and Graeme Hirst. 2011. Classifying arguments by scheme. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 987–996. Association for Computational Linguistics. Trudy Govier. 2010. A Practical Study of Argument, 7th edition. Wadsworth, Cengage Learning, Belmont, CA. Ivan Habernal and Iryna Gurevych. 2016. What makes a convincing argument? Empirical analysis and detecting attributes of convincingness in web argumentation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1214–1223. Association for Computational Linguistics. Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, and Benno Stein. 2018. Before name-calling: Dynamics and triggers of ad hominem fallacies in web argumentation. In 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, to appear. Yufang Hou and Charles Jochim. 2017. Argument relation classification using a joint inference model. In Proceedings of the 4th Workshop on Argument Mining, pages 60–66. Association for Computational Linguistics. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431. Association for Computational Linguistics. Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37, pages 957–966. John Lawrence and Chris Reed. 2017. Mining argumentative structure from natural language text using automatically generated premise-conclusion topic models. 
In Proceedings of the 4th Workshop on Argument Mining, pages 39–48. Association for Computational Linguistics. 251 Stefano Menini, Federico Nanni, Simone Paolo Ponzetto, and Sara Tonelli. 2017. Topic-based agreement and disagreement in us electoral manifestos. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2938–2944. Association for Computational Linguistics. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2017. Advances in pre-training distributed word representations. CoRR. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems Volume 2, pages 3111–3119. Vlad Niculae, Joonsuk Park, and Claire Cardie. 2017. Argument mining with structured SVMs and RNNs. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 985–995. Association for Computational Linguistics. Andreas Peldszus and Manfred Stede. 2015. Towards detecting counter-considerations in text. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 104–109. Association for Computational Linguistics. Peter Prettenhofer and Benno Stein. 2010. Crosslanguage text classification using structural correspondence learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1118–1127. Association for Computational Linguistics. Ruty Rinott, Lena Dankin, Carlos Alzate Perez, M. Mitesh Khapra, Ehud Aharoni, and Noam Slonim. 2015. Show me your evidence — An automatic method for context dependent evidence detection. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 440–450. Association for Computational Linguistics. Haggai Roitman, Shay Hummel, Ella Rabinovich, Benjamin Sznajder, Noam Slonim, and Ehud Aharoni. 2016. On the retrieval of wikipedia articles containing claims on controversial topics. In Proceedings of the 25th International Conference on World Wide Web, Companion Volume, pages 991–996. Parinaz Sobhani, Diana Inkpen, and Stan Matwin. 2015. From argumentation mining to stance classification. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 67–77. Association for Computational Linguistics. Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 4444–4451. Christian Stab and Iryna Gurevych. 2016. Recognizing the absence of opposing arguments in persuasive essays. In Proceedings of the Third Workshop on Argument Mining (ArgMining2016), pages 113–118. Association for Computational Linguistics. Chenhao Tan, Vlad Niculae, Cristian DanescuNiculescu-Mizil, and Lillian Lee. 2016. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In Proceedings of the 25th International World Wide Web Conference, pages 613–624. Stephen E. Toulmin. 1958. The Uses of Argument. Cambridge University Press. Henning Wachsmuth, Nona Naderi, Yufang Hou, Yonatan Bilu, Vinodkumar Prabhakaran, Tim Alberdingk Thijm, Graeme Hirst, and Benno Stein. 2017a. Computational argumentation quality assessment in natural language. 
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 176–187. Association for Computational Linguistics. Henning Wachsmuth, Martin Potthast, Khalid Al Khatib, Yamen Ajjour, Jana Puschmann, Jiani Qu, Jonas Dorsch, Viorel Morari, Janek Bevendorff, and Benno Stein. 2017b. Building an argument search engine for the web. In Proceedings of the 4th Workshop on Argument Mining, pages 49–59. Association for Computational Linguistics. Henning Wachsmuth, Benno Stein, and Yamen Ajjour. 2017c. “PageRank” for Argument Relevance. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1117–1127. Association for Computational Linguistics. Douglas Walton. 2006. Fundamentals of Critical Argumentation. Cambridge University Press. Douglas Walton. 2009. Objections, rebuttals and refutations. pages 1–10. Justine Zhang, Ravi Kumar, Sujith Ravi, and Cristian Danescu-Niculescu-Mizil. 2016. Conversational flow in Oxford-style debates. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 136–141. Association for Computational Linguistics.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2473–2482 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2473 Incorporating Glosses into Neural Word Sense Disambiguation Fuli Luo, Tianyu Liu, Qiaolin Xia, Baobao Chang and Zhifang Sui Key Laboratory of Computational Linguistics, Ministry of Education, School of Electronics Engineering and Computer Science, Peking University, Beijing, China {luofuli, tianyu0421, xql, chbb, szf}@pku.edu.cn Abstract Word Sense Disambiguation (WSD) aims to identify the correct meaning of polysemous words in the particular context. Lexical resources like WordNet which are proved to be of great help for WSD in the knowledge-based methods. However, previous neural networks for WSD always rely on massive labeled data (context), ignoring lexical resources like glosses (sense definitions). In this paper, we integrate the context and glosses of the target word into a unified framework in order to make full use of both labeled data and lexical knowledge. Therefore, we propose GAS: a gloss-augmented WSD neural network which jointly encodes the context and glosses of the target word. GAS models the semantic relationship between the context and the gloss in an improved memory network framework, which breaks the barriers of the previous supervised methods and knowledge-based methods. We further extend the original gloss of word sense via its semantic relations in WordNet to enrich the gloss information. The experimental results show that our model outperforms the state-of-theart systems on several English all-words WSD datasets. 1 Introduction Word Sense Disambiguation (WSD) is a fundamental task and long-standing challenge in Natural Language Processing (NLP). There are several lines of research on WSD. Knowledge-based methods focus on exploiting lexical resources to infer the senses of word in the context. Supervised methods usually train multiple classifiers with manual designed features. Although supervised methods can achieve the state-of-the-art performance (Raganato et al., 2017b,a), there are still two major challenges. Firstly, supervised methods (Zhi and Ng, 2010; Iacobacci et al., 2016) usually train a dedicated classifier for each word individually (often called word expert). So it can not easily scale up to all-words WSD task which requires to disambiguate all the polysemous word in texts 1. Recent neural-based methods (K˚ageb¨ack and Salomonsson, 2016; Raganato et al., 2017a) solve this problem by building a unified model for all the polysemous words, but they still can’t beat the best word expert system. Secondly, all the neural-based methods always only consider the local context of the target word, ignoring the lexical resources like WordNet (Miller, 1995) which are widely used in the knowledge-based methods. The gloss, which extensionally defines a word sense meaning, plays a key role in the well-known Lesk algorithm (Lesk, 1986). Recent studies (Banerjee and Pedersen, 2002; Basile et al., 2014) have shown that enriching gloss information through its semantic relations can greatly improve the accuracy of Lesk algorithm. To this end, our goal is to incorporate the gloss information into a unified neural network for all of the polysemous words. We further consider extending the original gloss through its semantic relations in our framework. 
As shown in Figure 1, the glosses of hypernyms and hyponyms can enrich the original gloss information as well as help to build better a sense representation. Therefore, we integrate not only the original gloss but also the related glosses of hypernyms and hyponyms into the neural network. 1If there are N polysemous words in texts, they need to train N classifiers individually. 2474 bed2 Original gloss a plot of ground in which plants are growing a small area of ground covered by specific vegetation flowerbed1 a bed in which flowers are growing seedbed1 a bed where seedlings are grown before transplanting turnip_bed1 a bed in which turnips are growing Example sentence the gardener planted a bed of roses Hypernymy Hyponymy plot2 Figure 1: The hypernym (green node) and hyponyms (blue nodes) for the 2nd sense bed2 of bed, which means a plot of ground in which plants are growing, rather than the bed for sleeping in. The figure shows that bed2 is a kind of plot2, and bed2 includes flowerbed1, seedbed1, etc. In this paper, we propose a novel model GAS: a gloss-augmented WSD neural network which is a variant of the memory network (Sukhbaatar et al., 2015b; Kumar et al., 2016; Xiong et al., 2016). GAS jointly encodes the context and glosses of the target word and models the semantic relationship between the context and glosses in the memory module. In order to measure the inner relationship between glosses and context more accurately, we employ multiple passes operation within the memory as the re-reading process and adopt two memory updating mechanisms. The main contributions of this paper are listed as follows: • To the best of our knowledge, our model is the first to incorporate the glosses into an end-to-end neural WSD model. In this way, our model can benefit from not only massive labeled data but also rich lexical knowledge. • In order to model semantic relationship of context and glosses, we propose a glossaugmented neural network (GAS) in an improved memory network paradigm. • We further expand the gloss through its semantic relations to enrich the gloss information and better infer the context. We extend the gloss module in GAS to a hierarchical framework in order to mirror the hierarchies of word senses in WordNet. • The experimental results on several English all-words WSD benchmark datasets show that our model outperforms the state-of-theart systems. 2 Related Work Knowledge-based, supervised and neural-based methods have already been applied to WSD task (Navigli, 2009). Knowledge-based WSD methods mainly exploit two kinds of knowledge to disambiguate polysemous words: 1) The gloss, which defines a word sense meaning, is mainly used in Lesk algorithm (Lesk, 1986) and its variants. 2) The structure of the semantic network, whose nodes are synsets 2 and edges are semantic relations, is mainly used in graph-based algorithms (Agirre et al., 2014; Moro et al., 2014). Supervised methods (Zhi and Ng, 2010; Iacobacci et al., 2016) usually involve each target word as a separate classification problem (often called word expert) and train classifiers based on manual designed features. Although word expert supervised WSD methods perform best in terms of accuray, they are less flexible than knowledge-based methods in the allwords WSD task (Raganato et al., 2017a). To deal with this problem, recent neural-based methods aim to build a unified classifier which shares parameters among all the polysemous words. 
K˚ageb¨ack and Salomonsson (2016) leverages the bidirectional long short-term memory network which shares model parameters among all the polysemous words. Raganato et al. (2017a) transfers the WSD problem into a neural sequence labeling task. However, none of the neural-based methods can totally beat the best word expert supervised methods on English all-words WSD datasets. What’s more, all of the previous supervised methods and neural-based methods rarely take the lexical resources like WordNet (Fellbaum, 1998) into consideration. Recent studies on sense embeddings have proved that lexical resources are helpful. Chen et al. (2015) trains word sense embeddings through learning sentence level embeddings from glosses using a convolutional neural networks. Rothe and Sch¨utze (2015) extends word embeddings to sense embeddings by using the constraints and semantic relations in WordNet. They achieve an improvement of more than 1% in WSD performance when using sense embeddings as WSD features for SVM classifier. This work shows that integrating structural information of lexical resources can help to word expert supervised methods. However, sense embeddings 2A synset is a set of words that denote the same sense. 2475 can only indirectly help to WSD (as SVM classifier features). Raganato et al. (2017a) shows that the coarse-grained semantic labels in WordNet can help to WSD in a multi-task learning framework. As far as we know, there is no study directly integrates glosses or semantic relations of the WordNet into an end-to-end model. In this paper, we focus on how to integrate glosses into a unified neural WSD system. Memory network (Sukhbaatar et al., 2015b; Kumar et al., 2016; Xiong et al., 2016) is initially proposed to solve question answering problems. Recent researches show that memory network obtains the state-of-the-art results in many NLP tasks such as sentiment classification (Li et al., 2017) and analysis (Gui et al., 2017), poetry generation (Zhang et al., 2017), spoken language understanding (Chen et al., 2016), etc. Inspired by the success of memory network used in many NLP tasks, we introduce it into WSD. We make some adaptations to the initial memory network in order to incorporate glosses and capture the inner relationship between the context and glosses. 3 Incorporating Glosses into Neural Word Sense Disambiguation In this section, we first give an overview of the proposed model GAS: a gloss-augmented WSD neural network which integrates the context and the glosses of the target word into a unified framework. After that, each individual module is described in detail. 3.1 Architecture of GAS The overall architecture of the proposed model is shown in Figure 2. It consists of four modules: • Context Module: The context module encodes the local context (a sequence of surrounding words) of the target word into a distributed vector representation. • Gloss Module: Like the context module, the gloss module encodes all the glosses of the target word into a separate vector representations of the same size. In other words, we can get |st| word sense representations according to |st| 3 senses of the target word, where |st| is the sense number of the target word wt . 3st is the sense set {s1 t, s2 t, . . . , sp t } corresponding to the target word xt Gloss Module Context Module Memory Module Scoring Module Figure 2: Overview of Gloss-augmented Memory Network for Word Sense Disambiguation. 
• Memory Module: The memory module is employed to model the semantic relationship between the context embedding and gloss embedding produced by context module and gloss module respectively. • Scoring Module: In order to benefit from both labeled contexts and gloss knowledge, the scoring module takes the context embedding from context module and the last step result from the memory module as input. Finally it generates a probability distribution over all the possible senses of the target word. Detailed architecture of the proposed model is shown in Figure 3. The next four sections will show detailed configurations in each module. 3.2 Context Module Context module encodes the context of the target word into a vector representation, which is also called context embedding in this paper. We leverage the bidirectional long short-term memory network (Bi-LSTM) for taking both the preceding and following words of the target word into consideration. The input of this module [x1, . . . , xt−1, xt+1, . . . , xTx] is a sequence of words surrounding the target word xt, where Tx is the length of the context. After applying a lookup operation over the pre-trained word embedding matrix M ∈RD×V , we transfer a one hot vector xi into a D-dimensional vector. Then, the forward LSTM reads the segment (x1, . . . , xt−1) on the left of the target word xt and calculates a sequence of forward hidden states (−→ h1, . . . , −→h t−1). The backward LSTM reads the segment (xTx, . . . , xt+1) on the right of the target word xt and calculates a sequence of backward hidden states (←−h Tx, . . . , ←−h t+1). The context vector c is finally concatenated as c = [−→h t−1 : ←−h t+1] (1) 2476 Scoring Module Gloss Module Gloss Reader Layer h-K Relation Fusion Layer h0 h0 h+1 h+K x 10 0 hG g-1 g-K g+1 g+K hypernyms hyponyms h-1 x 20 x G0 h10 Context Module x 1 x t-1 x T ht+1 Memory Module m2 m1 W + scoreg scorec ht-1 x t-1 i i i i g1 g2 g3 GAS GASext g0i g0i ri ri Original Gloss Extended Glosses gi Figure 3: Detailed architecture of our proposed model, which consists of a context module, a gloss module, a memory module and a scoring module. The context module encodes the adjacent words surrounding the target word into a vector c. The gloss module encodes the original gloss or extended glosses into a vector gi. In the memory module, we calculate the inner relationship (as attention) between context c and each gloss gi and then update the memory as mi at pass i. In the scoring module, we make final predictions based on the last pass attention of memory module and the context vector c. Note that GAS only uses the original gloss, while GASext uses the entended glosses through hypernymy and hyponymy relations. In other words, the relation fusion layer (grey dotted box) only belongs to GASext. where : is the concatenation operator. 3.3 Gloss Module The gloss module encodes each gloss of the target word into a fixed size vector like the context vector c, which is also called gloss embedding. We further enrich the gloss information by taking semantic relations and their associated glosses into consideration. This module contains a gloss reader layer and a relation fusion layer. Gloss reader layer generates a vector representations for a gloss. Relation fusion layer aims at modeling the semantic relations of each gloss in the expanded glosses list which consists of related glosses of the original gloss. Our model GAS with extended glosses is denoted as GASext. 
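As a concrete illustration of what the extended glosses of GASext look like, the following sketch (ours, not part of the paper's code) uses NLTK's WordNet interface to collect the glosses of a synset's hypernyms and hyponyms up to a limited depth K, mirroring the breadth-first expansion detailed in Section 3.3.1 below; the synset name in the usage comment assumes WordNet 3.0 sense numbering.

```python
from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

def related_glosses(synset, relation, k):
    # Breadth-first collection of glosses reachable via `relation`, up to depth k.
    glosses, frontier = [], [synset]
    for _ in range(k):
        frontier = [nbr for s in frontier for nbr in relation(s)]
        if not frontier:
            break
        glosses.extend(s.definition() for s in frontier)
    return glosses

def expanded_glosses(synset, k=4):
    # Original gloss g_0 plus hypernym glosses [g_-1, g_-2, ...] and
    # hyponym glosses [g_+1, g_+2, ...], nearest senses first.
    hypernym_glosses = related_glosses(synset, lambda s: s.hypernyms(), k)
    hyponym_glosses = related_glosses(synset, lambda s: s.hyponyms(), k)
    return hypernym_glosses, synset.definition(), hyponym_glosses

# e.g., assuming WordNet 3.0 numbering, the garden-bed sense of Figure 1:
# hyper, g0, hypo = expanded_glosses(wn.synset('bed.n.02'), k=4)
```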
GAS only encodes the original gloss, while GASext encodes the expanded glosses from hypernymy and hyponymy relations (details in Figure 3). 3.3.1 Gloss Reader Layer Gloss reader layer contains two parts: gloss expansion and gloss encoder. Gloss expansion is to enrich the original gloss information through its hypernymy and hyponymy relations in WordNet. Gloss encoder is to encode each gloss into a vector representation. Gloss Expansion: We only expand the glosses of nouns and verbs via their corresponding hypernyms and hyponyms. There are two reasons: One is that most of polysemous words (about 80%) are nouns and verbs; the other is that the most frequent relations among word senses for nouns and verbs are the hypernymy and hyponymy relations 4. The original gloss is denoted as g0. Breadthfirst search method with a limited depth K is employed to extract the related glosses. The glosses of hypernyms within K depth are denoted as [g−1, g−2, . . . , g−L1]. The glosses of hyponyms within K depth are denoted as [g+1, g+2, . . . , g+L2] 5. Note that g+1 and g−1 are the glosses of the nearest word sense. Gloss Encoder: We denote the j-th 6 gloss in 4In WordNet, more than 95% of relations for nouns and 80% for verbs are hypernymy and hyponymy relations. 5Since one synset has one or more direct hypernyms and hyponyms, L1 >= K and L2 >= K. 6Since GAS don’t have gloss expansion, j is always 0 and gi = gi 0. See more in Figure 3. 2477 the expanded glosses list for ith sense of the target word as a sequence of G words. Like the context encoder, the gloss encoder also leverages BiLSTM units to process the words sequence of the gloss. The gloss representation gi j is computed as the concatenation of the last hidden states of the forward and backward LSTM. gi j = [−→h i,j G : ←−h i,j 1 ] (2) where j ∈[−L1, . . . , −1, 0, +1, . . . , +L2] and : is the concatenation operator . 3.3.2 Relation Fusion Layer Relation fusion layer models the hypernymy and hyponymy relations of the target word sense. A forward LSTM is employed to encode the hypernyms’ glosses of ith sense (gi −L1, . . . , gi −1, gi 0) as a sequence of forward hidden states (−→h i −L1, . . . , −→h i −1, −→h i 0). A backward LSTM is employed to encode the hyponyms’ glosses of ith sense (gi +L2, . . . , gi +1, gi 0) as a sequence of backward hidden states (←−h i +L2, . . . , ←−h i +1, ←−h i 0). In order to highlight the original gloss gi 0, the enhanced ith sense representation is concatenated as the final state of the forward and backward LSTM. gi = [−→h i 0 : ←−h i 0] (3) 3.4 Memory Module The memory module has two inputs: the context vector c from the context module and the gloss vectors {g1, g2, . . . , g|st|} from the gloss module, where |st| is the number of word senses. We model the inner relationship between the context and glosses by attention calculation. Since onepass attention calculation may not fully reflect the relationship between the context and glosses (details in Section 4.4.2), the memory module adopts a repeated deliberation process. The process repeats reading gloss vectors in the following passes, in order to highlight the correct word sense for the following scoring module by a more accurate attention calculation. After each pass, we update the memory to refine the states of the current pass. Therefore, memory module contains two phases: attention calculation and memory update. 
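As a preview of these two phases, the following PyTorch-style sketch (ours, under the paper's convention that context, memory and gloss vectors are 2n-dimensional Bi-LSTM outputs) implements a single memory pass with the dot-product attention (Eqs. 5–7) and the concatenation update (Eq. 9) described below; the linear update variant is omitted.

```python
import torch
import torch.nn as nn

class MemoryPass(nn.Module):
    # One pass of the memory module: dot-product attention over the gloss
    # vectors, followed by the "concatenation" memory update. Context c,
    # memory m and every gloss vector g_i live in R^{2n}.
    def __init__(self, n):
        super().__init__()
        self.update = nn.Linear(3 * 2 * n, 2 * n)    # W [m_{k-1} ; u_k ; c] + b

    def forward(self, glosses, memory, context):
        scores = glosses @ memory                    # e_i = g_i . m_{k-1}
        attn = torch.softmax(scores, dim=0)          # alpha_i
        u = attn @ glosses                           # u_k = sum_i alpha_i g_i
        memory = torch.relu(self.update(torch.cat([memory, u, context], dim=-1)))
        return scores, memory

# Sketch of T_M passes, starting from m_0 = c; the pre-softmax scores of the
# last pass play the role of the gloss score in the scoring module:
# m = c
# for _ in range(T_M):
#     scores, m = memory_pass(gloss_matrix, m, c)
```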
Attention Calculation: For each pass k, the attention ek i of gloss gi is generally computed as ek i = f(gi, mk−1, c) (4) where mk−1 is the memory vector in the (k −1)th pass while c is the context vector. The scoring function f calculates the semantic relationship of the gloss and context, taking the vector set (gi, mk−1, c) as input. In the first pass, the attention reflects the similarity of context and each gloss. In the next pass, the attention reflects the similarity of adapted memory and each gloss. A dot product is applied to calculate the similarity of each gloss vector and context (or memory) vector. We treat c as m0. So, the attention αk i of gloss gi at pass k is computed as a dot product of gi and mk−1: ek i = gi · mk−1 (5) αk i = exp(ek i ) P|st| j=1 exp(ej i) (6) Memory Update: After calculating the attention, we store the memory state in uk which is a weighted sum of gloss vectors and is computed as uk = n X i=1 αk i gi (7) where n is the hidden size of LSTM in the context module and gloss module. And then, we update the memory vector mk from last pass memory mk−1, context vector c, and memory state uk. We propose two memory update methods: • Linear: we update the memory vector mk by a linear transformation from mk−1 mk = Hmk−1 + uk (8) where H ∈R2n×2n. • Concatenation: we get a new memory for kth pass by taking both the gloss embedding and context embedding into consideration mk = ReLU(W[mk−1 : uk : c] + b) (9) where : is the concatenation operator, W ∈ Rn×6n and b ∈R2n. 3.5 Scoring Module The scoring module calculates the scores for all the related senses {s1 t , s2 t , . . . , sp t } corresponding to the target word xt and finally outputs a sense probability distribution over all senses. The overall score for each word sense is determined by gloss attention αTM i from the last pass 2478 in the memory module, where TM is the number of passes in the memory module. The eTM ( αTM without Softmax) is regarded as the gloss score. scoreg = eTM (10) Meanwhile, a fully-connected layer is employed to calculate the context score. scorec = Wxtc + bxt (11) where Wxt ∈R|st|×2n, bxt ∈R|st|, |st| is the number of senses for the target word xt and n is the number of hidden units in the LSTM. It’s noteworthy that in Equation 11, each ambiguous word xt has its corresponding weight matrix Wxt and bias bxt in the scoring module. In order to balance the importance of background knowledge and labeled data, we introduce a parameter λ ∈RN 7 in the scoring module which is jointly learned during the training process. The probability distribution ˆy over all the word senses of the target word is calculated as: ˆy = Softmax(λxtscorec + (1 −λxt)scoreg) where λxt is the parameter for word xt, and λxt ∈[0, 1]. During training, all model parameters are jointly learned by minimizing a standard crossentropy loss between ˆy and the true label y. 4 Experiments and Evaluation 4.1 Dataset Evaluation Dataset: we evaluate our model on several English all-words WSD datasets. For fair comparison, we use the benchmark datasets proposed by Raganato et al. (2017b) which includes five standard all-words fine-grained WSD datasets from the Senseval and SemEval competitions. They are Senseval-2 (SE2), Senseval-3 task 1 (SE3), SemEval-07 task 17 (SE7), SemEval-13 task 12 (SE13), and SemEval-15 task 13 (SE15). Following by Raganato et al. (2017a), we choose SE7, the smallest test set as the development (validation) set, which consists of 455 labeled instances. 
The last four test sets consist of 6798 labeled instances with four types of target words, namely nouns, verbs, adverbs and adjectives. We 7N is the number of polysemous words in the training corpora. extract word sense glosses from WordNet3.0 because Raganato et al. (2017b) maps all the sense annotations 8 from its original version to 3.0. Training Dataset: We choose SemCor 3.0 as the training set, which was also used by Raganato et al. (2017a), Raganato et al. (2017b), Iacobacci et al. (2016), Zhi and Ng (2010), etc. It consists of 226,036 sense annotations from 352 documents, which is the largest manually annotated corpus for WSD. Note that all the systems listed in Table 1 are trained on SemCor 3.0. 4.2 Implementation Details We use the validation set (SE7) to find the optimal settings of our framework: the hidden state size n, the number of passes |TM|, the optimizer, etc. We use pre-trained word embeddings with 300 dimensions9, and keep them fixed during the training process. We employ 256 hidden units in both the gloss module and the context module, which means n=256. Orthogonal initialization is used for weights in LSTM and random uniform initialization with range [-0.1, 0.1] is used for others. We assign gloss expansion depth K the value of 4. We also experiment with the number of passes |TM| from 1 to 5 in our framework, finding |TM| = 3 performs best. We use Adam optimizer (Kingma and Ba, 2014) in the training process with 0.001 initial learning rate. In order to avoid overfitting, we use dropout regularization and set drop rate to 0.5. Training runs for up to 100 epochs with early stopping if the validation loss doesn’t improve within the last 10 epochs. 4.3 Systems to be Compared In this section, we describe several knowledgebased methods, supervised methods and neuralbased methods which perform well on the English all-words WSD datasets for comparison. 4.3.1 Knowledge-based Systems • Leskext+emb: Basile et al. (2014) is a variant of Lesk algorithm (Lesk, 1986) by using a word similarity function defined on a distributional semantic space to calculate the gloss-context overlap. This work shows that glosses are important to WSD and enriching 8The original WordNet version of SE2, SE3, SE7, SE13, SE15 are 1.7, 1.7.1, 2.1, 3.0 and 3.0, respectively. 9We download the pre-trained word embeddings from https://github.com/stanfordnlp/GloVe, and we select the smaller Wikipedia 2014 + Gigaword 5. 2479 Test Datasets Concatenation of Test Datasets System SE2 SE3 SE13 SE15 Noun Verb Adj Adv All MFS baseline 65.6 66.0 63.8 67.1 67.7 49.8 73.1 80.5 65.5 Leskext+emb (Basile et al., 2014) 63.0 63.7 66.2 64.6 70.0 51.1 51.7 80.6 64.2 Babelfy (Moro et al., 2014) 67.0 63.5 66.4 70.3 68.9 50.7 73.2 79.8 66.4 IMS (Zhi and Ng, 2010) 70.9 69.3 65.3 69.5 70.5 55.8 75.6 82.9 68.9 IMS+emb (Iacobacci et al., 2016) 72.2 70.4 65.9 71.5 71.9 56.6 75.9 84.7 70.1 Bi-LSTM (K˚ageb¨ack and Salomonsson, 2016) 71.1 68.4 64.8 68.3 69.5 55.9 76.2 82.4 68.4 Bi-LSTM+att.+LEX (Raganato et al., 2017a)* 72.0 69.4 66.4 72.4 71.6 57.1 75.6 83.2 69.9 Bi-LSTM+att.+LEX+P OS (Raganato et al., 2017a)* 72.0 69.1 66.9 71.5 71.5 57.5 75.0 83.8 69.9 GAS (Linear)* 72.0 70.0 66.7 71.6 71.7 57.4 76.5 83.5 70.1 GAS (Concatenation)* 72.1 70.2 67.0 71.8 72.1 57.2 76.0 84.4 70.3 GASext (Linear)* 72.4 70.1 67.1 72.1 71.9 58.1 76.4 84.7 70.4 GASext (Concatenation)* 72.2 70.5 67.2 72.6 72.2 57.7 76.6 85.0 70.6 Table 1: F1-score (%) for fine-grained English all-words WSD on the test sets. Bold font indicates best systems. 
The * represents the neural network models using external knowledge. The fives blocks list the MFS baseline, two knowledge-based systems, two supervised systems (feature-based), three neuralbased systems and our models, respectively. . gloss information via its semantic relations can help to WSD. • Babelfy: Moro et al. (2014) exploits the semantic network structure from BabelNet and builds a unified graph-based architecture for WSD and Entity Linking. 4.3.2 Supervised Systems The supervised systems mentioned in this paper refers to traditional feature-based systems which train a dedicated classifier for every word individually (word expert). • IMS: Zhi and Ng (2010) selects a linear Support Vector Machine (SVM) as its classifier and makes use of a set of features surrounding the target word within a limited window, such as POS tags, local words and local collocations. • IMS+emb: Iacobacci et al. (2016) selects IMS as the underlying framework and makes use of word embeddings as features which makes it hard to beat in most of WSD datasets. 4.3.3 Neural-based Systems Neural-based systems aim to build an end-to-end unified neural network for all the polysemous words in texts. • Bi-LSTM: K˚ageb¨ack and Salomonsson (2016) leverages a bidirectional LSTM network which shares model parameters among all words. Note that this model is equivalent to our model if we remove the gloss module and memory module of GAS. • Bi-LSTM+att.+LEX and its variant BiLSTM+att.+LEX+POS: Raganato et al. (2017a) transfers WSD into a sequence learning task and propose a multi-task learning framework for WSD, POS tagging and coarse-grained semantic labels (LEX). These two models have used the external knowledge, for the LEX is based on lexicographer files in WordNet. Moreover, we introduce MFS baseline, which simply selects the most frequent sense in the training data set. 4.4 Results and Discussion 4.4.1 English all-words results In this section, we show the performance of our proposed model in the English all-words task. Table 1 shows the F1-score results on the four test sets mentioned in Section 4.1. The systems in the first four blocks are implemented by Raganato et al. (2017a,b) except for the single Bi-LSTM model. The last block lists the performance of our proposed model GAS and its variant GASext which extends the gloss module in GAS. GAS and GASext achieves the state-of-theart performance on the concatenation of all test datasets. Although there is no one system always performs best on all the test sets 10, we can find that GASext with concatenation memory updating strategy achieves the best results 70.6 on the concatenation of the four test datasets. Compared with other three neural-based methods in the 10 Because the source of the four datasets are extremely different which belongs to different domains. 2480 Context: He plays a pianist in the film Glosses Pass 1 Pass 2 Pass 3 Pass 4 Pass 5 g1: participate in games or sport g2: perform music on a instrument g3: act a role or part Table 2: An example of attention weights in the memory module within 5 passes. Darker colors mean that the attention weight is higher. Case studies show that the proposed multi-pass operation can recognize the correct sense by enlarging the attention gap between correct senses and incorrect ones. Pass SE2 SE3 SE13 SE15 ALL 1 71.6 70.3 67.0 72.5 70.3 2 71.9 70.2 67.1 72.8 70.4 3 72.2 70.5 67.2 72.6 70.6 4 72.1 70.4 67.2 72.4 70.5 5 72.0 70.4 67.1 71.5 70.3 Table 3: F1-score (%) of different passes from 1 to 5 on the test data sets. 
It shows that appropriate number of passes can boost the performance as well as avoid over-fitting of the model. . fourth block, we can find that our best model outperforms the previous best neural network models (Raganato et al., 2017a) on every individual test set. The IMS+emb, which trains a dedicated classifier for each word individually (word expert) with massive manual designed features including word embeddings, is hard to beat for neural networks models. However, our best model can also beat IMS+emb on the SE3, SE13 and SE15 test sets. Incorporating glosses into neural WSD can greatly improve the performance and extending the original gloss can further boost the results. Compared with the Bi-LSTM baseline which only uses labeled data, our proposed model greatly improves the WSD task by 2.2% F1-score with the help of gloss knowledge. Furthermore, compared with the GAS which only uses original gloss as the background knowledge, GASext can further improve the performance with the help of the extended glosses through the semantic relations. This proves that incorporating extended glosses through its hypernyms and hyponyms into the neural network models can boost the performance for WSD. 4.4.2 Multiple Passes Analysis To better illustrate the influence of multiple passes, we give an example in Table 2. Consider the situation that we meet an unknown word x 11, we look 11x refers to word play in reality. up from the dictionary and find three word senses and their glosses corresponding to x. We try to figure out the correct meaning of x according to its context and glosses of different word senses by the proposed memory module. In the first pass, the first sense is excluded, for there are no relevance between the context and g1. But the g2 and g3 may need repeated deliberation, for word pianist is similar to the word music and role in the two glosses. By re-reading the context and gloss information of the target word in the following passes, the correct word sense g3 attracts much more attention than the other two senses. Such rereading process can be realized by multi-pass operation in the memory module. Furthermore, Table 3 shows the effectiveness of multi-pass operation in the memory module. It shows that multiple passes operation performs better than one pass, though the improvement is not significant. The reason of this phenomenon is that for most target words, one main word sense accounts for the majority of their appearances. Therefore, in most circumstances, one-pass inference can lead to the correct word senses. Case studies in Table 2 show that the proposed multipass inference can help to recognize the infrequent senses like the third sense for word play. In Table 3, with the increasing number of passes, the F1-score increases. However, when the number of passes is larger than 3, the F1-score stops increasing or even decreases due to over-fitting. It shows that appropriate number of passes can boost the performance as well as avoid over-fitting of the model. 5 Conclusions and Future Work In this paper, we seek to address the problem of integrating the glosses knowledge of the ambiguous word into a neural network for WSD. We further extend the gloss information through its semantic relations in WordNet to better infer the context. In 2481 this way, we not only make use of labeled context data but also exploit the background knowledge to disambiguate the word sense. Results on four English all-words WSD data sets show that our best model outperforms the existing methods. 
There is still one challenge left for the future. We just extract the gloss, missing the structural properties or graph information of lexical resources. In the next step, we will consider integrating the rich structural information into the neural network for Word Sense Disambiguation. Acknowledgments We thank the Lei Sha, Jiwei Tan, Jianmin Zhang and Junbing Liu for their instructive suggestions and invaluable help. The research work is supported by the National Science Foundation of China under Grant No. 61772040 and No. 61751201. The contact authors are Baobao Chang and Zhifang Sui. References Eneko Agirre, Oier Lpez De Lacalle, and Aitor Soroa. 2014. Random walks for knowledge-based word sense disambiguation. Computational Linguistics 40(1):57–84. Satanjeev Banerjee and Ted Pedersen. 2002. An adapted lesk algorithm for word sense disambiguation using wordnet. In International Conference on Intelligent Text Processing and Computational Linguistics. Springer, pages 136–145. Satanjeev Banerjee and Ted Pedersen. 2003. Extended gloss overlaps as a measure of semantic relatedness. In Ijcai. volume 3, pages 805–810. Pierpaolo Basile, Annalina Caputo, and Giovanni Semeraro. 2014. An enhanced lesk word sense disambiguation algorithm through a distributional semantic model. In Roceedings of COLING 2014, the International Conference on Computational Linguistics: Technical Papers. Jos´e Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. A unified multilingual semantic representation of concepts. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). volume 1, pages 741–751. T. Chen, R. Xu, Y. He, and X. Wang. 2015. Improving distributed representation of word sense via wordnet gloss composition and context clustering. Atmospheric Measurement Techniques 4(3):5211–5251. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Conference on Empirical Methods in Natural Language Processing. pages 1025– 1035. Yun Nung Chen, Dilek Hakkani-Tr, Gokhan Tur, Jianfeng Gao, and Li Deng. 2016. End-to-end memory networks with knowledge carryover for multi-turn spoken language understanding. In The Meeting of the International Speech Communication Association. Christiane Fellbaum. 1998. WordNet. Wiley Online Library. Lin Gui, Jiannan Hu, Yulan He, Ruifeng Xu, Qin Lu, and Jiachen Du. 2017. A question answering approach to emotion cause extraction. arXiv preprint arXiv:1708.05482 . Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Embeddings for word sense disambiguation: An evaluation study. In The Meeting of the Association for Computational Linguistics. Mikael K˚ageb¨ack and Hans Salomonsson. 2016. Word sense disambiguation using a bidirectional lstm. arXiv preprint arXiv:1606.03568 . Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Computer Science . Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In International Conference on Machine Learning. pages 1378–1387. Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries:how to tell a pine cone from an ice cream cone. In Acm Special Interest Group for Design of Communication. pages 24–26. 
Qi Li, Tianshi Li, and Baobao Chang. 2016. Learning word sense embeddings from word sense definitions . Zheng Li, Yu Zhang, Ying Wei, Yuxiang Wu, and Qiang Yang. 2017. End-to-end adversarial memory network for cross-domain sentiment classification. In Twenty-Sixth International Joint Conference on Artificial Intelligence. pages 2237–2243. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM 38(11):39– 41. Andrea Moro, Alessandro Raganato, and Roberto Navigli. 2014. Entity linking meets word sense disambiguation: a unified approach. Transactions of the Association for Computational Linguistics 2:231– 244. 2482 Roberto Navigli. 2009. Word sense disambiguation:a survey. Acm Computing Surveys 41(2):1–69. Alessandro Raganato, Claudio Delli Bovi, and Roberto Navigli. 2017a. Neural sequence learning models for word sense disambiguation. In Conference on Empirical Methods in Natural Language Processing. Alessandro Raganato, Jose Camacho-Collados, and Roberto Navigli. 2017b. Word sense disambiguation: A unified evaluation framework and empirical comparison. In Proc. of EACL. pages 99–110. Sascha Rothe and Hinrich Sch¨utze. 2015. Autoextend: Extending word embeddings to embeddings for synsets and lexemes. arXiv preprint arXiv:1507.01127 . Lei Sha, Feng Qian, and Zhifang Sui. 2017. Will repeated reading benefit natural language understanding? In National CCF Conference on Natural Language Processing and Chinese Computing. pages 366–379. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015a. End-to-end memory networks. Computer Science . Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015b. End-to-end memory networks. In Advances in neural information processing systems. pages 2440–2448. Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass decoding . Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. In International Conference on Machine Learning. pages 2397–2406. Jiyuan Zhang, Yang Feng, Dong Wang, Yang Wang, Andrew Abel, Shiyue Zhang, and Andi Zhang. 2017. Flexible and creative chinese poetry generation using neural memory pages 1364–1373. Zhong Zhi and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In ACL 2010, Proceedings of the Meeting of the Association for Computational Linguistics, July 11-16, 2010, Uppsala, Sweden, System Demonstrations. pages 78–83.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2483–2493 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2483 Bilingual Sentiment Embeddings: Joint Projection of Sentiment Across Languages Jeremy Barnes, Roman Klinger, and Sabine Schulte im Walde Institut f¨ur Maschinelle Sprachverarbeitung University of Stuttgart Pfaffenwaldring 5b, 70569 Stuttgart, Germany {barnesjy,klinger,schulte}@ims.uni-stuttgart.de Abstract Sentiment analysis in low-resource languages suffers from a lack of annotated corpora to estimate high-performing models. Machine translation and bilingual word embeddings provide some relief through cross-lingual sentiment approaches. However, they either require large amounts of parallel data or do not sufficiently capture sentiment information. We introduce Bilingual Sentiment Embeddings (BLSE), which jointly represent sentiment information in a source and target language. This model only requires a small bilingual lexicon, a source-language corpus annotated for sentiment, and monolingual word embeddings for each language. We perform experiments on three language combinations (Spanish, Catalan, Basque) for sentencelevel cross-lingual sentiment classification and find that our model significantly outperforms state-of-the-art methods on four out of six experimental setups, as well as capturing complementary information to machine translation. Our analysis of the resulting embedding space provides evidence that it represents sentiment information in the resource-poor target language without any annotated data in that language. 1 Introduction Cross-lingual approaches to sentiment analysis are motivated by the lack of training data in the vast majority of languages. Even languages spoken by several million people, such as Catalan, often have few resources available to perform sentiment analysis in specific domains. We therefore aim to harness the knowledge previously collected in resource-rich languages. Previous approaches for cross-lingual sentiment analysis typically exploit machine translation based methods or multilingual models. Machine translation (MT) can provide a way to transfer sentiment information from a resource-rich to resourcepoor languages (Mihalcea et al., 2007; Balahur and Turchi, 2014). However, MT-based methods require large parallel corpora to train the translation system, which are often not available for underresourced languages. Examples of multilingual methods that have been applied to cross-lingual sentiment analysis include domain adaptation methods (Prettenhofer and Stein, 2011), delexicalization (Almeida et al., 2015), and bilingual word embeddings (Mikolov et al., 2013; Hermann and Blunsom, 2014; Artetxe et al., 2016). These approaches however do not incorporate enough sentiment information to perform well cross-lingually, as we will show later. We propose a novel approach to incorporate sentiment information in a model, which does not have these disadvantages. Bilingual Sentiment Embeddings (BLSE) are embeddings that are jointly optimized to represent both (a) semantic information in the source and target languages, which are bound to each other through a small bilingual dictionary, and (b) sentiment information, which is annotated on the source language only. 
We only need three resources: (i) a comparably small bilingual lexicon, (ii) an annotated sentiment corpus in the resourcerich language, and (iii) monolingual word embeddings for the two involved languages. We show that our model outperforms previous state-of-the-art models in nearly all experimental settings across six benchmarks. In addition, we offer an in-depth analysis and demonstrate that our model is aware of sentiment. Finally, we provide a qualitative analysis of the joint bilingual sentiment space. Our implementation is publicly available at https://github.com/jbarnesspain/blse. 2484 2 Related Work Machine Translation: Early work in cross-lingual sentiment analysis found that machine translation (MT) had reached a point of maturity that enabled the transfer of sentiment across languages. Researchers translated sentiment lexicons (Mihalcea et al., 2007; Meng et al., 2012) or annotated corpora and used word alignments to project sentiment annotation and create target-language annotated corpora (Banea et al., 2008; Duh et al., 2011; Demirtas and Pechenizkiy, 2013; Balahur and Turchi, 2014). Several approaches included a multi-view representation of the data (Banea et al., 2010; Xiao and Guo, 2012) or co-training (Wan, 2009; Demirtas and Pechenizkiy, 2013) to improve over a naive implementation of machine translation, where only the translated data is used. There are also approaches which only require parallel data (Meng et al., 2012; Zhou et al., 2016; Rasooli et al., 2017), instead of machine translation. All of these approaches, however, require large amounts of parallel data or an existing high quality translation tool, which are not always available. A notable exception is the approach proposed by Chen et al. (2016), an adversarial deep averaging network, which trains a joint feature extractor for two languages. They minimize the difference between these features across languages by learning to fool a language discriminator, which requires no parallel data. It does, however, require large amounts of unlabeled data. Bilingual Embedding Methods: Recently proposed bilingual embedding methods (Hermann and Blunsom, 2014; Chandar et al., 2014; Gouws et al., 2015) offer a natural way to bridge the language gap. These particular approaches to bilingual embeddings, however, require large parallel corpora in order to build the bilingual space, which are not available for all language combinations. An approach to create bilingual embeddings that has a less prohibitive data requirement is to create monolingual vector spaces and then learn a projection from one to the other. Mikolov et al. (2013) find that vector spaces in different languages have similar arrangements. Therefore, they propose a linear projection which consists of learning a rotation and scaling matrix. Artetxe et al. (2016, 2017) improve upon this approach by requiring the projection to be orthogonal, thereby preserving the monolingual quality of the original word vectors. Given source embeddings S, target embeddings T, and a bilingual lexicon L, Artetxe et al. (2016) learn a projection matrix W by minimizing the square of Euclidean distances arg min W X i ||S′W −T ′||2 F , (1) where S′ ∈S and T ′ ∈T are the word embedding matrices for the tokens in the bilingual lexicon L. This is solved using the Moore-Penrose pseudoinverse S′+ = (S′T S′)−1S′T as W = S′+T ′, which can be computed using SVD. We refer to this approach as ARTETXE. 
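For concreteness, the mapping of Eq. 1 can be computed in a few lines. The NumPy sketch below solves the unconstrained least-squares form with a pseudoinverse, exactly as written above; the orthogonality constraint of Artetxe et al. (2016, 2017) is not enforced here, and the variable names and dictionary-based embedding lookup are illustrative assumptions rather than the authors' code.

import numpy as np

def learn_projection(src_emb, tgt_emb, lexicon):
    """Least-squares mapping W minimizing ||S'W - T'||_F^2 (Eq. 1).

    src_emb, tgt_emb: dicts mapping word -> 1-D numpy vector.
    lexicon: list of (source_word, target_word) translation pairs.
    """
    pairs = [(s, t) for s, t in lexicon if s in src_emb and t in tgt_emb]
    S = np.vstack([src_emb[s] for s, _ in pairs])   # S' in Eq. 1
    T = np.vstack([tgt_emb[t] for _, t in pairs])   # T' in Eq. 1
    # np.linalg.lstsq solves min_W ||S W - T||_F^2, i.e. W = S'^+ T'.
    W, *_ = np.linalg.lstsq(S, T, rcond=None)
    return W

The learned W can then be applied to the whole source space, giving the mapped embeddings SW that are used downstream.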
Gouws and Søgaard (2015) propose a method to create a pseudo-bilingual corpus with a small taskspecific bilingual lexicon, which can then be used to train bilingual embeddings (BARISTA). This approach requires a monolingual corpus in both the source and target languages and a set of translation pairs. The source and target corpora are concatenated and then every word is randomly kept or replaced by its translation with a probability of 0.5. Any kind of word embedding algorithm can be trained with this pseudo-bilingual corpus to create bilingual word embeddings. These last techniques have the advantage of requiring relatively little parallel training data while taking advantage of larger amounts of monolingual data. However, they are not optimized for sentiment. Sentiment Embeddings: Maas et al. (2011) first explored the idea of incorporating sentiment information into semantic word vectors. They proposed a topic modeling approach similar to latent Dirichlet allocation in order to collect the semantic information in their word vectors. To incorporate the sentiment information, they included a second objective whereby they maximize the probability of the sentiment label for each word in a labeled document. Tang et al. (2014) exploit distantly annotated tweets to create Twitter sentiment embeddings. To incorporate distributional information about tokens, they use a hinge loss and maximize the likelihood of a true n-gram over a corrupted n-gram. They include a second objective where they classify the polarity of the tweet given the true n-gram. While these techniques have proven useful, they are not easily transferred to a cross-lingual setting. Zhou et al. (2015) create bilingual sentiment embeddings by translating all source data to the 2485 target language and vice versa. This requires the existence of a machine translation system, which is a prohibitive assumption for many under-resourced languages, especially if it must be open and freely accessible. This motivates approaches which can use smaller amounts of parallel data to achieve similar results. 3 Model In order to project not only semantic similarity and relatedness but also sentiment information to our target language, we propose a new model, namely Bilingual Sentiment Embeddings (BLSE), which jointly learns to predict sentiment and to minimize the distance between translation pairs in vector space. We detail the projection objective in Section 3.1, the sentiment objective in Section 3.2, and the full objective in Section 3.3. A sketch of the model is depicted in Figure 1. 3.1 Cross-lingual Projection We assume that we have two precomputed vector spaces S = Rv×d and T = Rv′×d′ for our source and target languages, where v (v′) is the length of the source vocabulary (target vocabulary) and d (d′) is the dimensionality of the embeddings. We also assume that we have a bilingual lexicon L of length n which consists of word-to-word translation pairs L = {(s1, t1), (s2, t2), . . . , (sn, tn)} which map from source to target. In order to create a mapping from both original vector spaces S and T to shared sentimentinformed bilingual spaces z and ˆz, we employ two linear projection matrices, M and M′. During training, for each translation pair in L, we first look up their associated vectors, project them through their associated projection matrix and finally minimize the mean squared error of the two projected vectors. This is very similar to the approach taken by Mikolov et al. (2013), but includes an additional target projection matrix. 
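A minimal PyTorch sketch of this projection step, two trainable linear maps over frozen monolingual embeddings pulled together with a mean squared error on translation pairs, is given below; the class and tensor names are illustrative and do not correspond to the released implementation.

import torch.nn as nn

class BilingualProjection(nn.Module):
    """Two linear maps M and M' into a shared space; the original
    embedding matrices S and T stay frozen, as described above."""
    def __init__(self, dim_src, dim_tgt, dim_joint):
        super().__init__()
        self.M = nn.Linear(dim_src, dim_joint, bias=False)         # source projection
        self.M_prime = nn.Linear(dim_tgt, dim_joint, bias=False)   # target projection

    def projection_loss(self, src_vecs, tgt_vecs):
        # src_vecs, tgt_vecs: (n, dim) lookups for the n translation pairs in L
        z = self.M(src_vecs)            # projected source words
        z_hat = self.M_prime(tgt_vecs)  # projected target translations
        return nn.functional.mse_loss(z, z_hat)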
The intuition for including this second matrix is that a single projection matrix does not support the transfer of sentiment information from the source language to the target language. Without M′, any signal coming from the sentiment classifier (see Section 3.2) would have no affect on the target embedding space T, and optimizing M to predict sentiment and projection would only be detrimental to classification of the target language. We analyze this further in Section 6.3. Note that in this configuration, we do not need to update the original vector spaces, which would be problematic with such small training data. The projection quality is ensured by minimizing the mean squared error12 MSE = 1 n n X i=1 (zi −ˆzi)2 , (2) where zi = Ssi ·M is the dot product of the embedding for source word si and the source projection matrix and ˆzi = Tti · M′ is the same for the target word ti. 3.2 Sentiment Classification We add a second training objective to optimize the projected source vectors to predict the sentiment of source phrases. This inevitably changes the projection characteristics of the matrix M, and consequently M′ and encourages M′ to learn to predict sentiment without any training examples in the target language. To train M to predict sentiment, we require a source-language corpus Csource = {(x1, y1), (x2, y2), . . . , (xi, yi)} where each sentence xi is associated with a label yi. For classification, we use a two-layer feedforward averaging network, loosely following Iyyer et al. (2015)3. For a sentence xi we take the word embeddings from the source embedding S and average them to ai ∈Rd. We then project this vector to the joint bilingual space zi = ai · M. Finally, we pass zi through a softmax layer P to get our prediction ˆyi = softmax(zi · P). To train our model to predict sentiment, we minimize the cross-entropy error of our predictions H = − n X i=1 yi log ˆyi −(1 −yi) log(1 −ˆyi) . (3) 3.3 Joint Learning In order to jointly train both the projection component and the sentiment component, we combine the two loss functions to optimize the parameter 1We omit parameters in equations for better readability. 2We also experimented with cosine distance, but found that it performed worse than Euclidean distance. 3Our model employs a linear transformation after the averaging layer instead of including a non-linearity function. We choose this architecture because the weights M and M ′ are also used to learn a linear cross-lingual projection. 2486 This hotel is nice fun No está muy bien Embedding Layer Averaging Layer Projection Layer Softmax Layer divertido Source Language Annotated Sentences Translation Dictionary Target Language Unnanotated Sentences Minimize Euclidean Distance TRAINING TEST Minimize Crossentropy Loss Figure 1: Bilingual Sentiment Embedding Model (BLSE) EN ES CA EU Binary + 1258 1216 718 956 − 473 256 467 173 Total 1731 1472 1185 1129 4-class ++ 379 370 256 384 + 879 846 462 572 − 399 218 409 153 −− 74 38 58 20 Total 1731 1472 1185 1129 Table 1: Statistics for the OpeNER English (EN) and Spanish (ES) as well as the MultiBooked Catalan (CA) and Basque (EU) datasets. matrices M, M′, and P by J = X (x,y)∈Csource X (s,t)∈L αH(x, y)+(1−α)·MSE(s, t) , (4) where α is a hyperparameter that weights sentiment loss vs. projection loss. 3.4 Target-language Classification For inference, we classify sentences from a targetlanguage corpus Ctarget. As in the training procedure, for each sentence, we take the word embeddings from the target embeddings T and average them to ai ∈Rd. 
We then project this vector to the joint bilingual space ˆzi = ai · M′. Finally, we pass Spanish Catalan Basque Sentences 23 M 9.6 M 0.7 M Tokens 610 M 183 M 25 M Embeddings 0.83 M 0.4 M 0.14 M Table 2: Statistics for the Wikipedia corpora and monolingual vector spaces. ˆzi through a softmax layer P to get our prediction ˆyi = softmax(ˆzi · P). 4 Datasets and Resources 4.1 OpeNER and MultiBooked To evaluate our proposed model, we conduct experiments using four benchmark datasets and three bilingual combinations. We use the OpeNER English and Spanish datasets (Agerri et al., 2013) and the MultiBooked Catalan and Basque datasets (Barnes et al., 2018). All datasets contain hotel reviews which are annotated for aspect-level sentiment analysis. The labels include Strong Negative (−−), Negative (−), Positive (+), and Strong Positive (++). We map the aspect-level annotations to sentence level by taking the most common label and remove instances of mixed polarity. We also create a binary setup by combining the strong and weak classes. This gives us a total of six experiments. The details of the sentence-level datasets are summarized in Table 1. For each of the experi2487 Figure 2: Binary and four class macro F1 on Spanish (ES), Catalan (CA), and Basque (EU). ments, we take 70 percent of the data for training, 20 percent for testing and the remaining 10 percent are used as development data for tuning. 4.2 Monolingual Word Embeddings For BLSE, ARTETXE, and MT, we require monolingual vector spaces for each of our languages. For English, we use the publicly available GoogleNews vectors4. For Spanish, Catalan, and Basque, we train skip-gram embeddings using the Word2Vec toolkit4 with 300 dimensions, subsampling of 10−4, window of 5, negative sampling of 15 based on a 2016 Wikipedia corpus5 (sentence-split, tokenized with IXA pipes (Agerri et al., 2014) and lowercased). The statistics of the Wikipedia corpora are given in Table 2. 4.3 Bilingual Lexicon For BLSE, ARTETXE, and BARISTA, we also require a bilingual lexicon. We use the sentiment lexicon from Hu and Liu (2004) (to which we refer in the following as Bing Liu) and its translation into each target language. We translate the lexicon using Google Translate and exclude multi-word expressions.6 This leaves a dictionary of 5700 translations in Spanish, 5271 in Catalan, and 4577 in Basque. We set aside ten percent of the translation pairs as a development set in order to check that the distances between translation pairs not seen during training are also minimized during training. 4https://code.google.com/archive/p/word2vec/ 5http://attardi.github.io/wikiextractor/ 6Note that we only do that for convenience. Using a machine translation service to generate this list could easily be replaced by a manual translation, as the lexicon is comparably small. 5 Experiments 5.1 Setting We compare BLSE (Sections 3.1–3.3) to ARTETXE (Section 2) and BARISTA (Section 2) as baselines, which have similar data requirements and to machine translation (MT) and monolingual (MONO) upper bounds which request more resources. For all models (MONO, MT, ARTETXE, BARISTA), we take the average of the word embeddings in the source-language training examples and train a linear SVM7. We report this instead of using the same feed-forward network as in BLSE as it is the stronger upper bound. We choose the parameter c on the target language development set and evaluate on the target language test set. Upper Bound MONO. 
We set an empirical upper bound by training and testing a linear SVM on the target language data. As mentioned in Section 5.1, we train the model on the averaged embeddings from target language training data, tuning the c parameter on the development data. We test on the target language test data. Upper Bound MT. To test the effectiveness of machine translation, we translate all of the sentiment corpora from the target language to English using the Google Translate API8. Note that this approach is not considered a baseline, as we assume not to have access to high-quality machine translation for low-resource languages of interest. Baseline ARTETXE. We compare with the approach proposed by Artetxe et al. (2016) which has shown promise on other tasks, such as word similarity. In order to learn the projection matrix W, we need translation pairs. We use the same word-to-word bilingual lexicon mentioned in Section 3.1. We then map the source vector space S to the bilingual space ˆS = SW and use these embeddings. Baseline BARISTA. We also compare with the approach proposed by Gouws and Søgaard (2015). The bilingual lexicon used to create the pseudobilingual corpus is the same word-to-word bilingual lexicon mentioned in Section 3.1. We follow the authors’ setup to create the pseudo-bilingual corpus. We create bilingual embeddings by training skip-gram embeddings using the Word2Vec toolkit on the pseudo-bilingual corpus using the same parameters from Section 4.2. 7LinearSVC implementation from scikit-learn. 8https://translate.google.com 2488 Our method: BLSE. We implement our model BLSE in Pytorch (Paszke et al., 2016) and initialize the word embeddings with the pretrained word embeddings S and T mentioned in Section 4.2. We use the word-to-word bilingual lexicon from Section 4.3, tune the hyperparameters α, training epochs, and batch size on the target development set and use the best hyperparameters achieved on the development set for testing. ADAM (Kingma and Ba, 2014) is used in order to minimize the average loss of the training batches. Binary 4-class ES CA EU ES CA EU Upper Bounds MONO P 75.0 79.0 74.0 55.2 50.0 48.3 R 72.3 79.6 67.4 42.8 50.9 46.5 F1 73.5 79.2 69.8 45.5 49.9 47.1 MT P 82.3 78.0 75.6 51.8 58.9 43.6 R 76.6 76.8 66.5 48.5 50.5 45.2 F1 79.0 77.2 69.4 48.8 52.7 43.6 BLSE P 72.1 **72.8 **67.5 **60.0 38.1 *42.5 R **80.1 **73.0 **72.7 *43.4 38.1 37.4 F1 **74.6 **72.9 **69.3 *41.2 35.9 30.0 Baselines Artetxe P 75.0 60.1 42.2 40.1 21.6 30.0 R 64.3 61.2 49.5 36.9 29.8 35.7 F1 67.1 60.7 45.6 34.9 23.0 21.3 Barista P 64.7 65.3 55.5 44.1 36.4 34.1 R 59.8 61.2 54.5 37.9 38.5 34.3 F1 61.2 60.1 54.8 39.5 36.2 33.8 Ensemble Artetxe P 65.3 63.1 70.4 43.5 46.5 50.1 R 61.3 63.3 64.3 44.1 48.7 50.7 F1 62.6 63.2 66.4 43.8 47.6 49.9 Barista P 60.1 63.4 50.7 48.3 52.8 50.8 R 55.5 62.3 50.4 46.6 53.7 49.8 F1 56.0 62.5 49.8 47.1 53.0 47.8 BLSE P 79.5 84.7 80.9 49.5 54.1 50.3 R 78.7 85.5 69.9 51.2 53.9 51.4 F1 80.3 85.0 73.5 50.3 53.9 50.5 Table 3: Precision (P), Recall (R), and macro F1 of four models trained on English and tested on Spanish (ES), Catalan (CA), and Basque (EU). The bold numbers show the best results for each metric per column and the highlighted numbers show where BLSE is better than the other projection methods, ARTETXE and BARISTA (** p < 0.01, * p < 0.05). Ensembles We create an ensemble of MT and each projection method (BLSE, ARTETXE, BARISTA) by training a random forest classifier on the predictions from MT and each of these approaches. 
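A hedged scikit-learn sketch of this stacking step is shown below; the arrays holding the two systems' predictions and the gold development labels are assumed inputs, not part of the released code.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_ensemble(mt_pred, proj_pred, y_dev):
    """Stack the predictions of MT and one projection model (e.g. BLSE)."""
    X = np.column_stack([mt_pred, proj_pred])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y_dev)
    return clf

# At test time: clf.predict(np.column_stack([mt_test_pred, proj_test_pred]))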
This allows us to evaluate to what extent each projection model adds complementary information to the machine translation approach. 5.2 Results In Figure 2, we report the results of all four methods. Our method outperforms the other projection methods (the baselines ARTETXE and BARISTA) on four of the six experiments substantially. It performs only slightly worse than the more resourcecostly upper bounds (MT and MONO). This is especially noticeable for the binary classification task, where BLSE performs nearly as well as machine translation and significantly better than the other methods. We perform approximate randomization tests (Yeh, 2000) with 10,000 runs and highlight the results that are statistically significant (**p < 0.01, *p < 0.05) in Table 3. In more detail, we see that MT generally performs better than the projection methods (79–69 F1 on binary, 52–44 on 4-class). BLSE (75–69 on binary, 41–30 on 4-class) has the best performance of the projection methods and is comparable with MT on the binary setup, with no significant difference on binary Basque. ARTETXE (67–46 on binary, 35–21 on 4-class) and BARISTA (61– 55 on binary, 40–34 on 4-class) are significantly worse than BLSE on all experiments except Catalan and Basque 4-class. On the binary experiment, ARTETXE outperforms BARISTA on Spanish (67.1 vs. 61.2) and Catalan (60.7 vs. 60.1) but suffers more than the other methods on the four-class experiments, with a maximum F1 of 34.9. BARISTA Model voc mod neg know other total MT bi 49 26 19 14 5 113 4 147 94 19 21 12 293 ARTETXE bi 80 44 27 14 7 172 4 182 141 19 24 19 385 BARISTA bi 89 41 27 20 7 184 4 191 109 24 31 15 370 BLSE bi 67 45 21 15 8 156 4 146 125 29 22 19 341 Table 4: Error analysis for different phenomena. See text for explanation of error classes. 2489 Figure 3: Macro F1 for translation pairs in the Spanish 4-class setup. is relatively stable across languages. ENSEMBLE performs the best, which shows that BLSE adds complementary information to MT. Finally, we note that all systems perform successively worse on Catalan and Basque. This is presumably due to the quality of the word embeddings, as well as the increased morphological complexity of Basque. 6 Model and Error Analysis We analyze three aspects of our model in further detail: (i) where most mistakes originate, (ii) the effect of the bilingual lexicon, and (iii) the effect and necessity of the target-language projection matrix M′. 6.1 Phenomena In order to analyze where each model struggles, we categorize the mistakes and annotate all of the test phrases with one of the following error classes: vocabulary (voc), adverbial modifiers (mod), negation (neg), external knowledge (know) or other. Table 4 shows the results. Vocabulary: The most common way to express sentiment in hotel reviews is through the use of polar adjectives (as in “the room was great) or the mention of certain nouns that are desirable (“it had a pool”). Although this phenomenon has the largest total number of mistakes (an average of 71 per model on binary and 167 on 4-class), it is mainly due to its prevalence. MT performed the best on the test examples which according to the annotation require a correct understanding of the vocabulary (81 F1 on binary /54 F1 on 4-class), with BLSE (79/48) slightly worse. ARTETXE (70/35) and BARISTA (67/41) perform significantly worse. This suggests that BLSE is better ARTETXE and BARISTA at transferring sentiment of the most important sentiment bearing words. 
Negation: Negation is a well-studied phenomenon in sentiment analysis (Pang et al., 2002; Wiegand et al., 2010; Zhu et al., 2014; Reitan et al., 2015). Therefore, we are interested in how these four models perform on phrases that include the negation of a key element, for example “In general, this hotel isn’t bad”. We would like our models to recognize that the combination of two negative elements “isn’t” and “bad” lead to a Positive label. Given the simple classification strategy, all models perform relatively well on phrases with negation (all reach nearly 60 F1 in the binary setting). However, while BLSE performs the best on negation in the binary setting (82.9 F1), it has more problems with negation in the 4-class setting (36.9 F1). Adverbial Modifiers: Phrases that are modified by an adverb, e. g., the food was incredibly good, are important for the four-class setup, as they often differentiate between the base and Strong labels. In the binary case, all models reach more than 55 F1. In the 4-class setup, BLSE only achieves 27.2 F1 compared to 46.6 or 31.3 of MT and BARISTA, respectively. Therefore, presumably, our model does currently not capture the semantics of the target adverbs well. This is likely due to the fact that it assigns too much sentiment to functional words (see Figure 6). External Knowledge Required: These errors are difficult for any of the models to get correct. Many of these include numbers which imply positive or negative sentiment (350 meters from the beach is Positive while 3 kilometers from the beach is Negative). BLSE performs the best (63.5 F1) while MT performs comparably well (62.5). BARISTA performs the worst (43.6). Binary vs. 4-class: All of the models suffer when moving from the binary to 4-class setting; an average of 26.8 in macro F1 for MT, 31.4 for ARTETXE, 22.2 for BARISTA, and for 36.6 BLSE. The two vector projection methods (ARTETXE and BLSE) suffer the most, suggesting that they are currently more apt for the binary setting. 6.2 Effect of Bilingual Lexicon We analyze how the number of translation pairs affects our model. We train on the 4-class Spanish setup using the best hyper-parameters from the previous experiment. 2490 1.0 0.5 0 -.0.5 10 20 30 40 50 60 70 10 20 30 40 50 60 70 10 20 30 40 50 60 70 source synonyms source antonyms translation cosine target synonyms target antonyms Cosine Similarity (a) BLSE (b) Artetxe (c) Barista Figure 4: Average cosine similarity between a subsample of translation pairs of same polarity (“sentiment synonyms”) and of opposing polarity (“sentiment antonyms”) in both target and source languages in each model. The x-axis shows training epochs. We see that BLSE is able to learn that sentiment synonyms should be close to one another in vector space and sentiment antonyms should not. Research into projection techniques for bilingual word embeddings (Mikolov et al., 2013; Lazaridou et al., 2015; Artetxe et al., 2016) often uses a lexicon of the most frequent 8–10 thousand words in English and their translations as training data. We test this approach by taking the 10,000 wordto-word translations from the Apertium Englishto-Spanish dictionary9. We also use the Google Translate API to translate the NRC hashtag sentiment lexicon (Mohammad et al., 2013) and keep the 22,984 word-to-word translations. We perform the same experiment as above and vary the amount of training data from 0, 100, 300, 600, 1000, 3000, 6000, 10,000 up to 20,000 training pairs. 
Finally, we compile a small hand translated dictionary of 200 pairs, which we then expand using target language morphological information, finally giving us 657 translation pairs10. The macro F1 score for the Bing Liu dictionary climbs constantly with the increasing translation pairs. Both the Apertium and NRC dictionaries perform worse than the translated lexicon by Bing Liu, while the expanded hand translated dictionary is competitive, as shown in Figure 3. While for some tasks, e. g., bilingual lexicon induction, using the most frequent words as translation pairs is an effective approach, for sentiment analysis, this does not seem to help. Using a translated sentiment lexicon, even if it is small, gives better results. 9http://www.meta-share.org 10The translation took approximately one hour. We can extrapolate that hand translating a sentiment lexicon the size of the Bing Liu lexicon would take no more than 5 hours. 1.0 0.5 0 -.0.5 10 20 30 40 50 60 70 Cosine Similarity Epochs BLSE No M' translation translation source F1 source F1 target F1 target F1 Figure 5: BLSE model (solid lines) compared to a variant without target language projection matrix M′ (dashed lines). “Translation” lines show the average cosine similarity between translation pairs. The remaining lines show F1 scores for the source and target language with both varints of BLSE. The modified model cannot learn to predict sentiment in the target language (red lines). This illustrates the need for the second projection matrix M′. 6.3 Analysis of M′ The main motivation for using two projection matrices M and M′ is to allow the original embeddings to remain stable, while the projection matrices have the flexibility to align translations and separate these into distinct sentiment subspaces. To justify this design decision empirically, we perform an experiment to evaluate the actual need for the target language projection matrix M′: We create a simplified version of our model without M′, using M to project from the source to target and then P to classify sentiment. 2491 The results of this model are shown in Figure 5. The modified model does learn to predict in the source language, but not in the target language. This confirms that M′ is necessary to transfer sentiment in our model. 7 Qualitative Analyses of Joint Bilingual Sentiment Space In order to understand how well our model transfers sentiment information to the target language, we perform two qualitative analyses. First, we collect two sets of 100 positive sentiment words and one set of 100 negative sentiment words. An effective cross-lingual sentiment classifier using embeddings should learn that two positive words should be closer in the shared bilingual space than a positive word and a negative word. We test if BLSE is able to do this by training our model and after every epoch observing the mean cosine similarity between the sentiment synonyms and sentiment antonyms after projecting to the joint space. We compare BLSE with ARTETXE and BARISTA by replacing the Linear SVM classifiers with the same multi-layer classifier used in BLSE and observing the distances in the hidden layer. Figure 4 shows this similarity in both source and target language, along with the mean cosine similarity between a held-out set of translation pairs and the macro F1 scores on the development set for both source and target languages for BLSE, BARISTA, and ARTETXE. 
From this plot, it is clear that BLSE is able to learn that sentiment synonyms should be close to one another in vector space and antonyms should have a negative cosine similarity. While the other models also learn this to some degree, jointly optimizing both sentiment and projection gives better results. Secondly, we would like to know how well the projected vectors compare to the original space. Our hypothesis is that some relatedness and similarity information is lost during projection. Therefore, we visualize six categories of words in t-SNE (Van der Maaten and Hinton, 2008): positive sentiment words, negative sentiment words, functional words, verbs, animals, and transport. The t-SNE plots in Figure 6 show that the positive and negative sentiment words are rather clearly separated after projection in BLSE. This indicates that we are able to incorporate sentiment information into our target language without any labeled data in the target language. However, the downside BLSE Original Figure 6: t-SNE-based visualization of the Spanish vector space before and after projection with BLSE. There is a clear separation of positive and negative words after projection, despite the fact that we have used no labeled data in Spanish. of this is that functional words and transportation words are highly correlated with positive sentiment. 8 Conclusion We have presented a new model, BLSE, which is able to leverage sentiment information from a resource-rich language to perform sentiment analysis on a resource-poor target language. This model requires less parallel data than MT and performs better than other state-of-the-art methods with similar data requirements, an average of 14 percentage points in F1 on binary and 4 pp on 4-class crosslingual sentiment analysis. We have also performed a phenomena-driven error analysis which showed that BLSE is better than ARTETXE and BARISTA at transferring sentiment, but assigns too much sentiment to functional words. In the future, we will extend our model so that it can project multi-word phrases, as well as single words, which could help with negations and modifiers. Acknowledgements We thank Sebastian Pad´o, Sebastian Riedel, Eneko Agirre, and Mikel Artetxe for their conversations and feedback. 2492 References Rodrigo Agerri, Josu Bermudez, and German Rigau. 2014. Ixa pipeline: Efficient and ready to use multilingual nlp tools. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14). pages 3823–3828. Rodrigo Agerri, Montse Cuadros, Sean Gaines, and German Rigau. 2013. OpeNER: Open polarity enhanced named entity recognition. Sociedad Espa˜nola para el Procesamiento del Lenguaje Natural 51(Septiembre):215–218. Mariana S. C. Almeida, Claudia Pinto, Helena Figueira, Pedro Mendes, and Andr´e F. T. Martins. 2015. Aligning opinions: Cross-lingual opinion mining with dependencies. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). pages 408–418. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 2289–2294. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pages 451–462. Alexandra Balahur and Marco Turchi. 2014. Comparative experiments using supervised learning and machine translation for multilingual sentiment analysis. Computer Speech & Language 28(1):56–75. Carmen Banea, Rada Mihalcea, and Janyce Wiebe. 2010. Multilingual subjectivity: Are more languages better? In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). pages 28–36. Carmen Banea, Rada Mihalcea, Janyce Wiebe, and Samer Hassan. 2008. Multilingual subjectivity analysis using machine translation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. pages 127–135. Jeremy Barnes, Patrik Lambert, and Toni Badia. 2018. Multibooked: A corpus of basque and catalan hotel reviews annotated for aspect-level sentiment classification. In Proceedings of 11th Language Resources and Evaluation Conference (LREC’18). Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, Curran Associates, Inc., pages 1853–1861. Xilun Chen, Ben Athiwaratkun, Yu Sun, Kilian Q. Weinberger, and Claire Cardie. 2016. Adversarial deep averaging networks for cross-lingual sentiment classification. CoRR abs/1606.01614. http://arxiv.org/abs/1606.01614. Erkin Demirtas and Mykola Pechenizkiy. 2013. Crosslingual polarity detection with machine translation. Proceedings of the International Workshop on Issues of Sentiment Discovery and Opinion Mining - WISDOM ’13 pages 9:1–9:8. Kevin Duh, Akinori Fujino, and Masaaki Nagata. 2011. Is machine translation ripe for cross-lingual sentiment classification? Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers 2:429–433. Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. BilBOWA: Fast bilingual distributed representations without word alignments. Proceedings of The 32nd International Conference on Machine Learning pages 748–756. Stephan Gouws and Anders Søgaard. 2015. Simple task-specific bilingual word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 1386–1390. Karl Moritz Hermann and Phil Blunsom. 2014. Multilingual models for compositional distributed semantics. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pages 58– 68. Minqing Hu and Bing Liu. 2004. Mining opinion features in customer reviews. In Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2004). pages 168–177. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daume III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Beijing, China, pages 1681–1691. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. 
Proceedings of the 3rd International Conference on Learning Representations (ICLR) . Angeliki Lazaridou, Georgiana Dinu, and Marco Baroni. 2015. Hubness and pollution: delving into cross-space mapping for zero-shot learning. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing pages 270–280. 2493 Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. pages 142–150. Xinfan Meng, Furu Wei, Xiaohua Liu, Ming Zhou, Ge Xu, and Houfeng Wang. 2012. Cross-lingual mixture model for sentiment classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Jeju Island, Korea, pages 572–581. http://www.aclweb.org/anthology/P12-1060. Rada Mihalcea, Carmen Banea, and Janyce Wiebe. 2007. Learning multilingual subjective language via cross-lingual projections. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. pages 976–983. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. CoRR abs/1309.4168. http://arxiv.org/abs/1309.4168. Saif M. Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. Nrc-canada: Building the state-ofthe-art in sentiment analysis of tweets. In Proceedings of the seventh international workshop on Semantic Evaluation Exercises (SemEval-2013). Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? sentiment classification using machine learning techniques. In Proceedings of the ACL-02 Conference on Empirical methods in natural language processing-Volume 10. Association for Computational Linguistics, pages 79–86. Adam Paszke, Sam Gross, Soumith Chintala, and Gregory Chanan. 2016. Pytorch deeplearning framework. http://pytorch.org. Accessed: 2017-08-10. Peter Prettenhofer and Benno Stein. 2011. Crosslingual adaptation using structural correspondence learning. ACM Transactions on Intelligent Systems and Technology 3(1):1–22. Mohammad Sadegh Rasooli, Noura Farra, Axinia Radeva, Tao Yu, and Kathleen McKeown. 2017. Cross-lingual sentiment transfer with limited resources. Machine Translation . Johan Reitan, Jørgen Faret, Bj¨orn Gamb¨ack, and Lars Bungum. 2015. Negation scope detection for twitter sentiment analysis. In Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. pages 99–108. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentiment-specific word embedding for twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pages 1555–1565. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research 9:2579–2605. Xiaojun Wan. 2009. Co-training for cross-lingual sentiment classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. pages 235– 243. Michael Wiegand, Alexandra Balahur, Benjamin Roth, Dietrich Klakow, and Andr´es Montoyo. 2010. A survey on the role of negation in sentiment analysis. 
In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing. pages 60– 68. Min Xiao and Yuhong Guo. 2012. Multi-view adaboost for multilingual subjectivity analysis. In Proceedings of COLING 2012. pages 2851–2866. Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th Conference on Computational linguistics (COLING). pages 947–953. Guangyou Zhou, Zhiyuan Zhu, Tingting He, and Xiaohua Tony Hu. 2016. Cross-lingual sentiment classification with stacked autoencoders. Knowledge and Information Systems 47(1):27–44. HuiWei Zhou, Long Chen, Fulin Shi, and Degen Huang. 2015. Learning bilingual sentiment word embeddings for cross-language sentiment classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). pages 430–440. Xiaodan Zhu, Hongyu Guo, Saif Mohammad, and Svetlana Kiritchenko. 2014. An empirical study on the effect of negation words on sentiment. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pages 304–313.
2018
231
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2494–2504 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2494 Learning Domain-Sensitive and Sentiment-Aware Word Embeddings∗ Bei Shi1, Zihao Fu1, Lidong Bing2 and Wai Lam1 1Department of Systems Engineering and Engineering Management The Chinese University of Hong Kong, Hong Kong 2Tencent AI Lab, Shenzhen, China {bshi,zhfu,wlam}@se.cuhk.edu.hk [email protected] Abstract Word embeddings have been widely used in sentiment classification because of their efficacy for semantic representations of words. Given reviews from different domains, some existing methods for word embeddings exploit sentiment information, but they cannot produce domainsensitive embeddings. On the other hand, some other existing methods can generate domain-sensitive word embeddings, but they cannot distinguish words with similar contexts but opposite sentiment polarity. We propose a new method for learning domain-sensitive and sentimentaware embeddings that simultaneously capture the information of sentiment semantics and domain sensitivity of individual words. Our method can automatically determine and produce domain-common embeddings and domain-specific embeddings. The differentiation of domaincommon and domain-specific words enables the advantage of data augmentation of common semantics from multiple domains and capture the varied semantics of specific words from different domains at the same time. Experimental results show that our model provides an effective way to learn domain-sensitive and sentimentaware word embeddings which benefit sentiment classification at both sentence level and lexicon term level. ∗This work was partially done when Bei Shi was an intern at Tencent AI Lab. This project is substantially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14203414). 1 Introduction Sentiment classification aims to predict the sentiment polarity, such as “positive” or “negative”, over a piece of review. It has been a long-standing research topic because of its importance for many applications such as social media analysis, ecommerce, and marketing (Liu, 2012; Pang et al., 2008). Deep learning has brought in progress in various NLP tasks, including sentiment classification. Some researchers focus on designing RNN or CNN based models for predicting sentence level (Kim, 2014) or aspect level sentiment (Li et al., 2018; Chen et al., 2017; Wang et al., 2016). These works directly take the word embeddings pre-trained for general purpose as initial word representations and may conduct fine tuning in the training process. Some other researchers look into the problem of learning taskspecific word embeddings for sentiment classification aiming at solving some limitations of applying general pre-trained word embeddings. For example, Tang et al. (2014b) develop a neural network model to convey sentiment information in the word embeddings. As a result, the learned embeddings are sentiment-aware and able to distinguish words with similar syntactic context but opposite sentiment polarity, such as the words “good” and “bad”. In fact, sentiment information can be easily obtained or derived in large scale from some data sources (e.g., the ratings provided by users), which allows reliable learning of such sentiment-aware embeddings. Apart from these words (e.g. 
“good” and “bad”) with consistent sentiment polarity in different contexts, the polarity of some sentiment words is domain-sensitive. For example, the word “lightweight” usually connotes a positive sentiment in the electronics domain since a lightweight device is easier to carry. In contrast, in the movie 2495 domain, the word “lightweight” usually connotes a negative opinion describing movies that do not invoke deep thoughts among the audience. This observation motivates the study of learning domainsensitive word representations (Yang et al., 2017; Bollegala et al., 2015, 2014). They basically learn separate embeddings of the same word for different domains. To bridge the semantics of individual embedding spaces, they select a subset of words that are likely to be domain-insensitive and align the dimensions of their embeddings. However, the sentiment information is not exploited in these methods although they intend to tackle the task of sentiment classification. In this paper, we aim at learning word embeddings that are both domain-sensitive and sentiment-aware. Our proposed method can jointly model the sentiment semantics and domain specificity of words, expecting the learned embeddings to achieve superior performance for the task of sentiment classification. Specifically, our method can automatically determine and produce domain-common embeddings and domainspecific embeddings. Domain-common embeddings represent the fact that the semantics of a word including its sentiment and meaning in different domains are very similar. For example, the words “good” and “interesting” are usually domain-common and convey consistent semantic meanings and positive sentiments in different domains. Thus, they should have similar embeddings across domains. On the other hand, domain-specific word embeddings represent the fact that the sentiments or meanings across domains are different. For example, the word “lightweight” represents different sentiment polarities in the electronics domain and the movie domain. Moreover, some polysemous words have different meanings in different domains. For example, the term “apple” refers to the famous technology company in the electronics domain or a kind of fruit in the food domain. Our model exploits the information of sentiment labels and context words to distinguish domain-common and domain-specific words. If a word has similar sentiments and contexts across domains, it indicates that the word has common semantics in these domains, and thus it is treated as domain-common. Otherwise, the word is considered as domain-specific. The learning of domain-common embeddings can allow the advantage of data augmentation of common semantics of multiple domains, and meanwhile, domainspecific embeddings allow us to capture the varied semantics of specific words in different domains. Specifically, for each word in the vocabulary, we design a distribution to depict the probability of the word being domain-common. The inference of the probability distribution is conducted based on the observed sentiments and contexts. As mentioned above, we also exploit the information of sentiment labels for the learning of word embeddings that can distinguish words with similar syntactic context but opposite sentiment polarity. To demonstrate the advantages of our domainsensitive and sentiment-aware word embeddings, we conduct experiments on four domains, including books, DVSs, electronics, and kitchen appliances. 
The experimental results show that our model can outperform the state-of-the-art models on the task of sentence level sentiment classification. Moreover, we conduct lexicon term sentiment classification in two common sentiment lexicon sets to evaluate the effectiveness of our sentiment-aware embeddings learned from multiple domains, and it shows that our model outperforms the state-of-the-art models on most domains. 2 Related Works Traditional vector space models encode individual words using the one-hot representation, namely, a high-dimensional vector with all zeroes except in one component corresponding to that word (Baeza-Yates et al., 1999). Such representations suffer from the curse of dimensionality, as there are many components in these vectors due to the vocabulary size. Another drawback is that semantic relatedness of words cannot be modeled using such representations. To address these shortcomings, Rumelhart et al. (1988) propose to use distributed word representation instead, called word embeddings. Several techniques for generating such representations have been investigated. For example, Bengio et al. propose a neural network architecture for this purpose (Bengio et al., 2003; Bengio, 2009). Later, Mikolov et al. (2013) propose two methods that are considerably more efficient, namely skip-gram and CBOW. This work has made it possible to learn word embeddings from large data sets, which has led to the current popularity of word embed2496 dings. Word embedding models have been applied to many tasks, such as named entity recognition (Turian et al., 2010), word sense disambiguation (Collobert et al., 2011; Iacobacci et al., 2016; Zhang and Hasan, 2017; Dave et al., 2018), parsing (Roth and Lapata, 2016), and document classification (Tang et al., 2014a,b; Shi et al., 2017). Sentiment classification has been a longstanding research topic (Liu, 2012; Pang et al., 2008; Chen et al., 2017; Moraes et al., 2013). Given a review, the task aims at predicting the sentiment polarity on the sentence level (Kim, 2014) or the aspect level (Li et al., 2018; Chen et al., 2017). Supervised learning algorithms have been widely used in sentiment classification (Pang et al., 2002). People usually use different expressions of sentiment semantics in different domains. Due to the mismatch between domainspecific words, a sentiment classifier trained in one domain may not work well when it is directly applied to other domains. Thus cross-domain sentiment classification algorithms have been explored (Pan et al., 2010; Li et al., 2009; Glorot et al., 2011). These works usually find common feature spaces across domains and then share learned parameters from the source domain to the target domain. For example, Pan et al. (2010) propose a spectral feature alignment algorithm to align words from different domains into unified clusters. Then the clusters can be used to reduce the gap between words of the two domains, which can be used to train sentiment classifiers in the target domain. Compared with the above works, our model focuses on learning both domain-common and domain-specific embeddings given reviews from all the domains instead of only transferring the common semantics from the source domain to the target domain. Some researchers have proposed some methods to learn task-specific word embeddings for sentiment classification (Tang et al., 2014a,b). Tang et al. 
(2014b) propose a model named SSWE to learn sentiment-aware embedding via incorporating sentiment polarity of texts in the loss functions of neural networks. Without the consideration of varied semantics of domain-specific words in different domains, their model cannot learn sentiment-aware embeddings across multiple domains. Some works have been proposed to learn word representations considering multiple domains (Yang et al., 2017; Bach et al., 2016; Bollegala et al., 2015). Most of them learn separate embeddings of the same word for different domains. Then they choose pivot words according to frequency-based statistical measures to bridge the semantics of individual embedding spaces. A regularization formulation enforcing that word representations of pivot words should be similar in different domains is added into the original word embedding framework. For example, Yang et al. (2017) use Sørensen-Dice coefficient (Sørensen, 1948) for detecting pivot words and learn word representations across domains. Even though they evaluate the model via the task of sentiment classification, sentiment information associated with the reviews are not considered in the learned embeddings. Moreover, the selection of pivot words is according to frequency-based statistical measures in the above works. In our model, the domain-common words are jointly determined by sentiment information and context words. 3 Model Description We propose a new model, named DSE, for learning Domain-sensitive and Sentiment-aware word Embeddings. For presentation clarity, we describe DSE based on two domains. Note that it can be easily extended for more than two domains, and we remark on how to extend near the end of this section. 3.1 Design of Embeddings We assume that the input consists of text reviews of two domains, namely Dp and Dq. Each review r in Dp and Dq is associated with a sentiment label y which can take on the value of 1 and 0 denoting that the sentiment of the review is positive and negative respectively. In our DSE model, each word w in the whole vocabulary Λ is associated with a domaincommon vector U c w and two domain-specific vectors, namely U p w specific to the domain p and U q w specific to the domain q. The dimension of these vectors is d. The design of U c w, U p w and U q w reflects one characteristic of our model: allowing a word to have different semantics across different domains. The semantic of each word includes not only the semantic meaning but also the sentiment orientation of the word. If the semantic of w is consistent in the domains p and q, we use the vector U c w for both domains. Otherwise, w is repre2497 sented by U p w and U q w for p and q respectively. In traditional cross-domain word embedding methods (Yang et al., 2017; Bollegala et al., 2015, 2016), each word is represented by different vectors in different domains without differentiation of domain-common and domain-specific words. In contrast to these methods, for each word w, we use a latent variable zw to depict its domain commonality. When zw = 1, it means that w is common in both domains. Otherwise, w is specific to the domain p or the domain q. In the standard skip-gram model (Mikolov et al., 2013), the probability of predicting the context words is only affected by the relatedness with the target words. In our DSE model, predicting the context words also depends on the domain-commonality of the target word, i.e zw. For example, assume that there are two domains, e.g. the electronics domain and the movie domain. 
If zw = 1, it indicates a high probability of generating some domain-common words such as “good”, “bad” or “satisfied”. Otherwise, the domain-specific words are more likely to be generated such as “reliable”, “cheap” or “compacts” for the electronics domain. For a word w, we assume that the probability of predicting the context word wt is formulated as follows: p(wt|w) = X k∈{0,1} p(wt|w, zw = k)p(zw = k) (1) If w is a domain-common word without differentiating p and q, the probability of predicting wt can be defined as: p(wt|w, zw = 1) = exp(U c w · Vwt) P w′∈Λ exp(U cw · Vw′) (2) where Λ is the whole vocabulary and Vw′ is the output vector of the word w′. If w is a domain-specific word, the probability of p(wt|w, zw = 0) is specific to the occurrence of w in Dp or Dq. For individual training instances, the occurrences of w in Dp or Dq have been established. Then the probability of p(wt|w, zw = 0) can be defined as follows: p(wt|w, zw = 0) =        exp(Up w·Vwt) P w′∈Λ exp(Up w·Vw′), if w ∈Dp exp(Uq w·Vwt) P w′∈Λ exp(Uq w·Vw′), if w ∈Dq (3) 3.2 Exploiting Sentiment Information In our DSE model, the prediction of review sentiment depends on not only the text information but also the domain-commonality. For example, the domain-common word “good” has high probability to be positive in different reviews across multiple domains. However, for the word “lightweight”, it would be positive in the electronics domain, but negative in the movie domain. We define the polarity yw of each word w to be consistent with the sentiment label of the review: if we observe that a review is associated with a positive label, the words in the review are associated with a positive label too. Then, the probability of predicting the sentiment for the word w can be defined as: p(yw|w) = X k∈{0,1} p(yw|w, zw = k)p(zw = k) (4) If zw = 1, the word w is a domain-common word. The probability p(yw = 1|w, zw = 1) can be defined as: p(yw = 1|w, zw = 1) = σ(U c w · s) (5) where σ(·) is the sigmoid function and the vector s with dimension d represents the boundary of the sentiment. Moreover, we have: p(yw = 0|w, zw = 1) = 1−p(yw = 1|w, zw = 1) (6) If w is a domain-specific word, similarly, the probability p(yw = 1|w, zw = 0) is defined as: p(yw = 1|w, zw = 0) = ( σ(U p w · s) if w ∈Dp σ(U q w · s) if w ∈Dq (7) 3.3 Inference Algorithm We need an inference method that can learn, given Dp and Dq, the values of the model parameters, namely, the domain-common embedding U c w, and the domain-specific embeddings U p w and U q w, as well as the domain-commonality distribution p(zw) for each word w. Our inference method combines the Expectation-Maximization (EM) method with a negative sampling scheme. It is summarized in Algorithm 1. In the E-step, we use the Bayes rule to evaluate the posterior distribution of zw for each word and derive the objective function. In the M-step, we maximize the objective function with the gradient descent method and 2498 Algorithm 1 EM negative sampling for DSE 1: Initialize U c w, U p w, U q w, V , s, p(zw) 2: for iter = 1 to Max iter do 3: for each review r in Dp and Dq do 4: for each word w in r do 5: Sample negative instances from the distribution P. 6: Update p(zw|w, cw, yw) by Eq. 11 and Eq. 15 respectively. 7: end for 8: end for 9: Update p(zw) using Eq. 13 10: Update U c w, U p w, U q w, V , s via Maximizing Eq. 14 11: end for update the corresponding embeddings U c w, U p w and U q w. 
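To make Eqs. 1-7 concrete, the following NumPy sketch computes the two mixture probabilities for a single occurrence of a word w in one domain, writing the full softmax of Eqs. 2-3 directly (the negative-sampling approximation is introduced below); all array names are illustrative assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax_prob(u, V, t_idx):
    """p(w_t | w) under one input vector u: full softmax over output vectors V (Eqs. 2-3)."""
    scores = V @ u               # (|vocab|,)
    scores -= scores.max()       # numerical stability
    p = np.exp(scores)
    return p[t_idx] / p.sum()

def word_probabilities(Uc, Ud, V, s, p_common, t_idx):
    """Mixture over the latent z_w for one occurrence of w in domain d.

    Uc: domain-common vector of w; Ud: its domain-specific vector (U^p or U^q);
    V: output vectors; s: sentiment boundary vector; p_common: current p(z_w = 1).
    Returns p(w_t | w) of Eq. 1 and p(y_w = 1 | w) of Eq. 4.
    """
    p_context = (p_common * softmax_prob(Uc, V, t_idx)
                 + (1.0 - p_common) * softmax_prob(Ud, V, t_idx))
    p_positive = (p_common * sigmoid(Uc @ s)
                  + (1.0 - p_common) * sigmoid(Ud @ s))
    return p_context, p_positive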
With the input of Dp and Dq, the likelihood function of the whole training set is: L = Lp + Lq (8) where Lp and Lq are the likelihood of Dp and Dq respectively. For each review r from Dp, to learn domainspecific and sentiment-aware embeddings, we wish to predict the sentiment label and context words together. Therefore, the likelihood function is defined as follows: Lp = X r∈Dp X w∈r log p(yw, cw|w) (9) where yw is the sentiment label and cw is the set of context words of w. For the simplification of the model, we assume that the sentiment label yw and the context words cw of the word w are conditionally dependent. Then the likelihood Lp can be rewritten as: Lp = X r∈Dp X w∈r X wt∈cw log p(wt|w)+ X r∈Dp X w∈r log p(yw|w) (10) where p(wt|w) and p(yw|w) are defined in Eq. 1 and Eq. 4 respectively. The likelihood of the reviews from Dq, i.e Lq, is defined similarly. For each word w in the review r, in the E-step, the posterior probability of zw given cw and yw is: p(zw = k|w, cw, yw) = p(zw = k)p(yw|w, zw = k) Q wt∈cw p(wt|w, zw = k) P k′∈{0,1} p(zw = k′)p(yw|w, zw = k′) Q wt∈cw p(wt|w, zw = k′) (11) In the M-step, given the posterior distribution of zw in Eq. 11, the goal is to maxmize the following Q function: Q = X r∈{Dp,Dq} X w∈r X zw p(zw|w, yw, wt+j) × log(p(zw)p(cw, y|z, wt)) = X r∈{Dp,Dq} X w∈r X zw p(zw|w, yw, cw) [log p(zw) + log(yw|z, w)+ X wt∈cw log p(wt|zw, w)] (12) Using the Lagrange multiplier, we can obtain the update rule of p(zw), satisfying the normalization constraints that P zw∈0,1 p(zw) = 1 for each word w: p(zw) = P r∈{Dp,Dq} P w∈r p(zw|w, yw, cw) P r∈{Dp,Dq} n(w, r) (13) where n(w, r) is the number of occurrence of the word w in the review r. To obtain U c w, U p w and U q w, we collect the related items in Eq. 12 as follows: QU = X r∈{Dp,Dq} X w∈r X zw p(zw|w, yw, wt+j) [log(yw|zw, w) + X wt∈cw log p(wt|zw, w)] (14) Note that computing the value p(wt|w, zw) based on Eq. 2 and Eq. 3 is not feasible in practice, given that the computation cost is proportional to the size of Λ. However, similar to the skip-gram model, we can rely on negative sampling to address this issue. Therefore we estimate the probability of predicting the context word p(wt|w, zw = 1) as follows: log p(wt|w, zw = 1) ∝log σ(U c w · Vwt) + n X i=1 Ewi∼P [log σ(−U c w · Vwi)] (15) 2499 where wi is a negative instance which is sampled from the word distribution P(.). Mikolov et al. (2013) have investigated many choices for P(w) and found that the best P(w) is equal to the unigram distribution Unigram(w) raised to the 3/4rd power. We adopt the same setting. The probability p(wt|w, zw = 0) in Eq. 3 can be approximated in a similar manner. After the substitution of p(wt|w, zw), we use the Stochastic Gradient Descent method to maximize Eq. 14, and obtain the update of U c w, U p w and U q w. 3.4 More Discussions In our model, for simplifying the inference algorithm and saving the computational cost, we assume that the target word wt in the context and the sentiment label yw of the word w are conditionally independent. Such technique has also been used in other popular models such as the bi-gram language model. Otherwise, we need to consider the term p(wt|w, yw), which complicates the inference algorithm. We define the formulation of the term p(wt|w, z) to be similar to the original skipgram model instead of the CBOW model. The CBOW model averages the context words to predict the target word. The skip-gram model uses pairwise training examples which are much easier to integrate with sentiment information. 
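To illustrate how the pieces of the inference fit together, the following sketch computes the negative-sampling estimate of log p(w_t | w, z_w) from Eq. 15, the E-step posterior of Eq. 11 in log space, and the p(z_w) update of Eq. 13. Function and variable names are assumptions for illustration only; this is not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_p_context_ns(u_w, V_out, wt, neg_ids):
    """Negative-sampling estimate of log p(w_t | w, z_w) as in Eq. 15.

    u_w: input vector of w under a fixed z_w (U^c_w, U^p_w, or U^q_w).
    neg_ids: indices of negative words sampled from the unigram^(3/4) distribution P.
    """
    pos = np.log(sigmoid(u_w @ V_out[wt]) + 1e-12)
    neg = np.sum(np.log(sigmoid(-V_out[neg_ids] @ u_w) + 1e-12))
    return pos + neg

def posterior_z1(prior_z1, log_lik_z1, log_lik_z0):
    """E-step posterior p(z_w = 1 | w, c_w, y_w) of Eq. 11, computed in log space.

    log_lik_z1 / log_lik_z0: log p(y_w | w, z_w) plus the sum over the context
    words of log p(w_t | w, z_w), for z_w = 1 and z_w = 0 respectively.
    """
    a = np.log(prior_z1 + 1e-12) + log_lik_z1
    b = np.log(1.0 - prior_z1 + 1e-12) + log_lik_z0
    m = max(a, b)                       # shift for numerical stability
    return np.exp(a - m) / (np.exp(a - m) + np.exp(b - m))

def update_prior(posteriors_of_w):
    """M-step update of p(z_w = 1) from Eq. 13: average the posteriors over
    every occurrence of w in the reviews of both domains."""
    return float(np.mean(posteriors_of_w))
```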
Note that our model can be easily extended to more than two domains. Similarly, we use a domain-specific vector for each word in each domain and each word is also associated with a domain-common vector. We just need to extend the probability distribution of zw from Bernoulli distribution to Multinomial distribution according to the number of domains. 4 Experiment 4.1 Experimental Setup We conducted experiments on the Amazon product reviews collected by Blitzer et al. (2007). We use four product categories: books (B), DVDs (D), electronic items (E), and kitchen appliances (K). A category corresponds to a domain. For each domain, there are 17,457 unlabeled reviews on average associated with rating scores from 1.0 to 5.0 for each domain. We use unlabeled reviews with rating score higher than 3.0 as positive reviews and unlabeled reviews with rating score lower than 3.0 as negative reviews for embedding learning. We first remove reviews whose length is less than 5 words. We also remove punctuations and the stop words. We also stem each word to its root form using Porter Stemmer (Porter, 1980). Note that this review data is used for embedding learning, and the learned embeddings are used as feature vectors of words to conduct the experiments in the later two subsections. Given the reviews from two domains, namely, Dp and Dq, we compare our results with the following baselines and state-of-the-art methods: SSWE The SSWE model1 proposed by Tang et al. (2014b) can learn sentiment-aware word embeddings from tweets. We employ this model on the combined reviews from Dp and Dq and then obtain the embeddings. Yang’s Work Yang et al. (2017) have proposed a method2 to learn domain-sensitive word embeddings. They choose pivot words and add a regularization item into the original skipgram objective function enforcing that word representations of pivot words for the source and target domains should be similar. The method trains the embeddings of the source domain first and then fixes the learned embedding to train the embedding of the target domain. Therefore, the learned embedding of the target domain benefits from the source domain. We denote the method as Yang in short. EmbeddingAll We learn word embeddings from the combined unlabeled review data of Dp and Dq using the skip-gram method (Mikolov et al., 2013). EmbeddingCat We learn word embeddings from the unlabeled reviews of Dp and Dq respectively. To represent a word for review sentiment classification, we concatenate its learned word embeddings from the two domains. EmbeddingP and EmbeddingQ In EmbeddingP, we use the original skip-gram method (Mikolov et al., 2013) to learn word 1We use the implementation from https: //github.com/attardi/deepnl/wiki/ Sentiment-Specific-Word-Embeddings. 2We use the implementation from http://statnlp. org/research/lr/. 2500 B & D B & E B & K D & E D & K E & K Acc. F1 Acc. F1 Acc. F1 Acc. F1 Acc. F1 Acc. 
F1 BOW 0.680 0.653 0.738 0.720 0.734 0.725 0.705 0.685 0.706 0.689 0.739 0.715 EmbeddingP 0.753 0.740 0.752 0.745 0.742 0.741 0.740 0.746 0.707 0.702 0.761 0.760 EmbeddingQ 0.736 0.732 0.697 0.697 0.706 0.701 0.762 0.759 0.758 0.759 0.783 0.780 EmbeddingCat 0.769 0.731 0.768 0.763 0.763 0.763 0.787 0.773 0.770 0.770 0.807 0.803 EmbeddingAll 0.769 0.759 0.765 0.740 0.775 0.767 0.783 0.779 0.779 0.776 0.819 0.815 Yang 0.767 0.752 0.775 0.766 0.760 0.755 0.791 0.785 0.762 0.760 0.805 0.804 SSWE 0.783 0.772 0.791 0.780 0.801 0.792 0.825 0.815 0.795 0.790 0.835 0.824 DSEc 0.773 0.750 0.783 0.781 0.775 0.773 0.797 0.792 0.784 0.776 0.806 0.800 DSEw 0.794†♮0.793†♮ 0.806†♮0.802†♮ 0.797† 0.793† 0.843†♮0.832†♮ 0.829†♮0.827†♮ 0.856†♮0.853†♮ Table 1: Results of review sentiment classification. The markers † and ♮refer to p-value < 0.05 when comparing with Yang and SSWE respectively. embeddings only from the unlabeled reviews of Dp. Similarly, we only adopt the unlabeled reviews from Dq to learn embeddings in EmbeddingQ. BOW We use the traditional bag of words model to represent each review in the training data. For our DSE model, we have two variants to represent each word. The first variant DSEc represents each word via concatenating the domaincommon vector and the domain-specific vector. The second variant DSEw concatenates domaincommon word embeddings and domain-specific word embeddings by considering the domaincommonality distribution p(zw). For individual review instances, the occurrences of w in Dp or Dq have been established. The representation of w is specific to the occurrence of w in Dp or Dq. Specifically, each word w can be represented as follows: Uw =            if w ∈Dp U c w × p(zw) ⊕U p w × (1.0 −p(zw)) if w ∈Dq U c w × p(zw) ⊕U q w × (1.0 −p(zw)) (16) where ⊕denotes the concatenation operator. For all word embedding methods, we set the dimension to 200. For the skip-gram based methods, we sample 5 negative instances and the size of the windows for each target word is 3. For our DSE model, the number of iterations for the whole reviews is 100 and the learning rate is set to 1.0. 4.2 Review Sentiment Classification For the task of review sentiment classification, we use 1000 positive and 1000 negative sentiment reviews labeled by Blitzer et al. (2007) for each domain to conduct experiments. We randomly select 800 positive and 800 negative labeled reviews from each domain as training data, and the remaining 200 positive and 200 negative labeled reviews as testing data. We use the SVM classifier (Fan et al., 2008) with linear kernel to train on the training reviews for each domain, with each review represented as the average vector of its word embeddings. We use two metrics to evaluate the performance of sentiment classification. One is the standard accuracy metric. The other one is Macro-F1, which is the average of F1 scores for both positive and negative reviews. We conduct multiple trials by selecting every possible two domains from books (B), DVDs (D), electronic items (E) and kitchen appliances (K). We use the average of the results of each two domains. The experimental results are shown in Table 1. From Table 1, we can see that compared with other baseline methods, our DSEw model can achieve the best performance of sentiment classification across most combinations of the four domains. 
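As a concrete reading of Eq. 16 above, the sketch below builds the DSEw representation of a word from its domain-common and domain-specific vectors weighted by p(z_w), and averages the word vectors of a review to obtain the feature vector fed to the SVM classifier. Names and shapes are hypothetical; this is not the released implementation.

```python
import numpy as np

def dse_w_vector(w, domain, U_c, U_p, U_q, p_z1):
    """DSEw representation of word w (Eq. 16): concatenate the p(z_w)-weighted
    domain-common vector with the (1 - p(z_w))-weighted domain-specific vector."""
    common = p_z1[w] * U_c[w]
    specific = (1.0 - p_z1[w]) * (U_p[w] if domain == "p" else U_q[w])
    return np.concatenate([common, specific])          # dimension 2d

def review_vector(word_ids, domain, U_c, U_p, U_q, p_z1):
    """Average of the word representations in a review, used as the SVM feature vector."""
    vecs = [dse_w_vector(w, domain, U_c, U_p, U_q, p_z1) for w in word_ids]
    return np.mean(vecs, axis=0)
```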
Our statistical t-tests for most of the combinations of domains show that the improvement of our DSEw model over Yang and SSWE is statistically significant respectively (p-value < 0.05) at 95% confidence level. It shows that our method can capture the domain-commonality and sentiment information at the same time. Even though both of the SSWE model and our DSE model can learn sentiment-aware word embeddings, our DSEw model can outperform SSWE. It demonstrates that compared with general sentiment-aware embeddings, our learned domain-common and domain-specific word em2501 B & D B & E B & K D & E D & K E & K HL MPQA HL MPQA HL MPQA HL MPQA HL MPQA HL MPQA EmbeddingP 0.740 0.733 0.742 0.734 0.747 0.735 0.744 0.701 0.745 0.709 0.628 0.574 EmbeddingQ 0.743 0.701 0.627 0.573 0.464 0.453 0.621 0.577 0.462 0.450 0.465 0.453 EmbeddingCat 0.780 0.772 0.773 0.756 0.772 0.751 0.744 0.728 0.755 0.702 0.683 0.639 EmbeddingAll 0.777 0.769 0.773 0.730 0.762 0.760 0.712 0.707 0.749 0.724 0.670 0.658 Yang 0.780 0.775 0.789 0.762 0.781 0.770 0.762 0.736 0.756 0.713 0.634 0.614 SSWE 0.816 0.801 0.831 0.817 0.822 0.808 0.826 0.785 0.784 0.772 0.707 0.659 DSE 0.802 0.788 0.833 0.828 0.832 0.799 0.804 0.797 0.796 0.786 0.725 0.683 Table 2: Results of lexicon term sentiment classification. beddings can capture semantic variations of words across multiple domains. Compared with the method of Yang which learns cross-domain embeddings, our DSEw model can achieve better performance. It is because we exploit sentiment information to distinguish domain-common and domain-specific words during the embedding learning process. The sentiment information can also help the model distinguish the words which have similar contexts but different sentiments. Compared with EmbeddingP and EmbeddingQ, the methods of EmbeddingAll and EmbeddingCat can achieve better performance. The reason is that the data augmentation from other domains helps sentiment classification in the original domain. Our DSE model also benefits from such kind of data augmentation with the use of reviews from Dp and Dq. We observe that our DSEw variant performs better than the variant of DSEc. Compared with DSEc, our DSEw variant adds the item of p(zw) as the weight to combine domain-common embeddings and domain-specific embeddings. It shows that the domain-commonality distribution in our DSE model, i.e p(wz), can effectively model the domain-sensitive information of each word and help review sentiment classification. 4.3 Lexicon Term Sentiment Classification To further evaluate the quality of the sentiment semantics of the learned word embeddings, we also conduct lexicon term sentiment classification on two popular sentiment lexicons, namely HL (Hu and Liu, 2004) and MPQA (Wilson et al., 2005). The words with neutral sentiment and phrases are removed. The statistics of HL and MPQA are shown in Table 3. We conduct multiple trials by selecting every possible two domains from books (B), DVDs (D), electronics items (E) and kitchen appliances (K). Lexicon Positive Negative Total HL 1,331 2,647 3,978 MPQA 1,932 2,817 3,075 Table 3: Statistics of the sentiment lexicons. For each trial, we learn the word embeddings. For our DSE model, we only use the domain-common part to represent each word because the lexicons are usually not associated with a particular domain. For each lexicon, we select 80% to train the SVM classifier with linear kernel and the remaining 20% to test the performance. The learned embedding is treated as the feature vector for the lexicon term. 
We conduct 5-fold cross validation on all the lexicons. The evaluation metric is MacroF1 of positive and negative lexicons. Table 2 shows the experimental results of lexicon term sentiment classification. Our DSE method can achieve competitive performance among all the methods. Compared with SSWE, our DSE is still competitive because both of them consider the sentiment information in the embeddings. Our DSE model outperforms other methods which do not consider sentiments such as Yang, EmbeddingCat and EmbeddingAll. Note that the advantage of domain-sensitive embeddings would be insufficient for this task because the sentiment lexicons are not domain-specific. 5 Case Study Table 4 shows the probabilities of “lightweight”, “die”, “mysterious”, and “great” to be domaincommon for different domain combinations. For “lightweight”, its domain-common probability for the books domain and the DVDs domain (“B & D”) is quite high, i.e. p(z = 1) = 0.999, and the review examples in the last column show that the word “lightweight” expresses the meaning of lacking depth of content in books or movies. Note that most reviews of DVDs are about movies. 2502 Term Domain p(z = 1) Sample Reviews “lightweight” B & D 0.999 - I find Seth Godin’s books incredibly lightweight. There is really nothing of any substance here.(B) - I love the fact that it’s small and lightweight and fits into a tiny pocket on my camera case so I never lose track of it.(E) - These are not ”lightweight” actors. (D) - This vacuum does a pretty good job. It is lightweight and easy to use.(K) B & E 0.404 B & K 0.241 D & E 0.380 D & K 0.013 E & K 0.696 “die” B & E 0.435 - I’m glad Brando lived long enough to get old and fat, and that he didn’t die tragically young like Marilyn, JFK, or Jimi Hendrix.(B) - Like many others here, my CD-changer died after a couple of weeks and it wouldn’t read any CD.(E) - I had this toaster for under 3 years when I came home one day and it smoked and died. (K) B & K 0.492 E & K 0.712 “mysterious” - This novel really does cover the gamut: stunning twists, genuine love, beautiful settings, desire for riches, mysterious murders, detective investigations, false accusations, and self vindication.(B) - Caller ID functionality for Vonage mysteriously stopped working even though this phone’s REN is rated at 0.1b. (E) B & E 0.297 “great” B & D 0.760 - This is a great book for anyone learning how to handle dogs.(B) - This is a great product, and you can get it, along with any other products on Amazon up to $500 Free!(E) - I grew up with drag racing in the 50s, 60s & 70s and this film gives a great view of what it was like.(D) - This is a great mixer its a little loud but worth it for the power you get.(K) B & E 0.603 B & K 0.628 D & E 0.804 D & K 0.582 E & K 0.805 Table 4: Learned domain-commonality for some words. p(z = 1) denotes the probability that the word is domain-common. The letter in parentheses indicates the domain of the review. In the electronics domain and the kitchen appliances domain (“E & K”), “lightweight” means light material or weighing less than average, thus the domain-common probability for these two domains is also high, 0.696. In contrast, for the other combinations, the probability of “lightweight” to be domain-common is much smaller, which indicates that the meaning of “lightweight” varies. Similarly, “die” in the domains of electronics and kitchen appliances (“E & K”) means that something does not work any more, thus, we have p(z = 1) = 0.712. 
While for the books domain, it conveys meaning that somebody passed away in some stories. The word “mysterious” conveys a positive sentiment in the books domain, indicating how wonderful a story is, but it conveys a negative sentiment in the electronics domain typically describing that a product breaks down unpredictably. Thus, its domain-common probability is small. The last example is the word “great”, and it usually has positive sentiment in all domains, thus has large values of p(z = 1) for all domain combinations. 6 Conclusions We propose a new method of learning domainsensitive and sentiment-aware word embeddings. Compared with existing sentiment-aware embeddings, our model can distinguish domain-common and domain-specific words with the consideration of varied semantics across multiple domains. Compared with existing domain-sensitive methods, our model detects domain-common words according to not only similar context words but also sentiment information. Moreover, our learned embeddings considering sentiment information can distinguish words with similar syntactic context but opposite sentiment polarity. We have conducted experiments on two downstream sentiment classification tasks, namely review sentiment classification and lexicon term sentiment classification. The experimental results demonstrate the advantages of our approach. References Ngo Xuan Bach, Vu Thanh Hai, and Tu Minh Phuong. 2016. Cross-domain sentiment classification with word embeddings and canonical correlation analysis. In Proceedings of the Seventh Symposium on Information and Communication Technology. ACM, pages 159–166. Ricardo Baeza-Yates, Berthier Ribeiro-Neto, et al. 1999. Modern Information Retrieval, volume 463. ACM press New York. Yoshua Bengio. 2009. Learning deep architectures for ai. Foundations and Trends in Machine Learning 2(1):1–127. 2503 Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research 3:1137–1155. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. pages 440–447. Danushka Bollegala, Takanori Maehara, and Ken-ichi Kawarabayashi. 2015. Unsupervised cross-domain word representation learning. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics. pages 730–740. Danushka Bollegala, Tingting Mu, and John Yannis Goulermas. 2016. Cross-domain sentiment classification using sentiment sensitive embeddings. IEEE Transactions on Knowledge and Data Engineering 28(2):398–410. Danushka Bollegala, David Weir, and John Carroll. 2014. Learning to predict distributions of words across domains. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. pages 613–623. Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on memory for aspect sentiment analysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 452–461. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12:2493–2537. Vachik Dave, Baichuan Zhang, Pin-Yu Chen, and Mohammad Al Hasan. 2018. Neural-brane: Neural bayesian personalized ranking for attributed network embedding. 
arXiv preprint arXiv:1804.08774 . Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. Journal of Machine Learning Research 9:1871–1874. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th International Conference on Machine Learning. pages 513–520. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pages 168–177. Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Embeddings for word sense disambiguation: An evaluation study. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. pages 897–907. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. pages 1746–1751. Tao Li, Vikas Sindhwani, Chris Ding, and Yi Zhang. 2009. Knowledge transformation for cross-domain sentiment classification. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval. pages 716–717. Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018. Transformation networks for target-oriented sentiment classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies 5(1):1–167. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems. pages 3111–3119. Rodrigo Moraes, Jo˜aO Francisco Valiati, and Wilson P Gavi˜aO Neto. 2013. Document-level sentiment classification: An empirical comparison between svm and ann. Expert Systems with Applications 40(2):621–633. Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In Proceedings of the 19th International Conference on World Wide Web. pages 751–760. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing. pages 79–86. Bo Pang, Lillian Lee, et al. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval 2(1–2):1–135. Martin F Porter. 1980. An algorithm for suffix stripping. Program 14(3):130–137. Michael Roth and Mirella Lapata. 2016. Neural semantic role labeling with dependency path embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. pages 1192–1202. David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1988. Learning representations by backpropagating errors. Cognitive modeling 5(3):1. 2504 Bei Shi, Wai Lam, Shoaib Jameel, Steven Schockaert, and Kwun Ping Lai. 2017. Jointly learning word embeddings and latent topics. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. pages 375–384. Thorvald Sørensen. 1948. 
A method of establishing groups of equal amplitude in plant sociology based on similarity of species and its application to analyses of the vegetation on danish commons. Biologiske Skrifter 5:1–34. Duyu Tang, Furu Wei, Bing Qin, Ming Zhou, and Ting Liu. 2014a. Building large-scale twitter-specific sentiment lexicon: A representation learning approach. In Proceedings of the 25th International Conference on Computational Linguistics. pages 172–182. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014b. Learning sentimentspecific word embedding for twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. volume 1, pages 1555–1565. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. pages 384–394. Yequan Wang, Minlie Huang, Li Zhao, et al. 2016. Attention-based lstm for aspect-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 606–615. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. In Proceedings of the 2005 Conference on Empirical Methods in Natural Language Processing. pages 347–354. Wei Yang, Wei Lu, and Vincent Zheng. 2017. A simple regularization-based algorithm for learning crossdomain word embeddings. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 2898–2904. Baichuan Zhang and Mohammad Al Hasan. 2017. Name disambiguation in anonymized graphs using network embedding. In Proceedings of the 26th ACM International on Conference on Information and Knowledge Management. pages 1239–1248.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2505–2513 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2505 Cross-Domain Sentiment Classification with Target Domain Specific Information Minlong Peng, Qi Zhang∗, Yu-gang Jiang, Xuanjing Huang Shanghai Key Laboratory of Intelligent Information Processing, Fudan University School of Computer Science, Fudan University 825 Zhangheng Road, Shanghai, China {mlpeng16,qz,ygj,xjhuang}@fudan.edu.cn Abstract The task of adopting a model with good performance to a target domain that is different from the source domain used for training has received considerable attention in sentiment analysis. Most existing approaches mainly focus on learning representations that are domain-invariant in both the source and target domains. Few of them pay attention to domain specific information, which should also be informative. In this work, we propose a method to simultaneously extract domain specific and invariant representations and train a classifier on each of the representation, respectively. And we introduce a few target domain labeled data for learning domain-specific information. To effectively utilize the target domain labeled data, we train the domain-invariant representation based classifier with both the source and target domain labeled data and train the domain-specific representation based classifier with only the target domain labeled data. These two classifiers then boost each other in a co-training style. Extensive sentiment analysis experiments demonstrated that the proposed method could achieve better performance than state-of-the-art methods. 1 Introduction Sentiment classification aims to automatically predict sentiment polarity of user generated sentiment data like movie reviews. The exponential increase in the availability of online reviews and recommendations makes it an interesting topic in research and industrial areas. However, reviews ∗Corresponding author. excellent great disappointing insightful delicious confusing fast Book Review Kitchen Review Figure 1: Top indicators extracted with logistic regression for Book and Kitchen domains. The overlap between the two ellipses denotes the shared features between these two domains. can span so many different domains that it is difficult to gather annotated training data for all of them. This has motivated much research on crossdomain sentiment classification which transfers the knowledge from label rich domain (source domain) to the label few domain (target domain). In recent years, the most popular cross-domain sentiment classification approach is to extract domain invariant features, whose distribution in the source domain is close to that in the target domain. (Glorot et al., 2011; Fernando et al., 2013; Kingma and Welling, 2013; Aljundi et al., 2015; Baochen Sun, 2015; Long et al., 2015; Ganin et al., 2016; Zellinger et al., 2017). And based on this representation, it trains a classifier with the source rich labeled data. Specifically, for data of the source domain Xs and data of the target domain Xt, it trains a feature generator G(·) with restriction P(G(Xs)) ≈P(G(Xt)). And the classifier is trained on G(Xs) with the source labels Ys. The main difference of these approaches is the mechanism to incorporate the restriction on G(·) into the system. The major limitation of this framework is that it losses the domain specific information. 
As depicted in Figure1, even if it can perfectly extract the domain 2506 invariant features (e.g., excellent), it will loss some strong indicators (e.g., delicious, fast) of the target Kitchen domain. We believe that it can achieve greater improvement if it can effectively make use of this information. Thus, in this work, we try to explore a path to use the target domain specific information with as few as possible target labeled data. Specifically, we first introduce a novel method to extract the domain invariant and domain specific features of target domain data. Then, we treat these two representations as two different views of the target domain data and accordingly train a domain invariant classifier and a target domain specific classifier, respectively. Because the domain invariant representation is compatible with both source data and target data, we train the domain invariant classifier with both source and target labeled data. And for the target domain specific classifier, we train it with target labeled data only. Based on these two classifiers, we perform co-training on target unlabeled data, which can further improve the usage of target data in a bootstrap style. In summary, the contributions of this paper include: (i) This is the first work to explore the usage of target domain specific information in cross-domain sentiment classification task. (ii) We propose a novel to extract the domain specific representation of target domain data, which encodes the individual characteristics of the target domain. 2 Related Work Domain adaptation aims to generalize a classifier that is trained on a source domain, for which typically plenty of labeled data is available, to a target domain, for which labeled data is scarce. In supervised domain adaptation, cross-domain classifiers are learnt by using labeled source samples and a small number of labeled target samples (Hoffman et al., 2014). A common practice is training the cross-domain classifiers with the labeled source data and then fine-tuning the classifier with the target labeled data (Pan and Yang, 2010). Meanwhile, some unsupervised and semi-supervised cross domain methods (Ganin et al., 2016; Louizos et al., 2015; Zellinger et al., 2017) are proposed by combining the transfer of classifiers with the match of distributions. These methods focus on extracting the domaininvariant features with the help of unlabeled data. Specifically, Ganin et al., (2016) incorporated an adversarial framework to perform this task. It trained the feature generator to minimize the classification loss and simutaneously deceive the discriminator, which is trained to distinguish the domain of the input data coming from. Louizos et al., (2015) used the Maximum Mean Discrepancy (Borgwardt et al., 2006) regularizer to constrain the feature generator to extract the domain invariant features. And similarly, Zellinger et al., (2017) proposed the central moment discrepancy (CMD) metric for the role of domain regularizer. The above methods either treat it no difference between domain specific information and domain invariant information or just ignore the domain specific information during in the process of learning adaptive classifiers. One of the most related work is the DSN model (Bousmalis et al., 2016). It proposed to extract the domain specific and the domain invariant representations, simultaneously. However, It does not explored the usage of the domain specific information. Its classifier was still only trained on the domain invariant representation. 
This work differs from it in the following two aspects. First, we make use of the source and target unlabeled data to extract domain specific information, instead of relying on the orthogonality constraint between the extra representation and the domain invariant counterpart. It is achieved by forcing the distribution of the source examples and that of the target examples in the domain specific space to be different. We argue that this can avoid the potential problem of the orthogonality constraint in that the domain specific representation can be well predicted by the domain invariant representation, while simultaneously meeting the orthogonality constraint. For example, let X = (0, Z) be the domain invariant representation and Y = (Z, 0) be the domain specific representation, then X can be uniquely determined by Y , while in the meanwhile X ⊥Y . Second, we apply a cotraining framework to make use of the domain specific representation, rather than simply treating it as a regularizer for extracting the domain invariant representation. Another related work is the CODA model (Chen et al., 2011). It also applied a co-training framework for semi-supervised domain adaptation. However, instead of dividing the feature space into domain invariant and domain specific 2507 Xs Xt Xs Xt Co-training on Ut Invariant Encoder Ec(x) Target Specific Encoder Et(x) Invariant ClassfierFc(x) Specific Classfier Ft(x) Dt(x) Decoder Lsim Ldiff Lc Lrecon Lt Ht common Hs common Hs target Ht target Figure 2: The general architecture of the proposed model. The source data Xs and target data Xt are mapped to a domain invariant representation and a target domain specific representation by feature maps Ec and Et, respectively. In the space of the domain invariant representation, the distributions of source data Hs inv and target data Ht common are forced to be similar by minimizing a certain distance Lsim. In contrast, in the space of the target domain specific representation, the distributions of source data Hs spec and target data Ht spec are forced to be different by minimizing the distance Ldiff. Based on the domain invariant representation, a classifier Fc is trained with the source rich labeled data and some of the target labeled data. In addition, based on the target domain specific representation, a classifier Ft is trained with the target labeled data only. These two classifiers teach each other in a co-training framework based on the target unlabeled data Ut. parts, it randomly separated the features space. 3 Approach We consider the following domain adaptation setting. The source domain consists of a set of ns fully labeled points Ds = {(xs 1, ys 1), · · · , (xs ns, ys ns)} ⊂Rd × Y drawn from the distribution Ps(X, Y). And the target data is divided into nl (nl ≪ns) labeled points Dl t = {(xt 1, yt 1), · · · , (xt nl, yt nl} ⊂Rd × Y from the distribution Pt(X, Y) and nu (nu ≫ nl) unlabeled points Du t = {(xt nl+1, yt nl+1), · · · , (xt nl+nu, yt nl+nu} ⊂ Rd from the marginal distribution Pt(X). The goal is to build a classifier for the target domain data using the source domain data and a few labeled target domain data. In the following section, we first introduce the CMD metric, which is used to measure the probability distribution discrepancy between two random variables. Then, we describe our method to extract the domain specific and domain invariant representations of target domain examples, using the CMD-based regularizer. 
Finally, we show how to combine these two representations using a co-training framework. 3.1 Central Moment Discrepancy (CMD) The CMD metric was proposed by Zellinger et al.(2017) to measure the discrepancy between the probability distributions of two (highdimensional) random variables. It is one of the state-of-the-art metrics and is used as a domain regularizer for domain adaptation. Here, we introduce its definition as a domain regularizer. Definition 1 (CMD regularizer). Let X and Y be bounded random samples with respective probability distributions p and q on the interval [a, b]N. The CMD regularizer CMDK is defined by CMDK(X, Y ) = 1 |b −a| ∥E(X) −E(Y ) ∥2 + 1 |b −a|k K X k=2 ∥Ck(X) −Ck(Y ) ∥2, (1) where E(X) = 1 |X| P x∈X x is the empirical expectation vector computed on the sample X and Ck(X) = E( N Y i=1 (Xi −E(Xi))ri ! ri≥0,PN i ri=k , is the vector of all kth order sample central moments of the coordinates of X. 2508 An intuitive understanding of this metric is that if two probability distributions are similar, their central moment of each order should be close. 3.2 Extract Domain Invariant and Domain Specific Representations In this work, we aim to extract a domain invariant representation, as well as a domain specific counterpart, for each target example. This makes our work different from most of the existing works, which only focus on the domain invariant representation. The general architecture of the proposed model is illustrated in Figure 2. Data are mapped into a domain invariant hidden space and target domain specific hidden space using two different mappers Et and Ec, respectively: Hs spec = Et(Xs; θt e) Ht spec = Et(Xt; θt e) Hs inv = Ec(Xs; θc e) Ht inv = Ec(Xt; θc e). (2) Here, Et refers to the domain invariant mapper and Ec is the target domain specific mapper. θt e and θc e denote their corresponding parameters. The subscript e denotes encode. Based on the hidden presentations Ht inv and Ht spec, we build an autoencoder for the target domain examples: ˆ Xt = Dt(Ht inv, Ht spec; θt d), (3) with respect to parameters θt d, where the subscript d denotes decode. The corresponding reconstruction loss is defined by the mean square error: Lrecon = 1 nt nt X i 1 k||Xi t −ˆ Xt i||2 2, (4) where k is the dimension of the input feature vector, and Xi t denotes the ith example of the target domain data. Note that in this work, only target examples are passed to the auto-encoder because we only want to extract target domain specific information. For Ec, we hope that it only encodes features shared by both the source and target domains. From the distribution view, we hope that the distributions of the mapped outputs, by Ec, of source and target data are similar. To this end, we apply the CMD regularizer onto the hidden representation of source data Hs inv and that of target data Ht inv. The corresponding loss is defined by: Lsim = CMDK(Hs inv, Ht inv). (5) Minimizing this loss will force the distribution of Hs inv and Ht inv to be similar, which in turn encourages Ec to encode domain invariant features. And for the domain specific encoder Et, we hope that it only encodes features dominated by the target domain. Ideally, these features should commonly appear in the target domain while hardly appear in the source domain. 
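To make Definition 1 concrete, here is a minimal numpy sketch of the CMD regularizer, following the common marginal-moment implementation in which only the per-coordinate central moments are compared; variable names and the choice of K are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def cmd(X, Y, K=3, a=0.0, b=1.0):
    """CMD_K(X, Y) of Eq. 1 for samples X, Y of shape (n_samples, n_features)
    whose coordinates lie in the interval [a, b]."""
    mean_x, mean_y = X.mean(axis=0), Y.mean(axis=0)
    cx, cy = X - mean_x, Y - mean_y
    loss = np.linalg.norm(mean_x - mean_y) / abs(b - a)       # first-order term
    for k in range(2, K + 1):
        m_x = (cx ** k).mean(axis=0)    # k-th order central moments, one per coordinate
        m_y = (cy ** k).mean(axis=0)
        loss += np.linalg.norm(m_x - m_y) / abs(b - a) ** k
    return loss
```

In the model described next, this quantity is minimized between the invariant representations of the two domains and sign-flipped when applied to the domain-specific representations.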
We argue that this can also be obtained by forcing the distribution of these features in the target domain to differ from that in the source domain, because the target specific auto-encoder Dt should filter out features that hardly appear in the target domain while commonly appear in the source domain. Based on this intuition, we apply a signal flipped CMD regularizer onto the mapped representation of source data Hs spec and that of target data Ht spec. The corresponding loss is defined by: Ldiff = −CMDK(Hs spec, Ht spec). (6) Minimizing this loss encourages the distribution of Hs spec to differ from that of Ht spec, which in turn encourage Et to encode domain specific features. 3.3 Co-Training with Domain Invariant and Domain Specific Representations The co-training algorithm assumes that the data set is presented in two separate views, and two classifiers are trained for each view. In each iteration, some unlabeled examples that are confidently predicted according to exactly one of the two classifiers are moved to the training set. In this way, one classifier provides the predicted labels to the unlabeled examples, on which the other classifier may be uncertain. In this work, we treat the domain invariant representation and the domain specific representation as the two separate views of target domain examples. Based on the domain invariant representation, we train a domain invariant classifier, Fc, with respect to parameters θc. In addition, based on the domain specific representation, we train a domain specific classifier, Ft, with respect to parameters θt. Because the distribution of the source examples is compatible with that of the target examples in 2509 input: Ls: labeled source domain examples Lt: labeled target domain examples Ut: unlabeled target domain examples Hs inv: Invariant representation of Ls Ht inv: Invariant representation of Lt Ht spec: Specific representation of Lt repeat Train classifier Fc with Ls and Lt based on Hs inv and Ht inv; Apply classifier Fc to label Ut; Select p positive and n negative the most confidently predicted examples U c t from Ut; Train classifier Ft with Lt based on Ht spec; Apply classifier Ft to label Ut; Select p positive and n negative the most confidently predicted examples U t t from Ut; Remove examples U c t ∪U t t from Ut; Add examples U c t ∪U t t and their corresponding labels to Lt; until best performance obtained on the developing data set; Algorithm 1: Co-Training for Domain Adaptation the domain invariant hidden space, we use both the source rich labels and a few target labels to train the classifier Fc. To train the classifier Ft, only the target labels are used. The entire procedure is described in Algorithm 1. 3.4 Model Learning The training of this model is divided into two parts with one for the domain invariant classifier, Fc, and another one for the domain specific classifier, Ft. For Fc, the goal of training is to minimize the following loss with respect to parameters Θ = {θc e, θc e, θt d, θc}: L = Lrecon(θc e, θt e, θt d) + αLc(θc e, θc) + γLsim(θc e) + λLdiff(θt e), (7) where α, γ, and λ are weights that control the interaction of the loss terms. L(θ) means that loss, L, is optimized on the parameters θ during training. 
And Lc denotes the classification loss on the domain invariant representation, which is defined by the negative log-likelihood of the ground truth class for examples of both source and target domains: Lc = 1 ns + lt ns X i=1 −Y i s log Fc(Y i s |Ec(Li s)) + 1 ns + lt lt X i=1 −Y i t log Fc(Y i t |Ec(Li t)), (8) where Y i s is the one-hot encoding of the class label for the ith source example, Y i t is that for the ith labeled target example, and lt denotes the dynamic number of target labeled data in each iteration. For Ft, the goal of training is to minimize the following loss with respect to parameters Θ = {θc e, θt e, θt d, θt}: L = Lrecon(θc e, θt e, θt d) + βLt(θt e, θt) + γLsim(θc e) + λLdiff(θt e), (9) where γ and λ are the same weights as those for the classifier Fc, and β is the weight that controls the portion of classification loss, Lt, on the domain specific representation, which is defined by the negative log-likelihood of the ground truth class for examples of the target domain only: Lt = 1 lt lt X i=1 −Y i t log Ft(Y i t |Et(Li t)) (10) 4 Experiment 4.1 Dataset Domain adaptation for sentiment classification has been widely studied in the NLP community. The major experiments were performed on the benchmark made of reviews of Amazon products gathered by Blitzer et al. (2007). This data set1 contains Amazon product reviews from four different domains: Books, DVD, Electronics, and Kitchen appliances from Amazon.com. Each review was originally associated with a rating of 15 stars. For simplicity, we are only concerned with whether or not a review is positive (higher than 3 stars) or negative (3 stars or lower). Reviews are encoded in 5,000 dimensional tf-idf feature vectors of bag-of-words unigrams and bigrams. From this data, we constructed 12 cross-domain binary classification tasks. Each domain adaptation task consists of 2,000 labeled source examples, 1https://www.cs.jhu.edu/ mdredze/datasets/sentiment/ 2510 S→T Supervised Learning Unsupervised Transfer Semi-supervised Transfer SO ST CMD DSN CMD-ft DSN-ft CODA CoCMD (p-value) B→D 81.7±0.2 81.6±0.4 82.6±0.3 82.8±0.4 82.7±0.1 82.7±0.6 81.9±0.4 83.1±0.1(.003) B→E 74.0±0.6 75.8±0.2 81.5±0.6 81.9±0.5 82.4±0.6 82.3±0.8 77.5±2.0 83.0±0.6(.061) B→K 76.4±1.0 78.2±0.6 84.4±0.3 84.4±0.6 84.7±0.5 84.8±0.9 80.4±0.8 85.3±0.7(.039) D→B 79.5±0.3 80.0±0.4 80.7±0.6 80.1±1.3 81.0±0.7 81.1±1.2 80.6±0.3 81.8±0.5(.022) D→E 75.6±0.7 77.0±0.3 82.2±0.5 81.4±1.1 82.5±0.7 81.3±1.2 79.4±0.7 83.4±0.6(.019) D→K 79.5±0.4 80.4±0.6 84.8±0.2 83.3±0.7 84.5±0.9 83.8±0.8 82.4±0.5 85.5±0.8(.055) E→B 72.3±1.5 74.7±0.4 74.9±0.6 75.1±0.4 76.2±0.6 76.3±1.4 73.6±0.7 76.9±0.6(.094) E→D 74.2±0.6 75.4±0.4 77.4±0.3 77.1±0.3 77.7±0.7 77.1±1.1 75.9±0.2 78.3±0.1(.079) E→K 85.6±0.6 85.7±0.7 86.4±0.9 87.2±0.7 86.7±0.3 87.1±0.9 86.1±0.4 87.3±0.4(.093) K→B 73.1±0.1 73.8±0.3 75.8±0.3 76.4±0.5 76.4±0.5 76.2±0.3 74.3±1.0 77.2±0.4(.016) K→D 75.2±0.7 76.6±0.9 77.7±0.4 78.0±1.4 78.8±0.4 78.5±0.5 77.5±0.4 79.6±0.5(.039) K→E 85.4±1.0 85.3±1.6 86.7±0.6 86.7±0.7 87.3±0.3 87.2±0.4 86.4±0.5 87.2±0.4(.512) Table 1: Average prediction accuracy with 5 runs on target domain testing data set. The left group of models refer to previous state-of-the-art methods and the right group of models refer to the proposed model and some of its variants. We list the p-values of the T-test between CoCMD and CMD-ft for more intuitive understanding. 2,000 unlabeled target examples, and 50 labeled target examples for training. 
To fine-tune the hyper-parameters, we randomly select 500 target examples as developing data set, leaving 2,5005,500 examples for testing. All of the compared methods and CoCMD share this setting. 4.2 Compared Methods CoCMD is systematically compared with: 1) neural network classifier without any domain adaptation trained on labeled source data only (SO); 2) neural network classifier without any domain adaptation trained on the union of labeled source and target data (ST); 3) unsupervised central moment discrepancy trained with labeled source data only (CMD) (Zellinger et al., 2017); 4) unsupervised domain separation network (DSN) (Bousmalis et al., 2016); 5) semi-supervised CMD trained on labeled source data and then finetuned on labeled target data (CMD-ft); 6) semisupervised DSN trained on labeled source data and then fine-tuned on labeled target data (DSNft); 7) semi-supervised Co-training for domain adaptation (CODA) (Chen et al., 2011). 4.3 Implementation Detail CoCMD was imeplented with a similar architecture to that of Ganin et al., (2016) and Zellinger et al., (2017), with one dense hidden layer with 50 hidden nodes and sigmoid activation functions. The classifiers consist of a softmax layer with two dimensional outputs. And the decoder was implemented with a multilayer perceptron (MLP) with one dense hidden layer, tanh activation functions, and relu output functions. Model optimization was performed using the RmsProp (Tieleman and Hinton, 2012) update rule with learning rate set to 0.005 for all of the tasks.Hyper-parameter K of the CMD regularizer was set to 3 for all of the tasks, according to the experiment result of Zellinger et al. (2017). For the hyper-parameters α, β, γ, and λ, we took the values that achieve the best performance on the developing data set via a grid search {0.01, 0.1, 1, 10, 100}. However, instead of building grids on α, β, γ, and λ all at the same time, we first fine-tuned the values of α and β with the values of γ and λ fixed at 1. After that, we finetuned the values of γ and λ with α and β fixed at the best values obtained at last step. Though, this practice may miss the best combination of these hyper-parameters, it can greatly reduce the time consuming for fine-tuning and still obtain acceptable results. And for each iteration of the co-training, we set p = n = 5. 4.4 Result Table 1 shows the average classification accuracy of our proposed model and the baselines over all 12 domain adaptation tasks. We can first observe that the proposed model CoCMD outperforms the compared methods over almost all of the 2511 Source-only CoCMD: Invariant CoCMD: Specific (a) books →dvd (e) electronics →kitchen (h) books →kitchen Source-only CoCMD: Invariant CoCMD: Specific Source-only CoCMD: Invariant CoCMD: Specific (b) books →dvd (c) books →dvd (f) electronics →kitchen (g) electronics →kitchen (i) books →kitchen (j) books →kitchen Figure 3: The distribution of source and target data in the hidden space of different representations. The red points denote the source examples and the blue ones denote the target examples. The pictures of each row correspond to the B→D, E→K, and B→K task. The pictures of each column correspond to the hidden space, He c , of the source-only model, the domain invariant representation, and the target specific representation of the proposed model. 12 tasks except for the K→E task. 
And by comparing the results of CMD-based methods and DSN-based methods, we can find out that just extracting the domain specific information but not making further usage does not offer much improvement to the adaptation performance for sentiment classification task. This approves the necessary to explore the usage of domain specific information. If organizing the domain B and D into a group and organizing the domain E and K into another group, we can observe that the domain adaptation methods achieve greater improvement on the standard classifiers over cross-group tasks (e.g., B →K) than over within-group tasks (e.g., B →D). Similar observation can also be observed by comparing ST with SO, CMD with CMD-ft, and DSN with DSN-ft. The possible explanation is that domains within the same group are more close. Thus adapting over within group tasks is easier than adapting over cross group tasks, if without any domain adaptation regularizer. In addition, we can also observe that CoCMD achieve relatively greater improvement on CMD baseline over the cross-group tasks that over the within-group tasks. We argue that this is because domains in the same group contain relatively less domain individual characteristic. While for domains cross the groups, the domain specific information usually takes a larger share of all of the information. Because the additional part of our proposed method compared to the CMD baseline, is built on the domain specific information, the improvement should be relatively less for withingroup tasks. Further analysis of the proposed model in the next section empirically proves this explanation. 4.5 Model Analysis In this section, we look into how similar two domains are to each other in the space of domain invariant representation and domain specific representation. A-distance Study: Some of previous works proposed to make use of a proxy of the Adistance (Ben-David et al., 2007) to measure 2512 0.4 0.6 0.8 1.0 1.2 1.4 Proxy A-distance on SO 0.5 1.0 1.5 2.0 Proxy A-distance on CoCMD EK BD BK DE EK BD BK DE Invariant Space Specific Space Figure 4: Proxy A-distance between domains of the Amazon benchmark for the 4 different tasks. the distance of two domains. The proxy was defined by 2(1 −2ϵ), where ϵ is the generalization error of a linear SVM classifier trained on the binary classification problem to distinguish inputs between the source and target domains. Figure 4 shows the results of each pair of domains. We observe several trends: Firstly, the proxy A-distance of within-group domain pairs (i.e., BD and EK) is consistently smaller than that of the cross-group domain pairs (i.e., BK and DE) on all of the hidden spaces. Secondly, the proxy A-distance on the domain specific space is consistently larger than its corresponding value on the hidden space of SO model, as expected. While the proxy A-distance value on domain invariant space is generally smaller than its corresponding value on the hidden space of SO model, except for BK domain pair. A possible explanation is that the balance of classification loss and domain discrepancy loss makes there is still some target domain specific information in the domain invariant space, introduced by the target unlabeled data. Visualization: For more intuitive understanding of the behaviour of the proposed model, we further perform a visualization of the domain invariant representation and the domain specific representation, respectively. 
For this purpose, we reduce the dimension of the hidden space to 2 using principle component analysis (PCA) (Wold et al., 1987). Due to space constraints we choose three tasks: two within-group tasks (B→D and E→K) and a cross-group task (B→K). For comparison, we also display the distribution of each domain in the hidden space of the SO model. The results are shown in Figure 3. Pictures of the first column in Figure 3 show the original distribution of the source and target examples in the hidden space of SO model. As can be seen, there is a great overlap between the distributions of the domain B and the domain D domains and between the distributions of the domain E and the domain K. While there is quite a gap between the distribution of the domain B and the domain K. This strengthens our argument that within-group domains share relatively more common information than cross-group domains. Pictures of the second column show the distribution of the source and target examples in the domain invariant hidden space of the proposed model. From these pictures we can see that the distributions of the source and target data are quite similar in this presentation. This demonstrates the effectiveness of the CMD regularizer for extracting domain invariant representation. Pictures of the third column show the distribution of the source and target examples in the domain specific hidden space of the proposed model. As can be seen from these pictures, examples of the source and target domains are separated very well. This demonstrates the effectiveness of our proposed method for extracting domain specific information. 5 Conclusion In this work, we investigated the importance of domain specific information for domain adaptation. In contrast with most of the previous methods, which pay more attention to domain invariant information, we showed that domain specific information could also be beneficially used in the domain adaptation task with a small amount of in-domain labeled data. Specifically, we proposed a novel method, based on the CMD metric, to simultaneously extract domain invariant feature and domain specific feature for target domain data. With these two different features, we performed co-training with labeled data from the source domain and a small amount of labeled data from the target domain. Sentiment analysis experiments demonstrated the effectiveness of this method. 6 Acknowledgments The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by National Natural 2513 Science Foundation of China (No. 61751201, 61532011, 61473092, and 61472088), and STCSM (No.16JC1420401,17JC1420200). References Rahaf Aljundi, R´emi Emonet, Damien Muselet, and Marc Sebban. 2015. Landmarks-based kernelized subspace alignment for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pages 56–63. Jiashi Feng Kate Saenko Baochen Sun. 2015. Return of frustratingly easy domain adaptation. In Thirtieth AAAI Conference on Artificial Intelligence. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. 2007. Analysis of representations for domain adaptation. In Advances in neural information processing systems. pages 137–144. John Blitzer, Mark Dredze, Fernando Pereira, et al. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL. volume 7, pages 440–447. 
Karsten M Borgwardt, Arthur Gretton, Malte J Rasch, Hans-Peter Kriegel, Bernhard Sch¨olkopf, and Alex J Smola. 2006. Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics 22(14):e49–e57. Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, Curran Associates, Inc., pages 343–351. http://papers.nips.cc/paper/6254domain-separation-networks.pdf. Minmin Chen, Kilian Q Weinberger, and John Blitzer. 2011. Co-training for domain adaptation. In Advances in neural information processing systems. pages 2456–2464. Basura Fernando, Amaury Habrard, Marc Sebban, and Tinne Tuytelaars. 2013. Unsupervised visual domain adaptation using subspace alignment. In Proceedings of the IEEE International Conference on Computer Vision. pages 2960–2967. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research 17(59):1– 35. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th international conference on machine learning (ICML-11). pages 513–520. Judy Hoffman, Erik Rodner, Jeff Donahue, Brian Kulis, and Kate Saenko. 2014. Asymmetric and category invariant feature transformations for domain adaptation. International journal of computer vision 109(1-2):28–41. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114 . Mingsheng Long, Jianmin Wang, Jiaguang Sun, and S Yu Philip. 2015. Domain invariant transfer kernel learning. IEEE Transactions on Knowledge and Data Engineering 27(6):1519–1532. Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. 2015. The variational fair autoencoder. arXiv preprint arXiv:1511.00830 . Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on knowledge and data engineering 22(10):1345–1359. Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning 4(2):26–31. Svante Wold, Kim Esbensen, and Paul Geladi. 1987. Principal component analysis. Chemometrics and intelligent laboratory systems 2(1-3):37–52. Werner Zellinger, Thomas Grubinger, Edwin Lughofer, Thomas Natschl¨ager, and Susanne Saminger-Platz. 2017. Central moment discrepancy (cmd) for domain-invariant representation learning. arXiv preprint arXiv:1702.08811 .
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2514–2523 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2514 Aspect Based Sentiment Analysis with Gated Convolutional Networks Wei Xue and Tao Li School of Computing and Information Sciences Florida International University, Miami, FL, USA {wxue004, taoli}@cs.fiu.edu Abstract Aspect based sentiment analysis (ABSA) can provide more detailed information than general sentiment analysis, because it aims to predict the sentiment polarities of the given aspects or entities in text. We summarize previous approaches into two subtasks: aspect-category sentiment analysis (ACSA) and aspect-term sentiment analysis (ATSA). Most previous approaches employ long short-term memory and attention mechanisms to predict the sentiment polarity of the concerned targets, which are often complicated and need more training time. We propose a model based on convolutional neural networks and gating mechanisms, which is more accurate and efficient. First, the novel Gated Tanh-ReLU Units can selectively output the sentiment features according to the given aspect or entity. The architecture is much simpler than attention layer used in the existing models. Second, the computations of our model could be easily parallelized during training, because convolutional layers do not have time dependency as in LSTM layers, and gating units also work independently. The experiments on SemEval datasets demonstrate the efficiency and effectiveness of our models. 1 1 Introduction Opinion mining and sentiment analysis (Pang and Lee, 2008) on user-generated reviews can provide valuable information for providers and consumers. Instead of predicting the overall sen1The code and data is available at https://github. com/wxue004cs/GCAE timent polarity, fine-grained aspect based sentiment analysis (ABSA) (Liu and Zhang, 2012) is proposed to better understand reviews than traditional sentiment analysis. Specifically, we are interested in the sentiment polarity of aspect categories or target entities in the text. Sometimes, it is coupled with aspect term extractions (Xue et al., 2017). A number of models have been developed for ABSA, but there are two different subtasks, namely aspect-category sentiment analysis (ACSA) and aspect-term sentiment analysis (ATSA). The goal of ACSA is to predict the sentiment polarity with regard to the given aspect, which is one of a few predefined categories. On the other hand, the goal of ATSA is to identify the sentiment polarity concerning the target entities that appear in the text instead, which could be a multi-word phrase or a single word. The number of distinct words contributing to aspect terms could be more than a thousand. For example, in the sentence “Average to good Thai food, but terrible delivery.”, ATSA would ask the sentiment polarity towards the entity Thai food; while ACSA would ask the sentiment polarity toward the aspect service, even though the word service does not appear in the sentence. Many existing models use LSTM layers (Hochreiter and Schmidhuber, 1997) to distill sentiment information from embedding vectors, and apply attention mechanisms (Bahdanau et al., 2014) to enforce models to focus on the text spans related to the given aspect/entity. 
Such models include Attention-based LSTM with Aspect Embedding (ATAE-LSTM) (Wang et al., 2016b) for ACSA; Target-Dependent Sentiment Classification (TD-LSTM) (Tang et al., 2016a), Gated Neural Networks (Zhang et al., 2016) and Recurrent Attention Memory Network (RAM) (Chen et al., 2017) for ATSA. Attention mechanisms has been successfully used in many 2515 NLP tasks. It first computes the alignment scores between context vectors and target vector; then carry out a weighted sum with the scores and the context vectors. However, the context vectors have to encode both the aspect and sentiment information, and the alignment scores are applied across all feature dimensions regardless of the differences between these two types of information. Both LSTM and attention layer are very timeconsuming during training. LSTM processes one token in a step. Attention layer involves exponential operation and normalization of all alignment scores of all the words in the sentence (Wang et al., 2016b). Moreover, some models needs the positional information between words and targets to produce weighted LSTM (Chen et al., 2017), which can be unreliable in noisy review text. Certainly, it is possible to achieve higher accuracy by building more and more complicated LSTM cells and sophisticated attention mechanisms; but one has to hold more parameters in memory, get more hyper-parameters to tune and spend more time in training. In this paper, we propose a fast and effective neural network for ACSA and ATSA based on convolutions and gating mechanisms, which has much less training time than LSTM based networks, but with better accuracy. For ACSA task, our model has two separate convolutional layers on the top of the embedding layer, whose outputs are combined by novel gating units. Convolutional layers with multiple filters can efficiently extract n-gram features at many granularities on each receptive field. The proposed gating units have two nonlinear gates, each of which is connected to one convolutional layer. With the given aspect information, they can selectively extract aspect-specific sentiment information for sentiment prediction. For example, in the sentence “Average to good Thai food, but terrible delivery.”, when the aspect food is provided, the gating units automatically ignore the negative sentiment of aspect delivery from the second clause, and only output the positive sentiment from the first clause. Since each component of the proposed model could be easily parallelized, it has much less training time than the models based on LSTM and attention mechanisms. For ATSA task, where the aspect terms consist of multiple words, we extend our model to include another convolutional layer for the target expressions. We evaluate our models on the SemEval datasets, which contains restaurants and laptops reviews with labels on aspect level. To the best of our knowledge, no CNNbased model has been proposed for aspect based sentiment analysis so far. 2 Related Work We present the relevant studies into following two categories. 2.1 Neural Networks Recently, neural networks have gained much popularity on sentiment analysis or sentence classification task. Tree-based recursive neural networks such as Recursive Neural Tensor Network (Socher et al., 2013) and Tree-LSTM (Tai et al., 2015), make use of syntactic interpretation of the sentence structure, but these methods suffer from time inefficiency and parsing errors on review text. 
Recurrent Neural Networks (RNNs) such as LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Chung et al., 2014) have been used for sentiment analysis on data instances having variable length (Tang et al., 2015; Xu et al., 2016; Lai et al., 2015). There are also many models that use convolutional neural networks (CNNs) (Collobert et al., 2011; Kalchbrenner et al., 2014; Kim, 2014; Conneau et al., 2016) in NLP, which also prove that convolution operations can capture compositional structure of texts with rich semantic information without laborious feature engineering. 2.2 Aspect based Sentiment Analysis There is abundant research work on aspect based sentiment analysis. Actually, the name ABSA is used to describe two different subtasks in the literature. We classify the existing work into two main categories based on the descriptions of sentiment analysis tasks in SemEval 2014 Task 4 (Pontiki et al., 2014): Aspect-Term Sentiment Analysis and Aspect-Category Sentiment Analysis. Aspect-Term Sentiment Analysis. In the first category, sentiment analysis is performed toward the aspect terms that are labeled in the given sentence. A large body of literature tries to utilize the relation or position between the target words and the surrounding context words either by using the tree structure of dependency or by simply counting the number of words between them as a relevance information (Chen et al., 2017). Recursive neural networks (Lakkaraju et al., 2014; Dong et al., 2014; Wang et al., 2016a) rely 2516 on external syntactic parsers, which can be very inaccurate and slow on noisy texts like tweets and reviews, which may result in inferior performance. Recurrent neural networks are commonly used in many NLP tasks as well as in ABSA problem. TD-LSTM (Tang et al., 2016a) and gated neural networks (Zhang et al., 2016) use two or three LSTM networks to model the left and right contexts of the given target individually. A fullyconnected layer with gating units predicts the sentiment polarity with the outputs of LSTM layers. Memory network (Weston et al., 2014) coupled with multiple-hop attention attempts to explicitly focus only on the most informative context area to infer the sentiment polarity towards the target word (Tang et al., 2016b; Chen et al., 2017). Nonetheless, memory network simply bases its knowledge bank on the embedding vectors of individual words (Tang et al., 2016b), which makes itself hard to learn the opinion word enclosed in more complicated contexts. The performance is improved by using LSTM, attention layer and feature engineering with word distance between surrounding words and target words to produce target-specific memory (Chen et al., 2017). Aspect-Category Sentiment Analysis. In this category, the model is asked to predict the sentiment polarity toward a predefined aspect category. Attention-based LSTM with Aspect Embedding (Wang et al., 2016b) uses the embedding vectors of aspect words to selectively attend the regions of the representations generated by LSTMs. 3 Gated Convolutional Network with Aspect Embedding In this section, we present a new model for ACSA and ATSA, namely Gated Convolutional network with Aspect Embedding (GCAE), which is more efficient and simpler than recurrent network based models (Wang et al., 2016b; Tang et al., 2016a; Ma et al., 2017; Chen et al., 2017). Recurrent neural networks sequentially compose hidden vectors hi = f(hi−1), which does not enable parallelization over inputs. 
In the attention layer, softmax normalization also has to wait for all the alignment scores computed by a similarity function. Hence, such models cannot take advantage of highly parallelized modern hardware and libraries. Our model is built on convolutional layers and gating units. Each convolutional filter computes n-gram features at different granularities from the embedding vectors at each position individually. The gating units on top of the convolutional layers at each position are also independent from each other. Therefore, our model is better suited to parallel computing. Moreover, our model is equipped with two kinds of effective filtering mechanisms: the gating units on top of the convolutional layers and the max pooling layer, both of which can accurately generate and select aspect-related sentiment features. We first briefly review the vanilla CNN for text classification (Kim, 2014), which achieves state-of-the-art performance on many standard sentiment classification datasets (Le et al., 2017). The CNN model consists of an embedding layer, a one-dimensional convolutional layer, and a max-pooling layer. The embedding layer takes the indices w_i ∈ {1, 2, ..., V} of the input words and outputs the corresponding embedding vectors v_i ∈ R^D, where D denotes the dimensionality of the embedding vectors and V is the size of the word vocabulary. The embedding layer is usually initialized with pre-trained embeddings such as GloVe (Pennington et al., 2014) and then fine-tuned during the training stage. The one-dimensional convolutional layer convolves the inputs with multiple convolutional kernels of different widths. Each kernel corresponds to a linguistic feature detector that extracts a specific n-gram pattern at a particular granularity (Kalchbrenner et al., 2014). Specifically, the input sentence is represented by a matrix through the embedding layer, X = [v_1, v_2, ..., v_L], where L is the length of the sentence with padding. A convolutional filter W_c ∈ R^{D×k} maps the k words in its receptive field to a single feature c. As we slide the filter across the whole sentence, we obtain a sequence of new features c = [c_1, c_2, ..., c_L]: \[ c_i = f(X_{i:i+k} \ast W_c + b_c), \qquad (1) \] where b_c ∈ R is a bias term, f is a non-linear activation function such as tanh, and ∗ denotes the convolution operation. If there are n_k filters of the same width k, the output features form a matrix C ∈ R^{n_k×L_k}. For each convolutional filter, the max-over-time pooling layer takes the maximal value among the generated convolutional features, resulting in a fixed-size vector whose size equals the number of filters n_k. Finally, a softmax layer uses this vector to predict the sentiment polarity of the input sentence. Figure 1 illustrates our model architecture. [Figure 1: Illustration of our model GCAE for the ACSA task. A pair of convolutional neurons computes features for a pair of gates: a tanh gate and a ReLU gate. The ReLU gate receives the given aspect information to control the propagation of sentiment features. The outputs of the two gates are multiplied element-wise before the max pooling layer. Gated Tanh-ReLU Units (GTRU) with aspect embedding are connected to two convolutional neurons at each position t.]
Specifically, we compute the features c_i as \[ a_i = \mathrm{relu}(X_{i:i+k} \ast W_a + V_a v_a + b_a) \qquad (2) \] \[ s_i = \tanh(X_{i:i+k} \ast W_s + b_s) \qquad (3) \] \[ c_i = s_i \times a_i, \qquad (4) \] where v_a is the embedding vector of the given aspect category in ACSA, or is computed by another CNN over the aspect terms in ATSA. The two convolutions in Equations 2 and 3 are the same as the convolution in the vanilla CNN, but the convolutional features a_i receive additional aspect information v_a through the ReLU activation function. In other words, s_i and a_i are responsible for generating sentiment features and aspect features, respectively. The max-over-time pooling layer then generates a fixed-size vector e ∈ R^{d_k}, which keeps the most salient sentiment features of the whole sentence. The final fully connected layer with a softmax function uses the vector e to predict the sentiment polarity ŷ. The model is trained by minimizing the cross-entropy loss between the ground truth y and the prediction ŷ over all data samples, \[ L = -\sum_i \sum_j y_i^j \log \hat{y}_i^j, \qquad (5) \] where i is the index of a data sample and j is the index of a sentiment class. 4 Gating Mechanisms The proposed Gated Tanh-ReLU Units control the path through which the sentiment information flows towards the pooling layer. Gating mechanisms have proven effective in LSTMs. In aspect based sentiment analysis, it is very common for different aspects with different sentiments to appear in one sentence. The ReLU gate in Equation 2 has no upper bound on positive inputs but is strictly zero on negative inputs. Therefore, it can output a similarity score according to the relevance between the given aspect information v_a and the aspect feature a_i at position t. If this score is zero, the sentiment features s_i are blocked at the gate; otherwise, their magnitude is amplified accordingly. The max-over-time pooling further removes sentiment features that are not significant over the whole sentence. In language modeling (Dauphin et al., 2017; Kalchbrenner et al., 2016; van den Oord et al., 2016; Gehring et al., 2017), Gated Tanh Units (GTU) and Gated Linear Units (GLU) have demonstrated the effectiveness of gating mechanisms. GTU is represented by tanh(X ∗ W + b) × σ(X ∗ V + c), in which the sigmoid gates control the features for predicting the next word in a stacked convolutional block. To overcome the gradient vanishing problem of GTU, GLU uses (X ∗ W + b) × σ(X ∗ V + c) instead, so that the gradients are not downscaled while propagating through many stacked convolutional layers. However, a neural network with only one convolutional layer does not suffer from the vanishing gradient problem during training. We show that on the text classification problem, our GTRU is more effective than these two gating units. 5 GCAE on ATSA The ATSA task is to predict the sentiment polarity of the aspect terms in the given sentence. We simply extend GCAE by adding a small convolutional layer on the aspect terms, as shown in Figure 2. In ACSA, the aspect information controlling the flow of sentiment features in GTRU comes from a single aspect word; in ATSA, such information is provided by a small CNN on the aspect terms [w_i, w_{i+1}, ..., w_{i+k}]. The additional CNN extracts the important features from multiple words while retaining the ability of parallel computing. [Figure 2: Illustration of the GCAE model for the ATSA task. It has an additional convolutional layer on aspect terms.]
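To make the gating computation concrete, the following is a minimal PyTorch sketch of Equations 2–4 for the ACSA setting (the experiments below note that all neural models are implemented in PyTorch). The layer names, default dimensions, and the use of a single filter width are illustrative assumptions rather than the authors' released code; for ATSA, v_a would instead come from the small CNN over aspect terms described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GTRUEncoder(nn.Module):
    """Sketch of GCAE's Gated Tanh-ReLU Units for ACSA (Equations 2-4).
    A single filter width is used here; the full model combines several widths."""

    def __init__(self, emb_dim=300, aspect_dim=300, n_filters=100, width=3, n_classes=3):
        super().__init__()
        # Two parallel convolutions over the word embeddings, one per gate.
        self.conv_a = nn.Conv1d(emb_dim, n_filters, width)   # aspect (ReLU) gate, Eq. 2
        self.conv_s = nn.Conv1d(emb_dim, n_filters, width)   # sentiment (tanh) gate, Eq. 3
        self.aspect_proj = nn.Linear(aspect_dim, n_filters)  # the V_a v_a term in Eq. 2
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, x, v_a):
        # x: (batch, seq_len, emb_dim) word embeddings; v_a: (batch, aspect_dim) aspect embedding
        x = x.transpose(1, 2)                                             # Conv1d expects (batch, channels, length)
        a = F.relu(self.conv_a(x) + self.aspect_proj(v_a).unsqueeze(2))   # Eq. 2
        s = torch.tanh(self.conv_s(x))                                    # Eq. 3
        c = s * a                                                         # Eq. 4: element-wise gating
        e = F.max_pool1d(c, c.size(2)).squeeze(2)                         # max-over-time pooling -> vector e
        return self.fc(e)                                                 # logits for the softmax / Eq. 5 loss
```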
6 Experiments 6.1 Datasets and Experiment Preparation We conduct experiments on public datasets from SemEval workshops (Pontiki et al., 2014), which consist of customer reviews about restaurants and laptops. Some existing work (Wang et al., 2016b; Ma et al., 2017; Chen et al., 2017) removed “conflict” labels from four sentiment labels, which makes their results incomparable to those from the workshop report (Kiritchenko et al., 2014). We reimplemented the compared methods, and used hyper-parameter settings described in these references. The sentences which have different sentiment labels for different aspects or targets in the sentence are more common in review data than in standard sentiment classification benchmark. The sentence in Table 1 shows the reviewer’s different attitude towards two aspects: food and delivery. Therefore, to access how the models perform on review sentences more accurately, we create small but difficult datasets, which are made up of the sentences having opposite or different sentiments on different aspects/targets. In Table 1, the two identical sentences but with different sentiment labels are both included in the dataset. If a sentence has 4 aspect targets, this sentence would have 4 copies in the data set, each of which is associated with different target and sentiment label. For ACSA task, we conduct experiments on restaurant review data of SemEval 2014 Task 4. There are 5 aspects: food, price, service, ambience, and misc; 4 sentiment polarities: positive, negative, neutral, and conflict. By merging restaurant reviews of three years 2014 - 2016, we obtain a larger dataset called “Restaurant-Large”. Incompatibilities of data are fixed during merging. We replace conflict labels with neutral labels in the 2014 dataset. In the 2015 and 2016 datasets, there could be multiple pairs of “aspect terms” and “aspect category” in one sentence. For each sentence, let p denote the number of positive labels minus the number of negative labels. We assign a sentence a positive label if p > 0, a negative label if p < 0, or a neutral label if p = 0. After removing duplicates, the statistics are show in Table 2. The resulting dataset has 8 aspects: restaurant, food, drinks, ambience, service, price, misc and location. For ATSA task, we use restaurant reviews and laptop reviews from SemEval 2014 Task 4. On each dataset, we duplicate each sentence na times, which is equal to the number of associated aspect categories (ACSA) or aspect terms (ATSA) (Ruder et al., 2016b,a). The statistics of the datasets are shown in Table 2. The sizes of hard data sets are also shown in Table 2. The test set is designed to measure whether a model can detect multiple different sentiment polarities in one sentence toward different entities. Without such sentences, a classifier for overall sentiment classification might be good enough 2519 Sentence aspect category/term sentiment label Average to good Thai food, but terrible delivery. food positive Average to good Thai food, but terrible delivery. delivery negative Table 1: Two example sentences in one hard test set of restaurant review dataset of SemEval 2014. for the sentences associated with only one sentiment label. In our experiments, word embedding vectors are initialized with 300-dimension GloVe vectors which are pre-trained on unlabeled data of 840 billion tokens (Pennington et al., 2014). Words out of the vocabulary of GloVe are randomly initialized with a uniform distribution U(−0.25, 0.25). 
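As a concrete illustration of the label-merging rule above (a sentence is positive if p > 0, negative if p < 0, and neutral otherwise) and of how the hard subsets are assembled from sentences carrying conflicting aspect-level sentiments, consider the following Python sketch. The (sentence, aspect, label) triple format and the helper names are assumptions made for illustration only.

```python
from collections import Counter, defaultdict

def merge_sentence_label(aspect_labels):
    """Collapse the per-aspect labels of one sentence into a single sentence
    label using p = #positive - #negative, as described above."""
    counts = Counter(aspect_labels)
    p = counts["positive"] - counts["negative"]
    if p > 0:
        return "positive"
    if p < 0:
        return "negative"
    return "neutral"

def build_hard_subset(examples):
    """Keep only (sentence, aspect, label) triples whose sentence carries more
    than one distinct sentiment across its aspects; such sentences appear once
    per aspect in the data, each copy with its own label."""
    labels_by_sentence = defaultdict(set)
    for sentence, aspect, label in examples:
        labels_by_sentence[sentence].add(label)
    return [(s, a, l) for s, a, l in examples if len(labels_by_sentence[s]) > 1]
```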
We use Adagrad (Duchi et al., 2011) with a batch size of 32 instances, default learning rate of 1e−2, and maximal epochs of 30. We only fine tune early stopping with 5-fold cross validation on training datasets. All neural models are implemented in PyTorch. 6.2 Compared Methods To comprehensively evaluate the performance of GCAE, we compare our model against the following models. NRC-Canada (Kiritchenko et al., 2014) is the top method in SemEval 2014 Task 4 for ACSA and ATSA task. SVM is trained with extensive feature engineering: various types of n-grams, POS tags, and lexicon features. The sentiment lexicons improve the performance significantly, but it requires large scale labeled data: 183 thousand Yelp reviews, 124 thousand Amazon laptop reviews, 56 million tweets, and 3 sentiment lexicons labeled manually. CNN (Kim, 2014) is widely used on text classification task. It cannot directly capture aspectspecific sentiment information on ACSA task, but it provides a very strong baseline for sentiment classification. We set the widths of filters to 3, 4, 5 with 100 features each. TD-LSTM (Tang et al., 2016a) uses two LSTM networks to model the preceding and following contexts of the target to generate target-dependent representation for sentiment prediction. ATAE-LSTM (Wang et al., 2016b) is an attention-based LSTM for ACSA task. It appends the given aspect embedding with each word embedding as the input of LSTM, and has an attention layer above the LSTM layer. IAN (Ma et al., 2017) stands for interactive attention network for ATSA task, which is also based on LSTM and attention mechanisms. RAM (Chen et al., 2017) is a recurrent attention network for ATSA task, which uses LSTM and multiple attention mechanisms. GCN stands for gated convolutional neural network, in which GTRU does not have the aspect embedding as an additional input. 6.3 Results and Analysis 6.3.1 ACSA Following the SemEval workshop, we report the overall accuracy of all competing models over the test datasets of restaurant reviews as well as the hard test datasets. Every experiment is repeated five times. The mean and the standard deviation are reported in Table 4. LSTM based model ATAE-LSTM has the worst performance of all neural networks. Aspect-based sentiment analysis is to extract the sentiment information closely related to the given aspect. It is important to separate aspect information and sentiment information from the extracted information of sentences. The context vectors generated by LSTM have to convey the two kinds of information at the same time. Moreover, the attention scores generated by the similarity scoring function are for the entire context vector. GCAE improves the performance by 1.1% to 2.5% compared with ATAE-LSTM. First, our model incorporates GTRU to control the sentiment information flow according to the given aspect information at each dimension of the context vectors. The element-wise gating mechanism works at fine granularity instead of exerting an alignment score to all the dimensions of the context vectors in the attention layer of other models. Second, GCAE does not generate a single context vector, but two vectors for aspect and sentiment features respectively, so that aspect and sentiment information is unraveled. By comparing the performance on the hard test datasets against CNN, it is easy to see the convolutional layer of GCAE is able to differentiate the sentiments of multiple entities. 
Convolutional neural networks CNN and GCN 2520 Positive Negative Neutral Conflict Train Test Train Test Train Test Train Test Restaurant-Large 2710 1505 1198 680 757 241 Restaurant-Large-Hard 182 92 178 81 107 61 Restaurant-2014 2179 657 839 222 500 94 195 52 Restaurant-2014-Hard 139 32 136 26 50 12 40 19 Table 2: Statistics of the datasets for ACSA task. The hard dataset is only made up of sentences having multiple aspect labels associated with multiple sentiments. Positive Negative Neutral Conflict Train Test Train Test Train Test Train Test Restaurant 2164 728 805 196 633 196 91 14 Restaurant-Hard 379 92 323 62 293 83 43 8 Laptop 987 341 866 128 460 169 45 16 Laptop-Hard 159 31 147 25 173 49 17 3 Table 3: Statistics of the datasets for ATSA task. are not designed for aspect based sentiment analysis, but their performance exceeds that of ATAELSTM. The performance of SVM (Kiritchenko et al., 2014) depends on the availability of the features it can use. Without the large amount of sentiment lexicons, SVM perform worse than neural methods. With multiple sentiment lexicons, the performance is increased by 7.6%. This inspires us to work on leveraging sentiment lexicons in neural networks in the future. The hard test datasets consist of replicated sentences with different sentiments towards different aspects. The models which cannot utilize the given aspect information such as CNN and GCN perform poorly as expected, but GCAE has higher accuracy than other neural network models. GCAE achieves 4% higher accuracy than ATAE-LSTM on Restaurant-Large and 5% higher on SemEval-2014 on ACSA task. However, GCN, which does not have aspect modeling part, has higher score than GCAE on the original restaurant dataset. It suggests that GCN performs better than GCAE when there is only one sentiment label in the given sentence, but not on the hard test dataset. 6.3.2 ATSA We apply the extended version of GCAE on ATSA task. On this task, the aspect terms are marked in the sentences and usually consist of multiple words. We compare IAN (Ma et al., 2017), RAM (Chen et al., 2017), TD-LSTM (Tang et al., 2016a), ATAE-LSTM (Wang et al., 2016b), and our GCAE model in Table 5. The models other than GCAE is based on LSTM and attention mechanisms. IAN has better performance than TD-LSTM and ATAE-LSTM, because two attention layers guides the representation learning of the context and the entity interactively. RAM also achieves good accuracy by combining multiple attentions with a recurrent neural network, but it needs more training time as shown in the following section. On the hard test dataset, GCAE has 1% higher accuracy than RAM on restaurant data and 1.7% higher on laptop data. GCAE uses the outputs of the small CNN over aspect terms to guide the composition of the sentiment features through the ReLU gate. Because of the gating mechanisms and the convolutional layer over aspect terms, GCAE outperforms other neural models and basic SVM. Again, large scale sentiment lexicons bring significant improvement to SVM. 6.4 Training Time We record the training time of all models until convergence on a validation set on a desktop machine with a 1080 Ti GPU, as shown in Table 6. LSTM based models take more training time than convolutional models. On ATSA task, because of multiple attention layers in IAN and RAM, they need even more time to finish the training. GCAE is much faster than other neural models, because neither convolutional operation nor GTRU has time dependency compared with LSTM and attention layer. 
Therefore, it is easier for hardware and libraries to parallel the comput2521 Models Restaurant-Large Restaurant 2014 Test Hard Test Test Hard Test SVM* 75.32 SVM + lexicons* 82.93 ATAE-LSTM 83.91±0.49 66.32±2.28 78.29±0.68 45.62±0.90 CNN 84.28±0.15 50.43±0.38 79.47±0.32 44.94±0.01 GCN 84.48±0.06 50.08±0.31 79.67±0.35 44.49±1.52 GCAE 85.92±0.27 70.75±1.19 79.35±0.34 50.55±1.83 Table 4: The accuracy of all models on test sets and on the subsets made up of test sentences that have multiple sentiments and multiple aspect terms. Restaurant-Large dataset is created by merging all the restaurant reviews of SemEval workshops within three years. ‘*’: the results with SVM are retrieved from NRC-Canada (Kiritchenko et al., 2014). Models Restaurant Laptop Test Hard Test Test Hard Test SVM* 77.13 63.61 SVM + lexicons* 80.16 70.49 TD-LSTM 73.44±1.17 56.48±2.46 62.23±0.92 46.11±1.89 ATAE-LSTM 73.74±3.01 50.98±2.27 64.38±4.52 40.39±1.30 IAN 76.34±0.27 55.16±1.97 68.49±0.57 44.51±0.48 RAM 76.97±0.64 55.85±1.60 68.48±0.85 45.37±2.03 GCAE 77.28±0.32 56.73±0.56 69.14±0.32 47.06±2.45 Table 5: The accuracy of ATSA subtask on SemEval 2014 Task 4. ‘*’: the results with SVM are retrieved from NRC-Canada (Kiritchenko et al., 2014) Model ATSA ATAE 25.28 IAN 82.87 RAM 64.16 TD-LSTM 19.39 GCAE 3.33 Table 6: The time to converge in seconds on ATSA task. Gates Restaurant-Large Restaurant 2014 Test Hard Test Test Hard Test GTU 84.62 60.25 79.31 51.93 GLU 84.74 59.82 79.12 50.80 GTRU 85.92 70.75 79.35 50.55 Table 7: The accuracy of different gating units on restaurant reviews on ACSA task. ing process. Since the performance of SVM is retrieved from the original paper, we are not able to compare the training time of SVM. 6.5 Gating Mechanisms In this section, we compare GLU (X ∗W + b) × σ(X ∗Wa + Vva + ba) (Dauphin et al., 2017), Average to good Thai food but terrible delivery food delivery Figure 3: The outputs of the ReLU gates in GTRU. GTU tanh(X∗W +b)×σ(X∗Wa +Vva +ba) (van den Oord et al., 2016), and GTRU used in GCAE. Table 7 shows that all of three gating units achieve relatively high accuracy on restaurant datasets. GTRU outperforms the other gates. It has a convolutional layer generating aspect features via ReLU activation function, which controls the magnitude of the sentiment signals according to the given aspect information. On the other hand, the sigmoid function in GTU and GLU has the upper bound +1, which may not be able to distill sentiment features effectively. 7 Visualization In this section, we take a concrete review sentence as an example to illustrate how the proposed gate GTRU works. It is more difficult to visualize 2522 the weights generated by the gates than the attention weights in other neural networks. The attention weight score is a global score over the words and the vector dimensions; whereas in our model, there are Nword × Nfilter × Ndimension gate outputs. Therefore, we train a small model with only one filter which is only three word wide. Then, for each word, we sum the Ndimension outputs of the ReLU gates. After normalization, we plot the values on each word in Figure 3. Given different aspect targets, the ReLU gates would control the magnitude of the outputs of the tanh gates. 8 Conclusions and Future Work In this paper, we proposed an efficient convolutional neural network with gating mechanisms for ACSA and ATSA tasks. 
GTRU can effectively control the sentiment flow according to the given aspect information, and two convolutional layers model the aspect and sentiment information separately. We prove the performance improvement compared with other neural models by extensive experiments on SemEval datasets. How to leverage large-scale sentiment lexicons in neural networks would be our future work. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. In ICLR, pages CoRR abs–1409.0473. Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent Attention Network on Memory for Aspect Sentiment Analysis. In EMNLP, pages 463–472. Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. In NIPS. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P Kuksa. 2011. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12:2493–2537. Alexis Conneau, Holger Schwenk, Lo¨ıc Barrault, and Yann LeCun. 2016. Very Deep Convolutional Networks for Text Classification. In EACL, pages 1107–1116. Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language Modeling with Gated Convolutional Networks. In ICML, pages 933–941. Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive Recursive Neural Network for Target-dependent Twitter Sentiment Classification. In ACL, pages 49–54. John C Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, pages 2121–2159. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional Sequence to Sequence Learning. In ICML, pages 1243–1252. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long Short-Term Memory. Neural computation, 9(8):1735–1780. Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, A¨aron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. Neural Machine Translation in Linear Time . CoRR, abs/1610.10099. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In ACL, pages 655–665. Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. In EMNLP, pages 1746– 1751. Svetlana Kiritchenko, Xiaodan Zhu, Colin Cherry, and Saif M. Mohammad. 2014. NRC-Canada-2014: Detecting aspects and sentiment in customer reviews. In SemEval@COLING, pages 437–442, Stroudsburg, PA, USA. Association for Computational Linguistics. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent Convolutional Neural Networks for Text Classification. AAAI, pages 2267–2273. Himabindu Lakkaraju, Richard Socher, and Chris Manning. 2014. Aspect specific sentiment analysis using hierarchical deep learning. In NIPS Workshop on Deep Learning and Representation Learning. Hoa T Le, Christophe Cerisara, and Alexandre Denis. 2017. Do Convolutional Networks need to be Deep for Text Classification ? CoRR, abs/1707.04108. Bing Liu and Lei Zhang. 2012. A Survey of Opinion Mining and Sentiment Analysis. Mining Text Data, (Chapter 13):415–463. Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive Attention Networks for Aspect-Level Sentiment Classification. In IJCAI, pages 4068–4074. International Joint Conferences on Artificial Intelligence Organization. 
A¨aron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Koray Kavukcuoglu, Oriol Vinyals, and Alex Graves. 2016. Conditional Image Generation with PixelCNN Decoders. In NIPS, pages 4790–4798. 2523 Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends R⃝in Information Retrieva, 2:1–135. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global Vectors for Word Representation. In EMNLP, pages 1532–1543. Maria Pontiki, Dimitrios Galanis, John Pavlopoulos, Haris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In SemEval@COLING, pages 27–35, Stroudsburg, PA, USA. Association for Computational Linguistics. Sebastian Ruder, Parsa Ghaffari, and John G Breslin. 2016a. A Hierarchical Model of Reviews for Aspect-based Sentiment Analysis. In EMNLP, pages 999–1005. Sebastian Ruder, Parsa Ghaffari, and John G Breslin. 2016b. INSIGHT-1 at SemEval-2016 Task 5 - Deep Learning for Multilingual Aspect-based Sentiment Analysis. In SemEval@NAACL-HLT, pages 330– 336. Richard Socher, Alex Perelygin, Jean Y Wu, and Jason Chuang. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, pages 1631–1642. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. In ACL, pages 1556–1566. Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2016a. Effective LSTMs for Target-Dependent Sentiment Classification. In COLING, pages 3298– 3307. Duyu Tang, Bing Qin, and Ting Liu. 2015. Document Modeling with Gated Recurrent Neural Network for Sentiment Classification. In EMNLP, pages 1422– 1432. Duyu Tang, Bing Qin, and Ting Liu. 2016b. Aspect Level Sentiment Classification with Deep Memory Network. In EMNLP, pages 214–224. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016a. Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis. In EMNLP, pages 616–626. Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016b. Attention-based LSTM for Aspectlevel Sentiment Classification. In EMNLP, pages 606–615. Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory Networks. In ICLR, pages CoRR abs–1410.3916. Jiacheng Xu, Danlu Chen, Xipeng Qiu, and Xuanjing Huang. 2016. Cached Long Short-Term Memory Neural Networks for Document-Level Sentiment Classification. In EMNLP, pages 1660–1669. Wei Xue, Wubai Zhou, Tao Li, and Qing Wang. 2017. Mtna: A neural multi-task model for aspect category classification and aspect term extraction on restaurant reviews. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 151–156. Meishan Zhang, Yue Zhang, and Duy-Tin Vo. 2016. Gated Neural Networks for Targeted Sentiment Analysis. In AAAI, pages 3087–3093.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2524–2534 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2524 A Helping Hand: Transfer Learning for Deep Sentiment Analysis Xin Dong Rutgers University New Brunswick, NJ, USA [email protected] Gerard de Melo Rutgers University New Brunswick, NJ, USA [email protected] Abstract Deep convolutional neural networks excel at sentiment polarity classification, but tend to require substantial amounts of training data, which moreover differs quite significantly between domains. In this work, we present an approach to feed generic cues into the training process of such networks, leading to better generalization abilities given limited training data. We propose to induce sentiment embeddings via supervision on extrinsic data, which are then fed into the model via a dedicated memorybased component. We observe significant gains in effectiveness on a range of different datasets in seven different languages. 1 Introduction Over the past decades, sentiment analysis has grown from an academic endeavour to an essential analytics tool. Across the globe, people are voicing their opinion in online social media, product review sites, booking platforms, blogs, etc. Hence, it is important to keep abreast of ongoing developments in all pertinent markets, accounting for different domains as well as different languages. In recent years, deep neural architectures based on convolutional or recurrent layers have become established as the preeminent models for supervised sentiment polarity classification. At the same time, it is also frequently observed that deep neural networks tend to be particularly data-hungry. This is a problem in many real-world settings, where large amounts of training examples may be too costly to obtain for every target domain. A model trained on movie reviews, for instance, will fare very poorly on the task of assessing restaurant or hotel reviews, let alone tweets about politicians. In this paper, we investigate how extrinsic signals can be incorporated into deep neural networks for sentiment analysis. Numerous papers have found the use of regular pre-trained word vector representations to be beneficial for sentiment analysis (Socher et al., 2013; Kim, 2014; dos Santos and de C. Gatti, 2014). In our paper, we instead consider word embeddings specifically specialized for the task of sentiment analysis, studying how they can lead to stronger and more consistent gains, despite the fact that the embeddings were obtained using out-of-domain data. An intuitive solution would be to concatenate regular embeddings, which provide semantic relatedness cues, with sentiment polarity cues that are captured in additional dimensions. We instead propose a bespoke convolutional neural network architecture with a separate memory module dedicated to the sentiment embeddings. Our empirical study shows that the sentiment embeddings can lead to consistent gains across different datasets in a diverse set of domains and languages if a suitable neural network architecture is used. 2 Approach 2.1 Sentiment Embedding Computation Our goal is to incorporate external cues into a deep neural network such that the network is able to generalize better even when training data is scarce. 
While in computer vision, weights pre-trained on ImageNet are often used for transfer learning, the most popular way to incorporate external information into deep neural networks for text is to draw on word embeddings trained on vast amounts of word context information (Mikolov et al., 2013; Pennington et al., 2014; Peters et al., 2018). Indeed, the semantic relatedness signals provided by such representations often lead to slightly improved results in polarity classification tasks (Socher et al., 2013; Kim, 2014; dos Santos and de C. Gatti, 2014). However, the co-occurrence-based objectives of word2vec and GloVe do not consider sentiment 2525 specifically. We thus seek to examine how complementary sentiment-specific information from an external source can give rise to further gains. Transfer Learning. To this end, our goal is to induce sentiment embeddings that capture sentiment polarity signals in multiple domains and hence may be useful across a range of different sentiment analysis tasks. The multi-domain nature of these distinguish them from the kinds of generic polarity scores captured in sentiment polarity lexicons. We achieve this via transfer learning from trained models, benefiting from supervision on a series of sentiment polarity tasks from different domains. Given a training collection consisting of n binary classification tasks (e.g., with documents in n different domains), we learn n corresponding polarity prediction models. From these, we then extract token-level scores that are tied to specific prediction outcomes. Specifically, we train n linear models fi(x) = w⊺ i x+bi for tasks i = 1, . . . , n. Then, each vocabulary word index j is assigned a new ndimensional word vector xj = (w1,j, · · · , wn,j) that incorporates the linear coefficients for that word across the different linear models. A minor challenge is that na¨ıvely using bag-ofword features can lead to counter-intuitive weights. If a word such as “pleased” in one domain mainly occurs after the word “not”, while the reviews in another domain primarily used “pleased” in its unnegated form, then “pleased” would be assessed as possessing opposite polarities in different domains. To avoid this, we assume that features are preprocessed to better reflect whether words occur in a negated context. In our experiments, we simply treat occurrences of “not ⟨word⟩” as a single feature “not ⟨word⟩”. Of course, one can replace this heuristic with much more sophisticated techniques that fully account for the scope of a wider range of negation constructions. Graph-Based Extension. Most sentiment-related resources are available for the English language. To produce vectors for other languages in our experiments, we rely on cross-lingual projection via graph-based propagation (de Melo, 2015; de Melo, 2017; Dong and de Melo, 2018). At this point, we have a set of initial sentiment embedding vectors ˜vx ∈Rn for words x ∈V0. We assume that we have a lexical knowledge graph GL = (V, AL) with a node set consisting of an extended multilingual vocabulary V ⊇ V0 and a set of weighted directed arcs AL = {(x1, x′ 1, w1), . . . , (xm, x′ m, wm)}. Each such arc reflects a weighted semantic connection between two vocabulary items x, x′ ∈V , where vocabulary items are words labeled with their respective language. Typically, many of the arcs in the GL would reflect translational equivalence, but in our experiments, we also include monolingual links between semantically related words. 
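Before turning to the graph-based projection, the transfer-learning step can be sketched as follows, using the scikit-learn linear SVMs that the experimental section reports for the per-domain models. The tokenization, the shared vocabulary across domains, and the helper names are illustrative assumptions; the actual setup additionally trains one model on the union of all domains, yielding a 26-dimensional embedding.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def negate(tokens):
    """Crude negation heuristic from above: fold "not w" into one feature "not_w"."""
    out, i = [], 0
    while i < len(tokens):
        if tokens[i] == "not" and i + 1 < len(tokens):
            out.append("not_" + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

def sentiment_embeddings(domains):
    """domains: assumed list of (texts, labels) pairs, one per domain.
    Returns a dict mapping each word to the vector of its linear coefficients
    across the n per-domain models, i.e. its sentiment embedding."""
    vectorizer = CountVectorizer(tokenizer=lambda t: negate(t.split()), binary=True)
    vectorizer.fit([t for texts, _ in domains for t in texts])   # shared vocabulary
    vocab = vectorizer.get_feature_names_out()
    coefs = []
    for texts, labels in domains:
        clf = LinearSVC().fit(vectorizer.transform(texts), labels)
        coefs.append(clf.coef_.ravel())                          # one weight per word
    matrix = np.stack(coefs, axis=1)                             # shape (|V|, n_domains)
    return dict(zip(vocab, matrix))
```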
Given the graph G_L and the initial vectors ṽ_x, we aim to minimize \[ -\sum_{x \in V} v_x^{\top} \Bigg( \frac{1}{\sum_{(x,x',w) \in A_L} w} \sum_{(x,x',w) \in A_L} w\, v_{x'} \Bigg) + C \sum_{x \in V_0} \lVert v_x - \tilde{v}_x \rVert^2 \qquad (1) \] The first component of this objective encourages the sentiment embedding of each word to accord with those of its connected words, in terms of the dot product. The second component ensures that the deviation from any available initial word vectors ṽ_x remains minimal (for some very large constant C). For optimization, we preinitialize v_x = ṽ_x for all x ∈ V_0 and then rely on stochastic gradient descent steps. 2.2 Dual-Module Memory-based CNNs To feed this sentiment information into our architecture, we propose a Dual-Module Memory-based Convolutional Neural Network (DM-MCNN) approach, which incorporates a dedicated memory module to process the sentiment embeddings, as illustrated in Fig. 1. While the module with regular word embeddings enables the model to learn salient patterns and harness the nearest-neighbour and linear substructure properties of word embeddings, we conjecture that a separate sentiment memory module allows the model to better exploit the information contributed by the sentiment embeddings. [Figure 1: (a) Dual-Module Memory-based Convolutional Neural Network architecture. (b) A single layer in the Memory Module.] Convolutional Module Inputs and Filters. The convolutional module input of the DM-MCNN is a sentence matrix S ∈ R^{s×d}, whose rows represent the words of the input sentence after tokenization. In the case of S, i.e., in the regular module, each word is represented by its conventional word vector representation. Here, s refers to the length of the sentence, and d is the dimensionality of the regular word vectors. We perform convolutional operations on these matrices via linear filters. Given rows representing discrete words, we rely on weight matrices W ∈ R^{h×d} with region size h. We use the notation S_{i:j} to denote the sub-matrix of S from row i to row j. Supposing that the weight matrix has a filter width of h, a wide convolution (Kalchbrenner et al., 2014) is induced such that out-of-range sub-matrix values S_{i,j} with i < 1 or i > s are taken to be zero. Thus, applying the filter to sub-matrices of S yields the output sequence o ∈ R^{s+h−1} with \[ o_i = W \odot S_{i:i+h-1}, \qquad (2) \] where the ⊙ operator denotes the sum of an element-wise multiplication. Wide convolutions ensure that filters can cover words at the margins of the sentence matrix. Next, the entries c_i of the feature map c ∈ R^{s+h−1} are computed as c_i = f(o_i + b) for i = 1, ..., s + h − 1, where the parameter b ∈ R is a bias term and f is an activation function. Multiple Layers in Memory Module. The memory module receives as input the sequence of sentiment embedding vectors for the input sentence and attempts to draw conclusions about the overall sentiment polarity of the entire input sequence. Given a set of sentence words S = {w_1, w_2, w_3, ..., w_n}, each word is mapped to its sentiment embedding vector of dimension d_s, and we denote this set of vectors as V_s. The preliminary sentiment level v_p is also a vector of dimensionality d_s. We take the mean of all sentiment vectors v_i for words w_i ∈ S to initialize v_p.
Next, we compute a vector s of similarities s_i between v_p and each sentiment word vector v_i, by taking the inner product, followed by ℓ2-normalization and a softmax: \[ s_i = \frac{\exp\left( \frac{v_p^{\top} v_i}{\lVert v_p^{\top} v_i \rVert_2} \right)}{\sum_{i'} \exp\left( \frac{v_p^{\top} v_{i'}}{\lVert v_p^{\top} v_{i'} \rVert_2} \right)} \qquad (3) \] As the sentiment embeddings used in our paper are generated from a linear model, the degree of correspondence between v_p and v_i can adequately be assessed by the inner product. The resulting vector of scores s can be regarded as yielding sentiment weights for each word in the sentence. We apply ℓ2-normalization to ensure a more balanced weight distribution. The output sentiment level vector v_o is then a sum over the sentiment inputs v_i weighted by the ℓ2-normalized vector of similarities: \[ v_o = \sum_i \frac{s_i}{\lVert s \rVert_2}\, v_i \qquad (4) \] This processing can be repeated in multiple passes, akin to how end-to-end memory networks for question answering often perform multiple hops (Sukhbaatar et al., 2015). While in the first iteration v_p was set to the mean sentiment vector, subsequent passes may allow us to iteratively refine this vector.
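A minimal PyTorch sketch of this memory module is given below, covering one pass (Equations 3 and 4) as well as the multi-pass refinement that is formalized in Equation 5 immediately after this sketch. Tensor shapes, variable names, and the two-pass default are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def memory_module(V_s, n_passes=2):
    """Sketch of the DM-MCNN memory module.
    V_s: (batch, seq_len, d_s) sentiment embeddings of the sentence words.
    Returns the final sentiment-level vector v_o of shape (batch, d_s)."""
    v_p = V_s.mean(dim=1)                                     # initial sentiment level: mean vector
    for _ in range(n_passes):
        scores = torch.einsum("bd,bld->bl", v_p, V_s)         # inner products v_p . v_i
        s = F.softmax(F.normalize(scores, p=2, dim=1), dim=1) # l2-normalize, then softmax (Eq. 3)
        w = F.normalize(s, p=2, dim=1)                        # l2-normalized sentiment weights
        v_o = torch.einsum("bl,bld->bd", w, V_s)              # weighted sum of sentiment vectors (Eq. 4)
        v_p = v_o + v_p                                       # multi-pass refinement (Eq. 5 below)
    return v_o
```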
We only used sentence-level data in our experiment. • The SemEval-2016 Task 5 (SE16-T5) dataset (Pontiki et al., 2016) provides Spanish reviews of restaurants. It targeted aspect-based sentiment analysis, so we converted the entity-level annotations to sentence-level polarity labels via voting. As the number of entities per sentence is often one or very low, this process is reasonably precise. In any case, it enables us to compare the ability of different model variants to learn to recognize pertinent words. • From TripAdvisor (TA), we crawled German, Russian, Italian, Czech, and Japanese reviews of restaurants and hotels. We removed threestar reviews, as these can be regarded as neutral ones, so reviews with a rating < 3 are considered negative, while those with a rating > 3 were deemed positive. • The Amazon Fine Food Reviews AFF (McAuley and Leskovec, 2013) dataset provides food reviews left on Amazon. We chose a random subset of it with preprocessing as for TripAdvisor. As there was no test set provided for TripAdvisor or for the Amazon Fine Food Reviews data, we randomly partitioned this data into training, validation, and test splits with a 80%/10%/20% ratio. Additionally, 10% of the training sets from SE16-T5 were randomly extracted and reserved for validation, while SST provides its own validation set. The new datasets are available from http: //gerard.demelo.org/sentiment/. Embeddings. The standard pre-trained word vectors used for English are the GloVe (Pennington et al., 2014) ones trained on 840 billion tokens of 2528 Table 2: DM-MCNN Model Parameter Settings. (a) General configuration. Description Values Convol. Module filter region size (3,4,5) feature maps 100 pooling 1d-max pooling Memory Module # passes (k) 2 feature vector size 100 dropout rate 0.5 optimizer Adam activation function ReLU batch size 50 (b) Learning rate α used in DM-MCNN under 7 languages. en es de ru α 0.0004 0.0008 0.003 0.003 cs it ja α 0.003 0.003 0.003 Common Crawl data1, while for other languages, we rely on the Facebook fastText Wikipedia embeddings (Bojanowski et al., 2016) as input representations. All of these are 300-dimensional. The vectors are either fed to the CNN, or to the convolutional module of the DM-MCNN during initialization, while unknown words are initialized with zeros. All words, including the unknown ones, are fine-tuned during the training process. For our transfer learning approach, our experiments rely on the multi-domain sentiment dataset by Blitzer et al. (2007), collected from Amazon customers reviews. This dataset includes 25 categories of products and is used to generate our sentiment embeddings using linear models. Specifically, we train linear SVMs using scikit-learn to extract word coefficients in each domain and also for the union of all domains together, yielding a 26-dimensional sentiment embedding. For comparison and analysis, we also consider several alternative forms of infusing external cues. Firstly, lexicon-driven methods have often been used for domain-independent sentiment analysis. We consider a recent sentiment lexicon called VADER (Hutto and Gilbert, 2014). The polarity scores assigned to words by the lexicon are taken as the components of a set of 1-dimensional word vectors (dividing the original scores by the difference between max and min polarity scores for normalization). 
Secondly, as another particularly strong alternative, we consider the SocialSent Reddit community-specific lexicons mined by the 1https://nlp.stanford.edu/projects/glove/ Stanford NLP group (Hamilton et al., 2016). These contain separate domain-specific scores for 250 different Reddit communities, and hence result in 250-dimensional embeddings. For cross-lingual projection, we extract links between words from a 2017 dump of the English edition of Wiktionary. We restrict the vocabulary link set to include the languages in Table 1, mining corresponding translation, synonymy, derivation, and etymological links from Wiktionary. Neural Network Details. For CNNs, we make use of the well-known CNN-non-static architecture and hyperparameters proposed by Kim (2014), with a learning rate of 0.0006, obtained by tuning on the validation data. For our DM-MCNN models, the configuration of the convolutional module is the same as for CNNs, and the remaining hyperparameter values were as well tuned on the validation sets. An overview of the relevant network parameter values is given in Table 2. For greater efficiency and better convergence properties, the training relies on mini-batches. Our implementation considers the maximal sentence length in each mini-batch and zero-pads all other sentences to this length under convolutional module, thus enabling uniform and fast processing of each mini-batch. All neural network architectures are implemented using the PyTorch framework2. 3.2 Results and Analysis Baseline Results. Our main results are summarized in Table 3. We compare both regular CNNs and our dual-module alternative DM-MCNNs under a variety of settings. A common approach is to use a CNN with randomly initialized word vectors. Comparing this to CNNs with GloVe/fastText embeddings, where GloVe is used for English, and fastText is used for all other languages, we observe substantial improvements across all datasets. This shows that word vectors do tend to convey pertinent word semantics signals that enable models to generalize better. Note also that the accuracy using GloVe on the English movies review dataset is consistent with numbers reported in previous work (Zhang and Wallace, 2015). Dual-Module Architecture. Next, we consider our DM-MCNNs with their dual-module mechanism to take advantage of transfer learning. We observe fairly consistent and sometimes quite substan2http://pytorch.org 2529 Table 3: Accuracy on several different English and non-English datasets from different domains, comparing our architecture against CNNs. Rest.: restaurants domain. Approach d en es ru de cs it ja Movies Food Rest. Hotels Rest. Rest. Hotels Rest. CNN — Random Init. 300 80.78 86.63 81.50 90.18 88.09 90.00 93.18 78.59 — Word Vec. Init. 300 85.72 87.97 85.13 92.82 92.10 92.46 96.20 77.62 Our Approach — With fine-tuning 300/26 86.99 90.08 85.02 93.40 93.14 93.08 95.50 85.40 — No fine-tuning 300/26 86.38 88.81 85.70 94.87 94.59 93.48 96.20 77.62 CNN with Concatenated Sentiment Embeddings — VADER 301 85.89 88.39 84.90 92.31 88.36 93.08 96.34 77.62 — SocialSent 550 84.90 88.48 82.63 92.23 91.48 86.56 94.51 76.64 — Our Embeddings 326 86.05 89.07 84.56 92.72 93.56 91.24 95.78 77.62 Our Model with Alternative Sentiment Embeddings — Random 300/26 86.16 87.97 85.24 93.99 93.14 92.67 96.20 80.29 — VADER 300/1 86.33 88.39 84.45 94.18 92.31 92.87 96.48 75.43 — SocialSent 300/250 86.38 87.89 83.09 93.40 92.31 93.28 96.62 81.02 tial gains over CNNs with just the GloVe/fastText vectors. 
We see that the sentiment embeddings provide important complementary signals beyond what is provided in regular word embeddings, and that our dual-module approach succeeds at exploiting these signals across a range of different domains and languages. Our transfer learning approach leads to sentiment embeddings that capture signals from multiple domains. The model successfully picks the pertinent parts of this signal for datasets from domains as different as movie reviews and food reviews. We report results for two different training conditions. In the first condition (with fine-tuning), the sentiment embedding matrix is preinitialized using the data from our transfer learning procedure, but the model is then able to modify these arbitrarily via backpropagation. In the second condition (no fine-tuning), we simply use our sentiment embedding matrix as is, and do not update it. Instead, the model is able to update its various other parameters, particularly its various weight matrices and bias vectors. While both training conditions outperform the CNN baseline, there is no obvious winner among the two. When the training data set is very small and hence there is a significant risk of overfitting, one may be best advised to forgo fine-tuning. In contrast, when it is somewhat larger (as for our English datasets, which each have over 5,000 training instances) or when the language is particularly idiosyncratic or not covered sufficiently well by our cross-lingual projection procedure (such as perhaps for Japanese), then fine-tuning is recommended. In this case, fine-tuning may allow the model to adjust the embeddings to cater to domain-specific meanings and corpus-specific correlations, while also overcoming possible sparsity of the cross-lingual vectors resulting from a lack of coverage of the translation dictionary. It is important to note that many of the results in Table 3 stem from embeddings that were created automatically using cross-lingual projection. Our transfer learning embeddings were induced from entirely English data. Although the automatically projected cross-lingual embeddings are very noisy and limited in their coverage, particularly with respect to inflected forms, our model succeeds in exploiting them to obtain substantial gains in several different languages and domains. Alternative Embedding Methods. For a more detailed analysis, we conducted additional experiments with alternative embedding conditions. In particular, as a simpler means of achieving gains over standard CNNs, we propose to use CNNs with word vectors augmented with sentiment cues. Given that regular word embeddings appear to be useful for capturing semantics, one may conjecture that extending these word vectors with additional dimensions to capture sentiment information can lead to improved results. For this, we simply concatenate the regular word embeddings with different forms of sentiment embeddings that we have obtained, including those from the sentiment lexicon VADER, from the Stanford SocialSent project, and from our transfer learning procedure via Amazon reviews. To conduct these experiments, we also produced cross-lingual projections of the VADER and SocialSent embedding data. The results of using these embeddings as opposed to regular ones are somewhat mixed. Con2530 catenating the VADER embeddings or our transfer learning ones leads to minor improvements on English, and our cross-lingual projection of them leads to occasional gains, but the results are far from consistent. 
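The concatenation baselines discussed here simply append the sentiment dimensions to each generic word vector before the CNN sees it. A minimal sketch of this construction follows; treating words missing from the sentiment embeddings as zero vectors is an assumption of the illustration.

```python
# Sketch of the concatenation baseline: generic word vector + sentiment dimensions.
import numpy as np

def concatenate_embeddings(word_vectors, sentiment_vectors, sentiment_dim):
    """Both arguments are dicts mapping word -> 1-D numpy array."""
    combined = {}
    for word, vec in word_vectors.items():
        sent = sentiment_vectors.get(word, np.zeros(sentiment_dim))
        combined[word] = np.concatenate([vec, sent])   # e.g. 300 + 26 = 326 dimensions
    return combined
```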
Even on English, adding the 250dimensional SocialSent embedding seems to degrade the effectiveness of the CNN, although all input information that was previously there continues to be provided to it. This suggests that a simple concatenation may harm the model’s ability to harness the semantic information carried by regular word vectors. This risk seems more pronounced for larger-dimensional sentiment embeddings. In contrast, with our DM-MCNNs approach, the sentiment information is provided to the model in a separate memory module that makes multiple passes over this data before combining it with the regular CNN module’s signals. Thus, the model can exploit the two kinds of information independently, and learn a suitable way to aggregate them to produce an overall output classification. This hence demonstrates not only that the sentiment embeddings tend to provide important complementary signals but also that a dual-module approach is best-suited to incorporate such signals into deep neural models. We also analysed our DM-MCNNs with alternative embeddings. When we feed random sentiment embeddings into them, not unexpectedly, in many cases the results do not improve much. This is because our memory module has been designed to leverage informative prior information and to reweight its signals based on this assumption. Hence, it is important to feed genuine sentiment cues into the memory module. Yet, on some languages, we nevertheless note improvements over the CNN baseline. In these cases, even if similarities between pairs of sentiment vectors initially do not carry any significance, backpropagation may have succeeded in updating the sentiment embedding matrix such that eventually the memory module becomes able to discern salient patterns in the data. We also considered our DM-MCNNs when feeding the VADER or SocialSent embeddings into the memory module. In this case, it also mostly succeeded in outperforming the CNN baseline. In fact, on the Italian TripAdvisor dataset, the SocialSent embeddings yielded the overall strongest results. In all other cases, however, our transfer learning embeddings proved more effective. We believe that this is because they are obtained in a data-driven manner based on an objective that directly seeks to optimize for classification accuracy. Influence of Training Set Size. To look into the effect of our approach with restricted training data, we first consider the SST dataset as an instructive example. We set the training set size to 20%, 50%, 100% of its original size and compared our full dual module model with sentiment embeddings against state-of-the-art methods. The results are given in Table 4. Our dual module CNN has a sizeable lead over other methods when only using 20% of SST training set. Given that we study how to incorporate extrinsic cues into a deep neural model, we consider CNN-Ruleq (Hu et al., 2016) and Gumbel Tree-LSTM (Choi et al., 2017) as the relevant baseline methods. The CNN-Rule-q method used an iterative distillation method that exploits structured information from logical rules, which for SST is based on the word but to determine the weights in the neural network. The Gumbel Tree-LSTM approach incorporates a Straight-Through Gumbel-Softmax into a treestructured LSTM architecture that learns how to compose task-specific tree structures starting from plain raw text. 
They all require a large amount of data to pick up sufficient information during training, while our method is able to efficiently capture sentiment information from our transfer learning even though the data is scarce. For further analysis, we also artificially reduce the training set sizes to 50% of the original sizes given in Table 1 for our multilingual datasets. The results are plotted in Fig. 2. We compare: 1) the CNN model baseline, 2) the CNN model but concatenating our sentiment embeddings from transfer learning, and 3) our full dual module model with these sentiment embeddings. We already saw in Table 3 that we obtain reasonable gains over generic embeddings when using the full training sets. In Fig. 2, we additionally observe that the gains are overall much more remarkable on smaller training sets. This shows that the sentiment embeddings are most useful when they are of high quality and domain-specific training data is scarce, although a modest amount of training data is still needed for the model to be able to adapt to the target domain. Inspection of the DM-MCNN-learned Deep Sentiment Information. To further investigate what the model is learning, we examine the changes of weights of words on the English SST dataset when using the VADER sentiment embeddings 2531 Table 4: Accuracy on SST with increasing training sizes Model 20% 50% 100% CNN (Kim, 2014) 83.14 84.29 85.72 CNN-Rule-q (Hu et al., 2016) 83.75 85.45 86.49 Gumbel Tree-LSTM (Choi et al., 2017) 84.04 84.83 86.80 DC-MCNN (ours) 85.06 86.16 86.99 50% 100% en(FR) 60 70 80 90 100 Accuracy (%) 50% 100% de 60 70 80 90 100 50% 100% cs 60 70 80 90 100 50% 100% it 60 70 80 90 100 Accuracy (%) CNN CNN with Our Transfer Learning DM-MCNN 50% 100% ru 60 70 80 90 100 50% 100% jp 60 70 80 90 100 Figure 2: Effectiveness of three embedding alternatives on 6 languages at a reduced training size (comparing 50% and 100%). with DM-MCNNs. Although these are not as powerful as our transfer learning embeddings, the VADER embeddings are the most easily interpretable here, since they are one-dimensional, and thus can be regarded as word-specific weights. The result is visualized in Fig. 3. Here, the dark-shaded segments (in blue) refer to the original weights, while the light-shaded segments (in red) refer to the adjusted weights after training. The mediumshaded segments (in purple) reflect the overlap between the two. Hence, whenever we observe a dark (blue) segment above a medium (purple) segment in a bar, we can infer that the fine-tuned weight for a word (e.g., for “plays” in Fig. 3) was lower than the original weight of that word. Conversely, whenever we observe a light (red) segment at the top, the weight increased during training (e.g., for hilarious). Generally, dark (blue) segments reflect decreased weight magnitudes and light (red) ones reflect increased weight magnitudes, both on the positive and on the negative side. We consider in Fig. 3 the top 50 weight changes only of words that were already covered by the original VADER sentiment embeddings. Here, it is worth noting that the weight magnitudes of positive words such as “laugh”, “appealing” and negative words such as “lack”, “missing” increase further, while words such as “damn”, “interest”, “war” see decreases in magnitude, presumably due to their ambiguity and context (e.g., “damn good”, “lost the interest”, descriptions of war movies). Hence, the figure confirms that our DM-MCNN approach is able to exploit and customize the provided sentiment weights for the target domain. 
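The inspection above amounts to comparing each covered word's one-dimensional weight before and after training and ranking the changes. A small sketch of that computation follows, with `original` and `finetuned` as hypothetical dicts mapping words to their VADER-based weights.

```python
# Sketch: rank words by how much their one-dimensional sentiment weight changed.
def top_weight_changes(original, finetuned, k=50):
    changes = []
    for word, before in original.items():
        if before == 0.0:                      # keep only words with non-zero VADER values
            continue
        after = finetuned.get(word, before)
        changes.append((word, before, after, abs(after - before)))
    changes.sort(key=lambda item: item[3], reverse=True)
    return changes[:k]                         # (word, original weight, fine-tuned weight, |delta|)
```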
However, unlike the VADER data, our transfer learning approach results in multi-dimensional sentiment embeddings that can more easily capture multiple domains right from the start, thus making it possible to use them even without further fine-tuning. 4 Related Work Sentiment Mining and Embeddings. There is a long history of work on collecting word polarity scores manually (Hu and Liu, 2004) or via graphbased propagation from seeds (Kim and Hovy, 2004; Baccianella et al., 2010). Maas et al. (2011) present a probabilistic topic model that exploits sentiment supervision during training, leading to rep2532 worth sustain like powerful lack perfectly solid fun none plays instead tragedy problem manages hilarious kind definitely comes wants unless genre lacking nor pleasures pay frailty smarter provides captures damn money assured treat seeing war ghost depressing wo laugh devoid mistake deserve missing jokes interest loses convinced failure appealing stupid 0.4 0.2 0.0 0.2 0.4 Weight of Word Figure 3: Top 50 weight changes of words fine-tuned by the sentiment memory module of the DM-MCNN, using the one-dimensional VADER embeddings, but considering only words with non-zero values in the original VADER data. Here, the dark shade (blue) refers to the original value of word vectors, while the light shade (red) refers to their fine-tuned values after training. The medium intensity (purple) corresponds to the overlap between the original and fine-tuned word vectors. resentations that include sentiment signals. However, in their experiments, the semantic-only models mostly outperform the corresponding full models with extra sentiment signals. Tang et al. (2014) showed that one can acquire sentiment information by learning from millions of training examples via distant supervision. While prior work used such signals for rule-based sentiment analysis or for feature engineering in SVMs and other shallow models, our study examines how they are best be incorporated into deep neural models, as the baseline of na¨ıvely feeding them into the model does not work sufficiently well. Neural Architectures. In terms of architectures, deep recursive neural networks (Socher et al., 2013) were soon outperformed by deep convolutional and recurrent neural networks (˙Irsoy and Cardie, 2014; Kim, 2014). Recent work has investigated more involved models, with ingredients such as Tree-LSTMs (Tai et al., 2015; Looks et al., 2017), hierarchical attention (Yang et al., 2016), user and product attention (Chen et al., 2016), aspectspecific modeling (Wang et al., 2015), and part of speech-specific transition functions (Huang et al., 2017). Large ensemble models also tend to outperform individually trained sentiment analysis models (Looks et al., 2017). The goal of our study is not necessarily to devise the most sophisticated stateof-the-art neural architecture, but to demonstrate how external sentiment cues can be incorporated such architectures. Our initial explorations relied on a simple dual-channel convolutional neural network (Dong and de Melo, 2018). The present work proposes a more sophisticated approach, drawing on ideas from attention mechanisms in machine translation (Bahdanau et al., 2014) as well as from memory networks (Weston et al., 2014) and iterative attention (Kumar et al., 2015), which have proven useful for tasks such as question answering. We incorporate these ideas into a separate memory module that operates alongside the regular convolutional module. 
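For illustration only, the following sketch shows a generic k-pass attention module over per-word sentiment vectors, in the spirit of the memory-network and iterative-attention ideas cited above. It is not the exact DM-MCNN formulation (which is defined earlier in the paper); the dimensions, the additive-style scoring, and the GRU-based query update are assumptions of this sketch.

```python
# Generic sketch of a k-pass attention ("memory") module over per-word sentiment
# vectors. NOT the paper's exact memory module, only an illustrative stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentimentMemory(nn.Module):
    def __init__(self, sent_dim=26, query_dim=100, passes=2):
        super().__init__()
        self.passes = passes
        self.score = nn.Linear(sent_dim + query_dim, 1)   # additive-style scoring (assumption)
        self.update = nn.GRUCell(sent_dim, query_dim)      # query update between passes (assumption)

    def forward(self, memory, query):
        # memory: (batch, seq_len, sent_dim) sentiment vectors of the sentence's words
        # query:  (batch, query_dim), e.g. the convolutional module's sentence vector
        for _ in range(self.passes):
            expanded = query.unsqueeze(1).expand(-1, memory.size(1), -1)
            scores = self.score(torch.cat([memory, expanded], dim=-1)).squeeze(-1)
            weights = F.softmax(scores, dim=-1)                       # attention over words
            summary = torch.bmm(weights.unsqueeze(1), memory).squeeze(1)
            query = self.update(summary, query)                       # refined query
        return query
```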
5 Conclusions Deep neural networks are widely used in sentiment polarity classification, but suffer from their dependence on very large annotated training corpora. In this paper, we study how to incorporate extrinsic cues into the network, beyond just generic word embeddings. We have found that this is best achieved using a dual-module approach that encourages the learning of models with favourable generalization abilities. Our experiments show that this can lead to gains across a number of different languages and domains. Our embeddings and multilingual datasets are freely available from http: //gerard.demelo.org/sentiment/. Acknowledgments This research is funded in part by ARO grant no. W911NF-17-C-0098 as part of the DARPA SocialSim program. 2533 References Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In LREC. European Language Resources Association. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. John Blitzer, Mark Dredze, Fernando Pereira, et al. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL, volume 7, pages 440–447. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606. Huimin Chen, Maosong Sun, Cunchao Tu, Yankai Lin, and Zhiyuan Liu. 2016. Neural sentiment classification with user and product attention. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1650–1659, Austin, Texas. Association for Computational Linguistics. Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2017. Unsupervised learning of task-specific tree structures with tree-lstms. arXiv preprint arXiv:1707.02786. Gerard de Melo. 2015. Wiktionary-based word embeddings. In Proceedings of MT Summit XV. Xin Dong and Gerard de Melo. 2018. Cross-lingual propagation for deep sentiment analysis. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI 2018). AAAI Press. William L. Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky. 2016. Inducing domain-specific sentiment lexicons from unlabeled corpora. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 595–605, Austin, Texas. Association for Computational Linguistics. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In KDD 2004: Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168–177, New York, NY, USA. ACM. Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. 2016. Harnessing deep neural networks with logic rules. arXiv preprint arXiv:1603.06318. Minlie Huang, Qiao Qian, and Xiaoyan Zhu. 2017. Encoding syntactic knowledge in neural networks for sentiment classification. ACM Trans. Inf. Syst., 35(3):26:1–26:27. C.J. Hutto and Eric Gilbert. 2014. Vader: A parsimonious rule-based model for sentiment analysis of social media text. In Proc. ICWSM-14. Ozan ˙Irsoy and Claire Cardie. 2014. Opinion mining with deep recurrent neural networks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 720–728. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188. 
Soo-Min Kim and Eduard Hovy. 2004. Determining the sentiment of opinions. In Proceedings of Coling 2004, pages 1367–1373, Geneva, Switzerland. COLING. Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. 2015. Ask me anything: Dynamic memory networks for natural language processing. CoRR, abs/1506.07285. Moshe Looks, Marcello Herreshoff, DeLesley Hutchins, and Peter Norvig. 2017. Deep learning with dynamic computation graphs. CoRR, abs/1702.02181. Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 142–150. Association for Computational Linguistics. Julian John McAuley and Jure Leskovec. 2013. From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews. In Proceedings of the 22nd international conference on World Wide Web, pages 897–908. ACM. Gerard de Melo. 2017. Inducing conceptual embedding spaces from Wikipedia. In Proceedings of WWW 2017. ACM. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer. 2018. Deep contextualized word representations. ArXiv e-prints. 2534 Maria Pontiki, Dimitrios Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, AL Mohammad, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orph´ee De Clercq, et al. 2016. Semeval-2016 task 5: Aspect based sentiment analysis. Proceedings of SemEval, pages 19–30. C´ıcero Nogueira dos Santos and Ma´ıra A. de C. Gatti. 2014. Deep convolutional neural networks for sentiment analysis of short texts. In COLING 2014. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631–1642. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2440–2448. Curran Associates, Inc. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. CoRR, abs/1503.00075. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentimentspecific word embedding for twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1555–1565, Baltimore, Maryland. Association for Computational Linguistics. Linlin Wang, Kang Liu, Zhu Cao, Jun Zhao, and Gerard de Melo. 2015. 
Sentiment-aspect extraction based on Restricted Boltzmann Machines. In Proceedings of ACL 2015, pages 616–625. Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489, San Diego, California. Association for Computational Linguistics. Ye Zhang and Byron Wallace. 2015. A sensitivity analysis of (and practitioners’ guide to) convolutional neural networks for sentence classification. arXiv preprint arXiv:1510.03820.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2535–2544 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2535 Cold-Start Aware User and Product Attention for Sentiment Classification Reinald Kim Amplayo and Jihyeok Kim and Sua Sung and Seung-won Hwang Yonsei University Seoul, South Korea {rktamplayo, zizi1532, dormouse, seungwonh}@yonsei.ac.kr Abstract The use of user/product information in sentiment analysis is important, especially for cold-start users/products, whose number of reviews are very limited. However, current models do not deal with the cold-start problem which is typical in review websites. In this paper, we present Hybrid Contextualized Sentiment Classifier (HCSC), which contains two modules: (1) a fast word encoder that returns word vectors embedded with short and long range dependency features; and (2) Cold-Start Aware Attention (CSAA), an attention mechanism that considers the existence of cold-start problem when attentively pooling the encoded word vectors. HCSC introduces shared vectors that are constructed from similar users/products, and are used when the original distinct vectors do not have sufficient information (i.e. cold-start). This is decided by a frequency-guided selective gate vector. Our experiments show that in terms of RMSE, HCSC performs significantly better when compared with on famous datasets, despite having less complexity, and thus can be trained much faster. More importantly, our model performs significantly better than previous models when the training data is sparse and has coldstart problems. 1 Introduction Sentiment classification is the fundamental task of sentiment analysis (Pang et al., 2002), where we are to classify the sentiment of a given text. It is widely used on online review websites as they contain huge amounts of review data that can be clas… 0.6 distinct user vector shared user vector User X User A User X Review Count: 5 (Cold Start) final user vector 0.3 User B 0.1 User Z Similarity Weights 0.8 0.2 Figure 1: Conceptual schema of HCSC applied to users. The same idea can be applied to products. sified a sentiment. In these websites, a sentiment is usually represented as an intensity (e.g. 4 out of 5). The reviews are written by users who have bought a product. Recently, sentiment analysis research has focused on personalization (Zhang, 2015) to recommend product to users, and vise versa. To this end, many have used user and product information not only to develop personalization but also to improve the performance of the classification model (Tang et al., 2015). Indeed, these information are important in two ways. First, some expressions are user-specific for a certain sentiment intensity. For example, the phrase “very salty” may have different sentiments for a person who likes salty food and a person who likes otherwise. This is also apparent in terms of products. Second, these additional contexts help mitigate data sparsity and cold-start problems. Coldstart is a problem when the model cannot draw useful information from users/products where data is insufficient. User and product information can help by introducing a frequent user/product with similar attributes to the cold-start user/product. 
Thanks to the promising results of deep neural networks to the sentiment classification task 2536 (Glorot et al., 2011; Tang et al., 2014), more recent models incorporate user and product information to convolutional neural networks (Tang et al., 2015) and deep memory networks (Dou, 2017), and have shown significant improvements. The current state-of-the-art model, NSC (Chen et al., 2016a), introduced an attention mechanism called UPA which is based on user and product information and applied this to a hierarchical LSTM. The main problem with current models is that they use user and product information naively as an ordinary additional context, not considering the possible existence of cold-start problems. This makes NSC more problematic than helpful in reality since majority of the users in review websites have very few number of reviews. To this end, we propose the idea shown in Figure 1. It can be described as follows: If the model does not have enough information to create a user/product vector, then we use a vector computed from other user/product vectors that are similar. We introduce a new model called Hybrid Contextualized Sentiment Classifier (HCSC), which consists of two modules. First, we build a fast yet effective word encoder that accepts word vectors and outputs new encoded vectors that are contextualized with short- and long-range contexts. Second, we combine these vectors into one pooled vector through a novel attention mechanism called Cold-Start Aware Attention (CSAA). The CSAA mechanism has three components: (a) a user/product-specific distinct vector derived from the original user/product information of the review, (b) a user/product-specific shared vector derived from other users/products, and (c) a frequency-guided selective gate which decides which vector to use. Multiple experiments are conducted with the following results: In the original non-sparse datasets, our model performs significantly better than the previous state-of-the-art, NSC, in terms of RMSE, despite being less complex. In the sparse datasets, HCSC performs significantly better than previous competing models. 2 Related work Previous studies have shown that using additional contexts for sentiment classification helps improve the performance of the classifier. We survey several competing baseline models that use user and product information and other models using other kinds of additional context. Baselines: Models with user and product information User and product information are helpful to improve the performance of a sentiment classifier. This argument was verified by Tang et al. (2015) through the observation at the consistency between user/product information and the sentiments and expressions found in the text. Listed below are the following models that employ user and product information: • JMARS (Diao et al., 2014) jointly models the aspects, ratings, and sentiments of a review while considering the user and product information using collaborative filtering and topic modeling techniques. • UPNN (Tang et al., 2015) uses a CNN-based classifier and extends it to incorporate userand product-specific text preference matrix in the word level which modifies the word meaning. • TLFM+PRC (Song et al., 2017) is a textdriven latent factor model that unifies userand product-specific latent factor models represented using the consistency assumption by Tang et al. (2015). 
• UPDMN (Dou, 2017) uses an LSTM classifier as the document encoder and modifies the encoded vector using a deep memory network with other documents of the user/product as the memory. • TUPCNN (Chen et al., 2016b) extends the CNN-based classifier by adding temporal user and product embeddings, which are obtained from a sequential model and learned through the temporal order of reviews. • NSC (Chen et al., 2016a) is the current stateof-the-art model that utilizes a hierarchical LSTM model (Yang et al., 2016) and incorporates user and product information in the attention mechanism. Models with other additional contexts Other additional contexts used previously are spatial (Yang et al., 2017) and temporal (Fukuhara et al., 2007) features which help contextualize the sentiment based on the location where and the time when the text is written. Inferred contexts were also used as additional contexts for sentiment classifiers, such as latent topics (Lin and He, 2009) and aspects (Jo and Oh, 2011) from a topic model, argumentation features (Wachsmuth et al., 2015), and more recently, latent review clusters (Amplayo and Hwang, 2017). These additional con2537 w2 w3 wn w1 w2 w3 w4 w1 w2 w3 w4 w5 𝒉= 𝟑 𝒉= 𝟓 … … Hybrid Contextualized Word Encoder w1 w2 w3 w4 w5 wn … … User Vectors … u1 u2 u3 ux lookup Product Vectors … p1 p2 p3 px lookup Distinct Shared Distinct Shared freq(u1) freq(p1) 𝑳𝒖𝒅 𝑳𝒖 𝑳𝒖𝒑 𝑳𝒑 𝑳𝒖𝒔 𝑳𝒑𝒅 𝑳𝒑𝒔 CSAA (user) CSAA (product) Review Text: Figure 2: Full architecture of HCSC, which consists of the Hybrid Contextualized Word Encoder (middle), and user-specific (left) and product-specific (right) Cold-Start Aware Attention (CSAA). texts were especially useful when data is sparse, i.e. number of instances is small or there exists cold-start entities. Our model differs from the baseline models mainly because we consider the possible existence of the data sparsity problem. Through this, we are able to construct more effective models that are comparably powerful yet more efficient complexity-wise than the state-of-the-art, and are better when the training data is sparse. Ultimately, our goal is to demonstrate that, similar to other additional contexts, user and product information can be used to effectively mitigate the problem caused by cold-start users and products. 3 Our model In this section, we present our model, Hybrid Contextualized Sentiment Classifier (HCSC)1 which consists of a fast hybrid contextualized word encoder and an attention mechanism called Cold-Start Aware Attention (CSAA). The word encoder returns word vectors with both local and global contexts to cover both short and long range dependency relationship between words. The CSAA then incorporates user and product information to the contextualized words through an attention mechanism that considers the possible existence of cold-start problems. The full architecture of the model is presented in Figure 2. We 1The data and code used in this paper are available here: https://github.com/rktamplayo/HCSC. describe the subparts of the model below. 3.1 Hybrid contextualized word encoder The base model is a word encoder that transforms vectors of words {wi} in the text to new word vectors. In this paper, we present a fast yet very effective word encoder based on two different off-theshelf classifiers. The first part of HCWE is based on a CNN model which is widely used in text classification (Kim, 2014). This encoder contextualizes words based on local context words to capture short range relationships between words. 
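A minimal re-implementation of this convolutional encoder is sketched below: odd filter sizes with (h−1)/2 padding yield exactly one feature vector per word, and the feature maps of all filter sizes are concatenated per position. The feature-map count and the choice of ReLU as the non-linearity f are illustrative assumptions, not necessarily the values used in the experiments.

```python
# Sketch of the convolutional word encoder: same-length padding per filter size,
# one encoded vector per word, feature maps concatenated across filter sizes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNWordEncoder(nn.Module):
    def __init__(self, emb_dim=300, feature_maps=100, filter_sizes=(3, 5, 7)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, feature_maps, h, padding=(h - 1) // 2)   # (h-1)/2 padding per side
            for h in filter_sizes
        )

    def forward(self, word_vectors):
        # word_vectors: (batch, seq_len, emb_dim)
        x = word_vectors.transpose(1, 2)              # Conv1d expects (batch, channels, seq_len)
        feats = [F.relu(conv(x)) for conv in self.convs]
        # (batch, seq_len, feature_maps * len(filter_sizes)), one vector per word
        return torch.cat(feats, dim=1).transpose(1, 2)
```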
Specifically, we do the convolution operation using filter matrices Wf ∈Rh×d with filter size h to a window of h words. We do this for different sizes of h. This produces new feature vectors ci,h as shown below, where f(.) is a non-linear function: ci,h = f([wi−(h−1)/2; ...; wi+(h−1)/2]⊤Wf + bf) The convolution operation reduces the number of words differently depending on the filter size h. To prevent loss of information and to produce the same amount of feature vectors ci,h, we pad the texts dynamically such that when the filter size is h, the number of paddings on each side is (h −1)/2. This requires the filter sizes to be odd numbers. Finally, we concatenate all feature vectors of different h’s for each i as the new word vector: wcnni = [ci,h1; ci,h2; ...] 2538 The second part of HCWE is based on an RNN model which is used when texts are longer and include word dependencies that may not be captured by the CNN model. Specifically, we use a bidirectional LSTM and concatenate the forward and backward hidden state vectors as the new word vector, as shown below: −→h i = LSTM(wi, −→h i−1) ←−h i = LSTM(wi, ←−h i+1) wrnni = [−→h i; ←−h i] The answer to the question whether to use local or global context to encode words for sentiment classification is still unclear, and both CNN and RNN models have previous empirical evidence that they perform better than the other (Kim, 2014; McCann et al., 2017). We believe that both short and long range relationships, captured by CNN and RNN respectively, are useful for sentiment classification. There are already previous attempts to intricately combine both CNN and RNN (Zhou et al., 2016), resulting to a slower model. On the other hand, HCWE resorts to combine them by simply concatenating the word vectors encoded from both CNN and RNN encoders, i.e. wi = [wcnni; wrnni]. This straightforward yet fast alternative outputs a word vector with semantics contextualized from both local and global contexts. Moreover, they perform as well as complex hierarchical structured models (Yang et al., 2016; Chen et al., 2016a) which train very slow. 3.2 Cold-start aware attention Incorporating the user and product information of the text as context vectors u and p to attentively pool the word vectors, i.e. e(wi, u, p) = v⊤tanh(Wwwi + Wuu + Wpp + b), has been proven to improve the performance of sentiment classifiers (Chen et al., 2016a). However, this method assumes that the user and product vectors are always present. This is not the case in real world settings where a user/product may be new and has just got its first review. In this case, the vectors u and p are rendered useless and may also contain noisy signals that decrease the overall performance of the models. To this end, we present an attention mechanism called Cold-Start Aware Attention (CSAA). CSAA operates on the idea that a cold-start user/product can use the information of other similar users/products with sufficient number of reviews. CSAA separates the construction of pooled vectors for user and for product, unlike previous methods that use both user/product information to create a single pooled vector. Constructing a user/product-specific pooled vector consists of three parts: the distinct pooled vector created using the original user/product, the shared pooled vector created using similar users/products, and the selective gate to select between the distinct and shared vectors. Finally, the user- and productspecific pooled vectors are combined into one final pooled vector. 
In the following paragraphs, we discuss the step-by-step process on how the user-specific pooled vector is constructed. A similar process is done to construct the product-specific pooled vector, but is not presented here for conciseness. The user-specific distinct pooled vector vd u is created using a method similar to the additive attention mechanism (Bahdanau et al., 2014), i.e. vd u = att({wi}, u), where the context vector is the distinct vector of user u, as shown in the equation below. An equivalent method is used to create the distinct product-specific pooled vector vd p. ed u(wi, u) = vd⊤tanh(W d wwi + W d uu + bd) ad ui = exp(ed u(wi, u)) P j exp(edu(wj, u)) vd u = X i ad ui × wi The user-specific shared pooled vector vs u is created using the same method above, but using a shared context vector u′. The shared context vector u′ is constructed using the vectors of other users and weighted based on a similarity weight. Similarity is defined as how similar the word usages of two users are. This means that if a user uk uses words similarly to the word usage of the original user u, then uk receives a high similarity weight. The similarity weight as uk is calculated as the softmax of the product of µ({wi}) and uk with a project matrix in the middle, where µ({wi}) is the average of the word vectors. The similarity weights are used to create u′, as shown below. Similar method is used for the shared productspecific pooled vector vs p. 2539 es u(µ({wi}), uk) = µ({wi})W s uuk as uk = exp(es u(wi, uk)) P j exp(esu(wi, uj)) u′ = X k as uk × uk vs u = att({wi}, u′) We select between the user-specific distinct and shared pooled vector, vd u and vs u, into one userspecific pooled vector vu through a gate vector gu. The vector gu should put more weight to the distinct vector when user u is not cold-start and to the shared vector when u is otherwise. We use a frequency-guided selective gate that utilizes the frequency, i.e. the number of reviews user u has written. The challenge is that we do not know how many reviews should be considered cold-start or not. This is automatically learned through a twoparameter Weibull cumulative distribution where given the review frequency of the user f(u), a learned shape vector ku and a learned scale vector λu, a probability vector is sampled and is used as the gate vector gu to create vu, according to the equation below. We normalized f(u) by dividing it to the average user review frequency. The relu function ensures that both ku and λu are nonnegative vectors. The final product-specific pooled vector vp is created in a similar manner. gu = 1 −exp  −  f(u) relu(λu) relu(ku) vu = gu × vd u + (1 −gu) × vs u Finally, we combine both the user- and productspecific pooled vector, vu and vp, into one pooled vector vup. This is done by using a gate vector gup created using a sigmoidal transformation of the concatenation of vu and vp, as illustrated in the equation below. gup = σ(Wg[vu; vp] + bg) vup = gup × vu + (1 −gup) × vp We note that our attention mechanism can be applied to any word encoders, including the basic bag of words (BoW) to more recent models such as CNN and RNN. Later (in Section 4.2), we show that CSAA improves the performance of simpler models greatly. 3.3 Training objective Normally, a sentiment classifier transforms the final vector vup, usually in a linear fashion, into a vector with a dimension equivalent to the number of classes C. A softmax layer is then used to obtain a probability distribution y′ over the sentiment classes. 
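To make the frequency-guided selective gate concrete, the sketch below re-implements the Weibull-CDF gate and the distinct/shared combination from the equations above in PyTorch. It is an illustrative re-implementation rather than the authors' code, and the small epsilon terms for numerical stability are an addition not present in the equations.

```python
# Sketch of the frequency-guided selective gate: a two-parameter Weibull CDF over
# the normalized review frequency mixes the distinct and shared pooled vectors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyGate(nn.Module):
    def __init__(self, dim=300):
        super().__init__()
        self.k = nn.Parameter(torch.ones(dim))        # learned shape vector k
        self.lam = nn.Parameter(torch.ones(dim))      # learned scale vector lambda

    def forward(self, freq, v_distinct, v_shared, eps=1e-8):
        # freq: (batch,) review counts already divided by the average review frequency
        # v_distinct, v_shared: (batch, dim) pooled vectors
        ratio = (freq.unsqueeze(1) + eps) / (F.relu(self.lam) + eps)
        gate = 1.0 - torch.exp(-ratio ** (F.relu(self.k) + eps))   # Weibull CDF, values in [0, 1)
        # few reviews -> gate near 0 -> rely on the shared vector; many reviews -> distinct vector
        return gate * v_distinct + (1.0 - gate) * v_shared
```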
Finally, the full model uses a crossentropy over all training documents D as objective function L during training, where y is the gold probability distribution: y′ = softmax(Wvup + b) L = − X d∈D X c∈C y(d) c · log(y′(d) c ) However, HCSC has a nice architecture which can be used to improve the training. It contains seven pooled vectors V = {vd u, vd p, vs u, vs p, vu, vp, vup} that are essentially in the same vector space. This is because these vectors are created using weighted sums of either the encoded word vectors through attention or the parent pooled vectors through the selective gates. Therefore, we can train separate classifiers for each pooled vectors using the same parameters W and b. Specifically, for each v ∈V, we calculate the loss Lv using the above formulas. The final loss is then the sum of all the losses, i.e. L = P v∈V Lv. 4 Experiments In this section, we present our experiments and the corresponding results. We use the models described in Section 2 as baseline models: JMARS (Diao et al., 2014), UPNN (Tang et al., 2015), TLFM+PRC (Song et al., 2017), UPDMN (Dou, 2017), TUPCNN (Chen et al., 2016b), and NSC (Chen et al., 2016a), where NSC is the model with state-of-the-art results. 4.1 Experimental settings Implementation We set the size of the word, user, and product vectors to 300 dimensions. We use pre-trained GloVe embeddings2 (Pennington et al., 2014) to initialize our word vectors. We simply set the parameters for both BiLSTMs and CNN to produce an output with 300 dimensions: For the BiLSTMs, we set the state sizes of the LSTMs to 75 dimensions, for a total of 150 dimensions. For CNN, we set h = 3, 5, 7, each with 50 2https://nlp.stanford.edu/projects/ glove/ 2540 Datasets Classes Train Dev Test #docs #users #prods #docs #users #prods #docs #users #prods IMDB 10 67426 1310 1635 8381 1310 1574 9112 1310 1578 Yelp 2013 5 62522 1631 1633 7773 1631 1559 8671 1631 1577 Datasets Classes Sparse20 Sparse50 Sparse80 #docs #users #prods #docs #users #prods #docs #users #prods IMDB 10 44261 1042 1323 17963 659 840 2450 250 312 Yelp 2013 5 38687 1301 1288 16058 818 823 2406 352 304 Table 1: Dataset statistics feature maps, for a total of 150 dimensions. These two are concatenated to create a 300-dimension encoded word vectors. We use dropout (Srivastava et al., 2014) on all non-linear connections with a dropout rate of 0.5. We set the batch size to 32. Training is done via stochastic gradient descent over shuffled mini-batches with the Adadelta update rule (Zeiler, 2012), with l2 constraint (Hinton et al., 2012) of 3. We perform early stopping using a subset of the given development dataset. Training and experiments are all done using a NVIDIA GeForce GTX 1080 Ti graphics card. Additionally, we also implement two versions of our model where the word encoder is a subpart of HCSC, i.e. (a) the CNN-based model (CNN+CSAA) and (b) the RNN-based model (RNN+CSAA). For the CNN-based model, we use 100 feature maps for each of the filter sizes h = 3, 5, 7, for a total of 300 dimensions. For the RNN-based model, we set the state sizes of the LSTMs to 150, for a total of 300 dimensions. Datasets and evaluation We evaluate and compare our models with other competing models using two widely used sentiment classification datasets with available user and product information: IMDB and Yelp 2013. Both datasets are curated by Tang et al. 
(2015), where they are divided into train, dev, and test sets using a 8:1:1 ratio, and are tokenized and sentence-splitted using Stanford CoreNLP (Manning et al., 2014). In addition, we create three subsets of the train dataset to test the robustness of the models on sparse datasets. To create these datasets, we randomly remove all the reviews of x% of all users and products, where x = 20, 50, 80. These datasets are not only more sparse than the original datasets, but also have smaller number of users and products, introducing cold-start users and products. All datasets are summarized in Table 1. Evaluation is done using two metrics: the Accuracy which measures the overall sentiment classification performance and the RMSE which measures the diverModels IMDB Yelp 2013 Acc. RMSE Acc. RMSE JMARS 1.773∗ 0.985∗ UPNN 0.435∗ 1.602∗ 0.596∗ 0.784∗ TLFM+PRC 1.352∗ 0.716∗ UPDMN 0.465∗ 1.351∗ 0.639∗ 0.662 TUPCNN 0.488∗ 1.451∗ 0.639∗ 0.694∗ NSC 0.533 1.281∗ 0.650 0.692∗ CNN+CSAA 0.522∗ 1.256∗ 0.654 0.665 RNN+CSAA 0.527∗ 1.237∗ 0.654 0.667 HCSC 0.542 1.213 0.657 0.660 Table 2: Accuracy and RMSE values of competing models on the original non-sparse datasets. An asterisk indicates that HCSC is significantly better than the model (p < 0.01). gence between predicted and ground truth classes. We notice very minimal differences among performances of different runs. 4.2 Comparisons on original datasets We report the results on the original datasets in Table 2. On both datasets, HCSC outperforms all previous models based on both accuracy and RMSE. Based on accuracy, HCSC performs significantly better than all previous models except NSC, where it performs slightly better with 0.9% and 0.7% increase on IMDB and Yelp 2013 datasets. Based on RMSE, HCSC performs significantly better than all previous models, except when compared with UPDMN on the Yelp 2013 datasets, where it performs slightly better. We note that RMSE is a better metric because it measures how close the wrongly predicted sentiment and the ground truth sentiment are. Although NSC performs as well as HCSC based on accuracy, it performs worse based on RMSE, which means that its predictions deviate far from the original sentiment. It is also interesting to note that when CSAA is used as attentive pooling, both simple CNN and RNN models perform just as well as NSC, despite NSC being very complex and modeling the documents with compositionality (Chen et al., 2016a). This is especially true when com2541 Models Sparse20 Sparse50 Sparse80 NSC(LA) 0.469 0.428 0.309 NSC 0.497 0.408 0.292 CNN+CSAA 0.497 0.444 0.343 RNN+CSAA 0.505 0.455 0.364 HCSC 0.505 0.456 0.368 (a) IMDB Datasets Models Sparse20 Sparse50 Sparse80 NSC(LA) 0.624 0.590 0.523 NSC 0.626 0.592 0.511 CNN+CSAA 0.626 0.605 0.522 RNN+CSAA 0.633 0.603 0.527 HCSC 0.636 0.608 0.538 (b) Yelp 2013 Datasets Table 3: Accuracy values of competing models when the training data used is sparse. Bold-faced values are the best accuracies in the column, while red values are accuracies worse than NSC(LA). pared using RMSE, where both CNN+CSAA and RNN+CSAA perform significantly better (p < 0.01) than NSC. This proves that CSAA is an effective use of the user and product information for sentiment classification. 4.3 Comparisons on sparse datasets Table 3 shows the accuracy of NSC (Chen et al., 2016a) and our models CNN+CSAA, RNN+CSAA, and HCSC on the sparse datasets. As shown in the table, on all datasets with different levels of sparsity, HCSC performs the best among the competing models. 
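As an aside, the sparse training subsets compared here were derived, as described in Section 4.1, by removing all reviews of a randomly chosen x% of users and products. A minimal sketch of such a split follows; the field names are hypothetical and the authors' original script may differ.

```python
# Sketch: build a sparse subset by dropping all reviews of x% of users and products.
import random

def make_sparse_subset(reviews, x, seed=0):
    """reviews: list of dicts with at least 'user' and 'product' keys; x in [0, 1]."""
    rng = random.Random(seed)
    users = sorted({r['user'] for r in reviews})
    products = sorted({r['product'] for r in reviews})
    dropped_users = set(rng.sample(users, int(len(users) * x)))
    dropped_products = set(rng.sample(products, int(len(products) * x)))
    return [r for r in reviews
            if r['user'] not in dropped_users and r['product'] not in dropped_products]
```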
The difference between the accuracy of HCSC and NSC increases as the level of sparsity intensifies: While the HCSC only gains 0.8% and 1.0% over NSC on the less sparse Sparse20 IMDB and Yelp 2013 datasets, it improves over NSC significantly with 7.6% and 2.7% increase on the more sparse Sparse80 IMDB and Yelp 2013 datasets, respectively. We also run our experiments using NSC without user and product information, i.e. NSC(LA) which reduces the model into a hierarchical LSTM model (Yang et al., 2016). Results show that although the use of user and product information in NSC improves the model on less sparse datasets (as also shown in the original paper (Chen et al., 2016a)), it decreases the performance of the model on more sparse datasets: It performs 2.0%, 1.7%, and 1.2% worse than NSC(LA) on Sparse50 IMDB, Sparse80 IMDB, and Sparse80 Yelp 2013 datasets. We argue that this is because NSC does not consider the existence of cold-start problems, which makes the additional user and product in 0.35 0.4 0.45 0.5 0.55 0 20 40 60 80 100 Accuracy Review frequency Accuracy per user review frequency on IMDB NSC HCSC 0.56 0.58 0.6 0.62 0.64 0.66 0.68 0.7 0 20 40 60 80 100 Accuracy Review frequency Accuracy per product review frequency on Yelp 2013 NSC HCSC Figure 3: Accuracy per user/product review frequency on both datasets. The review frequency value f represents the frequencies in the range [f, f + 10), except when f = 100, which represents the frequencies in the range [f, ∞). formation more noisy than helpful. 5 Analysis In this section, we show further interesting analyses of the properties of HCSC. We use the Sparse50 datasets and the corresponding results of several models as the experimental data. Performance per review frequency We investigate the performance of the model over users/products with different number of reviews. Figure 3 shows plots of accuracy of both NSC and HCSC over (a) different user review frequency on IMDB dataset and (b) different product review frequency on Yelp 2013 dataset. On both plots, we observe that when the review frequency is small, the performance gain of HCSC over NSC is very large. However, as the review frequency becomes larger, the performance gain of HCSC over NSC decreases to a very marginal increase. This means that HCSC finds its improvements over NSC from cold-start users and products, in which NSC does not consider explicitly. How few is cold-start? One intriguing question is when do we say that a user/product is coldstart or not. Obviously, users/products with no previous reviews at all should be considered coldstart, however the cut-off point between cold-start and non-cold-start entities is vague. Although we 2542 Example 1 Text: four words, my friends… fresh. baked. soft. pretzels. freq(user): 0 (cold start) freq(product): 13 (cold start) four words , my friends ... fresh . baked . soft . pretzels . user distinct user shared product distinct product shared 𝒈𝒖= 𝟎. 𝟎𝟎 𝟏−𝒈𝒖= 𝟏. 𝟎𝟎 𝒈𝒑= 𝟎. 𝟒𝟗 𝟏−𝒈𝒑= 𝟎. 𝟓𝟏 Example 2 Text: delicios new york style thin crust pizza with simple topping combinations as it should. ... we enjoyed the dining atmosphere but the waitress we had rushed us to leave . freq(user): 65 freq(product): 117 delicios new york style thin crust pizza with simple topping combinations as it should 𝒈𝒖= 𝟎. 𝟗𝟔 𝟏−𝒈𝒖= 𝟎. 𝟎𝟒 𝒈𝒑= 𝟏. 𝟎𝟎 𝟏−𝒈𝒑= 𝟎. 
𝟎𝟎 user distinct user shared product distinct product shared we enjoyed the dining atmosphere but the waitress we had rushed us to leave user distinct user shared product distinct product shared Figure 4: Visualization of attention and gate values of two examples from the Yelp 2013 dataset. Example 2 is truncated, leaving only the important parts. Gate values g’s are the average of the values in the original gate vector. 0 0.2 0.4 0.6 0.8 1 0 10 20 30 40 50 60 Gate weight Review frequency Yelp User Yelp Product IMDB User IMDB Product Figure 5: Graph of the user/product-specific Weibull cumulative distribution on both datasets. cannot provide an exact answer to this question, HCSC is able to provide a nice visualization by reducing the shape and scale vectors, k and λ, of the frequency-guided selective gate into their averages and draw a Weibull cumulative distribution graph, as shown in Figure 5. The figure provides us these observations: First, users have a more lenient coldstart cut-off point compared to products; in the IMDB dataset, a user only needs approximately at least five reviews to use at least 80% of its own information (i.e. distinct vector). On the other hand, products tend to need more reviews to be considered sufficient and not cold start; in the IMDB dataset, a product needs approximately 40 reviews to use at least 80% of its own information. This explains the marginal increase in performance of previous models when only product information is used as additional context, as reported by previous papers (Tang et al., 2015; Chen et al., 2016a). On the different pooled vectors We visualize the attention and gate values of two example results from HCSC in Figure 4 to investigate on how Models IMDB Yelp 2013 NSC 7331 6569 CNN+CSAA 256 (28.6x) 146 (45.0x) RNN+CSAA 968 (7.6x) 561 (11.7x) HCSC 1110 (6.6x) 615 (10.7x) Table 4: Time (in seconds) to process the first 100 batches of competing models for each dataset. The numbers in the parenthesis are the speedup of time when compared to NSC. user/product vectors, and distinct/shared vectors work. In the first example, both user and product are cold-start. The user distinct vector focuses its attention to wrong words, since it is not able to use any useful information from the user at all. In this case, HCSC uses the user shared vector by using a gate vector gu = 0. The user shared vector correctly attends to important words such as fresh, baked, soft, and pretzels. In the second example, both user and product are not cold-start. In this case, the distinct vectors are used almost entirely by setting the gates close to 1. Still, the corresponding shared vectors are similar to the distinct vectors, proving that HCSC is able to create useful user/product-specific context from similar users/products. Finally, we look at the differing attention values of users and products. We observe that user vectors focus on words that describe the product or express their emotions (e.g. fresh and enjoyed). On the other hand, product vectors focus more on words pertaining to the products/services (e.g. pretzels and waitress). On the time complexity of models Finally, we report the time in seconds to run 100 batches of data of the models NSC, CNN+CSAA, 2543 RNN+CSAA, and HCSC in Figure 4. NSC takes too long to train, needing at least 6500 seconds to process 100 batches of data. This is because it uses two non-parallelizable LSTMs on top of each other. Our models, on the other hand, only use one (or none in the case of CNN+CSAA) level of BiLSTM. 
This results to at least 6.6x speedup on the IMDB datasets, and at least 10.7x speedup on the Yelp 2013 datasets. This means that HCSC does not sacrifice a lot of time complexity to obtain better results. 6 Conclusion We propose Hybrid Contextualized Sentiment Classifier (HCSC) with a fast word encoder which contextualizes words to contain both short and long range word dependency features, and an attention mechanism called Cold-start Aware Attention (CSAA) which considers the existence of the cold-start problem among users and products by using a shared vector and a frequency-guided selective gate, in addition to the original distinct vector. Our experimental results show that our model performs significantly better than previous models. These improvements increase when the level of sparsity in data increases, which confirm that HCSC is able to deal with the cold-start problem. Acknowledgements This work was supported by Microsoft Research Asia and the ICT R&D program of MSIT/IITP. [2017-0-01778, Development of Explainable Human-level Deep Machine Learning Inference Framework] References Reinald Kim Amplayo and Seung-won Hwang. 2017. Aspect sentiment model for micro reviews. In 2017 IEEE International Conference on Data Mining (ICDM). IEEE, pages 727–732. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. Huimin Chen, Maosong Sun, Cunchao Tu, Yankai Lin, and Zhiyuan Liu. 2016a. Neural sentiment classification with user and product attention. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. pages 1650–1659. Tao Chen, Ruifeng Xu, Yulan He, Yunqing Xia, and Xuan Wang. 2016b. Learning user and product distributed representations using a sequence model for sentiment analysis. IEEE Computational Intelligence Magazine 11(3):34–44. Qiming Diao, Minghui Qiu, Chao-Yuan Wu, Alexander J Smola, Jing Jiang, and Chong Wang. 2014. Jointly modeling aspects, ratings and sentiments for movie recommendation (jmars). In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pages 193–202. Zi-Yi Dou. 2017. Capturing user and product information for document level sentiment analysis with deep memory network. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 521–526. Tomohiro Fukuhara, Hiroshi Nakagawa, and Toyoaki Nishida. 2007. Understanding sentiment of people from news articles: Temporal sentiment analysis of social events. In ICWSM. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th international conference on machine learning (ICML-11). pages 513–520. Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR abs/1207.0580. Yohan Jo and Alice H Oh. 2011. Aspect and sentiment unification model for online review analysis. In Proceedings of the fourth ACM international conference on Web search and data mining. ACM, pages 815– 824. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). Chenghua Lin and Yulan He. 2009. Joint sentiment/topic model for sentiment analysis. 
In Proceedings of the 18th ACM conference on Information and knowledge management. ACM, pages 375– 384. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations. pages 55–60. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems. pages 6297–6308. 2544 Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10. Association for Computational Linguistics, pages 79–86. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). pages 1532–1543. Kaisong Song, Wei Gao, Shi Feng, Daling Wang, KamFai Wong, and Chengqi Zhang. 2017. Recommendation vs sentiment analysis: a text-driven latent factor model for rating prediction with cold-start awareness. In Proceedings of the 26th International Joint Conference on Artificial Intelligence. AAAI Press, pages 2744–2750. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1):1929–1958. Duyu Tang, Bing Qin, and Ting Liu. 2015. Learning semantic representations of users and products for document level sentiment classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). volume 1, pages 1014–1023. Duyu Tang, Furu Wei, Bing Qin, Ting Liu, and Ming Zhou. 2014. Coooolll: A deep learning system for twitter sentiment classification. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). pages 208–212. Henning Wachsmuth, Johannes Kiesel, and Benno Stein. 2015. Sentiment flow-a general model of web review argumentation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 601–611. Min Yang, Jincheng Mei, Heng Ji, Zhou Zhao, Xiaojun Chen, et al. 2017. Identifying and tracking sentiments and topics from social media texts during natural disasters. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 527–533. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 1480–1489. Matthew D. Zeiler. 2012. Adadelta: An adaptive learning rate method. CoRR abs/1212.5701. Yongfeng Zhang. 2015. Incorporating phrase-level sentiment analysis on textual reviews for personalized recommendation. In Proceedings of the eighth ACM international conference on web search and data mining. ACM, pages 435–440. Peng Zhou, Zhenyu Qi, Suncong Zheng, Jiaming Xu, Hongyun Bao, and Bo Xu. 2016. Text classification improved by integrating bidirectional lstm with twodimensional max pooling. 
2018
236
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2545–2555 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2545 Modeling Deliberative Argumentation Strategies on Wikipedia Khalid Al-Khatib† Henning Wachsmuth‡ Kevin Lang† Jakob Herpel† Matthias Hagen§ Benno Stein† † Bauhaus-Universität Weimar Webis Group, Faculty of Media <firstname>.<lastname>@uni-weimar.de ‡ Paderborn University Computational Social Science Group [email protected] § Halle University [email protected] Abstract This paper studies how the argumentation strategies of participants in deliberative discussions can be supported computationally. Our ultimate goal is to predict the best next deliberative move of each participant. In this paper, we present a model for deliberative discussions and we illustrate its operationalization. Previous models have been built manually based on a small set of discussions, resulting in a level of abstraction that is not suitable for move recommendation. In contrast, we derive our model statistically from several types of metadata that can be used for move description. Applied to six million discussions from Wikipedia talk pages, our approach results in a model with 13 categories along three dimensions: discourse acts, argumentative relations, and frames. On this basis, we automatically generate a corpus with about 200,000 turns, labeled for the 13 categories. We then operationalize the model with three supervised classifiers and provide evidence that the proposed categories can be predicted. 1 Introduction Deliberation is the type of discussions where the aim is to find the best choice from a set of possible actions (Walton, 2010). This type is influential for making decisions in different processes including collaborative writing. Studies have shown the positive impact of deliberation on the quality of several document types, such as scientific papers, research proposals, political reports, and Wikipedia articles, among others (Kraut et al., 2012). However, deliberative discussions may fail, either by agreeing on the wrong action, or by reaching no agreement. While the former is hard to measure, the latter is, for example, clearly reflected in the number of disputed discussions on Wikipedia (Wang and Cardie, 2014). Although agreement can never be guaranteed, a deliberative argumentation strategy of a discussion’s participants makes it more likely (Kittur et al., 2007). With strategy, we here mean the sequence of moves that participants take during the discussion. Such a sequence is effective if it leads to a successful discussion. To achieve effectiveness, every participant has to understand the current state of a discussion and to come up with a next deliberative move that best serves the discussion. For newcomers, this requires substantial effort and time, especially when a discussion grows due to conflicts and back-and-forth arguments. Here, automated tools can help by annotating ongoing discussions with a label for each move or by providing a textual summary of past moves (Zhang et al., 2017a,b). A way to go beyond that is to let the tool recommend the best possible moves according to an effective strategy. This is the ultimate goal of our research. 
As a substantial step towards this goal, two fundamental research questions are addressed in the paper at hand: (1) How to model deliberative discussions in light of the aim of agreement, and (2) how to operationalize the model in order to identify different argumentation strategies and to learn about their effectiveness. Different models of deliberative discussions have been proposed in previous studies. These models were developed based on expert analyses of a small set of sampled discussions (see Section 2). However, the small size, in fact, confines the ability to develop a representative model, which should ideally cover a wide range of moves while being abstract to fit the majority of discussions. To overcome this limitation, we propose to derive a model statistically from a large set of discussions. We approach this based on different types of 2546 metadata that people use to describe their moves on Wikipedia talk pages, the richest source of deliberative discussions on the web. Particularly, we extract the entire set of about six million discussions from all English Wikipedia talk pages. We parse each discussion to identify its structural components, such as turns, users, and time stamps. Besides, we store four types of metadata from the turns: the user tag, a shortcut, an in-line template, and links. To learn from the metadata, we cluster the types’ instances based on their semantic similarity. Then, we map each cluster to a specific concept (e.g., ‘providing a source’), and the related concepts into a set of categories (e.g., ‘providing evidence’). Table 2 shows the categories of our model. Analyzing the distribution of these categories, we find that each turn ideally should have (1) one of six categories that we call discourse acts, (2) one of three categories that we call argumentative relations, and (3) one of four categories that we call frames. As such, our model is in line with three well-established theories: speech act theory (Searle, 1969), argumentation theory (Peldszus and Stede, 2013), and framing theory (P. Levin et al., 1998). A model instance is sketched in Figure 1. Based on the model, we generate a new largescale corpus using the metadata automatically: Webis-WikiDebate-18 corpus. Basically, if a turn in a discussion has metadata that belongs to a specific category according to the above-mentioned analysis, it is labeled with that category. The corpus includes 2400 turns labeled with a discourse act, 7437 turns labeled with a relation, and 182,321 turns labeled with a frame. To operationalize our model, we train three supervised classifiers for acts, relations, and frames on the corpus. The classifiers employ a rich set of linguistic features that has been shown to be effective in similar tasks (Ferschke et al., 2012). The results of our experiments suggest that we are able to predict the labels with a comparable performance to the one achieved in similar tasks. Overall, the contribution of this paper is threefold: (1) A data-driven approach for creating a new model of deliberative discussions that is aligned with well-established theories, (2) a corpus with about 200,000 turns labeled for 13 different categories, and (3) a classification approach that predicts the labels of turns. All developed resources are freely available at https://www.webis. de/data/data.html. 2 Related Work Modeling deliberative discussions in Wikipedia has been already addressed in different studies. 
The central goal of these studies is to minimize the coordination effort among discussion participants. In particular, Ferschke et al. (2012) have proposed a model of 17 dialogue acts, each belonging to one of four categories: article criticism, explicit performative, information content, and interpersonal. The model was derived by performing a manual analysis of 30 talk pages in the Simple English Wikipedia. Based on the model, a new corpus of 1367 turns has been created and used to train and evaluate a multi-label classifier for predicting the model’s acts. Another model is the one proposed by Viegas et al. (2007). The model consists of 11 different dialogue acts. These acts have been used to manually label 25 talk pages from the English Wikipedia. Furthermore, Bender et al. (2011) have developed a model for authority claims and alignment moves in Wikipedia discussions. The model then has been used to label 47 talk pages. Rooted in the limitation of being derived from a small sample, these models obtain low coverage and/or are over-abstracted. This is indicated by labels such as ‘other’ (Viegas et al., 2007) or by a very abstract ‘information providing’ act (Ferschke et al., 2012), which covers 78% of the turns. We argue that recommending such moves for new participants will not be useful. On the other hand, the model of Ferschke et al. (2012) does not include anything similar to ‘propose alternative action’, for example, although such a concept was shown to be important in deliberative dialogues (Walton, 2010). Moreover, no existing model distinguishes the three dimensions of turns: act, relation, and frame. They either consider only one dimension or mix an act with a relation, such as in the label: ‘criticizing unsuitable or unnecessary content’ (Ferschke et al., 2012). This is a problem for predicting the next best deliberative move. For example, consider a discussion about adding new content to an article, where the participants support the action with different acts (e.g., ‘providing evidence’), but all of them consider the ‘writing quality’ frame. A new turn attacks the action by providing evidence that the action would violate the ‘neutral point of view’. The best next move should actually consider this frame, since no content that violates ‘neutral point of view’ policy should be added, regardless of its adherence to the ‘writing quality’. 2547 Enhancing the understanding Attack Writing quality Providing evidence Attack Verifiability and factual accuracy Socializing Neutral Dialogue management Asking a question Neutral Verifiability and factual accuracy Enhancing the understanding Neutral Verifiability and factual accuracy Recommending an act Support Writing quality Frame Relation Act Figure 1: Left: An excerpt of a discussion in a Wikipedia talk page. Right: The labels of each turn in the discussion according to our proposed model. In contrast, our approach of deriving the model using thousands of different ‘descriptions’ of moves written by the numerous Wikipedia users is, in our view, more likely to give a representative picture of how people argue in deliberative discussions. This, in turn, leads not only to high coverage, but also to better abstraction. Our model is in line with three well-known theories, which we summarize in the next paragraph. Speech act is a widely accepted theory in pragmatics (Searle, 1969). 
Based on this theory, many research papers have been proposed for modeling different domains, such as one-on-one live chat (Kim et al., 2010), persuasiveness in blogs (Anand et al., 2011), twitter conversations (Zarisheva and Scheffler, 2015), and online dialogues (Khanpour et al., 2016). In the context of argumentation theory (Peldszus and Stede, 2013), agreement detection is a related direction of work which has been studied in discussions (Rosenthal and McKeown, 2015). Notably, Andreas et al. (2012) annotated 822 turns from 50 talk pages with three labels: ‘agreement’, ‘disagreement’, and ‘non’. Anyhow, over the last few years, argumentation mining became a hot topic in our community, where several studies have went beyond the agreement detection to investigate the identification of the ‘support’ and ‘attack’ relations in argumentation discourses (Peldszus and Stede, 2013). Finally, framing is one of the important theories in discourse analysis (Entman, 1993). This theory has been studied widely in different domains, such as news article (Naderi and Hirst, 2017) and political debates (Tsur et al., 2015). These three theories back up the essence of our proposed model. We found that a participant in a discussion writes her text considering a specific act, an argumentative relation, and a frame. The metadata in Wikipedia have been used for different tasks. The ‘infobox’ has been exploited in the tasks of question answering (Morales et al., 2016) and summarization (Ye et al., 2009), among others. Moreover, Wang and Cardie (2014) have used specific discussion templates to identify discussions that are disputed. Besides Wikipedia, metadata such as ‘point for’, ‘point against’, and ‘introduction’ have been used successfully for modeling argumentativeness in debate platforms (AlKhatib et al., 2016a). Also, The metadata for user interactions, such as the ‘delta indicator’ and users votes in Reddit ChangeMyView discussions have been used to model the persuasiveness of a text (Tan et al., 2016). 2548 We started the investgation of strategies for writing argumentative texts in previous work. In (AlKhatib et al., 2016b), we have presented a corpus for argumentation strategies in news editorials. We then used this corpus and other data in (Al-Khatib et al., 2017) to identify patterns of strategies across different general topics. In contrast to those two studies targeting monological texts, here we address argumentation strategies in dialogical texts. 3 Modeling Deliberative Discussions The web is full of platforms where users can share and discuss opinions, beliefs, and ideas. In case of deliberative discussions, in particular, participants try to find the best action from several choices. Apparently, the participants there follow a strategy to achieve an effective discussion, i.e., each participant tries to come with the best deliberative move that leads to achieve the goal of discussion. The numerous deliberative discussions on these platforms do not only include user-written text, but also different types of metadata that users add to benefit the coordination between them. For example, users vote for specific posts, summarize texts, include references to the sources they use, refer to the discussion policies of a platform, or report bad behavior of others. Overall, the available metadata represents a valuable resource that provides insights into three main aspects of a discussion: The functions of users’ moves, the users’ roles, and the discussion topics along with their flows. 
We propose to exploit the metadata for modeling argumentation strategies in deliberative discussions. To this end, we proceed in four general steps: (1) metadata inspection, which includes investigating the used metadata and its functions, (2) concept origination, where clusters of similar metadata are created and mapped to corresponding concepts, (3) concept categorization, where similar concepts are abstracted into a defined set of categories, and (4) category composition, where possible overlaps between categories should be identified. The idea of this approach is not only to model the strategies, but also to allow for an operationalization of the resulting model by providing a dataset for training classifiers. In particular, the metadata can also be used to label discussions based on distant supervision (Mintz et al., 2009). In the following, we describe how we implement our approach to derive a new model of Wikipedia discussions, using the metadata provided by the participants. 3.1 Discussion Parsing As part of the management policies of Wikipedia, each article has an associated page called ‘Talk’. The main purpose of the talk page is to allow users to discuss how to improve the article through specific actions that they agree on. Most of these discussions can be seen as deliberative, since all participants share the same goal: finding the best action to improve the article. When a user has a proposal on how to improve an article, she can open a discussion on the article’s talk page, specifying a title and the main topic of discussion. Usually, the topic denotes a suggestion to perform a specific action, such as adding, merging, or deleting certain content of the article, among others. Ideally, multiple users then participate in the discussion about whether the action would improve the article or not. Each single comment written by a user at a specific time is called a ‘turn’. A turn may reply directly to the main topic of the discussion or to any other turn. Overall, a discussion consists of the title, the main topic, and a number of turns written by users with attached time stamps (see Figure 1). Based on a manual inspection of the turns’ texts of 50 discussions, we found four general types of metadata used by the participants: user tags, shortcuts, inline-templates, and external links. To derive a model from Wikipedia, we need to extract and parse the whole set of discussions on all talk pages, including both ongoing and closed ones. This is all but trivial, particularly due to the fact that the creation of a discussion is solely done by the users; although Wikipedia describes the required format of the different parts of a discussion in detail, not all users follow the format, often forgetting required symbols or mistakenly confusing a symbol with another one. In the implementation of our approach, we built upon the English Wikipedia dump created on March 1st, 2017. Given a Wikipedia dump, we parse it in the following steps: Extraction of Talk Pages First, we obtain the talk pages. We use the Java Wikipedia Library (JWPL) from Zesch et al. (2008), which converts a Wikipedia dump into a database that provides an easy-to-use access to the dump components. Extraction of Discussions Next, we extract the discussions from the talk pages. To this end, we develop several regular expressions that capture the format for starting and ending a discussion. 
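The paper does not publish these regular expressions; as a rough illustration only, the Python sketch below shows how discussion boundaries, user signatures, and reply indentation could be matched on wikitext talk pages. The patterns and helper names are our own assumptions, not the authors' implementation, and they ignore many of the formatting irregularities discussed above.

import re

# Discussions on a talk page typically start with a wikitext section heading.
DISCUSSION_START = re.compile(r"^==\s*(?P<title>[^=].*?)\s*==\s*$", re.MULTILINE)

# Registered users usually end a turn with a signature containing a user link
# and a UTC time stamp, e.g. "[[User:Jane|Jane]] 12:34, 5 March 2016 (UTC)".
SIGNATURE = re.compile(
    r"\[\[User(?:[ _]talk)?:(?P<user>[^|\]]+)[^\]]*\]\].*?"
    r"(?P<time>\d{2}:\d{2}, \d{1,2} \w+ \d{4}) \(UTC\)"
)

def split_discussions(talk_page_wikitext):
    """Split a talk page into (title, body) pairs, one per discussion."""
    headings = list(DISCUSSION_START.finditer(talk_page_wikitext))
    discussions = []
    for i, heading in enumerate(headings):
        end = headings[i + 1].start() if i + 1 < len(headings) else len(talk_page_wikitext)
        discussions.append((heading.group("title"), talk_page_wikitext[heading.end():end]))
    return discussions

def reply_depth(turn_line):
    """Reply depth is signalled by leading colons (indentation) in wikitext."""
    return len(turn_line) - len(turn_line.lstrip(":"))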
2549 Corpus Component Instances Page 5 807 046 Discussion 5 941 534 Discussion template 144 824 Turn 20 816 860 Registered users 739 244 Turns by registered users 10 926 670 Turns by anonymous user 9 890 190 Tag 99 889 Shortcut 425 583 Inline template 3 382 443 Links 4 824 085 Turns with tag and shortcut 2 347 Turns with tag and inline template 61 521 Turns with shortcut and inline template 170 065 Table 1: Instance counts of the different components of the Webis-WikiDiscussions-18 corpus. Identification of Structure Given the discussion, we identify their structure. We created a specific template to mine the title. The topic of the discussion is simply given by the first turn. To identify and correctly segment all users’ turns, we use several indicators, for instance, indentations. Identification of Turn Metadata Finally, we identify the metadata of each turn. We analyzed how users include the tags in their turns, finding that they usually start a turn with a user tag in triple quotation marks. A shortcut starts with ‘WP:’, followed by a name for the shortcut, together encapsulated by brackets. Also templates are placed between double parentheses, but they do not start with ‘WP:’. Links are simply identified by either of the affixes ‘www.’ and ‘http:’. 3.2 The Webis-WikiDiscussions-18 Corpus The result of the parsing process is a large-scale corpus of Wikipedia discussions. In particular, the Webis-WikiDiscussions-18 corpus we created contains about six million discussions, consisting of about 20 million turns. The turns comprise around 74,000 different tags with a total of about 100,000 instances, around 7000 different shortcuts with about 400,000 instances, and around 51,000 different inline templates with about 3.3 million instances. Half of the turns are written by registered users. Table 1 lists the exact instance counts. 3.3 Model Derivation We now explain how we derive a model of deliberative discussions from the metadata obtained in the previous subsection. The derivation process includes the four steps outlined in the beginning of this section. Metadata Inspection As mentioned before, a turn on Wikipedia includes up to four types of metadata: user tag, shortcut, inline template, and external link. Each type has a specific definition, a suggested usage, and properties that we discuss in the following paragraphs. A user tag is a short text that a discussion participant uses to describe or summarize her contribution. Most tags indicate the main function of the contribution, such as ‘proposal’ and ‘question’. Users can define any free-text tag they want using a noun, verb, etc. Analyzing the tags in the crawled discussions, we found the most frequent tags to be rather general and meaningful, whereas less frequent tags often capture aspects of the topic of discussion, such as ‘Israel-Venezuela relations’ in the discussion about ‘Foreign relations of Israel’. Sometimes, tags are used to get the attention of specific users, such as ‘For who reverted my change’. Unfortunately, many users also misuse tags, for example, by including the whole turn’s text there or by encoding meaningless information. A shortcut is an abbreviation text link that redirects the user to some page on Wikipedia. Although shortcuts may link to any Wikipedia page, they are often used to link to rules or policies. The respective pages belong to one of five categories: (1) Behavioral guidelines: Pages that describe how users should interact with each other (e.g., during a discussion). 
This includes that users should be “good-faith” (WP:AGF), among others. (2) Content guidelines: Pages that describe how to identify and include information in the articles, such as those about how an article should have reliable and accepted sources (WP:RELIABLE). (3) Style guidelines: Pages that contain advice on writing style, formatting, grammar, and similar. This includes how to write the introduction (WP:LEAD) and headings (WP:HEADINGS), and what style to use for the content (WP:MOS). (4) Notability guidelines: Pages that illustrate the conditions of testing whether a given topic warrants its own article. The most common shortcut in this category is (WP:N). (5) Editing guidelines: Pages that provide information on the metadata of articles, such as the articles’ categories (WP:CAT). Overall, we found that shortcuts are used particularly frequently for style, content, and behavioral 2550 guidelines in Wikipedia discussions. The participants mainly use them to discuss the impact of applying an action that has been proposed to be performed on a Wikipedia article. For example, adding a lot of content to the introduction of an article may violate the style guidelines. A user can indicate this by referring to the style rules using the shortcut (WP:LEAD). An inline template is a Wikipedia page that has been created to be included in other pages. Inline templates usually comprise specific patterns that are used in many articles, such as standard warnings or boilerplate messages. For example, there are templates for including a quotation, citation, or code, among others. Templates are used frequently in Wikipedia discussions, with the objective of writing readable and well structured turns. An external link, finally, points to a web page outside Wikipedia. External links occur both in Wikipedia articles and in Wikipedia discussions. While there are some restrictions for using them in articles, they can be used without restriction in discussions. We found that these links are used in Wikipedia discussions to point to evidence on the linked web pages. In particular, they often link to research, news, search engines, educational institutions, and blogs. Concept Origination We analyzed the usage of the four types of metadata in Wikipedia discussions and identified a set of concepts. Each concept primarily describes the turn that a participant writes: User tags: We explored all 376 tags that occurred at least 35 times. As discussed before, the tags could be seen as a keywords that describe the turns. Often, different tags refer to the same concept, for example, ‘conclusion’, ‘summary’, and ‘overall’ all capture the concept of ‘summarization’, i.e., the main function of the respective turns is to summarize the discussion. As a result, we identified 32 clusters. We examined some turns belonging to each cluster, and mapped each cluster to a specific concept that describes it. Shortcuts: Analogously, we explored all 99 shortcuts that occurred at least 900 times. Since the shortcuts themselves do not describe the turn, but rather the policy pages they refer to, we analyzed these pages by reading their first paragraphs and by checking their relation to the pages of the five shortcut categories we discussed before (e.g., ‘behavioral’). This resulted in the identification of 12 concepts. We found that each shortcut concept describes the main quality aspect that a turn addresses. For example, ‘writing content’ specifies how a proposed action influences the quality of the writing of the associated article. 
Inline-templates: Our investigation of this type led only to concepts that we already found before for the tags and shortcuts, such as ‘stating a fact’. External links: Similar to the templates, we identified concepts in the links that we also observed in the tags, such as ‘providing source’. Concept Categorization The concepts that we identified in the user tags can be grouped into six categories that we see as ‘discourse acts’: 1. Socializing: All concepts related to social interaction, such as thanking, apologizing, or welcoming other users. 2. Providing evidence: All concepts concerning the provision of evidence. Evidence may be given in form of a quote, an example, a fact, references, a source, and similar. 3. Enhancing the understanding: All concepts related to helping users understand the topic of discussion or a discussion itself. This can be done by giving background information, by clarifying misunderstandings, or by summarizing the discussion, among others. 4. Recommending an act: All concepts proposing to add a new aspect to the discussion, to ask more users to participate in the discussion, or to come up with an alternative to the proposed action. 5. Asking a question: All concepts related to questions serving different purposes, such as obtaining information on the topic of discussion, requesting reasons of specific decisions, and similar. 6. Finalizing the discussion: All concepts related to the decision of a discussion, including reporting the decision, committing it, or closing the discussion to move it to the archive. In addition, we identified three further categories based on the user tags, which we see as relevant to ‘argumentation theory’. Each represents a relation between the turn and the topic of discussion or between the turn and another turn: 1. Support relation: The turn agrees with or supports another turn or the topic of discussion, 2551 for instance, by providing an argument in favor of the one in the ‘supported’ turn. 2. Attack relation: The opposite of the ‘support relation’, i.e., the turn disagrees or attacks another turn or the topic of discussion. 3. Neutral relation: The turn has a neutral relation to another turn or the topic of discussion when it neither support nor attack it. Finally, we identified four categories based on the shortcuts that we see as relevant to ‘framing theory’. They target a quality dimension of the article or of the discussion itself: 1. Writing quality: Turns that mainly address issues related to the quality of writing of an article, such as whether adding new content complies with the style guidelines for lead sections, the layout, or similar. 2. Verifiability and factual accuracy: Turns that address issues related to the quality of references, the reliability of sources, copyright violations, plagiarism, and similar. 3. Neutral point of view: Turns that focus on a fair representation of viewpoints and on how to avoid bias. 4. Dialogue management: Turns that concentrate on issues related to managing the discussion, such as reporting abusive language, preserving respect between users, encouraging newcomer participants, and similar. Category Composition Given these categories, we investigated the interaction between them in 20 discussions, for instance, to see whether the categories are orthogonal. We found that each turn may have one discourse act, one relation, and one frame at the same time. 
For example, a turn may support another turn by providing evidence (say, of the type ‘source’), while focusing on the writing quality frame. Table 2 shows the categories of our model and their concepts. 4 Model Operationalization In this section, we present the operationalization process of our proposed model for deliberative argumentation strategies. First, we explain the construction of Webis-WikiDebate-18: a large-scale corpus for our model that we generated automatically based on the metadata in discussions. Then, we discuss the development and evaluation of a classification approach which we use for predicting the model’s categories. 4.1 The Webis-WikiDebate-18 Corpus To create a corpus for our model, we decided to rely again on the metadata. In particular, for each category in our model, we retrieved the metadata instances that had been used to derive the category, and then labeled any turn that included any metadata with this category. For example, the user tag ‘overall’ was used to originate the concept ‘summarization’, which was abstracted into the category ‘enhancing the understanding’. Accordingly, all the turns that included this tag were labeled with the category ‘enhancing the understanding’. This process is in line with the distant supervision paradigm. In case a turn contained metadata belonging to two categories, we excluded it from the corpus. This happened with some shortcuts in particular. Basically, such cases indicate that some turns address more than one frame. Overall, the corpus comprises 2400 turns labeled with one of the six discourse act categories, 7437 turns with one of the relation categories, and 182,321 turns with one of the frame categories. In order to verify the reliability of the corpus, we randomly sampled about 100 turns from each category, ensuring that all the category’s concepts are taken into consideration. The turns in the samples were verified (i.e., whether they belong to the assigned category) by a worker hired from the freelancing platform upwork.com. The worker was a native speaker of English with deep expertise in writing. Table 3 shows statistics of the corpus, including the percentage of turns in each sample that belong to the assigned category according to the expert. In general, this verification result is comparable to the inter-annotator agreement achieved in some related studies (Ferschke et al., 2012). 4.2 Classification Approach Based on the Webis-WikiDebate-18 corpus, we develop three supervised classifiers: one for the discourse acts, one for the relations, and one for the frames. Since this paper does not aim at proposing a novel approach for the classification tasks, but rather at showing the ability to operationalize the model, we follow existing work that has proposed methods for the tasks at hand. Particularly, we implement a rich set of features that have been used by others before. These features capture lexical, semantic, style, and pragmatic properties of turns. 
2552 Dimension Category Concepts Discourse act Socializing (1) Thank a user, (2) Apologize from a user, (3) Welcome a user, (4) Express anger Providing evidence (1) Provide a quote, (2) Reference, (3) Source, (4) Give an example, (5) State a fact, (6) Explain a rational Enhancing the understanding (1) Provide background info, (2) Info on the history of similar discussions, (3) Introduce the topic of discussion, (4) Clarify a misunderstanding, (5) Correct previous own or other’s turn, (6) Write a discussion summary, (7) Conduct a survey on participants, (8) Request info Recommending an act (1) Propose alternative action on the article, (2) Suggest a new process of discussion, (3) Propose asking a third party Asking a question (1) Ask a general question about the topic, (2) Question a proposal or arguments in a turn Finalizing the discussion (1) Report the decision, (2) Commit the decision, (3) Close the discussion Argumentative Support (1) Agree, (2) Support relation Neutral (1) Be neutral. Attack (1) Disagree, (2) Attack, (3) Counter-attack Frame Writing quality (1) Naming articles, (2) Writing content, (3) Formatting, (4) images, (5) Layout and list Verifiability and factual accuracy (1) Reliable sources, (2) Proper citation (3) Good argument Neutral point of view (1) Neutral point of view Dialogue management (1) Be bold. (2) Be civil, (3) Don’t game the system Table 2: The concepts covered by each category of each of the three principle dimensions of our model. Dimension Category Turns Prec. Discourse act Socializing 83 0.71 Providing evidence 781 0.49 Enhancing the understanding 671 0.56 Recommending an act 137 0.82 Asking a question 106 0.71 Finalizing the discussion 622 0.71 Argumentative Support 2895 1.00 relation Neutral 1937 0.63 Attack 2605 1.00 Frame Writing quality 19893 0.51 Verifiability and factual ac. 72049 0.89 Neutral point of view 60007 0.89 Dialogue management 30372 0.74 Table 3: Number of turns in each category of WebisWikiDebate-18 corpus and the precision of sampled turns for each category according to an expert. In short, we used the following features: The frequency of word 1–3-grams, character 1–3-grams, chunk 1–3-grams, function word 1–3-grams, and of the first 1–3 tokens in a turn. The number of characters, syllables, tokens, phrases, and sentences in a turn. the frequencies of part-of-speech tag 1–3-grams. The mean SentiWordNet score of the words in a turn (http://sentiwordnet. isti.cnr.it). The frequency of each word class of the General Inquirer (http://www. wjh.harvard.edu/~inquirer). The depth level of turns in the discussion. For the relation classifier, we had additional features that consider the target of the relation (the parent turn), namely, the cosine, euclidean, manhattan, and jaccard similarity between turn and parent turn. 4.3 Experiments and Results As a preprocessing step, we cleaned the turns in the Webis-WikiDebate-18 Corpus by removing all the metadata: user tags, shortcuts, user and time stamps, etc. Then, we grouped the turns that belong to the discourse act categories in a single dataset (say, the ‘discourse act dataset’). The same was performed for the turns belonging to relations and frames. We then split each of the three datasets randomly into training (60%), development (20%), and test (20%) sets. We ensured that turns from the same discussion should appear only in either of the split sets, in order to avoid biasing the classifiers by topical information. 
We trained different machine learning models on the training sets and evaluated them on the development sets. The models included those which had been used before in similar tasks, such as naive bayes, logistic regression, support vector machine, and random forest. We tried both under and oversampling on the training sets. The best results in the three tasks were achieved by using support vector 2553 Dimension Category Prec. Rec. F1 Discourse act Socializing 0.14 0.11 0.13 Providing evidence 0.63 0.77 0.69 Enhancing the understand. 0.62 0.55 0.58 Recommending an act 0.13 0.09 0.10 Asking a question 0.80 0.19 0.31 Finalizing the discussion 0.67 0.74 0.71 Argumentative Support 0.53 0.59 0.56 relation Neutral 0.55 0.50 0.52 Attack 0.50 0.49 0.50 Frame Writing quality 0.74 0.47 0.57 Verifiability and factual ac. 0.62 0.74 0.67 Neutral point of view 0.59 0.56 0.58 Dialogue management 0.64 0.56 0.60 Table 4: The precision, recall, and F1-score of our classifiers for all categories of the three dimensions. machine without sampling the training sets. We used the support vector machine implementation from the LibLinear library (Fan et al., 2008) on the test sets and report the results in Table 4. Overall, the three classifiers achieved results that are comparable to the results of previous methods on the corresponding tasks (Ferschke et al., 2012; Zhang et al., 2017a). We obtained the best results in the frame task, followed by relations and then discourse acts. Apparently, the results correlate with the size of the datasets. In case of discourse acts, the classifier achieves low F1-scores for ‘socializing’, ‘recommending an act’, and ‘asking a question’. These categories have a significantly smaller number of turns compared to other categories, which makes identifying them harder. The effectiveness of classifying the relation and frame categories, on the other hand, appears promising given the difficulty of these tasks. We point that we considered mainly the turns’ texts in our experiments. In principle, this helps to get an idea about the effectiveness of our approach in Wikipedia as well as other registers for discussions. Nevertheless, including the metadata and structural information of the analyzed discussions is definitely worthwhile in general, and will naturally tend to lead to notably higher effectiveness. 5 Discussion and Conclusion While our approach to modeling argumentation strategies in deliberative discussions may seem Wikipedia-specific, the derivation of concepts and categories from metadata can be transferred to other online discussion platforms. We expect the general derivation steps to be the same, whereas the techniques applied within each step may differ depending on the types, frequency, and quality of metadata. For example, the consistent usage of the most common user tags in Wikipedia discussions helps originating concepts manually. In contrast, other metadata might require the use of computational methods, such as clustering, keyphrase extraction, and textual entailment. Unlike previous approaches to the modeling of discussions on Wikipedia, our model decouples the three principle dimensions of discussions: discourse acts, argumentative relations, and frames. We argue that the distinction of these dimensions is key to develop tool support for discussion participants, for example, for recommending the best possible move in an ongoing discussion. Also, our model helps analyzing the influence of user interaction and behavior on the effectiveness of discussion decisions. 
For example, some Wikipedia users focus on the frame ‘well written’ while ignoring others, which may negatively affect the accuracy of an article’s content. Also, users often attack other turns, instead of considering neutral acts such as clarifications of misunderstandings. Many categories in our model will apply to deliberative discussions in general, particularly the discourse acts and argumentative relations. While the found frames are more Wikipedia-specific, similar play a role on collaborative writing platforms. For example, when writing a scientific paper, possible frames are the ‘writing quality’ or the ‘verifiability of content and citations’. Besides the model, we created two large-scale corpora: The Webis-WikiDiscussions-18 corpus, including the entire set of Wikipedia discussions (at the time of parsing) with annotated discussion structure and metadata, and the Webis-WikiDebate-18 corpus, where turns are labeled for their discourse acts, argumentative relations, and frames. We believe that these corpora will help foster research on tasks such as argument mining, among others. Finally, we operationalized our Wikipedia discussion model in three support vector machine classifiers with tailored features. Our experiment results confirm that categories of our model can be predicted successfully. In future work, we plan to study how to distinguish effective from ineffective discussions based on our model as well as how to learn from the strategies used in successful discussions, in order to predict the best next deliberative move in an ongoing discussion. 2554 References Khalid Al-Khatib, Henning Wachsmuth, Matthias Hagen, Jonas Kohler, and Benno Stein. 2016a. Crossdomain mining of argumentative text through distant supervision. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL, pages 1395–1404. Association for Computational Linguistics. Khalid Al-Khatib, Henning Wachsmuth, Matthias Hagen, and Benno Stein. 2017. Patterns of Argumentation Strategies across Topics. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1351–1357. Association for Computational Linguistics. Khalid Al-Khatib, Henning Wachsmuth, Johannes Kiesel, Matthias Hagen, and Benno Stein. 2016b. A News Editorial Corpus for Mining Argumentation Strategies. In Proceedings of the 26th International Conference on Computational Linguistics, COLING, pages 3433–3443. Association for Computational Linguistics. P. Anand, J. King, Jordan Boyd-Graber, E. Wagner, C. Martell, Douglas Oard, and Philip Resnik. 2011. Believe Me—We Can Do This! Annotating Persuasive Acts in Blog Text. In Workshops at the TwentyFifth AAAI Conference on Artificial Intelligence. Jacob Andreas, Sara Rosenthal, and Kathleen McKeown. 2012. Annotating Agreement and Disagreement in Threaded Discussion. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, LREC, pages 818–822. Emily M. Bender, Jonathan T. Morgan, Meghan Oxley, Mark Zachry, Brian Hutchinson, Alex Marin, Bin Zhang, and Mari Ostendorf. 2011. Annotating Social Acts: Authority Claims and Alignment Moves in Wikipedia Talk Pages. In Proceedings of the Workshop on Languages in Social Media, pages 48–57. Association for Computational Linguistics. Robert M. Entman. 1993. Framing: Toward Clarification of a Fractured Paradigm. Journal of Communication, 43(4):51–58. Rong En Fan, Kai-Wei Chang, Cho-Jui Hsieh, X.-R. 
Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A Library for Large Linear Classification. JMLR. Oliver Ferschke, Iryna Gurevych, and Yevgen Chebotar. 2012. Behind the Article: Recognizing Dialog Acts in Wikipedia Talk Pages. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, EACL, pages 777–786. Association for Computational Linguistics. Hamed Khanpour, Nishitha Guntakandla, and Rodney Nielsen. 2016. Dialogue Act Classification in Domain-Independent Conversations Using a Deep Recurrent Neural Network. In Proceedings of 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, COLING, pages 2012–2021. Su Nam Kim, Lawrence Cavedon, and Timothy Baldwin. 2010. Classifying Dialogue Acts in One-onone Live Chats. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 862–871. Association for Computational Linguistics. Aniket Kittur, Bongwon Suh, Bryan A. Pendleton, and Ed H. Chi. 2007. He Says, She Says: Conflict and Coordination in Wikipedia. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI, pages 453–462. ACM. Robert E. Kraut, Paul Resnick, Sara Kiesler, Yuqing Ren, Yan Chen, Moira Burke, Niki Kittur, John Riedl, and Joseph Konstan. 2012. Building Successful Online Communities: Evidence-Based Social Design. The MIT Press. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant Supervision for Relation Extraction Without Labeled Data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing, ACL, pages 1003– 1011. Association for Computational Linguistics. Alvaro Morales, Varot Premtoon, Cordelia Avery, Sue Felshin, and Boris Katz. 2016. Learning to Answer Questions from Wikipedia Infoboxes. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1930–1935. Association for Computational Linguistics. Nona Naderi and Graeme Hirst. 2017. Classifying Frames at the Sentence Level in News Articles. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP, pages 536–542. Irwin P. Levin, Sandra L. Schneider, and Gary J. Gaeth. 1998. All Frames Are Not Created Equal: A Typology and Critical Analysis of Framing Effects. Organizational Behavior and Human Decision Processes, 76(2):149 – 188. Andreas Peldszus and Manfred Stede. 2013. From Argument Diagrams to Argumentation Mining in Texts: A Survey. Int. J. Cogn. Inform. Nat. Intell., 7(1):1–31. Sara Rosenthal and Kathy McKeown. 2015. I Couldn’t Agree More: The Role of Conversational Structure in Agreement and Disagreement Detection in Online Discussions. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL, pages 168–177. J.R. Searle. 1969. Speech Acts: An Essay in the Philosophy of Language. Cam: Verschiedene Aufl. Cambridge University Press. 2555 Chenhao Tan, Vlad Niculae, Cristian DanescuNiculescu-Mizil, and Lillian Lee. 2016. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In Proceedings of the 25th International World Wide Web Conference, pages 613–624. Oren Tsur, Dan Calacci, and David Lazer. 2015. A frame of mind: Using statistical models for detection of framing and agenda setting campaigns. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, ACL-IJCNLP, pages 1629–1638. Association for Computational Linguistics. Fernanda B. Viegas, Martin Wattenberg, Jesse Kriss, and Frank van Ham. 2007. Talk Before You Type: Coordination in Wikipedia. In Proceedings of the 40th Annual Hawaii International Conference on System Sciences, HICSS ’07, pages 78–. IEEE Computer Society. Douglas Walton. 2010. Types of Dialogue and Burdens of Proof. In Frontiers in Artificial Intelligence and Applications, volume 216, pages 13–24. Lu Wang and Claire Cardie. 2014. A Piece of My Mind: A Sentiment Analysis Approach for Online Dispute Detection. In Proceedings of the52nd Annual Meeting of the Association for Computational Linguistics, ACL, volume 2, pages 693–699. Association for Computational Linguistics. Shiren Ye, Tat-Seng Chua, and Jie Lu. 2009. Summarizing Definition from Wikipedia. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing, ACL, pages 199– 207. Association for Computational Linguistics. Elina Zarisheva and Tatjana Scheffler. 2015. Dialog Act Annotation for Twitter Conversations. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL, pages 114–123. Torsten Zesch, Christof Muller, and Iryna Gurevych. 2008. Extracting Lexical Semantic Knowledge from Wikipedia and Wiktionary. In Proceedings of the Sixth International Conference on Language Resources and Evaluation, LREC. European Language Resources Association (ELRA). Amy X. Zhang, Bryan Culbertson, and Praveen Paritosh. 2017a. Characterizing Online Discussion Using Coarse Discourse Sequences. In Proceedings of the 11th International AAAI Conference on Weblogs and Social Media, ICWSM, pages 357–366. Amy X. Zhang, Lea Verou, and David Karger. 2017b. Wikum: Bridging Discussion Forums and Wikis Using Recursive Summarization. In Proceedings of the 20th ACM Conference on Computer Supported Cooperative Work and Social Computing, CSCW, pages 2082–2096. ACM.
2018
237
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2556–2565 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2556 Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning Piyush Sharma, Nan Ding, Sebastian Goodman, Radu Soricut Google AI Venice, CA 90291 {piyushsharma,dingnan,seabass,rsoricut}@google.com Abstract We present a new dataset of image caption annotations, Conceptual Captions, which contains an order of magnitude more images than the MS-COCO dataset (Lin et al., 2014) and represents a wider variety of both images and image caption styles. We achieve this by extracting and filtering image caption annotations from billions of webpages. We also present quantitative evaluations of a number of image captioning models and show that a model architecture based on Inception-ResNetv2 (Szegedy et al., 2016) for image-feature extraction and Transformer (Vaswani et al., 2017) for sequence modeling achieves the best performance when trained on the Conceptual Captions dataset. 1 Introduction Automatic image description is the task of producing a natural-language utterance (usually a sentence) which correctly reflects the visual content of an image. This task has seen an explosion in proposed solutions based on deep learning architectures (Bengio, 2009), starting with the winners of the 2015 COCO challenge (Vinyals et al., 2015a; Fang et al., 2015), and continuing with a variety of improvements (see e.g. Bernardi et al. (2016) for a review). Practical applications of automatic image description systems include leveraging descriptions for image indexing or retrieval, and helping those with visual impairments by transforming visual signals into information that can be communicated via text-to-speech technology. The scientific challenge is seen as aligning, exploiting, and pushing further the latest improvements at the intersection of Computer Vision and Natural Language Processing. Alt-text: A Pakistani worker helps to clear the debris from the Taj Mahal Hotel November 7, 2005 in Balakot, Pakistan. Conceptual Captions: a worker helps to clear the debris. Alt-text: Musician Justin Timberlake performs at the 2017 Pilgrimage Music & Cultural Festival on September 23, 2017 in Franklin, Tennessee. Conceptual Captions: pop artist performs at the festival in a city. Figure 1: Examples of images and image descriptions from the Conceptual Captions dataset; we start from existing alt-text descriptions, and automatically process them into Conceptual Captions with a balance of cleanliness, informativeness, fluency, and learnability. There are two main categories of advances responsible for increased interest in this task. The first is the availability of large amounts of annotated data. Relevant datasets include the ImageNet dataset (Deng et al., 2009), with over 14 million images and 1 million bounding-box annotations, and the MS-COCO dataset (Lin et al., 2014), with 120,000 images and 5-way image-caption annotations. The second is the availability of powerful modeling mechanisms such as modern Convolutional Neural Networks (e.g. Krizhevsky et al. (2012)), which are capable of converting image pixels into high-level features with no manual featureengineering. In this paper, we make contributions to both the data and modeling categories. First, we present a new dataset of caption annotations∗, Conceptual Captions (Fig. 
1), which has an order of magnitude more images than the COCO ∗https://github.com/google-research-datasets/conceptualcaptions 2557 dataset. Conceptual Captions consists of about 3.3M ⟨image, description⟩pairs. In contrast with the curated style of the COCO images, Conceptual Captions images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. The raw descriptions are harvested from the Alt-text HTML attribute† associated with web images. We developed an automatic pipeline (Fig. 2) that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions. As a contribution to the modeling category, we evaluate several image-captioning models. Based on the findings of Huang et al. (2016), we use Inception-ResNet-v2 (Szegedy et al., 2016) for image-feature extraction, which confers optimization benefits via residual connections and computationally efficient Inception units. For caption generation, we use both RNN-based (Hochreiter and Schmidhuber, 1997) and Transformerbased (Vaswani et al., 2017) models. Our results indicate that Transformer-based models achieve higher output accuracy; combined with the reports of Vaswani et al. (2017) regarding the reduced number of parameters and FLOPs required for training & serving (compared with RNNs), models such as T2T8x8 (Section 4) push forward the performance on image-captioning and deserve further attention. 2 Related Work Automatic image captioning has a long history (Hodosh et al., 2013; Donahue et al., 2014; Karpathy and Fei-Fei, 2015; Kiros et al., 2015). It has accelerated with the success of Deep Neural Networks (Bengio, 2009) and the availability of annotated data as offered by datasets such as Flickr30K (Young et al., 2014) and MS-COCO (Lin et al., 2014). The COCO dataset is not large (order of 106 images), given the training needs of DNNs. In spite of that, it has been very popular, in part because it offers annotations for images with non-iconic views, or non-canonical perspectives of objects, and therefore reflects the composition of everyday scenes (the same is true about Flickr30K (Young et al., 2014)). COCO annotations–category labeling, instance spotting, and instance segmentation– are done for all objects in an image, including those †https://en.wikipedia.org/wiki/Alt attribute in the background, in a cluttered environment, or partially occluded. Its images are also annotated with captions, i.e. sentences produced by human annotators to reflect the visual content of the images in terms of objects and their actions or relations. A large number of DNN models for image caption generation have been trained and evaluated using COCO captions (Vinyals et al., 2015a; Fang et al., 2015; Xu et al., 2015; Ranzato et al., 2015; Yang et al., 2016; Liu et al., 2017; Ding and Soricut, 2017). These models are inspired by sequence-tosequence models (Sutskever et al., 2014; Bahdanau et al., 2015) but use CNN-based encodings instead of RNNs (Hochreiter and Schmidhuber, 1997; Chung et al., 2014). Recently, the Transformer architecture (Vaswani et al., 2017) has been shown to be a viable alternative to RNNs (and CNNs) for sequence modeling. In this work, we evaluate the impact of the Conceptual Captions dataset on the image captioning task using models that combine CNN, RNN, and Transformer layers. 
Also related to this work is the Pinterest image and sentence-description dataset (Mao et al., 2016). It is a large dataset (order of 108 examples), but its text descriptions do not strictly reflect the visual content of the associated image, and therefore cannot be used directly for training image-captioning models. 3 Conceptual Captions Dataset Creation The Conceptual Captions dataset is programmatically created using a Flume (Chambers et al., 2010) pipeline. This pipeline processes billions of Internet webpages in parallel. From these webpages, it extracts, filters, and processes candidate ⟨image, caption⟩pairs. The filtering and processing steps are described in detail in the following sections. Image-based Filtering The first filtering stage, image-based filtering, discards images based on encoding format, size, aspect ratio, and offensive content. It only keeps JPEG images where both dimensions are greater than 400 pixels, and the ratio of larger to smaller dimension is no more than 2. It excludes images that trigger pornography or profanity detectors. These filters discard more than 65% of the candidates. Text-based Filtering The second filtering stage, text-based filtering, harvests Alt-text from HTML webpages. Alt-text generally accompanies images, 2558 [Alt-text not processed: undesired image format, aspect ratio or size] ALT-TEXT “Ferrari dice” “The meaning of life” “Demi Lovato wearing a black Ester Abner Spring 2018 gown and Stuart Weitzman sandals at the 2017 American Music Awards” IMAGE [Alt-text discarded] CAPTION “pop rock artist wearing a black gown and sandals at awards” [Alt-text discarded: Text does not contain prep./article] [Alt-text discarded: No text vs. image-object overlap] Image Filtering Text Filtering Img/Text Filtering Text Transform PIPELINE IMAGE IMAGE IMAGE Figure 2: Conceptual Captions pipeline steps with examples and final output. and intends to describe the nature or the content of the image. Because these Alt-text values are not in any way restricted or enforced to be good image descriptions, many of them have to be discarded, e.g., search engine optimization (SEO) terms, or Twitter hash-tag terms. We analyze candidate Alt-text using the Google Cloud Natural Language APIs, specifically partof-speech (POS), sentiment/polarity, and pornography/profanity annotations. On top of these annotations, we have the following heuristics: • a well-formed caption should have a high unique word ratio covering various POS tags; candidates with no determiner, no noun, or no preposition are discarded; candidates with a high noun ratio are also discarded; • candidates with a high rate of token repetition are discarded; • capitalization is a good indicator of wellcomposed sentences; candidates where the first word is not capitalized, or with too high capitalized-word ratio are discarded; • highly unlikely tokens are a good indicator of not desirable text; we use a vocabulary VW of 1B token types, appearing at least 5 times in the English Wikipedia, and discard candidates that contain tokens that are not found in this vocabulary. • candidates that score too high or too low on the polarity annotations, or trigger the pornography/profanity detectors, are discarded; • predefined boiler-plate prefix/suffix sequences matching the text are cropped, e.g. “click to enlarge picture”, “stock photo”; we also drop text which begins/ends in certain patterns, e.g. “embedded image permalink”, “profile photo”. 
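The checks above rely on the Google Cloud Natural Language annotations; as a simplified illustration, the Python sketch below approximates a few of the text-only heuristics (unique-word ratio, token repetition, capitalization, and vocabulary coverage). The thresholds and the wiki_vocab argument are placeholders of our own choosing, and the POS-based checks are omitted because they require an external tagger.

from collections import Counter

def passes_text_filters(caption, wiki_vocab, min_unique_ratio=0.5, max_repeat=3):
    """Approximate some of the Alt-text heuristics; thresholds are illustrative."""
    tokens = caption.split()
    if not tokens:
        return False
    lowered = [t.lower() for t in tokens]
    # Well-formed captions should have a high unique-word ratio.
    if len(set(lowered)) / len(tokens) < min_unique_ratio:
        return False
    # Discard captions with a high rate of token repetition.
    if Counter(lowered).most_common(1)[0][1] > max_repeat:
        return False
    # Capitalization: first word capitalized, but not too many capitalized words.
    if not tokens[0][:1].isupper():
        return False
    if sum(1 for t in tokens if t[:1].isupper()) / len(tokens) > 0.5:
        return False
    # Every token should appear in the Wikipedia-derived vocabulary.
    if any(t not in wiki_vocab for t in lowered):
        return False
    return True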
These filters only allow around 3% of the incoming candidates to pass to the later stages. Image&Text-based Filtering In addition to the separate filtering based on image and text content, we filter out candidates for which none of the text tokens can be mapped to the content of the image. To this end, we use classifiers available via the Google Cloud Vision APIs to assign class labels to images, using an image classifier with a large number of labels (order of magnitude of 105). Notably, these labels are also 100% covered by the Vw token types. Images are generally assigned between 5 to 20 labels, though the exact number depends on the 2559 Original Alt-text Harrison Ford and Calista Flockhart attend the premiere of ‘Hollywood Homicide’ at the 29th American Film Festival September 5, 2003 in Deauville, France. Conceptual Captions actors attend the premiere at festival. what-happened “Harrison Ford and Calista Flockhart” mapped to “actors”; name, location, and date dropped. Original Alt-text Side view of a British Airways Airbus A319 aircraft on approach to land with landing gear down - Stock Image Conceptual Captions side view of an aircraft on approach to land with landing gear down what-happened phrase “British Airways Airbus A319 aircraft” mapped to “aircraft”; boilerplate removed. Original Alt-text Two sculptures by artist Duncan McKellar adorn trees outside the derelict Norwich Union offices in Bristol, UK - Stock Image Conceptual Captions sculptures by person adorn trees outside the derelict offices what-happened object count (e.g. “Two”) dropped; proper noun-phrase hypernymized to “person”; propernoun modifiers dropped; location dropped; boilerplate removed. Table 1: Examples of Conceptual Captions as derived from their original Alt-text versions. image. We match these labels against the candidate text, taking into account morphology-based stemming as provided by the text annotation. Candidate ⟨image, caption⟩pairs with no overlap are discarded. This filter discards around 60% of the incoming candidates. Text Transformation with Hypernymization In the current version of the dataset, we considered over 5 billion images from about 1 billion English webpages. The filtering criteria above are designed to be high-precision (which comes with potentially low recall). From the original input candidates, only 0.2% ⟨image, caption⟩pairs pass the filtering criteria described above. While the remaining candidate captions tend to be appropriate Alt-text image descriptions (see Alt-text in Fig. 1), a majority of these candidate captions contain proper names (people, venues, locations, etc.), which would be extremely difficult to learn as part of the image captioning task. To give an idea of what would happen in such cases, we train an RNN-based captioning model (see Section 4) on non-hypernymized Alt-text data and present an output example in Fig. 3. If automatic determination of person identity, location, etc. is needed, it should be attempted as a separate task and would need to leverage image metainformation about the image (e.g. location). Using the Google Cloud Natural Language APIs, we obtain named-entity and syntactic-dependency annotations. We then use the Google Knowledge Graph (KG) Search API to match the namedentities to KG entries and exploit the associated hypernym terms. 
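Conceptually, the per-caption transformation can be sketched as below. This is only a schematic rendering: the entity spans, their KG hypernyms, and the spans to drop are assumed to be supplied by the external APIs described above and are passed in here as plain Python structures with hypothetical names.

```python
# Schematic sketch of the hypernymization transform (not the production pipeline).
# `entities` maps a named-entity token span (start, end) to the hypernym taken
# from its Knowledge Graph entry, e.g. {(0, 2): "actor"}; `drop_spans` marks
# spans flagged by the annotations as proper-noun modifiers, numbers/units,
# dates, or preposition-based locations. All inputs are hypothetical stand-ins.

def hypernymize(tokens, entities, drop_spans):
    out, i = [], 0
    while i < len(tokens):
        span = next((s for s in entities if s[0] == i), None)
        if span is not None:
            out.append(entities[span])   # replace the entity span with its hypernym
            i = span[1]
            continue
        if any(start <= i < end for start, end in drop_spans):
            i += 1                       # drop modifiers, counts, dates, locations
            continue
        out.append(tokens[i].lower())
        i += 1

    # Resolve coordinated noun phrases with the same head: "actor and actor" -> "actors".
    merged = []
    for j, tok in enumerate(out):
        if j >= 2 and out[j - 1] == "and" and out[j - 2] == tok:
            merged = merged[:-2] + [tok + "s"]   # naive pluralization, for illustration only
        else:
            merged.append(tok)
    return " ".join(merged)

# Example with hypothetical annotations:
# tokens = "Harrison Ford and Calista Flockhart attend the premiere".split()
# entities = {(0, 2): "actor", (3, 5): "actor"}
# hypernymize(tokens, entities, set())  ->  "actors attend the premiere"
```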
For instance, both “Harrison Ford” and “Calista Flockhart” identify as named-entities, Alt-text (groundtruth): Jimmy Barnes performs at the Sydney Entertainment Centre Model output: Singer Justin Bieber performs onstage during the Billboard Music Awards at the MGM Figure 3: Example of model output trained on clean, non-hypernymized Alt-text data. so we match them to their corresponding KG entries. These KG entries have “actor” as their hypernym, so we replace the original surface tokens with that hypernym. The following steps are applied to achieve text transformations: • noun modifiers of certain types (proper nouns, numbers, units) are removed; • dates, durations, and preposition-based locations (e.g., “in Los Angeles”) are removed; • named-entities are identified, matched against the KG entries, and substitute with their hypernym; • resulting coordination noun-phrases with the same head (e.g., “actor and actor”) are resolved into a single-head, pluralized form (e.g., “actors”); Around 20% of samples are discarded during this transformation because it can leave sentences too short or inconsistent. Finally, we perform another round of text analysis and entity resolution to identify concepts with low-count. We cluster all resolved entities (e.g., 2560 “actor”, “dog”, “neighborhood”, etc.) and keep only the candidates for which all detected types have a count of over 100 (around 55% of the candidates). These remaining ⟨image, caption⟩pairs contain around 16,000 entity types, guaranteed to be well represented in terms of number of examples. Table 1 contains several examples of before/aftertransformation pairs. Conceptual Captions Quality To evaluate the precision of our pipeline, we consider a random sample of 4K examples extracted from the test split of the Conceptual Captions dataset. We perform a human evaluation on this sample, using the same methodology described in Section 5.4. GOOD (out of 3) 1+ 2+ 3 Conceptual Captions 96.9% 90.3% 78.5% Table 2: Human evaluation results on a sample from Conceptual Captions. The results are presented in Table 2 and show that, out of 3 annotations, over 90% of the captions receive a majority (2+) of GOOD judgments. This indicates that the Conceptual Captions pipeline, though involving extensive algorithmic processing, produces high-quality image captions. Examples Unique Tokens/Caption Tokens Mean StdDev Median Train 3,318,333 51,201 10.3 4.5 9.0 Valid. 28,355 13,063 10.3 4.6 9.0 Test 22,530 11,731 10.1 4.5 9.0 Table 3: Statistics over Train/Validation/Test splits for Conceptual Captions. We present in Table 3 statistics over the Train/Validation/Test splits for the Conceptual Captions dataset. The training set consists of slightly over 3.3M examples, while there are slightly over 28K examples in the validation set and 22.5K examples in the test set. The size of the training set vocabulary (unique tokens) is 51,201. Note that the test set has been cleaned using human judgements (2+ GOOD), while both the training and validation splits contain all the data, as produced by our automatic pipeline. The mean/stddev/median statistics for tokens-per-caption over the data splits are consistent with each other, at around 10.3/4.5/9.0, respectively. 4 Image Captioning Models In order to assess the impact of the Conceptual Captions dataset, we consider several image captioning models previously proposed in the literature. These models can be understood using the illustration in Fig. 
4, as they mainly differ in the way in which they instantiate some of these components.

Figure 4: The main model components.

There are three main components to this architecture:
• A deep CNN that takes a (preprocessed) image and outputs a vector of image embeddings $X = (x_1, x_2, \ldots, x_L)$.
• An Encoder module that takes the image embeddings and encodes them into a tensor $H = f_{enc}(X)$.
• A Decoder model that generates outputs $z_t = f_{dec}(Y_{1:t}, H)$ at each step $t$, conditioned on $H$ as well as the decoder inputs $Y_{1:t}$.

We explore two main instantiations of this architecture. One uses RNNs with LSTM cells (Hochreiter and Schmidhuber, 1997) to implement the $f_{enc}$ and $f_{dec}$ functions, corresponding to the Show-And-Tell (Vinyals et al., 2015b) model. The other uses Transformer self-attention networks (Vaswani et al., 2017) to implement $f_{enc}$ and $f_{dec}$. All models in this paper use Inception-ResNet-v2 as the CNN component (Szegedy et al., 2016).

4.1 RNN-based Models

Our instantiation of the RNN-based model is close to the Show-And-Tell (Vinyals et al., 2015b) model:

$h_l \triangleq \mathrm{RNN}_{enc}(x_l, h_{l-1})$, and $H = h_L$,
$z_t \triangleq \mathrm{RNN}_{dec}(y_t, z_{t-1})$, where $z_0 = H$.

In the original Show-And-Tell model, a single image embedding of the entire image is fed to the first cell of an RNN, which is also used for text generation. In our model, a single image embedding is fed to an $\mathrm{RNN}_{enc}$ with only one cell, and then a different $\mathrm{RNN}_{dec}$ is used for text generation. We tried both single image (1x1) embeddings and 8x8 partitions of the image, where each partition has its own embedding. In the 8x8 case, image embeddings are fed in a sequence to the $\mathrm{RNN}_{enc}$. In both cases, we apply plain RNNs without cross attention, same as the Show-And-Tell model. RNNs with cross attention were used in the Show-Attend-Tell model (Xu et al., 2015), but we find its performance to be inferior to the Show-And-Tell model.

4.2 Transformer Model

In the Transformer-based models, both the encoder and the decoder contain a stack of $N$ layers. We denote the $n$-th layer in the encoder by $X_n = \{x_{n,1}, \ldots, x_{n,L}\}$, with $X_0 = X$ and $H = X_N$. Each of these layers contains two sub-layers: a multi-head self-attention layer ATTN, and a position-wise feedforward network FFN:

$x'_{n,j} = \mathrm{ATTN}(x_{n,j}, X_n; W^e_q, W^e_k, W^e_v) \triangleq \mathrm{softmax}(\langle x_{n,j} W^e_q, X_n W^e_k \rangle)\, X_n W^e_v$
$x_{n+1,j} = \mathrm{FFN}(x'_{n,j}; W^e_f)$

where $W^e_q$, $W^e_k$, and $W^e_v$ are the encoder weight matrices for query, key, and value transformation in the self-attention sub-layer; and $W^e_f$ denotes the encoder weight matrix of the feedforward sub-layer. Similar to the RNN-based model, we consider using a single image embedding (1x1) and a vector of 8x8 image embeddings.

In the decoder, we denote the $n$-th layer by $Z_n = \{z_{n,1}, \ldots, z_{n,T}\}$, with $Z_0 = Y$. There are two main differences between the decoder and encoder layers. First, the self-attention sub-layer in the decoder is masked to the right, in order to prevent attending to "future" positions (i.e., $z_{n,j}$ does not attend to $z_{n,j+1}, \ldots, z_{n,T}$). Second, in between the self-attention layer and the feedforward layer, the decoder adds a third cross-attention layer that connects $z_{n,j}$ to the top-layer encoder representation $H = X_N$.
$z'_{n,j} = \mathrm{ATTN}(z_{n,j}, Z_{n,1:j}; W^d_q, W^d_k, W^d_v)$
$z''_{n,j} = \mathrm{ATTN}(z'_{n,j}, H; W^c_q, W^c_k, W^c_v)$
$z_{n+1,j} = \mathrm{FFN}(z''_{n,j}; W^d_f)$

where $W^d_q$, $W^d_k$, and $W^d_v$ are the weight matrices for query, key, and value transformation in the decoder self-attention sub-layer; $W^c_q$, $W^c_k$, $W^c_v$ are the corresponding decoder weight matrices in the cross-attention sub-layer; and $W^d_f$ is the decoder weight matrix of the feedforward sub-layer.

The Transformer-based models utilize position information at the embedding layer. In the 8x8 case, the 64 embedding vectors are serialized to a 1D sequence with positions from $[0, \ldots, 63]$. The position information is modeled by applying sine and cosine functions at each position and with different frequencies for each embedding dimension, as in (Vaswani et al., 2017), and subsequently added to the embedding representations.

5 Experimental Results

In this section, we evaluate the impact of using the Conceptual Captions dataset (referred to as 'Conceptual' in what follows) for training image captioning models. To this end, we train the models described in Section 4 under two experimental conditions: using the training & development sets provided by the COCO dataset (Lin et al., 2014), versus training & development sets using the Conceptual dataset. We quantitatively evaluate the resulting models using three different test sets: the blind COCO-C40 test set (in-domain for COCO-trained models, out-of-domain for Conceptual-trained models); the Conceptual test set (out-of-domain for COCO-trained models, in-domain for Conceptual-trained models); and the Flickr (Young et al., 2014) 1K test set (out-of-domain for both COCO-trained models and Conceptual-trained models).

5.1 Dataset Details

COCO Image Captions The COCO image captioning dataset is normally divided into 82K images for training, and 40K images for validation. Each of these images comes with at least 5 groundtruth captions. Following standard practice, we combine the training set with most of the validation dataset for training our model, and only hold out a subset of 4K images for validation.

Conceptual Captions The Conceptual Captions dataset contains around 3.3M images for training, 28K for validation and 22.5K for the test set. For more detailed statistics, see Table 3.

Figure 5: Side by side comparison of model outputs under two training conditions. Conceptual-based models (lower half) tend to hallucinate less, are more expressive, and handle well a larger variety of images. The two images in the middle are from Flickr; the other two are from Conceptual Captions.
COCO-trained RNN8x8: "a group of men standing in front of a building" / "a couple of people walking down a walkway" / "a child sitting at a table with a cake on it" / "a close up of a stuffed animal on a table"
COCO-trained T2T8x8: "a group of men in uniform and ties are talking" / "a narrow hallway with a clock and two doors" / "a woman cutting a birthday cake at a party" / "a picture of a fish on the side of a car"
Conceptual-trained RNN8x8: "graduates line up for the commencement ceremony" / "a view of the nave" / "a child's drawing at a birthday party" / "a cartoon businessman thinking about something"
Conceptual-trained T2T8x8: "graduates line up to receive their diplomas" / "the cloister of the cathedral" / "learning about the arts and crafts" / "a cartoon businessman asking for help"

5.2 Experimental Setup

Image Preprocessing Each input image is first preprocessed by random distortion and cropping (using a random ratio from 50%∼100%). This prevents models from overfitting individual pixels of the training images.
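As one concrete (and simplified) way to implement this step, the sketch below samples a crop covering between 50% and 100% of each side and resizes the result; it uses PIL rather than our actual input pipeline, and the photometric distortion is left as a placeholder.

```python
import random
from PIL import Image

def preprocess(path, out_size=299, min_ratio=0.5):
    """Randomly crop between `min_ratio` and 100% of the image, then resize.

    A simplified stand-in for the random distortion + cropping used at training
    time; out_size=299 matches the usual Inception-ResNet-v2 input resolution.
    """
    img = Image.open(path).convert("RGB")
    w, h = img.size

    # Sample a crop ratio uniformly in [min_ratio, 1.0] for each side.
    cw = int(w * random.uniform(min_ratio, 1.0))
    ch = int(h * random.uniform(min_ratio, 1.0))
    left = random.randint(0, w - cw)
    top = random.randint(0, h - ch)
    img = img.crop((left, top, left + cw, top + ch))

    # Photometric distortion (brightness, contrast, ...) could be inserted here.
    return img.resize((out_size, out_size))
```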
Encoder-Decoder For RNN-based models, we use a 1-layer, 512-dim LSTM as the RNN cell. For the Transformer-based models, we use the default setup from (Vaswani et al., 2017), with N = 6 encoder and decoder layers, a hidden-layer size of 512, and 8 attention heads. Text Handling Training captions are truncated to maximum 15 tokens. We use a token type mincount of 4, which results in around 9,000 token types for the COCO dataset, and around 25,000 token types for the Conceptual Captions dataset. All other tokens are replaced with special token ⟨UNK⟩. The word embedding matrix has size 512 and is tied to the output projection matrix. Optimization All models are trained using MLE loss and optimized using Adagrad (Duchi et al., 2011) with learning rate 0.01. Mini-batch size is 25. All model parameters are trained for a total number of 5M steps, with batch updates asynchronously distributed across 40 workers. The final model is selected based on the best CIDEr score on the development set for the given training condition. Inference During inference, the decoder prediction of the previous position is fed to the input of the next position. We use a beam search of beam size 4 to compute the most likely output sequence. 5.3 Qualitative Results Before we present the numerical results for our experiments, we discuss briefly the patterns that we have observed. One difference between COCO-trained models and Conceptual-trained models is their ability to use the appropriate natural language terms for the entities in an image. For the left-most image in Fig. 5, COCO-trained models use “group of men” to refer to the people in the image; Conceptualbased models use the more appropriate and informative term “graduates”. The second image, from the Flickr test set, makes this even more clear. The Conceptual-trained T2T8x8 model is perfectly rendering the image content as “the cloister of the cathedral”. None of the other models come close to producing such an accurate description. A second difference is that COCO-trained models often seem to hallucinate objects. For instance, they hallucinate “front of building” for the first image, “clock and two doors” for the second, and “birthday cake” for the third image. In contrast, Conceptual-trained models do not seem to have this problem. We hypothesize that the hallucination issue for COCO-based models comes from the high correlations present in the COCO data (e.g., if there is a kid at a table, there is also cake). This high degree of correlation in the data does not allow the captioning model to correctly disentangle and learn representations at the right level of granularity. 2563 Model Training 1+ 2+ 3+ RNN8x8 COCO 0.390 0.276 0.173 T2T8x8 COCO 0.478 0.362 0.275 RNN8x8 Conceptual 0.571 0.418 0.277 T2T8x8 Conceptual 0.659 0.506 0.355 Table 4: Human eval results on Flickr 1K Test. A third difference is the resilience to a large spectrum of image types. COCO only contains natural images, and therefore a cartoon image like the fourth one results in massive hallucination effects for COCO-trained models (“stuffed animal”, “fish”, “side of car”). In contrast, Conceptual-trained models handle such images with ease. 5.4 Quantitative Results In this section, we present quantitative results on the quality of the outputs produced by several image captioning models. We present both automatic evaluation results and human evaluation results. 
5.4.1 Human Evaluation Results For human evaluations, we use a pool of professional raters (tens of raters), with a double-blind evaluation condition. Raters are asked to assign a GOOD or BAD label to a given ⟨image, caption⟩ input, using just common-sense judgment. This approximates the reaction of a typical user, who normally would not accept predefined notions of GOOD vs. BAD. We ask 3 separate raters to rate each input pair and report the percentage of pairs that receive k or more (k+) GOOD annotations. In Table 4, we report the results on the Flickr 1K test set. This evaluation is out-of-domain for both training conditions, so all models are on relatively equal footing. The results indicate that the Conceptual-based models are superior. In 50.6% (for the T2T8x8 model) of cases, a majority of annotators (2+) assigned a GOOD label. The results also indicate that the Transformer-based models are superior to the RNN-based models by a good margin, by over 8-points (for 2+) under both COCO and Conceptual training conditions. Model Training CIDEr ROUGE-L METEOR RNN1x1 COCO 1.021 0.694 0.348 RNN8x8 COCO 1.044 0.698 0.354 T2T1x1 COCO 1.032 0.700 0.358 T2T8x8 COCO 1.032 0.700 0.356 RNN1x1 Conceptual 0.403 0.445 0.191 RNN8x8 Conceptual 0.410 0.437 0.189 T2T1x1 Conceptual 0.348 0.403 0.171 T2T8x8 Conceptual 0.345 0.400 0.170 Table 5: Auto metrics on the COCO C40 Test. Model Training CIDEr ROUGE-L SPICE RNN1x1 COCO 0.183 0.149 0.062 RNN8x8 COCO 0.191 0.152 0.065 T2T1x1 COCO 0.184 0.148 0.062 T2T8x8 COCO 0.190 0.151 0.064 RNN1x1 Conceptual 1.351 0.326 0.235 RNN8x8 Conceptual 1.401 0.330 0.240 T2T1x1 Conceptual 1.588 0.331 0.254 T2T8x8 Conceptual 1.676 0.336 0.257 Table 6: Auto metrics on the 22.5K Conceptual Captions Test set. Model Training CIDEr ROUGE-L SPICE RNN1x1 COCO 0.340 0.414 0.101 RNN8x8 COCO 0.356 0.413 0.103 T2T1x1 COCO 0.341 0.404 0.101 T2T8x8 COCO 0.359 0.416 0.103 RNN1x1 Conceptual 0.269 0.310 0.076 RNN8x8 Conceptual 0.275 0.309 0.076 T2T1x1 Conceptual 0.226 0.280 0.068 T2T8x8 Conceptual 0.227 0.277 0.066 Table 7: Auto metrics on the Flickr 1K Test. 5.4.2 Automatic Evaluation Results In this section, we report automatic evaluation results, using established image captioning metrics. For the COCO C40 test set (Fig. 5), we report the numerical values returned by the COCO online evaluation server‡, using the CIDEr (Vedantam et al., 2015), ROUGE-L (Lin and Och, 2004), and METEOR (Banerjee and Lavie, 2005) metrics. For Conceptual Captions (Fig. 6) and Flickr (Fig. 7) test sets, we report numerical values for the CIDEr, ROUGE-L, and SPICE (Anderson et al., 2016)§. For all metrics, higher number means closer distance between the candidates and the groundtruth captions. The automatic metrics are good at detecting invs out-of-domain situations. For COCO-models tested on COCO, the results in Fig. 5 show CIDEr scores in the 1.02-1.04 range, for both RNN- and Transformer-based models; the scores drop in the 0.35-0.41 range (CIDEr) for the Conceptual-based models tested against COCO groundtruth. For Conceptual-models tested on the Conceptual Captions test set, the results in Fig. 6 show scores as high as 1.468 CIDEr for the T2T8x8 model, which corroborates the human-eval results for the Transformer-based models being superior to the RNN-based models; the scores for the COCObased models tested against Conceptual Captions groundtruth are all below 0.2 CIDEr. The automatic metrics fail to corroborate the ‡http://mscoco.org/dataset/#captions-eval. §https://github.com/tylin/coco-caption. 
2564 human evaluation results. According to the automatic metrics, the COCO-trained models are superior to the Conceptual-trained models (CIDEr scores in the mid-0.3 for the COCO-trained condition, versus mid-0.2 for the Conceptual-trained condition), and the RNN-based models are superior to Transformer-based models. Notably, these are the same metrics which score humans lower than the methods that won the COCO 2015 challenge (Vinyals et al., 2015a; Fang et al., 2015), despite the fact that humans are still much better at this task. The failure of these metrics to align with the human evaluation results casts again grave doubts on their ability to drive progress in this field. A significant weakness of these metrics is that hallucination effects are under-penalized (a small precision penalty for tokens with no correspondent in the reference), compared to human judgments that tend to dive dramatically in the presence of hallucinations. 6 Conclusions We present a new image captioning dataset, Conceptual Captions, which has several key characteristics: it has around 3.3M examples, an order of magnitude larger than the COCO image-captioning dataset; it consists of a wide variety of images, including natural images, product images, professional photos, cartoons, drawings, etc.; and, its captions are based on descriptions taken from original Alt-text attributes, automatically transformed to achieve a balance between cleanliness, informativeness, and learnability. We evaluate both the quality of the resulting image/caption pairs, as well as the performance of several image-captioning models when trained on the Conceptual Captions data. The results indicate that such models achieve better performance, and avoid some of the pitfalls seen with COCO-trained models, such as object hallucination. We hope that the availability of the Conceptual Captions dataset will foster considerable progress on the automatic image-captioning task. References Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. SPICE: semantic propositional image caption evaluation. In ECCV. D. Bahdanau, K. Cho, and Y. Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization. Yoshua Bengio. 2009. Learning deep architectures for ai. Found. Trends Mach. Learn. 2(1):1–127. Raffaella Bernardi, Ruket Cakici, Desmond Elliott, Aykut Erdem, Erkut Erdem, Nazli Ikizler-Cinbis, Frank Keller, Adrian Muscat, and Barbara Plank. 2016. Automatic description generation from images: A survey of models, datasets, and evaluation measures. JAIR 55. Craig Chambers, Ashish Raniwala, Frances Perry, Stephen Adams, Robert Henry, Robert Bradshaw, and Nathan. 2010. Flumejava: Easy, efficient data-parallel pipelines. In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). 2 Penn Plaza, Suite 701 New York, NY 10121-0701, pages 363–375. http://dl.acm.org/citation.cfm?id=1806638. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 . J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. FeiFei. 2009. ImageNet: A large-scale hierarchical image database. In CVPR. 
Nan Ding and Radu Soricut. 2017. Cold-start reinforcement learning with softmax policy gradients. In NIPS. Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2014. Long-term recurrent convolutional networks for visual recognition and description. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR). John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12(Jul):2121–2159. Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Doll´ar, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John Platt, et al. 2015. From captions to visual concepts and back. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735– 1780. Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. JAIR . 2565 Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, and Kevin Murphy. 2016. Speed/accuracy trade-offs for modern convolutional object detectors. CoRR abs/1611.10012. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. 2015. Unifying visual-semantic embeddings with multimodal neural language models. Transactions of the Association for Computational Linguistics . A. Krizhevsky, I. Sutskever, and G. Hinton. 2012. Imagenet classification with deep convolutional neural networks. In NIPS. Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In Proceedings of ACL. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C. Lawrence Zitnick. 2014. Microsoft COCO: common objects in context. CoRR abs/1405.0312. Siqi Liu, Zhenhai Zhu, Ning Ye, Sergio Guadarrama, and Kevin Murphy. 2017. Optimization of image description metrics using policy gradient methods. In International Conference on Computer Vision (ICCV). Junhua Mao, Jiajing Xu, Yushi Jing, and Alan Yuille. 2016. Training and evaluating multimodal word embeddings with large-scale web annotated images. In NIPS. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. CoRR abs/1511.06732. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112. Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. 2016. Inception-v4, inception-resnet and the impact of residual connections on learning. CoRR abs/1602.07261. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. 
In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015a. Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition. pages 3156–3164. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015b. Show and tell: A neural image caption generator. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proc. of the 32nd International Conference on Machine Learning (ICML). Z. Yang, Y. Yuan, Y. Wu, R. Salakhutdinov, and W. W. Cohen. 2016. Review networks for caption generation. In NIPS. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. TACL 2:67–78.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2566–2576 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2566 Learning Translations via Images with a Massively Multilingual Image Dataset John Hewitt∗Daphne Ippolito∗Brendan Callahan Reno Kriz Derry Wijaya Chris Callison-Burch University of Pennsylvania Computer and Information Science Department {johnhew,daphnei,rekriz,derry,ccb}@seas.upenn.edu Abstract We conduct the most comprehensive study to date into translating words via images. To facilitate research on the task, we introduce a large-scale multilingual corpus of images, each labeled with the word it represents. Past datasets have been limited to only a few high-resource languages and unrealistically easy translation settings. In contrast, we have collected by far the largest available dataset for this task, with images for approximately 10,000 words in each of 100 languages. We run experiments on a dozen high resource languages and 20 low resources languages, demonstrating the effect of word concreteness and part-of-speech on translation quality. To improve image-based translation, we introduce a novel method of predicting word concreteness from images, which improves on a previous stateof-the-art unsupervised technique. This allows us to predict when image-based translation may be effective, enabling consistent improvements to a state-of-the-art text-based word translation system. Our code and the Massively Multilingual Image Dataset (MMID) are available at http: //multilingual-images.org/. 1 Introduction Learning the translations of words is important for machine translation and other tasks in natural language processing. Typically this learning is done using sentence-aligned bilingual parallel texts. However, for many languages, there are not ∗These authors contributed equally; listed alphabetically. Figure 1: Our dataset and approach allow translations to be discovered by comparing the images associated with foreign and English words. Shown here are five images for the Indonesian word kucing, a word with high predicted concreteness, along with its top 4 ranked translations using CNN features. sufficiently large parallel texts to effectively learn translations. In this paper, we explore the question of whether it is possible to learn translations with images. We systematically explore an idea originally proposed by Bergsma and Van Durme (2011): translations can be identified via images associated with words in different languages that have a high degree of visual similarity. This is illustrated in Figure 1. Most previous image datasets compiled for the task of learning translations were limited to the translation of nouns in a few high-resource languages. In this work, we present a new large-scale dataset that contains images for 100 languages, and is not restricted by part-of-speech. We collected images using Google Image Search for up to 10,000 words in each of 100 foreign languages, and their English translations. For each word, we collected up to 100 images and the text on images’ corresponding web pages. We conduct a broad range of experiments to evaluate the utility of image features across a number of factors: 2567 • We evaluate on 12 high-resource and 20 lowresource languages. • We evaluate translation quality stratified by part-of-speech, finding that nouns and adjectives are translated with much higher accuracy than adverbs and verbs. 
• We present a novel method for predicting word concreteness from image features that better correlates with human perception than existing methods. We show that choosing concrete subsets of words to translate results in higher accuracy. • We augment a state-of-the-art text-based word translation system with image feature scores and find consistent improvements to the textonly system, ranging from 3.12% absolute top-1 accuracy improvement at 10% recall to 1.30% absolute improvement at 100% recall. A further contribution of this paper is our dataset, which is the largest of its kind and should be a standard for future work in learning translations from images. The dataset may facilitate research into multilingual, multimodal models, and translation of low-resource languages. 2 Related Work The task of learning translations without sentencealigned bilingual parallel texts is often called bilingual lexicon induction (Rapp, 1999; Fung and Yee, 1998). Most work in bilingual lexicon induction has focused on text-based methods. Some researchers have used similar spellings across related languages to find potential translations (Koehn and Knight, 2002; Haghighi et al., 2008). Others have exploited temporal similarity of word frequencies to induce translation pairs (Schafer and Yarowsky, 2002; Klementiev and Roth, 2006). Irvine and Callison-Burch (2017) provide a systematic study of different text-based features used for bilingual lexicon induction. Recent work has focused on building joint distributional word embedding spaces for multiple languages, leveraging a range of levels of language supervision from bilingual dictionaries to comparable texts (Vuli´c and Korhonen, 2016; Wijaya et al., 2017). The most closely related work to ours is research into bilingual lexicon induction using image similarity by Bergsma and Van Durme (2011) and Kiela et al. (2015). Their work differs from ours in that they focused more narrowly on the translation of nouns for a limited number of high resource languages. Bergsma and Van Durme (2011) compiled datasets for Dutch, English, French, German, Italian, and Spanish by downloading 20 images for up to 500 concrete nouns in each of the foreign languages, and 20,000 English words. Another dataset was generated by Vulic and Moens (2013) who collected images for 1,000 words in Spanish, Italian, and Dutch, along with the English translations for each. Their dataset also consists of only nouns, but includes abstract nouns. Our corpus will allow researchers to explore image similarity for bilingual lexicon induction on a much wider range of languages and parts of speech, which is especially desirable given the potential utility of the method for improving translation between languages with little parallel text. The ability of images to usefully represent a word is strongly dependent on how concrete or abstract the word is. The terms abstractness and concreteness are used in the psycholinguistics and cognitive psychology literature. Concrete words directly reference a sense experience (Paivio et al., 1968), while abstract words can denote ideas, emotions, feelings, qualities or other abstract or intangible concepts. Concreteness ratings are closely correlated with imagery ratings, defined as the ease with which a word arouses a mental image (Gilhooly and Logie, 1980; Friendly et al., 1982). Intuitively, concrete words are easier to represent visually, so a measure of a word’s concreteness ought to be able to predict the effectiveness of using images to translate the word. 
Kiela et al. (2014) defines an unsupervised method called image dispersion that approximates a word’s concreteness by taking the average pairwise cosine distance of a set of image representations of the word. Kiela et al. (2015) show that image dispersion helps predict the usefulness of image representations for translation. In this paper, we introduce novel supervised approaches for predicting word concreteness from image and textual features. We make use of a dataset created by Brysbaert et al. (2014) containing human evaluations of concreteness for 39,954 English words. Concurrently with our work, Hartmann and Søgaard (2017) released an unpublished arXiv draft challenging the efficacy of using images for translation. Their work presents several difficulties of using image features for translation, difficulties which 2568 our methods address. They find that image features are only useful in translating simple nouns. While we did indeed find that nouns perform better than other parts of speech, we do not find that images are only effective in translating simple words. Instead, we show a gradual degradation in performance as words become more abstract. Their dataset is restricted to six high-resource languages and a small vocabulary of 557 English words. In contrast, we present results for over 260,000 English words and 32 foreign languages. Recent research in the NLP and computer vision communities has been enabled by large collections of images associated with words or longer texts. Object recognition has seen dramatic gains in part due to the ImageNet database (Deng et al., 2009), which contains 500-1000 images associated with 80,000 synsets in WordNet. Ferraro et al. (2015) surveys existing corpora that are used in vision and language research. Other NLP+Vision tasks that have been enabled by the availability of large datasets include caption generation for images, action recognition in videos, visual question answering, and others. Most existing work on multilingual NLP+Vision relies on having a corpus of images manually annotated with captions in several languages, as in the Multi30K dataset (Elliott et al., 2016). Several works have proposed using image features to improve sentence level translations or to translate image captions (Gella et al., 2017; Hitschler and Riezler, 2016; Miyazaki and Shimizu, 2016). Funaki and Nakayama (2015) show that automatically scraped data from websites in English and Japanese can be used to effectively perform zero-shot learning for the task of cross-lingual document retrieval. Since collecting multilingual annotations is difficult at a large-scale or for low-resource languages, our approach relies only on data scraped automatically from the web. 3 Corpus Construction We present a new dataset for image-based word translation that is more expansive than any previous ones, encompassing all parts-of-speech, the gamut of abstract to concrete, and both low- and highresource languages. 3.1 Dictionaries We collect images for words in 100 bilingual dictionaries created by Pavlick et al. (2014). They selected the 10,000 most frequent words on Wikipedia pages in the foreign language, and then collected their translations into English via crowdsourcing. We will denote these dictionaries as CROWDTRANS. The superset of English translations for all foreign words consists of 263,102 translations. The English portion of their data tends to be much noisier than the foreign portion due to its crowdsourced nature (e.g. 
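A minimal sketch of this heuristic is given below; the `detect_languages` argument stands in for any language-identification routine (we used CLD2) that returns language codes sorted from most to least likely.

```python
def keep_image(page_text, target_lang, detect_languages, top_k=3):
    """Language-filtering heuristic from Section 3.3 (sketch).

    `detect_languages` is any callable mapping text to ISO language codes
    sorted from most to least likely. Keep the image only if the page has
    extractable text and the target language is among the top-k guesses.
    """
    if not page_text or not page_text.strip():
        return False
    return target_lang in detect_languages(page_text)[:top_k]

# Usage (with a hypothetical detector wrapping CLD2):
# keep_image(html_text, "id", detect_languages=my_cld2_wrapper)
```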
misspellings, or definition included with translations.) We computed part-of-speech for entries in each dictionary. We found that while nouns are the most common, other parts-of-speech are reasonably represented (Section 5.1). 3.2 Method For each English and foreign word, we query Google Image Search to collect 100 images associated with the word. A potential criticism of our use of Google Image Search is that it may be using a bilingual dictionary to translate queries into English (or other high resource languages) and returning images associated with the translated queries (Kilgarriff, 2007). We take steps (Section 3.3) to filter out images that did not appear on pages written in the language that we are gathering images for. After assembling the collection of images associated with words, we construct low-dimensional vector representations of the images using convolutional neural networks (CNNs). We also save the text from each web page that an image appeared on. Further detail on our corpus construction pipeline can be found in Section 2 of the supplemental materials. 3.3 Filtering by Web Page Language We used the following heuristic to filter images: if text could be extracted from an image’s web page, and the expected language was in the top-3 most likely languages output by the CLD21 language detection system then we kept the image; otherwise it was discarded. This does not filter all images from webpages with English text; instead it acknowledges the presence of English in the multilingual web and keeps images from pages with some targetlanguage presence. An average of approximately 42% of images for each foreign language remained after the language-filtering step. 1https://github.com/CLD2Owners/cld2 2569 Language Concreteness Ratings Overall 1-2 2-3 3-4 4-5 English .804 .814 .855 .913 .857 French .622 .653 .706 .828 .721 Indonesian .505 .569 .665 .785 .661 Uzbek .568 .530 .594 .683 .601 All .628 .649 .713 .810 .717 # Words 77 292 292 302 963 Table 1: The proportion of images determined to be good representations of their corresponding word. In columns 2-5, we bucket the results by the word’s ground-truth concreteness, while column 6 shows the results over all words. The last row shows the number of words in each bucket of concreteness, and the number of words overall for each language. 3.4 Manual Evaluation of Images By using a dataset scraped from the web, we expect some fraction of the images for each word to be incorrectly labeled. To confirm the overall quality of our dataset, we asked human evaluators on Amazon Mechanical Turk to label a subset of the images returned by queries in four languages: our target language, English; a representative highresource language, French; and two low-resource languages, Indonesian and Uzbek. In total, we collected 36,050 judgments of whether the images returned by Google Image Search were a good match for the keyword. Details on the experimental setup can be found in Section 1 of the Supplemental Materials. Table 1 shows the fraction of images that were judged to be good representations of the search word. It also demonstrates that as the concreteness of a word increases, the proportion of good images associated with that word increases as well. We further discuss the role of concreteness in Section 6.1. Overall, 85% of the English images, 72% of French, 66% of Indonesian, and 60% of Uzbek were judged to be good. 4 Finding Translations Using Images Can images help us learn translations for lowresource languages? 
In this section we replicate prior work in high-resource languages, and then evaluate on a wide array of low-resource languages. Although we scraped images and text for 100 languages, we have selected a representative set of 32 for evaluation. Kiela et al. (2015) established that CNN features are superior to the SIFT plus color histogram features used by Bergsma and Van Durme (2011), and so we restrict all analysis to the former. 4.1 Translation Prediction with AVGMAX To learn the English translation of each foreign word, we rank the English words as candidate translations based on their visual similarity with the foreign words. We take the cosine similarity score for each image if associated the foreign word wf with each of image ie for the English word we, and then compute the average maximum similarity as AVGMAX(wf, we) = 1 |wf| X if∈wf max ie∈we(cosine(if, ie)) Each image is represented by a 4096-dimensional vector from the fully connected 7th (FC7) layer of a CNN trained on ImageNet (Krizhevsky et al., 2012). AvgMax is the best-performing method described by Bergsma and Van Durme (2011) on images created with SIFT and color histogram features. It was later validated on CNN features by Kiela et al. (2015). The number of candidate English words is the number of entries in the bilingual dictionary after filtering out dictionary entries where the English word and foreign word are identical. In order to compare with Kiela et al. (2015), we evaluate the models’ rankings using Mean Reciprocal Rank (MRR), top-1, top-5 and top-20 accuracy. We prefer the more interpretable top-k accuracy in our subsequent experiments. We choose to follow Wijaya et al. (2017) in standardizing to k = 10, and we report top-1 accuracy only when it is particularly informative. 4.2 Replication of Prior Work We evaluate on the five languages–Dutch, French, German, Italian, and Spanish–which have been the focus of prior work. Table 2 shows the results reported by Kiela et al. (2015) on the BERGSMA500 dataset, along with results using our image crawl method (Section 3.2) on BERGSMA500’s vocabulary. On all five languages, our dataset performs better than that of Kiela et al. (2015). We attribute this to improvements in image search since they collected images. We additionally note that in the BERGSMA500 vocabularies, approximately 11% of the translation pairs are string-identical, like film ↔film. In all subsequent experiments, we remove trivial translation pairs like this. We also evaluate the identical model on our full data set, which contains 8,500 words, covering all parts of speech and the full range of concreteness ratings. The top-1 accuracy of the model is 23% on 2570 our more realistic and challenging data set, versus 68% on the easier concrete nouns set. 4.3 High- and Low-resource Languages To determine whether image-based translation is effective for low resource languages, we sample 12 high-resource languages (HIGHRES), and 20 lowresource languages (LOWRES). Table 3 reports the top-10 accuracy across all 32 languages. For each language, we predict a translation for each foreign word in the language’s CROWDTRANS dictionary. This comes to approximately 7,000 to 10,000 foreign words per language. We find that high-resource languages’ image features are more predictive of translation than those of low-resource languages. Top-10 accuracy is 29% averaged across high-resource languages, but only 16% for low-resource languages. 
This may be due to the quality of image search in each language, and the number of websites in each language indexed by Google, as suggested by Table 1. The difficulty of the translation task is dependent on the size of the English vocabulary used, as distinguishing between 5, 000 English candidates as in Slovak is not as difficult as distinguishing between 10, 000 words as in Tamil. 4.4 Large Target Vocabulary How does increasing the number of candidate translations affect accuracy? Prior work used an English vocabulary of 500 or 1,000 words, where the correct English translation is guaranteed to appear. This is unrealistic for many tasks such as machine translation, where the target language vocabulary is likely to be large. To evaluate a more realistic scenario, we take the union of the English vocabulary of every dictionary in CROWDTRANS, and run the same translation experiments as before. We call this large common vocabulary LARGEENG. Confirming our intuition, experiments with LARGEENG give significantly lower top-10 accuracies across parts of speech, but still provide discriminative power. We find .181 average top-10 accuracy using LARGEENG, whereas on the same languages, average accuracy on the CROWDTRANS vocabularies was .260. The full results for these experiments are reported in Table 4. 5 Evaluation by Part-of-speech Can images be used to translate words other than nouns? This section presents our methods for dedataset BERGSMA500 BERGSMA500 all Kiela et al. (2015) (ours) (ours) # words 500 500 8,500 MRR 0.658 0.704 0.277 Top 1 0.567 0.679 0.229 Top 5 0.692 0.763 0.326 Top 20 0.774 0.811 0.385 Table 2: Our results are consistently better than those reported by Kiela et al. (2015), averaged over Dutch, French, German, Italian, and Spanish on a similar set of 500 concrete nouns. The rightmost column shows the added challenge with our larger, more realistic dataset. HIGHRES All VB RB JJ NN # Spanish .417 .144 .157 .329 .593 9.9k French .366 .104 .107 .315 .520 10.5k Dutch .365 .085 .064 .262 .511 10.5k Italian .323 .086 .085 .233 .487 8.9k German .307 .071 .098 .164 .463 10.1k Swedish .283 .048 .048 .146 .328 9.6k Turkish .263 .035 .143 .233 .346 10.2k Romanian .255 .029 .080 .150 .301 9.1k Hungarian .240 .030 .082 .193 .352 10.9k Bulgarian .236 .024 .106 .116 .372 8.6k Arabic .223 .036 .084 .149 .344 10.2k Serbian .218 .023 .111 .090 .315 8.3k Average .291 .059 .097 .198 .411 9.7k LOWRES Thai .367 .139 .143 .264 .440 5.6k Indonesian .306 .103 .041 .238 .404 10.3k Vietnamese .303 .079 .058 .106 .271 6.6k Bosnian .212 .035 .084 .103 .277 7.5k Slovak .195 .024 .042 .095 .259 6.5k Ukrainian .194 .024 .131 .070 .273 5.0k Latvian .194 .028 .058 .114 .266 7.1k Hindi .163 .024 .068 .057 .231 9.4k Cebuano .153 .014 .070 .098 .180 7.7k Azerbaijani .150 .016 .031 .113 .174 6.2k Welsh .138 .007 .025 .033 .062 7.6k Albanian .127 .013 .017 .080 .154 6.0k Bengali .120 .026 .050 .063 .173 12.5k Tamil .089 .006 .013 .030 .140 9.9k Uzbek .082 .093 .066 .114 .077 12.4k Urdu .073 .005 .017 .032 .108 11.1k Telugu .065 .002 .018 .010 .095 9.6k Nepali .059 .002 .039 .018 .089 11.6k Gujarati .039 .004 .016 .012 .056 12.0k Average .159 .034 .052 .087 .196 8.7k Table 3: Top-10 accuracy on 12 high-resource languages and 20 low-resource languages. The parts of speech Noun, Adjective, Adverb, and Verb are referred to as NN, JJ, RB, VB, respectively. The “all” column reports accuracy on the entire dictionary. The “#” column reports the size of the English vocabulary used for each experiment. 
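For reference, the AvgMax scoring of Section 4.1 reduces to a few lines of matrix arithmetic once the FC7 features are in memory. The sketch below, written with NumPy for clarity rather than for the efficiency required at LARGEENG scale, ranks English candidates for a single foreign word.

```python
import numpy as np

def avgmax(foreign_imgs, english_imgs):
    """AvgMax(w_f, w_e): mean over foreign images of the max cosine similarity
    to any image of the English candidate.

    Both arguments are (num_images, feature_dim) arrays of FC7 CNN features.
    """
    f = foreign_imgs / np.linalg.norm(foreign_imgs, axis=1, keepdims=True)
    e = english_imgs / np.linalg.norm(english_imgs, axis=1, keepdims=True)
    sims = f @ e.T                      # pairwise cosine similarities
    return sims.max(axis=1).mean()      # max over English images, mean over foreign images

def rank_candidates(foreign_imgs, english_vocab):
    """Rank English candidates (a dict word -> feature array) for one foreign word."""
    scores = {w: avgmax(foreign_imgs, imgs) for w, imgs in english_vocab.items()}
    return sorted(scores, key=scores.get, reverse=True)
```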
2571 Language All VB RB JJ NN Arabic .149 .015 .053 .078 .219 Bengali .066 .009 .042 .025 .084 Dutch .265 .042 .039 .164 .350 French .268 .051 .092 .196 .368 German .220 .035 .040 .080 .321 Indonesian .211 .050 .035 .156 .257 Italian .233 .046 .028 .139 .350 Spanish .320 .068 .076 .207 .449 Turkish .171 .011 .086 .139 .201 Uzbek .057 .121 .075 .104 .045 LARGEENG Avg .181 .041 .055 .118 .244 SMALL Avg .260 .089 .078 .210 .392 Table 4: Top-10 accuracy on the expanded English dictionary task. For each experiment, 263,102 English words were used as candidate translations for each foreign word. The SMALL average is given for reference, averaging the results from Table 3 across the same 10 languages. termining part-of-speech for foreign words even in low-resource languages, and presents our imagebased translation results across part-of-speech. 5.1 Assigning POS Labels To show the performance of our translation method for each particular POS, we first assign a POS tag to each foreign word. Since we evaluate on highand low-resource languages, many of which do not have POS taggers, we POS tag English words, and transfer the tag to their translations. We scraped the text on the web pages associated with the images of each English word, and collected the sentences that contained each query (English) word. We chose to tag words in sentential context, rather than simply collecting parts of speech from a dictionary, because many words have multiple senses, often with different parts of speech. We assign universal POS tags (Petrov et al., 2012) using spaCy2, giving each word its majority tag. We gathered part-of-speech tags for 42% of the English words in our translations. Of the remaining untagged English entries, 40% were multi-word expressions, and 18% were not found in the text of the web pages that we scraped. When transferring POS tags to foreign words, we only considered foreign words where every English translation had the same POS. Across all 32 languages, on average, we found that, after filtering, 65% of foreign words were nouns, 14% were verbs, 14% were adjectives, 3% were adverbs, and 3% were other (i.e. they were labeled a different POS). 2https://spacy.io Figure 2: Shown here are five images for the abstract Indonesian word konsep, along with its top 4 ranked translations using CNN features. The actual translation, concept, was ranked 3,465. 5.2 Accuracy by Part-of-speech As we see in the results in Table 3, the highest translation performance is obtained for nouns, which confirms the observation by Hartmann and Søgaard (2017). However, we see considerable signal in translating adjectives as well, with top-10 accuracies roughly half that of nouns. This trend extends to low-resource languages. We also see that translation quality is relatively poor for adverbs and verbs. There is higher variation in our performance on adverbs across languages, because there were relatively few adverbs (3% of all words.) From these results, it is clear that one can achieve higher accuracy by choosing to translate only nouns and adjectives. Analysis by part-of-speech only indirectly addresses the question of when translation with images is useful. For example, Figure 2 shows that nouns like concept translate incorrectly because of a lack of consistent visual representation. However, verbs like walk may have concrete visual representation. Thus, one might perform better overall at translation on concrete words, regardless of part-of-speech. 
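The tag-and-transfer procedure of Section 5.1 can be summarized by the following sketch; the spaCy calls follow its standard pipeline API, while the sentence collection, dictionary format, and thresholds are simplifications of our actual setup.

```python
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # any English spaCy model with a POS tagger

def majority_pos(word, sentences):
    """Tag `word` in the sentences it appears in (taken from the scraped web
    pages) and return its majority universal POS tag, or None if unseen."""
    counts = Counter()
    for sent in sentences:
        for token in nlp(sent):
            if token.text.lower() == word.lower():
                counts[token.pos_] += 1
    return counts.most_common(1)[0][0] if counts else None

def transfer_pos(foreign_word, english_translations, english_pos):
    """Transfer a POS tag to a foreign word only if all of its English
    translations received the same tag; otherwise leave it untagged."""
    tags = {english_pos.get(e) for e in english_translations}
    if len(tags) == 1 and None not in tags:
        return tags.pop()
    return None
```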
6 Evaluation by Concreteness Can we effectively predict the concreteness of words in a variety of languages? If so, can these predictions be used to determine when translation via images is helpful? In this section, we answer both of these questions in the affirmative. 6.1 Predicting Word Concreteness Previous work has used image dispersion as a measure of word concreteness (Kiela et al., 2014). We 2572 introduce a novel supervised method for predicting word concreteness that more strongly correlates with human judgements of concreteness. To train our model, we took Brysbaert et al. (2014)’s dataset, which provides human judgments for about 40k words, each with a 1-5 abstractnessto-concreteness score, and scraped 100 images from English Google Image Search for each word. We then trained a two-layer perceptron with one hidden layer of 32 units, to predict word concreteness. The inputs to the network were the elementwise mean and standard deviation (concatenated into a 8094-dimensional vector)of the CNN features for each of the images corresponding to a word. To better assess this image-only approach, we also experimented with using the distributional word embeddings of Salle et al. (2016) as input. We used these 300-dimensional vectors either seperately or concatentated with the image-based features. Our final network was trained with a crossentropy loss, although an L2 loss performed nearly as well. We randomly selected 39,000 words as our training set. Results on the remaining held-out validation set are visualized in Figure 3. Although the concatenated image and word embedding features performed the best, we do not expect to have high-quality word embeddings for words in low-resource languages. Therefore, for the evaluation in Section 6.2, we used the imageembeddings-only model to predict concreteness for every English and foreign word in our dataset. 6.2 Accuracy by Predicted Concreteness It has already been shown that the images of more abstract words provide a weaker signal for translation (Kiela et al., 2015). Using our method for predicting concreteness, we determine which images sets are most concrete, and thereby estimate the likelihood that we will obtain a high quality translation. Figure 4 shows the reduction in translation accuracy as increasingly abstract words are included in the set. The concreteness model can be used to establish recall thresholds. For the 25% of foreign words we predict to be most concrete, (25% recall,) AVGMAX achieves top-10 accuracy of 47.0% for high-resource languages and 32.8% for lowresource languages. At a 50% most-concrete recall treshold, top-10 translation accuracies are 25.0% and 37.8% for low- and high-resource languages respectively, compared to 18.6% and 29.3% at 100% recall. 7 Translation with Images and Text Translation via image features performs worse than state-of-the-art distributional similarity-based methods. For example, Wijaya et al. (2017) demonstrate top-10 accuracies in range of above 85% on the VULIC1000 a 1,000-word dataset, whereas with only image features, Kiela et al. (2015) report top-10 accuracies below 60%. However, there may be utility in combining the two methods, as it is likely that visual and textual distributional representations are contributing different information, and fail in different cases. We test this intuition by combining image scores with the current state-of-the-art system of Wijaya et al. (2017), which uses Bayesian Personalized Ranking (BPR). 
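The predicted concreteness score from Section 6.1 is one of the signals used in this combination. A minimal sketch of such a predictor is shown below; it is written in PyTorch for brevity (not necessarily the framework used), treats the task as regression onto the 1-5 Brysbaert et al. (2014) scale rather than the cross-entropy formulation described above, and assumes a (num_images, 4096) array of FC7 features per word.

```python
import torch
import torch.nn as nn

def word_representation(features):
    """Concatenate the element-wise mean and standard deviation of a word's
    image features (num_images x 4096) into one fixed-size input vector."""
    feats = torch.as_tensor(features, dtype=torch.float32)
    return torch.cat([feats.mean(dim=0), feats.std(dim=0)])

class ConcretenessMLP(nn.Module):
    """Two-layer perceptron with a 32-unit hidden layer, as in Section 6.1."""
    def __init__(self, input_dim=2 * 4096, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_dim, hidden),
                                 nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        # Squash to the 1-5 range of the human ratings (a modeling choice
        # made here for the regression variant, not taken from the paper).
        return 1.0 + 4.0 * torch.sigmoid(self.net(x)).squeeze(-1)

# Training sketch (hypothetical data loader of word vectors and ratings):
# model, loss_fn = ConcretenessMLP(), nn.MSELoss()
# opt = torch.optim.Adam(model.parameters())
# for x_batch, y_batch in loader:
#     opt.zero_grad()
#     loss = loss_fn(model(x_batch), y_batch)
#     loss.backward()
#     opt.step()
```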
In their arXiv draft, Hartmann and Søgaard (2017) presented a negative result when directly combining image representations with distributional representations into a single system. Here, we present a positive result by formalizing the problem as a reranking task. Our intuition is that we hope to guide BPR, clearly the stronger system, with aid from image features and a predicted concreteness value, instead of joining them as equals and potentially washing out the stronger signal. 7.1 Reranking Model For each foreign word wf and each English word we, we have multiple scores for the pair pf,e = (wf, we), used to rank we against all other we′ ∈E, where E is the English dictionary used in the experiment. Specifically, we have TXT(pf,e) and IMAGE(pf,e) for all pairs. For each foreign word, we also have the concreteness score, CNC(wf), predicted from its image set by the method described in Section 6.1. We use a small bilingual dictionary, taking all pairs pf,e and labeling them {±1}, with 1 denoting the words are translations. We construct training data out of the dictionary, treating each labeled pair as an independent observation. We then train a 2-layer perceptron (MLP), with 1 hidden layer of 4 units, to predict translations from the individual scores, minimizing the squared loss.3 MLP(pf,e) = MLP [TXT(pf,e); IMAGE(pf,e); CNC(wf)]  = {±1} 3We use DyNet (Neubig et al., 2017) for constructing and training our network with the Adam optimization method (Kingma and Ba, 2014). 2573 kindheartedness satisfyingly fishhook Ground-truth Concreteness Only Word Embedding Only Image Features 1 - Image Dispersion Predicted Concreteness Ground-truth Concreteness Ground-truth Concreteness Ground-truth Concreteness Word Embedding and Image Features Figure 3: Plots visualizing the distribution of concreteness predictions on the validation set for our three trained models and for image dispersion. Spearman correlation coefficients are shown. For the model trained only on images, the three worst failure cases are annotated. False positives tend to occur when one concrete meaning of an abstract word dominates the search results (i.e. many photos of “satisfyingly” show food). False negatives often stem from related proper nouns or an overabundance of clipart, as is the case for “fishhook.” Figure 4: The black curve shows mean top-10 accuracy over the HIGHRES and LOWRES sets sorted by predicted concreteness. The gray curves show the 25th and 75th percentiles. Once the model is trained, we fix each foreign word wf, and score all pairs (wf, we′) for all e′ ∈ E, using the learned model MLP(pf,e′). Using these scores, we sort E for each wf. 7.2 Evaluation We evaluate our text-based and image-based combination method by translating Bosnian, Dutch, French, Indonesian, Italian, and Spanish into English. For each language, we split our bilingual dictionary (of 8,673 entries, on average) into 2,000 entries for a testing set, 20% for training the textbased BPR system, 35% for training the reranking MLP, and the rest for a development set. We filtered out multi-word phrases, and translations where wf and we are string identical. We compare three models: TXT is Wijaya et al. (2017)’s text-based state-of-the-art model. TXT+IMG is our MLP-learned combination of the two features. TXT+IMG+CNC uses our predicted concreteness of the foreign word as well. We evaluate all models on varying percents of testing data sorted by predicted concreteness, as in Section 6.2. 
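Concretely, the reranker is a tiny feed-forward scorer over the three features. The sketch below mirrors that setup (one hidden layer of 4 units, squared loss over ±1 labels) but is written in PyTorch rather than the DyNet implementation we actually used, and the hidden nonlinearity is an arbitrary choice since it is not constrained by the description above.

```python
import torch
import torch.nn as nn

class RerankerMLP(nn.Module):
    """Scores a (foreign, English) pair from [TXT(p); IMAGE(p); CNC(w_f)]."""
    def __init__(self, n_features=3, hidden=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, hidden),
                                 nn.Tanh(),   # nonlinearity assumed, not specified above
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def rerank(model, english_words, features):
    """Sort English candidates for one fixed foreign word by the learned score.

    `features[e]` is the 3-dimensional feature vector for the pair (w_f, e).
    """
    with torch.no_grad():
        scores = {e: model(torch.tensor(features[e], dtype=torch.float32)).item()
                  for e in english_words}
    return sorted(english_words, key=scores.get, reverse=True)

# Training sketch: pairs from the seed dictionary labeled {+1, -1}, squared
# loss as in Section 7.1, e.g. nn.MSELoss() with torch.optim.Adam.
```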
As shown in Figure 5, both imageaugmented methods beat TXT across concreteness thresholds on the top-1 accuracy metric. Results across the 6 languages are reported in Table 5. Confirming our intuition, images are useful at high concreteness, improving the SOA textbased method 3.21% at 10% recall. At 100% recall our method with images still improves the SOA by 1.3%. For example, the text-only system translates the Bosnian word koˇsarkaˇski incorrectly as football, whereas the image+text system produces the correct basketball. Further, gains are more pronounced for lowresource languages than for high-resource languages. Concreteness scores are useful for highresource languages, for example Spanish, where TXT+IMG falls below TXT alone on more abstract words, but TXT+IMG+CNC remains an improvement. Finally, we note that the text-only system also performs better on concrete words than abstract words, indicating a general trend of ease in translating concrete words regardless of method. 8 Summary We have introduced a large-scale multilingual image resource, and used it to conduct the most comprehensive study to date on using images to learn translations. Our Massively Multilingual Image Dataset will serve as a standard for future work in image-based translation due to its size and generality, covering 100 languages, hundreds of thousands 2574 Figure 5: Reranking top-1 and top-10 accuracies of our image+text combination sytems compared to the text-only Bayesian Personalized Ranking system. The X-axis shows percent of foreign words evaluated on, sorted by decreasing predicted concreteness. % words evaluated Method 10% 50% 100% High-Res TXT .746 .696 .673 TXT+IMG .771 .708 .678 TXT+IMG+Cnc .773 .714 .685 Low-Res TXT .601 .565 .549 TXT+IMG .646 .590 .562 TXT+IMG+Cnc .643 .589 .563 Table 5: Top-1 accuracy results across high-resource (Dutch, French, Italian, Spanish) and low-resource (Bosnian, Indonesian) languages. Words evaluated on are again sorted by concreteness for the sake of analysis. The best result on each % of test data is bolded. of words, and a broad range of parts of speech. Using this corpus, we demonstrated the substantial utility in supervised prediction of word concreteness when using image features, improving over the unsupervised state-of-the-art and finding that image-based translation is much more accurate for concrete words. Because of the text we collected with our corpus, we were also able to collect partof-speech information and demonstrate that image features are useful in translating adjectives and nouns. Finally, we demonstrate a promising path forward, showing that incorporating images can improve a state-of-the-art text-based word translation system. 9 Dataset and Code The MMID will be distributed both in raw form and for a subset of languages in memory compact featurized versions from http: //multilingual-images.org along with code we used in our experiments. Additional details are given in our Supplemental Materials document, which also describes our manual image annotation setup, and gives numerous illustrative examples of our system’s predictions. Acknowledgements We gratefully acknowledge Amazon for its support of this research through the Amazon Research Awards program and through AWS Research Credits. This material is based in part on research sponsored by DARPA under grant number HR0011-15C-0115 (the LORELEI program). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. 
The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA and the U.S. Government. References Shane Bergsma and Benjamin Van Durme. 2011. Learning bilingual lexicons using the visual similarity of labeled web images. In Proceedings of the International Joint Conference on Artificial Intelligence. Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness ratings for 40 thousand generally known English word lemmas. Behavior research methods, 46(3):904–911. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. IEEE Conference on, pages 248–255. IEEE. 2575 Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. 2016. Multi30k: Multilingual englishgerman image descriptions. CoRR, abs/1605.00459. Francis Ferraro, Nasrin Mostafazadeh, Ting-Hao Huang, Lucy Vanderwende, Jacob Devlin, Michel Galley, and Margaret Mitchell. 2015. A survey of current datasets for vision and language research. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 207–213, Lisbon, Portugal. Association for Computational Linguistics. Michael Friendly, Patricia E. Franklin, David Hoffman, and David C. Rubin. 1982. The Toronto word pool: Norms for imagery, concreteness, orthographic variables, and grammatical usage for 1,080 words. Behavior Research Methods & Instrumentation, 14(4):375–399. Ruka Funaki and Hideki Nakayama. 2015. Imagemediated learning for zero-shot cross-lingual document retrieval. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 585–590. Association for Computational Linguistics. Pascale Fung and Lo Yuen Yee. 1998. An IR approach for translating new words from nonparallel, comparable texts. In Proceedings of the 17th international Conference on Computational Linguistics, volume 1, pages 414–420. Association for Computational Linguistics. Spandana Gella, Rico Sennrich, Frank Keller, and Mirella Lapata. 2017. Image pivoting for learning multilingual multimodal representations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2839– 2845, Copenhagen, Denmark. Association for Computational Linguistics. K. J. Gilhooly and R. H. Logie. 1980. Age-ofacquisition, imagery, concreteness, familiarity, and ambiguity measures for 1,944 words. Behavior Research Methods & Instrumentation, 12(4):395–427. Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In ACL, volume 2008, pages 771–779. Mareike Hartmann and Anders Søgaard. 2017. Limitations of cross-lingual learning from image search. CoRR, abs/1709.05914. Julian Hitschler and Stefan Riezler. 2016. Multimodal pivots for image caption translation. CoRR, abs/1601.03916. Ann Irvine and Chris Callison-Burch. 2017. A comprehensive analysis of bilingual lexicon induction. Computational Linguistics, 43(2):273–310. Douwe Kiela, Felix Hill, Anna Korhonen, and Stephen Clark. 2014. Improving multi-modal representations using image dispersion: Why less is sometimes more. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 835–841, Baltimore, Maryland. Association for Computational Linguistics. Douwe Kiela, Ivan Vuli´c, and Stephen Clark. 
2015. Visual bilingual lexicon induction with transferred convnet features. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 148–158, Lisbon, Portugal. Association for Computational Linguistics. Adam Kilgarriff. 2007. Googleology is bad science. Computational linguistics, 33(1):147–151. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Alexandre Klementiev and Dan Roth. 2006. Weakly supervised named entity transliteration and discovery from multilingual comparable corpora. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 817–824. Association for Computational Linguistics. Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. In Proceedings of the ACL-02 workshop on Unsupervised lexical acquisition, volume 9, pages 9–16. Association for Computational Linguistics. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105. Takashi Miyazaki and Nobuyuki Shimizu. 2016. Crosslingual image caption generation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1780–1790. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980. Allan Paivio, John C. Yuille, and Stephen A. Madigan. 1968. Concreteness, imagery, and meaningfulness values for 925 nouns. In Journal of Experimental Psychology, volume 76, pages 207–213. American Psychological Association. 2576 Ellie Pavlick, Matt Post, Ann Irvine, Dmitry Kachaev, and Chris Callison-Burch. 2014. The language demographics of Amazon Mechanical Turk. Transactions of the Association for Computational Linguistics, 2:79–92. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 2089–2096, Istanbul, Turkey. European Language Resources Association (ELRA). ACL Anthology Identifier: L12-1115. Reinhard Rapp. 1999. Automatic identification of word translations from unrelated English and German corpora. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, pages 519– 526. Association for Computational Linguistics. Alexandre Salle, Marco Idiart, and Aline Villavicencio. 2016. Matrix factorization using window sampling and negative sampling for improved word representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 419–424. Association for Computational Linguistics. Charles Schafer and David Yarowsky. 2002. Inducing translation lexicons via diverse similarity measures and bridge languages. In proceedings of the 6th conference on Natural language learning-Volume 20, pages 1–7. 
Association for Computational Linguistics. Ivan Vulić and Anna Korhonen. 2016. On the role of seed lexicons in learning bilingual word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 247–257, Berlin, Germany. Association for Computational Linguistics. Ivan Vulić and Marie-Francine Moens. 2013. Cross-lingual semantic similarity of words as the similarity of their semantic word responses. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2013), pages 106–116. ACL. Derry Tanti Wijaya, Brendan Callahan, John Hewitt, Jie Gao, Xiao Ling, Marianna Apidianaki, and Chris Callison-Burch. 2017. Learning translations via matrix completion. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1453–1464, Copenhagen, Denmark. Association for Computational Linguistics.
2018
239
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 252–262 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 252 LinkNBed: Multi-Graph Representation Learning with Entity Linkage Rakshit Trivedi ∗ College of Computing Georgia Tech Christos Faloutsos SCS, CMU and Amazon.com Bunyamin Sisman Amazon.com Hongyuan Zha College of Computing Georgia Tech Jun Ma Amazon.com Xin Luna Dong Amazon.com Abstract Knowledge graphs have emerged as an important model for studying complex multirelational data. This has given rise to the construction of numerous large scale but incomplete knowledge graphs encoding information extracted from various resources. An effective and scalable approach to jointly learn over multiple graphs and eventually construct a unified graph is a crucial next step for the success of knowledge-based inference for many downstream applications. To this end, we propose LinkNBed, a deep relational learning framework that learns entity and relationship representations across multiple graphs. We identify entity linkage across graphs as a vital component to achieve our goal. We design a novel objective that leverage entity linkage and build an efficient multi-task training procedure. Experiments on link prediction and entity linkage demonstrate substantial improvements over the state-ofthe-art relational learning approaches. 1 Introduction Reasoning over multi-relational data is a key concept in Artificial Intelligence and knowledge graphs have appeared at the forefront as an effective tool to model such multi-relational data. Knowledge graphs have found increasing importance due to its wider range of important applications such as information retrieval (Dalton et al., 2014), natural language processing (Gabrilovich and Markovitch, 2009), recommender systems (Catherine and Cohen, 2016), question-answering (Cui et al., 2017) ∗Correspondence: [email protected] and [email protected]. Work done when Rakshit Trivedi interned at Amazon. and many more. This has led to the increased efforts in constructing numerous large-scale Knowledge Bases (e.g. Freebase (Bollacker et al., 2008), DBpedia (Auer et al., 2007), Google’s Knowledge graph (Dong et al., 2014), Yago (Suchanek et al., 2007) and NELL (Carlson et al., 2010)), that can cater to these applications, by representing information available on the web in relational format. All knowledge graphs share common drawback of incompleteness and sparsity and hence most existing relational learning techniques focus on using observed triplets in an incomplete graph to infer unobserved triplets for that graph (Nickel et al., 2016a). Neural embedding techniques that learn vector space representations of entities and relationships have achieved remarkable success in this task. However, these techniques only focus on learning from a single graph. In addition to incompleteness property, these knowledge graphs also share a set of overlapping entities and relationships with varying information about them. This makes a compelling case to design a technique that can learn over multiple graphs and eventually aid in constructing a unified giant graph out of them. 
While research on learning representations over single graph has progressed rapidly in recent years (Nickel et al., 2011; Dong et al., 2014; Trouillon et al., 2016; Bordes et al., 2013; Xiao et al., 2016; Yang et al., 2015), there is a conspicuous lack of principled approach to tackle the unique challenges involved in learning across multiple graphs. One approach to multi-graph representation learning could be to first solve graph alignment problem to merge the graphs and then use existing relational learning methods on merged graph. Unfortunately, graph alignment is an important but still unsolved problem and there exist several techniques addressing its challenges (Liu and Yang, 2016; Pershina et al., 2015; Koutra et al., 2013; Buneman and Staworko, 2016) in limited settings. 253 The key challenges for the graph alignment problem emanate from the fact that the real world data are noisy and intricate in nature. The noisy or sparse data make it difficult to learn robust alignment features, and data abundance leads to computational challenges due to the combinatorial permutations needed for alignment. These challenges are compounded in multi-relational settings due to heterogeneous nodes and edges in such graphs. Recently, deep learning has shown significant impact in learning useful information over noisy, large-scale and heterogeneous graph data (Rossi et al., 2017). We, therefore, posit that combining graph alignment task with deep representation learning across multi-relational graphs has potential to induce a synergistic effect on both tasks. Specifically, we identify that a key component of graph alignment process—entity linkage—also plays a vital role in learning across graphs. For instance, the embeddings learned over two knowledge graphs for an actor should be closer to one another compared to the embeddings of all the other entities. Similarly, the entities that are already aligned together across the two graphs should produce better embeddings due to the shared context and data. To model this phenomenon, we propose LinkNBed, a novel deep learning framework that jointly performs representation learning and graph linkage task. To achieve this, we identify key challenges involved in the learning process and make the following contributions to address them: • We propose novel and principled approach towards jointly learning entity representations and entity linkage. The novelty of our framework stems from its ability to support linkage task across heterogeneous types of entities. • We devise a graph-independent inductive framework that learns functions to capture contextual information for entities and relations. It combines the structural and semantic information in individual graphs for joint inference in a principled manner. • Labeled instances (specifically positive instances for linkage task) are typically very sparse and hence we design a novel multi-task loss function where entity linkage task is tackled in robust manner across various learning scenarios such as learning only with unlabeled instances or only with negative instances. • We design an efficient training procedure to perform joint training in linear time in the number of triples. We demonstrate superior performance of our method on two datasets curated from Freebase and IMDB against stateof-the-art neural embedding methods. 
2 Preliminaries 2.1 Knowledge Graph Representation A knowledge graph G comprises of set of facts represented as triplets (es, r, eo) denoting the relationship r between subject entity es and object entity eo. Associated to this knowledge graph, we have a set of attributes that describe observed characteristics of an entity. Attributes are represented as set of key-value pairs for each entity and an attribute can have null (missing) value for an entity. We follow Open World Assumption - triplets not observed in knowledge graph are considered to be missing but not false. We assume that there are no duplicate triplets or self-loops. 2.2 Multi-Graph Relational Learning Definition. Given a collection of knowledge graphs G, Multi-Graph Relational Learning refers to the the task of learning information rich representations of entities and relationships across graphs. The learned embeddings can further be used to infer new knowledge in the form of link prediction or learn new labels in the form of entity linkage. We motivate our work with the setting of two knowledge graphs where given two graphs G1, G2 ∈G, the task is to match an entity eG1 ∈G1 to an entity eG2 ∈G2 if they represent the same real-world entity. We discuss a straightforward extension of this setting to more than two graphs in Section 7. Notations. Let X and Y represent realization of two such knowledge graphs extracted from two different sources. Let nX e and nY e represent number of entities in X and Y respectively. Similarly, nX r and nY r represent number of relations in X and Y . We combine triplets from both X and Y to obtain set of all observed triplets D = {(es, r, eo)p}P p=1 where P is total number of available records across from both graphs. Let E and R be the set of all entities and all relations in D respectively. Let |E| = n and |R| = m. In addition to D, we also have set of linkage labels L for entities between X and Y . Each record in L is represented as triplet (eX ∈X, eY ∈Y , l ∈{0, 1}) where l = 1 when the entities are matched and l = 0 otherwise. 254 3 Proposed Method: LinkNBed We present a novel inductive multi-graph relational learning framework that learns a set of aggregator functions capable of ingesting various contextual information for both entities and relationships in multi-relational graph. These functions encode the ingested structural and semantic information into low-dimensional entity and relation embeddings. Further, we use these representations to learn a relational score function that computes how two entities are likely to be connected in a particular relationship. The key idea behind this formulation is that when a triplet is observed, the relationship between the two entities can be explained using various contextual information such as local neighborhood features of both entities, attribute features of both entities and type information of the entities which participate in that relationship. We outline two key insights for establishing the relationships between embeddings of the entities over multiple graphs in our framework: Insight 1 (Embedding Similarity): If the two entities eX ∈X and eY ∈Y represent the same real-world entity then their embeddings eX and eY will be close to each other. Insight 2 (Semantic Replacement): For a given triplet t = (es, r, eo) ∈X, denote g(t) as the function that computes a relational score for t using entity and relation embeddings. If there exists a matching entity es′ ∈Y for es ∈X, denote t′ = (es′, r, eo) obtained after replacing es with es′. 
In this case, g(t) ∼g(t′) i.e. score of triplets t and t′ will be similar. For a triplet (es, r, eo) ∈D, we describe encoding mechanism of LinkNBed as three-layered architecture that computes the final output representations of zr, zes, zeo for the given triplet. Figure 1 provides an overview of LinkNBed architecture and we describe the three steps below: 3.1 Atomic Layer Entities, Relations, Types and Attributes are first encoded in its basic vector representations. We use these basic representations to derive more complex contextual embeddings further. Entities, Relations and Types. The embedding vectors corresponding to these three components are learned as follows: ves = f(WEes) veo = f(WEeo) (1) vr = f(WRr) vt = f(WTt) (2) where ves,veo ∈Rd. es, eo ∈Rn are “one-hot” representations of es and eo respectively. vr ∈ Rk and r ∈Rm is “one-hot” representation of r. vt ∈Rq and t ∈Rz is ”one-hot” representation of t . WE ∈Rd×n, WR ∈Rk×m and WT ∈ Rq×z are the entity, relation and type embedding matrices respectively. f is a nonlinear activation function (Relu in our case). WE, WR and WT can be initialized randomly or using pre-trained word embeddings or vector compositions based on name phrases of components (Socher et al., 2013). Attributes. For a given attribute a represented as key-value pair, we use paragraph2vec (Le and Mikolov, 2014) type of embedding network to learn attribute embedding. Specifically, we represent attribute embedding vector as: a = f(Wkeyakey + Wvalaval) (3) where a ∈Ry, akey ∈Ru and aval ∈Rv. Wkey ∈Ry×u and Wval ∈Ry×v. akey will be “one-hot” vector and aval will be feature vector. Note that the dimensions of the embedding vectors do not necessarily need to be the same. 3.2 Contextual Layer While the entity and relationship embeddings described above help to capture very generic latent features, embeddings can be further enriched to capture structural information, attribute information and type information to better explain the existence of a fact. Such information can be modeled as context of nodes and edges in the graph. To this end, we design the following canonical aggregator function that learns various contextual information by aggregating over relevant embedding vectors: c(z) = AGG({z′, ∀z′ ∈C(z)}) (4) where c(z) is the vector representation of the aggregated contextual information for component z. Here, component z can be either an entity or a relation. C(z) is the set of components in the context of z and z′ correspond to the vector embeddings of those components. AGG is the aggregator function which can take many forms such Mean, Max, Pooling or more complex LSTM based aggregators. It is plausible that different components in a context may have varied impact on the component for which the embedding is being learned. To account for this, we employ a soft attention mechanism where we learn attention coefficients 255 Figure 1: LinkNBed Architecture Overview - one step score computation for a given triplet (es, r, eo). The Attribute embeddings are not simple lookups but they are learned as shown in Eq 3 to weight components based on their impact before aggregating them. We modify Eq. 4 as: c(z) = AGG(q(z) ∗{z′, ∀z′ ∈C(z)}) (5) where q(z) = exp(θz) P z′∈C(z) exp(θz′) (6) and θz’s are the parameters of attention model. Following contextual information is modeled in our framework: Entity Neighborhood Context Nc(e) ∈ Rd. Given a triplet (es, r, eo), the neighborhood context for an entity es will be the nodes located near es other than the node eo. 
This will capture the effect of local neighborhood in the graph surrounding es that drives es to participate in fact (es, r, eo). We use Mean as aggregator function. As there can be large number of neighbors, we collect the neighborhood set for each entity as a pre-processing step using a random walk method. Specifically, given a node e, we run k rounds of random-walks of length l following (Hamilton et al., 2017) and create set N(e) by adding all unique nodes visited across these walks. This context can be similarly computed for object entity. Entity Attribute Context Ac(e) ∈Ry. For an entity e, we collect all attribute embeddings for e obtained from Atomic Layer and learn aggregated information over them using Max operator given in Eq. 4. Relation Type Context Tc(r) ∈Rq. We use type context for relation embedding i.e. for a given relationship r, this context aims at capturing the effect of type of entities that have participated in this relationship. For a given triplet (es, r, eo), type context for relationship r is computed by aggregation with mean over type embeddings corresponding to the context of r. Appendix C provides specific forms of contextual information. 3.3 Representation Layer Having computed the atomic and contextual embeddings for a triplet (es, r, eo), we obtain the final embedded representations of entities and relation in the triplet using the following formulation: zes = σ( W1ves | {z } Subject Entity Embedding + W2Nc(es) | {z } Neighborhood Context + W3Ac(es)) | {z } Subject Entity Attributes (7) zeo = σ( W1veo | {z } Object Entity Embedding + W2Nc(eo) | {z } Neighborhood Context + W3Ac(eo)) | {z } Object Entity Attributes (8) zr = σ( W4vr | {z } Relation Embedding + W5Tc(r)) | {z } Entity Type Context (9) where W1, W2 ∈Rd×d, W3 ∈Rd×y, W4 ∈ Rd×k and W5 ∈Rd×q. σ is nonlinear activation function – generally Tanh or Relu. 256 Following is the rationale for our formulation: An entity’s representation can be enriched by encoding information about the local neighborhood features and attribute information associated with the entity in addition to its own latent features. Parameters W1, W2, W3 learn to capture these different aspects and map them into the entity embedding space. Similarly, a relation’s representation can be enriched by encoding information about entity types that participate in that relationship in addition to its own latent features. Parameters W4, W5 learn to capture these aspects and map them into the relation embedding space. Further, as the ultimate goal is to jointly learn over multiple graphs, shared parameterization in our model facilitate the propagation of information across graphs thereby making it a graph-independent inductive model. The flexibility of the model stems from the ability to shrink it (to a very simple model considering atomic entity and relation embeddings only) or expand it (to a complex model by adding different contextual information) without affecting any other step in the learning procedure. 3.4 Relational Score Function Having observed a triplet (es, r, eo), we first use Eq. 7, 8 and 9 to compute entity and relation representations. We then use these embeddings to capture relational interaction between two entities using the following score function g(·): g(es, r, eo) = σ(zrT · (zes ⊙zeo)) (10) where zr, zes, zeo ∈Rd are d-dimensional representations of entity and relationships as described below. σ is the nonlinear activation function and ⊙ represent element-wise product. 
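As a minimal illustration of Eqs. 7–10, the representation layer and the relational score function can be written as below. The sketch assumes the atomic embeddings and the aggregated contexts Nc(·), Ac(·) and Tc(·) have already been computed as described above; tanh and the logistic sigmoid are used here to instantiate the generic nonlinearity σ, and the dimensions follow the notation in the text, but the implementation itself is ours rather than the authors'.

```python
# A minimal sketch of Eqs. 7-10 (representation layer and relational score),
# assuming the atomic embeddings v_e, v_r and the aggregated contexts
# N_c, A_c, T_c are precomputed. tanh / sigmoid instantiate the generic
# nonlinearities; dimensions d, y, k, q follow the notation in the text.
import torch
import torch.nn as nn

class LinkNBedScorer(nn.Module):
    def __init__(self, d, y, k, q):
        super().__init__()
        self.W1 = nn.Linear(d, d, bias=False)   # entity embedding -> entity space
        self.W2 = nn.Linear(d, d, bias=False)   # neighborhood context
        self.W3 = nn.Linear(y, d, bias=False)   # attribute context
        self.W4 = nn.Linear(k, d, bias=False)   # relation embedding
        self.W5 = nn.Linear(q, d, bias=False)   # relation type context

    def entity_repr(self, v_e, nbr_ctx, attr_ctx):           # Eqs. 7 and 8
        return torch.tanh(self.W1(v_e) + self.W2(nbr_ctx) + self.W3(attr_ctx))

    def relation_repr(self, v_r, type_ctx):                  # Eq. 9
        return torch.tanh(self.W4(v_r) + self.W5(type_ctx))

    def score(self, z_s, z_r, z_o):                          # Eq. 10
        # z_r^T . (z_s element-wise-product z_o), squashed by a sigmoid
        return torch.sigmoid((z_r * z_s * z_o).sum(dim=-1))
```

The margin-based objectives introduced next contrast this score between observed and corrupted (or label-substituted) triplets.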
4 Efficient Learning Procedure 4.1 Objective Function The complete parameter space of the model can be given by: Ω= {{Wi}5 i=1, WE, WR, Wkey, Wval, Wt, Θ}. To learn these parameters, we design a novel multitask objective function that jointly trains over two graphs. As identified earlier, the goal of our model is to leverage the available linkage information across graphs for optimizing the entity and relation embeddings such that they can explain the observed triplets across the graphs. Further, we want to leverage these optimized embeddings to match entities across graphs and expand the available linkage information. To achieve this goal, we define following two different loss functions catering to each learning task and jointly optimize over them as a multi-task objective to learn model parameters: Relational Learning Loss. This is conventional loss function used to learn knowledge graph embeddings. Specifically, given a p-th triplet (es, r, eo)p from training set D, we sample C negative samples by replacing either head or tail entity and define a contrastive max margin function as shown in (Socher et al., 2013): Lrel = C X c=1 max(0, γ −g(es p, rp, eo p) + g′(es c, rp, eo p)) (11) where, γ is margin, es c represent corrupted entity and g′(es c, rp, eo p) represent corrupted triplet score. Linkage Learning Loss: We design a novel loss function to leverage pairwise label set L. Given a triplet (es X, rX, eo X) from knowledge graph X, we first find the entity e+ Y from graph Y that represent the same real-world entity as es X. We then replace es X with e+ Y and compute score g(e+ Y , rX, eo X). Next, we find set of all entities E− Y from graph Y that has a negative label with entity es X. We consider them analogous to the negative samples we generated for Eq. 11. We then propose the label learning loss function as: Llab = Z X z=1 max(0, γ −g(e+ Y , rX, eo X) + (g′(e− Y , rX, eo X)z)) (12) where, Z is the total number of negative labels for eX. γ is margin which is usually set to 1 and e− Y ∈E− Y represent entity from graph Y with which entity es X had a negative label. Please note that this applies symmetrically for the triplets that originate from graph Y in the overall dataset. Note that if both entities of a triplet have labels, we will include both cases when computing the loss. Eq. 12 is inspired by Insight 1 and Insight 2 defined earlier in Section 2. Given a set D of N observed triplets across two graphs, we define complete multi-task objective as: L(Ω) = N X i=1 [b·Lrel+(1−b)·Llab]+λ ∥Ω∥2 2 (13) 257 Algorithm 1 LinkNBed mini-batch Training Input: Mini-batch M, Negative Sample Size C, Negative Label Size Z, Attribute data att data, Neighborhood data nhbr data, Type data type data, Positive Label Dict pos dict, Negative Label Dict neg dict Output: Mini-batch Loss LM. LM = 0 score pos = []; score neg = []; score pos lab = []; score neg lab = [] for i = 0 to size(M) do input tuple = M[i] = (es, r, eo) sc = compute triplet score(es, r, eo) (Eq. 
10) score pos.append(sc) for j = 0 to C do Select es c from entity list such that es c ̸= es and es c ̸= eo and (es c, r, eo) /∈D sc neg = compute triplet score(es c, r, eo) score neg.append(sc neg) end for if es in pos dict then es+ = positive label for es sc pos l = compute triplet score(es+, r, eo) score pos lab.append(sc pos l) end if for k = 0 to Z do Select es−from neg dict sc neg l = compute triplet score(es−, r, eo) score neg lab.append(sc neg l) end for end for LM += compute minibatch loss(score pos, score neg, score pos lab, score neg lab) (Eq. 13) Back-propagate errors and update parameters Ω return LM where Ωis set of all model parameters and λ is regularization hyper-parameter. b is weight hyperparameter used to attribute importance to each task. We train with mini-batch SGD procedure (Algorithm 1) using Adam Optimizer. Missing Positive Labels. It is expensive to obtain positive labels across multiple graphs and hence it is highly likely that many entities will not have positive labels available. For those entities, we will modify Eq. 12 to use the original triplet (es X, rX, eo X) in place of perturbed triplet g(e+ Y , rX, eo X) for the positive label. The rationale here again arises from Insight 2 wherein embeddings of two duplicate entities should be able to replace each other without affecting the score. Training Time Complexity. Most contextual information is pre-computed and available to all training steps which leads to constant time embedding lookup for those context. But for attribute network, embedding needs to be computed for each attribute separately and hence the complexity to compute score for one triplet is O(2a) where a is number of attributes. Also for training, we generate C negative samples for relational loss function and use Z negative labels for label loss function. Let k = C + Z. Hence, the training time complexity for a set of n triplets will be O(2ak ∗n) which is linear in number of triplets with a constant factor as ak << n for real world knowledge graphs. This is desirable as the number of triplets tend to be very large per graph in multi-relational settings. Memory Complexity. We borrow notations from (Nickel et al., 2016a) and describe the parameter complexity of our model in terms of the number of each component and corresponding embedding dimension requirements. Let Ha = 2∗NeHe+NrHr+NtHt+NkHk+NvHv. The parameter complexity of our model is: Ha ∗(Hb +1). Here, Ne, Nr, Nt, Nk, Nv signify number of entities, relations, types, attribute keys and vocab size of attribute values across both datasets. Here Hb is the output dimension of the hidden layer. 5 Experiments 5.1 Datasets We evaluate LinkNBed and baselines on two real world knowledge graphs: D-IMDB (derived from large scale IMDB data snapshot) and D-FB (derived from large scale Freebase data snapshot). Table 5.1 provides statistics for our final dataset used in the experiments. Appendix B.1 provides complete details about dataset processing. Dataset # Entities # Relations # Attributes # Entity # Available Name Types Triples D-IMDB 378207 38 23 41 143928582 D-FB 39667 146 69 324 22140475 Table 1: Statistics for Datasets: D-IMDB and D-FB 5.2 Baselines We compare the performance of our method against state-of-the-art representation learning baselines that use neural embedding techniques to learn entity and relation representation. 
Specifically, we consider compositional methods of 258 RESCAL (Nickel et al., 2011) as basic matrix factorization method, DISTMULT (Yang et al., 2015) as simple multiplicative model good for capturing symmetric relationships, and Complex (Trouillon et al., 2016), an upgrade over DISTMULT that can capture asymmetric relationships using complex valued embeddings. We also compare against translational model of STransE that combined original structured embedding with TransE and has shown state-of-art performance in benchmark testing (Kadlec et al., 2017). Finally, we compare with GAKE (Feng et al., 2016), a model that captures context in entity and relationship representations. In addition to the above state-of-art models, we analyze the effectiveness of different components of our model by comparing with various versions that use partial information. Specifically, we report results on following variants: LinkNBed - Embed Only. Only use entity embeddings, LinkNBed - Attr Only. Only use Attribute Context, LinkNBed - Nhbr Only. Only use Neighborhood Context, LinkNBed - Embed + Attr. Use both Entity embeddings and Attribute Context, LinkNBed - Embed + Nhbr. Use both Entity embeddings and Neighbor Context and LinkNBed - Embed All. Use all three Contexts. 5.3 Evaluation Scheme We evaluate our model using two inference tasks: Link Prediction. Given a test triplet (es, r, eo), we first score this triplet using Eq. 10. We then replace eo with all other entities in the dataset and filter the resulting set of triplets as shown in (Bordes et al., 2013). We score the remaining set of perturbed triplets using Eq. 10. All the scored triplets are sorted based on the scores and then the rank of the ground truth triplet is used for the evaluation. We use this ranking mechanism to compute HITS@10 (predicted rank ≤10) and reciprocal rank ( 1 rank) of each test triplet. We report the mean over all test samples. Entity Linkage. In alignment with Insight 2, we pose a novel evaluation scheme to perform entity linkage. Let there be two ground truth test sample triplets: (eX, e+ Y , 1) representing a positive duplicate label and (eX, e− Y , 0) representing a negative duplicate label. Algorithm 2 outlines the procedure to compute linkage probability or score q (∈[0, 1]) for the pair (eX, eY ). We use L1 distance between the two vectors analogous Algorithm 2 Entity Linkage Score Computation Input: Test pair – (eX ∈X, eY ∈Y ). Output: Linkage Score – q. 1. Collect all triplets involving eX from graph X and all triplets involving eY from graph Y into a combined set O. Let |O| = k. 2. Construct Sorig ∈Rk. For each triplet o ∈O, compute score g(o) using Eq. 10 and store the score in Sorig. 3. Create triplet set O′ as following: if o ∈O contain eX ∈X then Replace eX with eY to create perturbed triplet o′ and store it in O′ end if if o ∈O contain eY ∈Y then Replace eY with eX to create perturbed triplet o′ and store it in O′ end if 4. Construct Srepl ∈Rk. For each triplet o′ ∈O′, compute score g(o′) using Eq. 10 and store the score in Srepl. 5. Compute q. Elements in Sorig and Srepl have one-one correspondence so take the mean absolute difference: q = |Sorig - Srepl|1 return q to Mean Absolute Error (MAE). In lieu of hard-labeling test pairs, we use score q to compute Area Under the Precision-Recall Curve (AUPRC). For the baselines and the unsupervised version (with no labels for entity linkage) of our model, we use second stage multilayer Neural Network as classifier for evaluating entity linkage. 
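Algorithm 2 reduces to a short routine once a triplet scoring function is available. In the sketch below, score(·) stands for g(·) from Eq. 10 applied to a (subject, relation, object) triplet and triplets_with(·) for a lookup of the observed triplets containing a given entity; both names are placeholders rather than the authors' code.

```python
# A sketch of Algorithm 2 (entity linkage score). score(.) and triplets_with(.)
# are placeholder callables: the former implements g(.) from Eq. 10 and returns
# a float, the latter returns the observed triplets containing an entity.
def linkage_score(e_x, e_y, score, triplets_with):
    pool = triplets_with(e_x) + triplets_with(e_y)          # step 1: the set O
    diffs = []
    for s, r, o in pool:
        original = score((s, r, o))                         # step 2: S_orig
        # step 3: swap e_x and e_y wherever either appears in the triplet
        s2 = e_y if s == e_x else (e_x if s == e_y else s)
        o2 = e_y if o == e_x else (e_x if o == e_y else o)
        replaced = score((s2, r, o2))                       # step 4: S_repl
        diffs.append(abs(original - replaced))
    # step 5: mean absolute difference; a small value means the two entities
    # are interchangeable in their triplets (Insight 2)
    return sum(diffs) / len(diffs)
```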
Appendix B.2 provides training configuration details. 5.4 Predictive Analysis Link Prediction Results. We train LinkNBed model jointly across two knowledge graphs and then perform inference over individual graphs to report link prediction reports. For baselines, we train each baseline on individual graphs and use parameters specific to the graph to perform link prediction inference over each individual graph. Table 5.4 shows link prediction performance for all methods. Our model variant with attention mechanism outperforms all the baselines with 4.15% improvement over single graph state-of-the-art Complex model on D-IMDB and 8.23% improvement on DFB dataset. D-FB is more challenging dataset to 259 Method D-IMDB-HITS10 D-IMDB-MRR D-FB-HITS10 D-FB-MRR RESCAL 75.3 0.592 69.99 0.147 DISTMULT 79.5 0.691 72.34 0.556 Complex 83.2 0.752 75.67 0.629 STransE 80.7 0.421 69.87 0.397 GAKE 69.5 0.114 63.22 0.093 LinkNBed-Embed Only 79.9 0.612 73.2 0.519 LinkNBed-Attr Only 82.2 0.676 74.7 0.588 LinkNBed-Nhbr Only 80.1 0.577 73.4 0.572 LinkNBed-Embed + Attr 84.2 0.673 78.39 0.606 LinkNBed-Embed + Nhbr 81.7 0.544 73.45 0.563 LinkNBed-Embed All 84.3 0.725 80.2 0.632 LinkNBed-Embed All (Attention) 86.8 0.733 81.9 0.677 Improvement (%) 4.15 1.10 8.23 7.09 Table 2: Link Prediction Results on both datasets learn as it has a large set of sparse relationships, types and attributes and it has an order of magnitude lesser relational evidence (number of triplets) compared to D-IMDB. Hence, LinkNBed’s pronounced improvement on D-FB demonstrates the effectiveness of the model. The simplest version of LinkNBed with only entity embeddings resembles DISTMULT model with different objective function. Hence closer performance of those two models aligns with expected outcome. We observed that the Neighborhood context alone provides only marginal improvements while the model benefits more from the use of attributes. Despite being marginal, attention mechanism also improves accuracy for both datasets. Compared to the baselines which are obtained by trained and evaluated on individual graphs, our superior performance demonstrates the effectiveness of multi-graph learning. Entity Linkage Results. We report entity linkage results for our method in two settings: a.) Supervised case where we train using both the objective functions. b.) Unsupervised case where we learn with only the relational loss function. The latter case resembles the baseline training where each model is trained separately on two graphs in an unsupervised manner. For performing the entity linkage in unsupervised case for all models, we first train a second stage of simple neural network classifier and then perform inference. In the supervised case, we use Algorithm 2 for performing the inference. Table 5.4 demonstrates the performance of all methods on this task. Our method significantly outperforms all the baselines with 33.86% over second best baseline in supervised case and 17.35% better performance in unsupervised case. The difference in the performance of our method in two cases demonstrate that the two training objectives are helping one another by learning across the graphs. 
GAKE’s superior performance on this task compared to the other state-of-the-art relational baselines shows the importance of using contexMethod AUPRC (Supervised) AUPRC (Unsupervised) RESCAL 0.327 DISTMULT 0.292 Complex 0.359 STransE 0.231 GAKE 0.457 LinkNBed-Embed Only 0.376 0.304 LinkNBed-Attr Only 0.451 0.397 LinkNBed-Nhbr Only 0.388 0.322 LinkNBed-Embed + Attr 0.512 0.414 LinkNBed-Embed + Nhbr 0.429 0.356 LinkNBed-Embed All 0.686 0.512 LinkNBed-Embed All (Attention) 0.691 0.553 Improvement (%) 33.86 17.35 Table 3: Entity Linkage Results - Unsupervised case uses classifier at second step tual information for entity linkage. Performance of other variants of our model again demonstrate that attribute information is more helpful than neighborhood context and attention provides marginal improvements. We provide further insights with examples and detailed discussion on entity linkage task in Appendix A. 6 Related Work 6.1 Neural Embedding Methods for Relational Learning Compositional Models learn representations by various composition operators on entity and relational embeddings. These models are multiplicative in nature and highly expressive but often suffer from scalability issues. Initial models include RESCAL (Nickel et al., 2011) that uses a relation specific weight matrix to explain triplets via pairwise interactions of latent features, Neural Tensor Network (Socher et al., 2013), more expressive model that combines a standard NN layer with a bilinear tensor layer and (Dong et al., 2014) that employs a concatenation-projection method to project entities and relations to lower dimensional space. Later, many sophisticated models (Neural Association Model (Liu et al., 2016), HoLE (Nickel et al., 2016b)) have been proposed. Path based composition models (Toutanova et al., 2016) and contextual models GAKE (Feng et al., 2016) have been recently studied to capture more information from graphs. Recently, model like Complex (Trouillon et al., 2016) and Analogy (Liu et al., 2017) have demonstrated state-of-the art performance on relational learning tasks. Translational Models ( (Bordes et al., 2014), (Bordes et al., 2011), (Bordes et al., 2013), (Wang et al., 2014), (Lin et al., 2015), (Xiao et al., 2016)) learn representation by 260 employing translational operators on the embeddings and optimizing based on their score. They offer an additive and efficient alternative to expensive multiplicative models. Due to their simplicity, they often loose expressive power. For a comprehensive survey of relational learning methods and empirical comparisons, we refer the readers to (Nickel et al., 2016a), (Kadlec et al., 2017), (Toutanova and Chen, 2015) and (Yang et al., 2015). None of these methods address multi-graph relational learning and cannot be adapted to tasks like entity linkage in straightforward manner. 6.2 Entity Resolution in Relational Data Entity Resolution refers to resolving entities available in knowledge graphs with entity mentions in text. (Dredze et al., 2010) proposed entity disambiguation method for KB population, (He et al., 2013) learns entity embeddings for resolution, (Huang et al., 2015) propose a sophisticated DNN architecture for resolution, (Campbell et al., 2016) proposes entity resolution across multiple social domains, (Fang et al., 2016) jointly embeds text and knowledge graph to perform resolution while (Globerson et al., 2016) proposes Attention Mechanism for Collective Entity Resolution. 
6.3 Learning across multiple graphs Recently, learning over multiple graphs have gained traction. (Liu and Yang, 2016) divides a multi-relational graph into multiple homogeneous graphs and learns associations across them by employing product operator. Unlike our work, they do not learn across multiple multi-relational graphs. (Pujara and Getoor, 2016) provides logic based insights for cross learning, (Pershina et al., 2015) does pairwise entity matching across multirelational graphs and is very expensive, (Chen et al., 2017) learns embeddings to support multi-lingual learning and Big-Align (Koutra et al., 2013) tackles graph alignment problem efficiently for bipartite graphs. None of these methods learn latent representations or jointly train graph alignment and learning which is the goal of our work. 7 Concluding Remarks and Future Work We present a novel relational learning framework that learns entity and relationship embeddings across multiple graphs. The proposed representation learning framework leverage an efficient learning and inference procedure which takes into account the duplicate entities representing the same real-world entity in a multi-graph setting. We demonstrate superior accuracies on link prediction and entity linkage tasks compared to the existing approaches that are trained only on individual graphs. We believe that this work opens a new research direction in joint representation learning over multiple knowledge graphs. Many data driven organizations such as Google and Microsoft take the approach of constructing a unified super-graph by integrating data from multiple sources. Such unification has shown to significantly help in various applications, such as search, question answering, and personal assistance. To this end, there exists a rich body of work on linking entities and relations, and conflict resolution (e.g., knowledge fusion (Dong et al., 2014). Still, the problem remains challenging for large scale knowledge graphs and this paper proposes a deep learning solution that can play a vital role in this construction process. In real-world setting, we envision our method to be integrated in a large scale system that would include various other components for tasks like conflict resolution, active learning and human-in-loop learning to ensure quality of constructed super-graph. However, we point out that our method is not restricted to such use cases—one can readily apply our method to directly make inference over multiple graphs to support applications like question answering and conversations. For future work, we would like to extend the current evaluation of our work from a two-graph setting to multiple graphs. A straightforward approach is to create a unified dataset out of more than two graphs by combining set of triplets as described in Section 2, and apply learning and inference on the unified graph without any major change in the methodology. Our inductive framework learns functions to encode contextual information and hence is graph independent. Alternatively, one can develop sophisticated approaches with iterative merging and learning over pairs of graphs until exhausting all graphs in an input collection. Acknowledgments We would like to give special thanks to Ben London, Tong Zhao, Arash Einolghozati, Andrew Borthwick and many others at Amazon for helpful comments and discussions. We thank the reviewers for their valuable comments and efforts towards improving our manuscript. This project was supported in part by NSF(IIS-1639792, IIS-1717916). 
261 References S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. DBpedia: A nucleus for a web of open data. In The Semantic Web. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In SIGMOD Conference. Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014. A semantic matching energy function for learning with multi-relational data. Machine Learning. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in neural information processing systems, pages 2787–2795. Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning structured embeddings of knowledge bases. In AAAI. Peter Buneman and Slawek Staworko. 2016. Rdf graph alignment with bisimulation. Proc. VLDB Endow. W. M. Campbell, Lin Li, C. Dagli, J. AcevedoAviles, K. Geyer, J. P. Campbell, and C. Priebe. 2016. Cross-domain entity resolution in social media. arXiv:1608.01386v1. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr., and Tom M. Mitchell. 2010. Toward an architecture for neverending language learning. In Proceedings of the Twenty-Fourth Conference on Artificial Intelligence (AAAI 2010). Rose Catherine and William Cohen. 2016. Personalized recommendations using knowledge graphs: A probabilistic logic programming approach. In Proceedings of the 10th ACM Conference on Recommender Systems. Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. Wanyun Cui, Yanghua Xiao, Haixun Wang, Yangqiu Song, Seung-won Hwang, and Wei Wang. 2017. Kbqa: Learning question answering over qa corpora and knowledge bases. Proc. VLDB Endow. Jeffrey Dalton, Laura Dietz, and James Allan. 2014. Entity query feature expansion using knowledge base links. In Proceedings of the 37th International ACM SIGIR Conference on Research &#38; Development in Information Retrieval. Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 601–610. Mark Dredze, Paul McNamee, Delip Rao, Adam Gerber, and Tim Finin. 2010. Entity disambiguation for knowledge base population. In Proceedings of the 23rd International Conference on Computational Linguistics. Wei Fang, Jianwen Zhang, Dilin Wang, Zheng Chen, and Ming Li. 2016. Entity disambiguation by knowledge and text jointly embedding. In CoNLL. Jun Feng, Minlie Huang, Yang Yang, and Xiaoyan Zhu. 2016. Gake: Graph aware knowledge embedding. In COLING. Evgeniy Gabrilovich and Shaul Markovitch. 2009. Wikipedia-based semantic interpretation for natural language processing. J. Artif. Int. Res. Amir Globerson, Nevena Lazic, Soumen Chakrabarti, Amarnag Subramanya, Michael Ringaard, and Fernando Pereira. 2016. Collective entity resolution with multi-focal attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. William L. Hamilton, Rex Ying, and Jure Leskovec. 2017. Representation learning on graphs: Methods and applications. arXiv:1709.05584. 
Zhengyan He, Shujie Liu, Mu Li, Ming Zhou, Longkai Zhang, and Houfeng Wang. 2013. Learning entity representation for entity disambiguation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. Hongzhao Huang, Larry Heck, and Heng Ji. 2015. Leveraging deep neural networks and knowledge graphs for entity disambiguation. arXiv:1504.07678v1. Rudolph Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2017. Knowledge base completion: Baselines strike back. In Proceedings of the 2nd Workshop on Representation Learning for NLP. Danai Koutra, HangHang Tong, and David Lubensky. 2013. Big-align: Fast bipartite graph alignment. In 2013 IEEE 13th International Conference on Data Mining. Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning. Yankai Lin, Zhiyuan Liu, Maosong Sun, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. AAAI Conference on Artificial Intelligence. 262 Hanxiao Liu, Yuexin Wu, and Yimin Yang. 2017. Analogical inference for multi-relatinal embeddings. In Proceedings of the 34th International Conference on Machine Learning. Hanxiao Liu and Yimin Yang. 2016. Cross-graph learning of multi-relational associations. In Proceedings of the 33rd International Conference on Machine Learning. Quan Liu, Hui Jiang, Andrew Evdokimov, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, and Yu Hu. 2016. Probabilistic reasoning via deep learning: Neural association models. arXiv:1603.07704v2. Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2016a. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE. Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016b. Holographic embeddings of knowledge graphs. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 809–816. Maria Pershina, Mohamed Yakout, and Kaushik Chakrabarti. 2015. Holistic entity matching across knowledge graphs. In 2015 IEEE International Conference on Big Data (Big Data). Jay Pujara and Lise Getoor. 2016. Generic statistical relational entity resolution in knowledge graphs. In Sixth International Workshop on Statistical Relational AI. Ryan A. Rossi, Rong Zhou, and Nesreen K. Ahmed. 2017. Deep feature learning for graphs. arXiv:1704.08829. Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, pages 926–934. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: A core of semantic knowledge. In Proceedings of the 16th International Conference on World Wide Web. Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In ACL. Kristina Toutanova, Xi Victoria Lin, Wen-tau Yih, Hoifung Poon, and Chris Quirk. 2016. Compositional learning of embeddings for relation paths in knowledge bases and text. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1434–1444. Theo Trouillon, Johannes Welbl, Sebastian Riedel, Eric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceedings of the 33rd International Conference on Machine Learning. 
Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016. Transg: A generative model for knowledge graph embedding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. arXiv:1412.6575.
2018
24
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2577–2586 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2577 On the Automatic Generation of Medical Imaging Reports Baoyu Jing†* Pengtao Xie†* Eric P. Xing† †Petuum Inc, USA *School of Computer Science, Carnegie Mellon University, USA {baoyu.jing, pengtao.xie, eric.xing}@petuum.com Abstract Medical imaging is widely used in clinical practice for diagnosis and treatment. Report-writing can be error-prone for unexperienced physicians, and timeconsuming and tedious for experienced physicians. To address these issues, we study the automatic generation of medical imaging reports. This task presents several challenges. First, a complete report contains multiple heterogeneous forms of information, including findings and tags. Second, abnormal regions in medical images are difficult to identify. Third, the reports are typically long, containing multiple sentences. To cope with these challenges, we (1) build a multi-task learning framework which jointly performs the prediction of tags and the generation of paragraphs, (2) propose a co-attention mechanism to localize regions containing abnormalities and generate narrations for them, (3) develop a hierarchical LSTM model to generate long paragraphs. We demonstrate the effectiveness of the proposed methods on two publicly available datasets. 1 Introduction Medical images, such as radiology and pathology images, are widely used in hospitals for the diagnosis and treatment of many diseases, such as pneumonia and pneumothorax. The reading and interpretation of medical images are usually conducted by specialized medical professionals. For example, radiology images are read by radiologists. They write textual reports (Figure 1) to narrate the findings regarding each area of the body examined in the imaging study, specifically Figure 1: An exemplar chest x-ray report. In the impression section, the radiologist provides a diagnosis. The findings section lists the radiology observations regarding each area of the body examined in the imaging study. The tags section lists the keywords which represent the critical information in the findings. These keywords are identified using the Medical Text Indexer (MTI). whether each area was found to be normal, abnormal or potentially abnormal. For less-experienced radiologists and pathologists, especially those working in the rural area where the quality of healthcare is relatively low, writing medical-imaging reports is demanding. For instance, to correctly read a chest x-ray image, the following skills are needed (Delrue et al., 2011): (1) thorough knowledge of the normal anatomy of the thorax, and the basic physiology of chest diseases; (2) skills of analyzing the radiograph through a fixed pattern; (3) ability of evaluating the evolution over time; (4) knowledge of clinical presentation and history; (5) knowledge of the correlation with other diagnostic results (laboratory results, electrocardiogram, and respiratory function tests). For experienced radiologists and pathologists, writing imaging reports is tedious and timeconsuming. In nations with large population such as China, a radiologist may need to read hundreds 2578 of radiology images per day. Typing the findings of each image into computer takes about 5-10 minutes, which occupies most of their working time. 
In sum, for both inexperienced and experienced medical professionals, writing imaging reports is unpleasant. This motivates us to investigate whether it is possible to automatically generate medical imaging reports. Several challenges need to be addressed. First, a complete diagnostic report comprises multiple heterogeneous forms of information. As shown in Figure 1, the report for a chest x-ray contains an impression, which is a sentence; findings, which form a paragraph; and tags, which are a list of keywords. Generating this heterogeneous information in a unified framework is technically demanding. We address this problem by building a multi-task framework, which treats the prediction of tags as a multi-label classification task and the generation of long descriptions as a text generation task. Second, it is challenging to localize image regions and attach the right descriptions to them. We solve these problems by introducing a co-attention mechanism, which simultaneously attends to images and predicted tags and explores the synergistic effects of visual and semantic information. Third, the descriptions in imaging reports are usually long, containing multiple sentences. Generating such long text is highly nontrivial. Rather than adopting a single-layer LSTM (Hochreiter and Schmidhuber, 1997), which is less capable of modeling long word sequences, we leverage the compositional nature of the report and adopt a hierarchical LSTM to produce long texts. Combined with the co-attention mechanism, the hierarchical LSTM first generates high-level topics, and then produces fine-grained descriptions according to the topics. Overall, the main contributions of our work are: • We propose a multi-task learning framework which can simultaneously predict the tags and generate the text descriptions. • We introduce a co-attention mechanism for localizing sub-regions in the image and generating the corresponding descriptions. • We build a hierarchical LSTM to generate long paragraphs. • We perform extensive experiments to show the effectiveness of the proposed methods. The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 introduces the method. Section 4 presents the experimental results, and Section 5 concludes the paper. 2 Related Work Textual labeling of medical images There have been several works aiming at attaching “texts” to medical images. In their settings, the target “texts” are either fully-structured or semi-structured (e.g. tags, templates), rather than natural texts. Kisilev et al. (2015) build a pipeline to predict the attributes of medical images. Shin et al. (2016) adopt a CNN-RNN based framework to predict tags (e.g. locations, severities) of chest x-ray images. The work closest to ours is (Zhang et al., 2017), which aims at generating semi-structured pathology reports whose contents are restricted to 5 predefined topics. However, in the real world, different physicians usually have different writing habits and different x-ray images present different abnormalities. Therefore, collecting semi-structured reports is less practical, and it is important to build models that learn from natural reports. To the best of our knowledge, our work is the first to generate truly natural reports written by physicians, which are usually long and cover diverse topics. Image captioning with deep learning Image captioning aims at automatically generating text descriptions for given images.
Most recent image captioning models are based on a CNN-RNN framework (Vinyals et al., 2015; Fang et al., 2015; Karpathy and Fei-Fei, 2015; Xu et al., 2015; You et al., 2016; Krause et al., 2017). Recently, attention mechanisms have been shown to be useful for image captioning (Xu et al., 2015; You et al., 2016). Xu et al. (2015) introduce a spatial-visual attention mechanism over image features extracted from intermediate layers of the CNN. You et al. (2016) propose a semantic attention mechanism over tags of given images. To better leverage both the visual features and semantic tags, we propose a co-attention mechanism for report generation. Instead of only generating one-sentence caption 2579 Figure 2: Illustration of the proposed model. MLC denotes a multi-label classification network. Semantic features are the word embeddings of the predicted tags. The boldfaced tags “calcified granuloma” and “granuloma” are attended by the co-attention network. for images, Krause et al. (2017) and Liang et al. (2017) generate paragraph captions using a hierarchical LSTM. Our method also adopts a hierarchical LSTM for paragraph generation, but unlike Krause et al. (2017), we use a co-attention network to generate topics. 3 Methods 3.1 Overview A complete diagnostic report for a medical image is comprised of both text descriptions (long paragraphs) and lists of tags, as shown in Figure 1. We propose a multi-task hierarchical model with coattention for automatically predicting keywords and generating long paragraphs. Given an image which is divided into regions, we use a CNN to learn visual features for these patches. Then these visual features are fed into a multi-label classification (MLC) network to predict the relevant tags. In the tag vocabulary, each tag is represented by a word-embedding vector. Given the predicted tags for a specific image, their word-embedding vectors serve as the semantic features of this image. Then the visual features and semantic features are fed into a co-attention model to generate a context vector that simultaneously captures the visual and semantic information of this image. As of now, the encoding process is completed. Next, starting from the context vector, the decoding process generates the text descriptions. The description of a medical image usually contains multiple sentences, and each sentence focuses on one specific topic. Our model leverages this compositional structure to generate reports in a hierarchical way: it first generates a sequence of high-level topic vectors representing sentences, then generates a sentence from each topic vector. Specifically, the context vector is inputted into a sentence LSTM, which unrolls for a few steps and produces a topic vector at each step. A topic vector represents the semantics of a sentence to be generated. Given a topic vector, the word LSTM takes it as input and generates a sequence of words to form a sentence. The termination of the unrolling process is controlled by the sentence LSTM. 3.2 Tag Prediction The first task of our model is predicting the tags of the given image. We treat the tag prediction task as a multi-label classification task. 
Specifically, given an image I, we first extract its features {vn}N n=1 ∈RD from an intermediate layer of a CNN, and then feed {vn}N n=1 into a multi-label classification (MLC) network to generate a distribution over all of the L tags: pl,pred(li = 1|{vn}N n=1) ∝exp(MLCi({vn}N n=1)) (1) where l ∈RL is a tag vector, li = 1/0 denote the presence and absence of the i-th tag respectively, and MLCi means the i-th output of the MLC network. For simplicity, we extract visual features from the last convolutional layer of the VGG-19 model (Simonyan and Zisserman, 2014) and use the last two fully connected layers of VGG-19 for MLC. Finally, the embeddings of the M most likely tags {am}M m=1 ∈RE are used as semantic features for topic generation. 3.3 Co-Attention Previous works have shown that visual attention alone can perform fairly well for localizing objects (Ba et al., 2015) and aiding caption generation (Xu et al., 2015). However, visual attention 2580 does not provide sufficient high level semantic information. For example, only looking at the right lower region of the chest x-ray image (Figure 1) without accounting for other areas, we might not be able to recognize what we are looking at, not to even mention detecting the abnormalities. In contrast, the tags can always provide the needed high level information. To this end, we propose a co-attention mechanism which can simultaneously attend to visual and semantic modalities. In the sentence LSTM at time step s, the joint context vector ctx(s) ∈RC is generated by a co-attention network fcoatt({vn}N n=1, {am}M m=1, h(s−1) sent ), where h(s−1) sent ∈RH is the sentence LSTM hidden state at time step s −1. The coattention network fcoatt uses a single layer feedforward network to compute the soft visual attentions and soft semantic attentions over input image features and tags: αv,n ∝exp(Wvatt tanh(Wvvn + Wv,hh(s−1) sent )) (2) αa,m ∝exp(Waatt tanh(Waam + Wa,hh(s−1) sent )) (3) where Wv, Wv,h, and Wvatt are parameter matrices of the visual attention network. Wa, Wa,h, and Waatt are parameter matrices of the semantic attention network. The visual and semantic context vectors are computed as: v(s) att = N X n=1 αv,nvn, a(s) att = M X m=1 αa,mam. There are many ways to combine the visual and semantic context vectors such as concatenation and element-wise operations. In this paper, we first concatenate these two vectors as [v(s) att; a(s) att], and then use a fully connected layer Wfc to obtain a joint context vector: ctx(s) = Wfc[v(s) att; a(s) att]. (4) 3.4 Sentence LSTM The sentence LSTM is a single-layer LSTM that takes the joint context vector ctx ∈RC as its input, and generates topic vector t ∈RK for word LSTM through topic generator and determines whether to continue or stop generating captions by a stop control component. Topic generator We use a deep output layer (Pascanu et al., 2014) to strengthen the context information in topic vector t(s), by combining the hidden state h(s) sent and the joint context vector ctx(s) of the current step: t(s) = tanh(Wt,hsenth(s) sent + Wt,ctxctx(s)) (5) where Wt,hsent and Wt,ctx are weight parameters. Stop control We also apply a deep output layer to control the continuation of the sentence LSTM. The layer takes the previous and current hidden state h(s−1) sent , h(s) sent as input and produces a distribution over {STOP=1, CONTINUE=0}: p(STOP|h(s−1) sent , h(s) sent) ∝ exp{Wstop tanh(Wstop,s−1h(s−1) sent + Wstop,sh(s) sent)} (6) where Wstop, Wstop,s−1 and Wstop,s are parameter matrices. 
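Putting Eqs. (2)–(6) together, the sketch below illustrates one unrolling step of the sentence LSTM with co-attention: attention weights over regional features and tag embeddings, the joint context vector, the topic vector, and the stop distribution. This is a minimal PyTorch-style sketch, not the authors' implementation; the layer names, module structure, and the context/topic dimensions C and K are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceStep(nn.Module):
    """One sentence-LSTM step with co-attention (sketch; names and dims are assumptions)."""
    def __init__(self, D=512, E=512, H=512, C=512, K=512):
        super().__init__()
        # Co-attention (Eqs. 2-4): score each region / tag against the previous hidden state
        self.W_v, self.W_vh, self.w_vatt = nn.Linear(D, C), nn.Linear(H, C), nn.Linear(C, 1)
        self.W_a, self.W_ah, self.w_aatt = nn.Linear(E, C), nn.Linear(H, C), nn.Linear(C, 1)
        self.W_fc = nn.Linear(D + E, C)                      # joint context vector (Eq. 4)
        self.sent_lstm = nn.LSTMCell(C, H)
        self.W_t_h, self.W_t_ctx = nn.Linear(H, K), nn.Linear(C, K)     # topic generator (Eq. 5)
        self.W_s_prev, self.W_s_cur = nn.Linear(H, C), nn.Linear(H, C)  # stop control (Eq. 6)
        self.W_stop = nn.Linear(C, 2)                        # logits over {CONTINUE, STOP}

    def forward(self, v, a, h_prev, state):
        # v: (N, D) regional visual features, a: (M, E) tag embeddings, h_prev: (H,)
        alpha_v = F.softmax(self.w_vatt(torch.tanh(self.W_v(v) + self.W_vh(h_prev))), dim=0)
        alpha_a = F.softmax(self.w_aatt(torch.tanh(self.W_a(a) + self.W_ah(h_prev))), dim=0)
        v_att, a_att = (alpha_v * v).sum(0), (alpha_a * a).sum(0)
        ctx = self.W_fc(torch.cat([v_att, a_att]))           # ctx: (C,)
        h, c = self.sent_lstm(ctx.unsqueeze(0), state)       # advance the sentence LSTM
        topic = torch.tanh(self.W_t_h(h) + self.W_t_ctx(ctx))                      # Eq. 5
        p_stop = F.softmax(self.W_stop(torch.tanh(self.W_s_prev(h_prev)
                                                  + self.W_s_cur(h))), dim=-1)     # Eq. 6
        return topic, p_stop, (h, c)
```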
If p(STOP|h(s−1) sent , h(s) sent) is greater than a predefined threshold (e.g. 0.5), then the sentence LSTM will stop producing new topic vectors and the word LSTM will also stop producing words. 3.5 Word LSTM The words of each sentence are generated by a word LSTM. Similar to (Krause et al., 2017), the topic vector t produced by the sentence LSTM and the special START token are used as the first and second input of the word LSTM, and the subsequent inputs are the word sequence. The hidden state hword ∈RH of the word LSTM is directly used to predict the distribution over words: p(word|hword) ∝exp(Wouthword) (7) where Wout is the parameter matrix. After each word-LSTM has generated its word sequences, the final report is simply the concatenation of all the generated sequences. 3.6 Parameter Learning Each training example is a tuple (I, l, w) where I is an image, l denotes the ground-truth tag vector, and w is the diagnostic paragraph, which is comprised of S sentences and each sentence consists of Ts words. 2581 Given a training example (I, l, w), our model first performs multi-label classification on I and produces a distribution pl,pred over all tags. Note that l is a binary vector which encodes the presence and absence of tags. We can obtain the ground-truth tag distribution by normalizing l: pl = l/||l||1. The training loss of this step is a cross-entropy loss ℓtag between pl and pl,pred. Next, the sentence LSTM is unrolled for S steps to produce topic vectors and also distributions over {STOP, CONTINUE}: ps stop. Finally, the S topic vectors are fed into the word LSTM to generate words ws,t. The training loss of caption generation is the combination of two cross-entropy losses: ℓsent over stop distributions ps stop and ℓword over word distributions ps,t. Combining the pieces together, we obtain the overall training loss: ℓ(I, l, w) = λtagℓtag + λsent S X s=1 ℓsent(ps stop, I{s = S}) + λword S X s=1 Ts X t=1 ℓword(ps,t, ws,t) (8) In addition to the above training loss, there is also a regularization term for visual and semantic attentions. Similar to (Xu et al., 2015), let α ∈RN×S and β ∈RM×S be the matrices of visual and semantic attentions respectively, then the regularization loss over α and β is: ℓreg = λreg[ N X n (1− S X s αn,s)2+ M X m (1− S X s βm,s)2] (9) Such regularization encourages the model to pay equal attention over different image regions and different tags. 4 Experiments In this section, we evaluate the proposed model with extensive quantitative and qualitative experiments. 4.1 Datasets We used two publicly available medical image datasets to evaluate our proposed model. IU X-Ray The Indiana University Chest XRay Collection (IU X-Ray) (Demner-Fushman et al., 2015) is a set of chest x-ray images paired with their corresponding diagnostic reports. The dataset contains 7,470 pairs of images and reports. Each report consists of the following sections: impression, findings, tags1, comparison, and indication. In this paper, we treat the contents in impression and findings as the target captions2 to be generated and the Medical Text Indexer (MTI) annotated tags as the target tags to be predicted (Figure 1 provides an example). We preprocessed the data by converting all tokens to lowercases, removing all of non-alpha tokens, which resulting in 572 unique tags and 1915 unique words. On average, each image is associated with 2.2 tags, 5.7 sentences, and each sentence contains 6.5 words. 
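Looping back to the training objective of Section 3.6, the sketch below shows how the per-example loss of Eq. (8) and the attention regularizer of Eq. (9) could be assembled. The tensor shapes, the soft-target formulation of the tag loss, and the handling of padding are assumptions for illustration rather than the authors' exact code.

```python
import torch
import torch.nn.functional as F

def report_loss(tag_logits, p_l, stop_logits, word_logits, words,
                alpha, beta, lambdas=(1.0, 1.0, 1.0, 1.0)):
    """Per-example training loss of Eqs. (8)-(9) (sketch; shapes are assumptions).

    tag_logits : (L,)      MLC outputs over all tags
    p_l        : (L,)      normalized ground-truth tag distribution l / ||l||_1
    stop_logits: (S, 2)    sentence-LSTM stop distributions (CONTINUE=0, STOP=1)
    word_logits: (S, T, V) word-LSTM outputs
    words      : (S, T)    ground-truth word indices (padding would need an ignore_index)
    alpha, beta: (N, S), (M, S) visual / semantic attention weights
    """
    l_tag, l_sent, l_word, l_reg = lambdas
    S = stop_logits.size(0)

    # Cross-entropy between predicted and normalized ground-truth tag distributions
    loss_tag = -(p_l * F.log_softmax(tag_logits, dim=-1)).sum()

    # Stop control: the STOP label fires only at the last sentence s = S
    stop_targets = torch.zeros(S, dtype=torch.long)
    stop_targets[-1] = 1
    loss_sent = F.cross_entropy(stop_logits, stop_targets, reduction='sum')

    # Word-level cross-entropy over every sentence and time step
    loss_word = F.cross_entropy(word_logits.reshape(-1, word_logits.size(-1)),
                                words.reshape(-1), reduction='sum')

    # Eq. (9): encourage attention to cover all regions / tags over the S steps
    loss_reg = ((1 - alpha.sum(dim=1)) ** 2).sum() + ((1 - beta.sum(dim=1)) ** 2).sum()

    return l_tag * loss_tag + l_sent * loss_sent + l_word * loss_word + l_reg * loss_reg
```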
Besides, we find that top 1,000 words cover 99.0% word occurrences in the dataset, therefore we only included top 1,000 words in the dictionary. Finally, we randomly selected 500 images for validation and 500 images for testing. PEIR Gross The Pathology Education Informational Resource (PEIR) digital library3 is a public medical image library for medical education. We collected the images together with their descriptions in the Gross sub-collection, resulting in the PEIR Gross dataset that contains 7,442 imagecaption pairs from 21 different sub-categories. Different from the IU X-Ray dataset, each caption in PEIR Gross contains only one sentence. We used this dataset to evaluate our model’s ability of generating single-sentence report. For PEIR Gross, we applied the same preprocessing as IU X-Ray, which yields 4,452 unique words. On average, each image contains 12.0 words. Besides, for each caption, we selected 5 words with the highest tf-idf scores as tags. 4.2 Implementation Details We used the full VGG-19 model (Simonyan and Zisserman, 2014) for tag prediction. As for the training loss of the multi-label classification (MLC) task, since the number of tags for semantic attention is fixed as 10, we treat MLC as a multilabel retrieval task and adopt a softmax crossentropy loss (a multi-label ranking loss), similar to (Gong et al., 2013; Guillaumin et al., 2009). 1There are two types of tags: manually generated (MeSH) and Medical Text Indexer (MTI) generated. 2The impression and findings sections are concatenated together as a long paragraph, since impression can be viewed as a conclusion or topic sentence of the report. 3PEIR is c⃝University of Alabama at Birmingham, Department of Pathology. (http://peir.path.uab.edu/library/) 2582 Dataset Methods BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE CIDER IU X-Ray CNN-RNN (Vinyals et al., 2015) 0.316 0.211 0.140 0.095 0.159 0.267 0.111 LRCN (Donahue et al., 2015) 0.369 0.229 0.149 0.099 0.155 0.278 0.190 Soft ATT (Xu et al., 2015) 0.399 0.251 0.168 0.118 0.167 0.323 0.302 ATT-RK (You et al., 2016) 0.369 0.226 0.151 0.108 0.171 0.323 0.155 Ours-no-Attention 0.505 0.383 0.290 0.224 0.200 0.420 0.259 Ours-Semantic-only 0.504 0.371 0.291 0.230 0.207 0.418 0.286 Ours-Visual-only 0.507 0.373 0.297 0.238 0.211 0.426 0.300 Ours-CoAttention 0.517 0.386 0.306 0.247 0.217 0.447 0.327 PEIR Gross CNN-RNN (Vinyals et al., 2015) 0.247 0.178 0.134 0.092 0.129 0.247 0.205 LRCN (Donahue et al., 2015) 0.261 0.184 0.136 0.088 0.135 0.254 0.203 Soft ATT (Xu et al., 2015) 0.283 0.212 0.163 0.113 0.147 0.271 0.276 ATT-RK (You et al., 2016) 0.274 0.201 0.154 0.104 0.141 0.264 0.279 Ours-No-Attention 0.248 0.180 0.133 0.093 0.131 0.242 0.206 Ours-Semantic-only 0.263 0.191 0.145 0.098 0.138 0.261 0.274 Ours-Visual-only 0.284 0.209 0.156 0.105 0.149 0.274 0.280 Ours-CoAttention 0.300 0.218 0.165 0.113 0.149 0.279 0.329 Table 1: Main results for paragraph generation on the IU X-Ray dataset (upper part), and single sentence generation on the PEIR Gross dataset (lower part). BLUE-n denotes the BLEU score that uses up to n-grams. In paragraph generation, we set the dimensions of all hidden states and word embeddings as 512. For words and tags, different embedding matrices were used since a tag might contain multiple words. We utilized the embeddings of the 10 most likely tags as the semantic feature vectors {am}M=10 m=1 . We extracted the visual features from the last convolutional layer of the VGG-19 network, which yields a 14 × 14 × 512 feature map. 
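As a concrete reading of this setup (and of Eq. (1)), the following sketch turns the 14 × 14 × 512 VGG-19 feature map into N = 196 regional vectors, scores the L tags, and looks up the embeddings of the M = 10 most likely tags as semantic features. The use of torchvision's VGG-19 and the stand-in fully connected head are assumptions; the paper reuses VGG-19's own fully connected layers for the MLC.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class VisualSemanticEncoder(nn.Module):
    """Regional VGG-19 features + top-M tag embeddings (sketch; head sizes are assumptions)."""
    def __init__(self, num_tags=572, embed_dim=512, top_m=10):
        super().__init__()
        conv = list(vgg19(weights='IMAGENET1K_V1').features.children())[:-1]
        self.cnn = nn.Sequential(*conv)          # drop the final max-pool to keep the 14x14 map
        self.mlc = nn.Sequential(nn.Linear(196 * 512, 4096), nn.ReLU(),
                                 nn.Linear(4096, num_tags))   # stand-in for the VGG FC head
        self.tag_embedding = nn.Embedding(num_tags, embed_dim)
        self.top_m = top_m

    def forward(self, image):                    # image: (1, 3, 224, 224)
        fmap = self.cnn(image)                   # (1, 512, 14, 14)
        v = fmap.flatten(2).transpose(1, 2).squeeze(0)        # (196, 512) regional features
        tag_logits = self.mlc(v.reshape(1, -1))               # scores over all L tags (Eq. 1)
        top_tags = tag_logits.topk(self.top_m, dim=-1).indices
        a = self.tag_embedding(top_tags).squeeze(0)           # (M, E) semantic features
        return v, a, tag_logits
```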
We used the Adam (Kingma and Ba, 2014) optimizer for parameter learning. The learning rates for the CNN (VGG-19) and the hierarchical LSTM were 1e-5 and 5e-4 respectively. The weights (λtag, λsent, λword and λreg) of different losses were set to 1.0. The threshold for stop control was 0.5. Early stopping was used to prevent over-fitting. 4.3 Baselines We compared our method with several stateof-the-art image captioning models: CNN-RNN (Vinyals et al., 2015), LRCN (Donahue et al., 2015), Soft ATT (Xu et al., 2015), and ATT-RK (You et al., 2016). We re-implemented all of these models and adopt VGG-19 (Simonyan and Zisserman, 2014) as the CNN encoder. Considering these models are built for single sentence captions and to better show the effectiveness of the hierarchical LSTM and the attention mechanism for paragraph generation, we also implemented a hierarchical model without any attention: Oursno-Attention. The input of Ours-no-Attention is the overall image feature of VGG-19, which has a dimension of 4096. Ours-no-Attention can be viewed as a CNN-RNN (Vinyals et al., 2015) equipped with a hierarchical LSTM decoder. To further show the effectiveness of the proposed coattention mechanism, we also implemented two ablated versions of our model: Ours-Semanticonly and Ours-Visual-only, which takes solely the semantic attention or visual attention context vector to produce topic vectors. 4.4 Quantitative Results We report the paragraph generation (upper part of Table 1) and one sentence generation (lower part of Table 1) results using the standard image captioning evaluation tool 4 which provides evaluation on the following metrics: BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014), ROUGE (Lin, 2004), and CIDER (Vedantam et al., 2015). For paragraph generation, as shown in the upper part of Table 1, it is clear that models with a single LSTM decoder perform much worse than those with a hierarchical LSTM decoder. Note that the only difference between Ours-no-Attention and CNN-RNN (Vinyals et al., 2015) is that Oursno-Attention adopts a hierarchical LSTM decoder while CNN-RNN (Vinyals et al., 2015) adopts a single-layer LSTM. The comparison between these two models directly demonstrates the effectiveness of the hierarchical LSTM. This result is not surprising since it is well-known that a single-layer LSTM cannot effectively model long sequences (Liu et al., 2015; Martin and Cundy, 2018). Additionally, employing semantic attention alone (Ours-Semantic-only) or visual attention alone (Ours-Visual-only) to generate topic vectors does not seem to help caption generation a lot. The potential reason might be that visual at4https://github.com/tylin/coco-caption 2583 Figure 3: Illustration of paragraph generated by Ours-CoAttention, Ours-no-Attention, and Soft Attention models. The underlined sentences are the descriptions of detected abnormalities. The second image is a lateral x-ray image. Top two images are positive results, the third one is a partial failure case and the bottom one is failure case. These images are from test dataset. tention can only capture the visual information of sub-regions of the image and is unable to correctly capture the semantics of the entire image. Semantic attention is inadequate of localizing small abnormal image-regions. Finally, our full model (Ours-CoAttention) achieves the best results on all of the evaluation metrics, which demonstrates the effectiveness of the proposed co-attention mechanism. 
For the single-sentence generation results (shown in the lower part of Table 1), the ablated versions of our model (Ours-Semantic-only and Ours-Visual-only) achieve competitive scores compared with the state-of-the-art methods. Our full model (Ours-CoAttention) outperforms all of the baseline, which indicates the effectiveness of the proposed co-attention mechanism. 4.5 Qualitative Results 4.5.1 Paragraph Generation An illustration of paragraph generation by three models (Ours-CoAttention, Ours-no-Attention and Soft Attention models) is shown in Figure 3. We can find that different sentences have different topics. The first sentence is usually a high level description of the image, while each of the following sentences is associated with one area of the image (e.g. “lung”, “heart”). Soft Attention and Oursno-Attention models detect only a few abnormalities of the images and the detected abnormalities are incorrect. In contrast, Ours-CoAttention model is able to correctly describe many true abnormalities (as shown in top three images). This comparison demonstrates that co-attention is better at capturing abnormalities. For the third image, Ours-CoAttention model successfully detects the area (“right lower lobe”) which is abnormal (“eventration”), however, it fails to precisely describe this abnormality. In addition, the model also finds abnormalities about “interstitial opacities” and “atheroscalerotic calcification”, which are not considered as true abnormality by human experts. The potential reason for this mis-description might be that this x-ray image is darker (compared with the above images), and our model might be very sensitive to this change. 2584 Figure 4: Visualization of co-attention for three examples. Each example is comprised of four things: (1) image and visual attentions; (2) ground truth tags and semantic attention on predicted tags; (3) generated descriptions; (4) ground truth descriptions. For the semantic attention, three tags with highest attention scores are highlighted. The underlined tags are those appearing in the ground truth. The image at the bottom is a failure case of Ours-CoAttention. However, even though the model makes the wrong judgment about the major abnormalities in the image, it does find some unusual regions: “lateral lucency” and “left lower lobe”. To further understand models’ ability of detecting abnormalities, we present the portion of sentences which describe the normalities and abnormalities in Table 2. We consider sentences which contain “no”, “normal”, “clear”, “stable” as sentences describing normalities. It is clear that OursCoAttention best approximates the ground truth distribution over normality and abnormality. Method Normality Abnormality Total Soft Attention 0.510 0.490 1.0 Ours-no-Attention 0.753 0.247 1.0 Ours-CoAttention 0.471 0.529 1.0 Ground Truth 0.385 0.615 1.0 Table 2: Portion of sentences which describe the normalities and abnormalities in the image. 4.5.2 Co-Attention Learning Figure 4 presents visualizations of co-attention. The first property shown by Figure 4 is that the sentence LSTM can generate different topics at different time steps since the model focuses on different image regions and tags for different sentences. The next finding is that visual attention can guide our model to concentrate on relevant re2585 gions of the image. For example, the third sentence of the first example is about “cardio”, and the visual attention concentrates on regions near the heart. 
Similar behavior can also be found for semantic attention: for the last sentence in the first example, our model correctly concentrates on “degenerative change” which is the topic of the sentence. Finally, the first sentence of the last example presents a mis-description caused by incorrect semantic attention over tags. Such incorrect attention can be reduced by building a better tag prediction module. 5 Conclusion In this paper, we study how to automatically generate textual reports for medical images, with the goal to help medical professionals produce reports more accurately and efficiently. Our proposed methods address three major challenges: (1) how to generate multiple heterogeneous forms of information within a unified framework, (2) how to localize abnormal regions and produce accurate descriptions for them, (3) how to generate long texts that contain multiple sentences or even paragraphs. To cope with these challenges, we propose a multi-task learning framework which jointly predicts tags and generates descriptions. We introduce a co-attention mechanism that can simultaneously explore visual and semantic information to accurately localize and describe abnormal regions. We develop a hierarchical LSTM network that can more effectively capture long-range semantics and produce high quality long texts. On two medical datasets containing radiology and pathology images, we demonstrate the effectiveness of the proposed methods through quantitative and qualitative studies. References Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. 2015. Multiple object recognition with visual attention. ICLR. Louke Delrue, Robert Gosselin, Bart Ilsen, An Van Landeghem, Johan de Mey, and Philippe Duyck. 2011. Difficulties in the interpretation of chest radiography. In Comparative Interpretation of CT and Standard Radiography of the Chest, pages 27–49. Springer. Dina Demner-Fushman, Marc D Kohli, Marc B Rosenman, Sonya E Shooshan, Laritza Rodriguez, Sameer Antani, George R Thoma, and Clement J McDonald. 2015. Preparing a collection of radiology examinations for distribution and retrieval. Journal of the American Medical Informatics Association, 23(2):304–310. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation, pages 376–380. Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2625–2634. Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K Srivastava, Li Deng, Piotr Doll´ar, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C Platt, et al. 2015. From captions to visual concepts and back. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1473–1482. Yunchao Gong, Yangqing Jia, Thomas Leung, Alexander Toshev, and Sergey Ioffe. 2013. Deep convolutional ranking for multilabel image annotation. ICLR. Matthieu Guillaumin, Thomas Mensink, Jakob Verbeek, and Cordelia Schmid. 2009. Tagprop: Discriminative metric learning in nearest neighbor models for image auto-annotation. In Computer Vision, 2009 IEEE 12th International Conference on, pages 309–316. IEEE. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. 
Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128–3137. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Pavel Kisilev, Eugene Walach, Ella Barkan, Boaz Ophir, Sharon Alpert, and Sharbell Y Hashoul. 2015. From medical image to automatic medical report generation. IBM Journal of Research and Development, 59(2/3):2–1. Jonathan Krause, Justin Johnson, Ranjay Krishna, and Li Fei-Fei. 2017. A hierarchical approach for generating descriptive image paragraphs. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2586 Xiaodan Liang, Zhiting Hu, Hao Zhang, Chuang Gan, and Eric P. Xing. 2017. Recurrent topictransition gan for visual paragraph generation. In The IEEE International Conference on Computer Vision (ICCV). Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain. Pengfei Liu, Xipeng Qiu, Xinchi Chen, Shiyu Wu, and Xuanjing Huang. 2015. Multi-timescale long short-term memory neural network for modelling sentences and documents. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 2326–2335. Eric Martin and Chris Cundy. 2018. Parallelizing linear recurrent neural nets over sequence length. ICLR. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. How to construct deep recurrent neural networks. ICLR. Hoo-Chang Shin, Kirk Roberts, Le Lu, Dina DemnerFushman, Jianhua Yao, and Ronald M Summers. 2016. Learning to read chest x-rays: recurrent neural cascade model for automated image annotation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2497– 2506. K. Simonyan and A. Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566–4575. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3156–3164. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057. Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. 2016. Image captioning with semantic attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4651–4659. Zizhao Zhang, Yuanpu Xie, Fuyong Xing, Mason McGough, and Lin Yang. 2017. Mdnet: A semantically and visually interpretable medical image diagnosis network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6428–6436.
2018
240
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2587–2597 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2587 Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning Hongge Chen1*, Huan Zhang23*, Pin-Yu Chen3, Jinfeng Yi4, and Cho-Jui Hsieh2 1MIT, Cambridge, MA 02139, USA 2UC Davis, Davis, CA 95616, USA 3IBM Research, NY 10598, USA 4JD AI Research, Beijing, China [email protected], [email protected] [email protected], [email protected], [email protected] *Hongge Chen and Huan Zhang contribute equally to this work Abstract Visual language grounding is widely studied in modern neural image captioning systems, which typically adopts an encoder-decoder framework consisting of two principal components: a convolutional neural network (CNN) for image feature extraction and a recurrent neural network (RNN) for language caption generation. To study the robustness of language grounding to adversarial perturbations in machine vision and perception, we propose Show-and-Fool, a novel algorithm for crafting adversarial examples in neural image captioning. The proposed algorithm provides two evaluation approaches, which check whether neural image captioning systems can be mislead to output some randomly chosen captions or keywords. Our extensive experiments show that our algorithm can successfully craft visually-similar adversarial examples with randomly targeted captions or keywords, and the adversarial examples can be made highly transferable to other image captioning systems. Consequently, our approach leads to new robustness implications of neural image captioning and novel insights in visual language grounding. 1 Introduction In recent years, language understanding grounded in machine vision and perception has made remarkable progress in natural language processing (NLP) and artificial intelligence (AI), such as image captioning and visual question answering. Image captioning is a multimodal learning task and has been used to study the interaction between language and vision models (Shekhar et al., 2017). It takes an image as an input and generates a language caption that best describes its visual contents, and has many important applications such as developing image search engines with complex natural language queries, building AI agents that can see and talk, and promoting equal web access for people who are blind or visually impaired. Modern image captioning systems typically adopt an encoder-decoder framework composed of two principal modules: a convolutional neural network (CNN) as an encoder for image feature extraction and a recurrent neural network (RNN) as a decoder for caption generation. This CNN+RNN architecture includes popular image captioning models such as Show-and-Tell (Vinyals et al., 2015), Show-Attend-and-Tell (Xu et al., 2015) and NeuralTalk (Karpathy and Li, 2015). Recent studies have highlighted the vulnerability of CNN-based image classifiers to adversarial examples: adversarial perturbations to benign images can be easily crafted to mislead a well-trained classifier, leading to visually indistinguishable adversarial examples to human (Szegedy et al., 2014; Goodfellow et al., 2015). In this study, we investigate a more challenging problem in visual language grounding domain that evaluates the robustness of multimodal RNN in the form of a CNN+RNN architecture, and use neural image captioning as a case study. 
Note that crafting adversarial examples in image captioning tasks is strictly harder than in well-studied image classification tasks, due to the following reasons: (i) class attack v.s. caption attack: unlike classification tasks where the class labels are well defined, the output of image captioning is a set of top-ranked captions. Simply treating different captions as distinct classes will result in an enormous number of classes that can even precede the number of training images. In addition, semantically similar 2588 Figure 1: Adversarial examples crafted by Showand-Fool using the targeted caption method. The target captioning model is Show-and-Tell (Vinyals et al., 2015), the original images are selected from the MSCOCO validation set, and the targeted captions are randomly selected from the top-1 inferred caption of other validation images. captions can be expressed in different ways and hence should not be viewed as different classes; and (ii) CNN v.s. CNN+RNN: attacking RNN models is significantly less well-studied than attacking CNN models. The CNN+RNN architecture is unique and beyond the scope of adversarial examples in CNN-based image classifiers. In this paper, we tackle the aforementioned challenges by proposing a novel algorithm called Show-and-Fool. We formulate the process of crafting adversarial examples in neural image captioning systems as optimization problems with novel objective functions designed to adopt the CNN+RNN architecture. Specifically, our objective function is a linear combination of the distortion between benign and adversarial examples as well as some carefully designed loss functions. The proposed Show-and-Fool algorithm provides two approaches to craft adversarial examples in neural image captioning under different scenarios: 1. Targeted caption method: Given a targeted caption, craft adversarial perturbations to any image such that its generated caption matches the targeted caption. 2. Targeted keyword method: Given a set of keywords, craft adversarial perturbations to any image such that its generated caption contains the specified keywords. The captioning model has the freedom to make sentences with target keywords in any order. As an illustration, Figure 1 shows an adversarial example crafted by Show-and-Fool using the targeted caption method. The adversarial perturbations are visually imperceptible while can successfully mislead Show-and-Tell to generate the targeted captions. Interestingly and perhaps surprisingly, our results pinpoint the Achilles heel of the language and vision models used in the tested image captioning systems. Moreover, the adversarial examples in neural image captioning highlight the inconsistency in visual language grounding between humans and machines, suggesting a possible weakness of current machine vision and perception machinery. Below we highlight our major contributions: • We propose Show-and-Fool, a novel optimization based approach to crafting adversarial examples in image captioning. We provide two types of adversarial examples, targeted caption and targeted keyword, to analyze the robustness of neural image captioners. To the best of our knowledge, this is the very first work on crafting adversarial examples for image captioning. • We propose powerful and generic loss functions that can craft adversarial examples and evaluate the robustness of the encoder-decoder pipelines in the form of a CNN+RNN architecture. 
In particular, our loss designed for targeted keyword attack only requires the adversarial caption to contain a few specified keywords; and we allow the neural network to make meaningful sentences with these keywords on its own. • We conduct extensive experiments on the MSCOCO dataset. Experimental results show that our targeted caption method attains a 95.8% attack success rate when crafting adversarial examples with randomly assigned captions. In addition, our targeted keyword attack yields an even higher success rate. We also show that attacking CNN+RNN models is inherently different and more challenging than only attacking 2589 CNN models. • We also show that Show-and-Fool can produce highly transferable adversarial examples: an adversarial image generated for fooling Showand-Tell can also fool other image captioning models, leading to new robustness implications of neural image captioning systems. 2 Related Work In this section, we review the existing work on visual language grounding, with a focus on neural image captioning. We also review related work on adversarial attacks on CNN-based image classifiers. Due to space limitations, we defer the second part to the supplementary material. Visual language grounding represents a family of multimodal tasks that bridge visual and natural language understanding. Typical examples include image and video captioning (Karpathy and Li, 2015; Vinyals et al., 2015; Donahue et al., 2015b; Pasunuru and Bansal, 2017; Venugopalan et al., 2015), visual dialog (Das et al., 2017; De Vries et al., 2017), visual question answering (Antol et al., 2015; Fukui et al., 2016; Lu et al., 2016; Zhu et al., 2017), visual storytelling (Huang et al., 2016), natural question generation (Mostafazadeh et al., 2017, 2016), and image generation from captions (Mansimov et al., 2016; Reed et al., 2016). In this paper, we focus on studying the robustness of neural image captioning models, and believe that the proposed method also sheds lights on robustness evaluation for other visual language grounding tasks using a similar multimodal RNN architecture. Many image captioning methods based on deep neural networks (DNNs) adopt a multimodal RNN framework that first uses a CNN model as the encoder to extract a visual feature vector, followed by a RNN model as the decoder for caption generation. Representative works under this framework include (Chen and Zitnick, 2015; Devlin et al., 2015; Donahue et al., 2015a; Karpathy and Li, 2015; Mao et al., 2015; Vinyals et al., 2015; Xu et al., 2015; Yang et al., 2016; Liu et al., 2017a,b), which are mainly differed by the underlying CNN and RNN architectures, and whether or not the attention mechanisms are considered. Other lines of research generate image captions using semantic information or via a compositional approach (Fang et al., 2015; Gan et al., 2017; Tran et al., 2016; Jia et al., 2015; Wu et al., 2016; You et al., 2016). The recent work in (Shekhar et al., 2017) touched upon the robustness of neural image captioning for language grounding by showing its insensitivity to one-word (foil word) changes in the language caption, which corresponds to the untargeted attack category in adversarial examples. In this paper, we focus on the more challenging targeted attack setting that requires to fool the captioning models and enforce them to generate prespecified captions or keywords. 
3 Methodology of Show-and-Fool 3.1 Overview of the Objective Functions We now formally introduce our approaches to crafting adversarial examples for neural image captioning. The problem of finding an adversarial example for a given image I can be cast as the following optimization problem: min δ c · loss(I + δ) + ∥δ∥2 2 s.t. I + δ ∈[−1, 1]n. (1) Here δ denotes the adversarial perturbation to I. ∥δ∥2 2 = ∥(I + δ) −I∥2 2 is an ℓ2 distance metric between the original image and the adversarial image. loss(·) is an attack loss function which takes different forms in different attacking settings. We will provide the explicit expressions in Sections 3.2 and 3.3. The term c > 0 is a pre-specified regularization constant. Intuitively, with larger c, the attack is more likely to succeed but at the price of higher distortion on δ. In our algorithm, we use a binary search strategy to select c. The box constraint on the image I ∈[−1, 1]n ensures that the adversarial example I + δ ∈[−1, 1]n lies within a valid image space. For the purpose of efficient optimization, we convert the constrained minimization problem in (1) into an unconstrained minimization problem by introducing two new variables y ∈Rn and w ∈Rn such that y = arctanh(I) and w = arctanh(I + δ) −y, where arctanh denotes the inverse hyperbolic tangent function and is applied element-wisely. Since tanh(yi + wi) ∈[−1, 1], the transformation will automatically satisfy the box constraint. Consequently, the constrained optimization problem in 2590 (1) is equivalent to minw∈Rn c · loss(tanh(w + y)) (2) +∥tanh(w + y) −tanh(y)∥2 2. In the following sections, we present our designed loss functions for different attack settings. 3.2 Targeted Caption Method Note that a targeted caption is denoted by S = (S1, S2, ..., St, ..., SN), where St indicates the index of the t-th word in the vocabulary list V, S1 is a start symbol and SN indicates the end symbol. N is the length of caption S, which is not fixed but does not exceed a predefined maximum caption length. To encourage the neural image captioning system to output the targeted caption S, one needs to ensure the log probability of the caption S conditioned on the image I + δ attains the maximum value among all possible captions, that is, log P(S|I + δ) = max S′∈Ωlog P(S′|I + δ), (3) where Ωis the set of all possible captions. It is also common to apply the chain rule to the joint probability and we have log P(S′|I+δ) = N X t=2 log P(S′ t|I+δ, S′ 1, ..., S′ t−1). In neural image captioning networks, p(S′ t|I + δ, S′ 1, ..., S′ t−1) is usually computed by a RNN/LSTM cell f, with its hidden state ht−1 and input S′ t−1: zt = f(ht−1, S′ t−1) and pt = softmax(zt), (4) where zt := [z(1) t , z(2) t , ..., z(|V|) t ] ∈R|V| is a vector of the logits (unnormalized probabilities) for each possible word in the vocabulary. The vector pt represents a probability distribution on V with each coordinate p(i) t defined as: p(i) t := P(S′ t = i|I + δ, S′ 1, ..., S′ t−1). Following the definition of softmax function: P(S′ t|I+δ, S′ 1, ..., S′ t−1) = exp(z(S′ t) t )/ X i∈V exp(z(i) t ). Intuitively, to maximize the targeted caption’s probability, we can directly use its negative log probability (5) as a loss function. The inputs of the RNN are the first N −1 words of the targeted caption (S1, S2, ..., SN−1). lossS,log-prob(I + δ) = −log P(S|I + δ) = − N X t=2 log P(St|I + δ, S1, ..., St−1). 
(5) Applying (5) to (2), the formulation of targeted caption method given a targeted caption S is: min w∈Rnc · lossS,log prob(tanh(w + y)) + ∥tanh(w + y) −tanh(y)∥2 2. Alternatively, using the definition of the softmax function, log P(S′|I + δ) = N X t=2 [z(S′ t) t −log( X i∈V exp(z(i) t ))] = N X t=2 z(S′ t) t −constant, (6) (3) can be simplified as log P(S|I + δ) ∝ N X t=2 z(St) t = max S′∈Ω N X t=2 z(S′ t) t . Instead of making each z(St) t as large as possible, it is sufficient to require the target word St to attain the largest (top-1) logit (or probability) among all the words in the vocabulary at position t. In other words, we aim to minimize the difference between the maximum logit except St, denoted by maxk∈V,k̸=St{z(k) t }, and the logit of St, denoted by z(St) t . We also propose a ramp function on top of this difference as the final loss function: lossS,logits(I+δ) = N−1 X t=2 max{−ϵ, max k̸=St{z(k) t }−z(St) t }, (7) where ϵ > 0 is a confidence level accounting for the gap between maxk̸=St{z(k) t } and z(St) t . When z(St) t > maxk̸=St{z(k) t } + ϵ, the corresponding term in the summation will be kept at −ϵ and does not contribute to the gradient of the loss function, encouraging the optimizer to focus on minimizing other terms where z(St) t is not large enough. Applying the loss (7) to (1), the final formulation of targeted caption method given a targeted 2591 caption S is min w∈Rn c · N−1 X t=2 max{−ϵ, max k̸=St{z(k) t } −z(St) t } + ∥tanh(w + y) −tanh(y)∥2 2. We note that (Carlini and Wagner, 2017) has reported that in CNN-based image classification, using logits in the attack loss function can produce better adversarial examples than using probabilities, especially when the target network deploys some gradient masking schemes such as defensive distillation (Papernot et al., 2016b). Therefore, we provide both logit-based and probability-based attack loss functions for neural image captioning. 3.3 Targeted Keyword Method In addition to generating an exact targeted caption by perturbing the input image, we offer an intermediate option that aims at generating captions with specific keywords, denoted by K := {K1, · · · , KM} ⊂V. Intuitively, finding an adversarial image generating a caption with specific keywords might be easier than generating an exact caption, as we allow more degree of freedom in caption generation. However, as we need to ensure a valid and meaningful inferred caption, finding an adversarial example with specific keywords in its caption is difficult in an optimization perspective. Our target keyword method can be used to investigate the generalization capability of a neural captioning system given only a few keywords. In our method, we do not require a target keyword Kj, j ∈[M] to appear at a particular position. Instead, we want a loss function that allows Kj to become the top-1 prediction (plus a confidence margin ϵ) at any position. Therefore, we propose to use the minimum of the hinge-like loss terms over all t ∈[N] as an indication of Kj appearing at any position as the top-1 prediction, leading to the following loss function: lossK,logits = M X j=1 min t∈[N]{max{−ϵ,max k̸=Kj{z(k) t }−z(Kj) t }}. (8) We note that the loss functions in (4) and (5) require an input S′ t−1 to predict zt for each t ∈ {2, . . . , N}. For the targeted caption method, we use the targeted caption S as the input of RNN. 
In contrast, for the targeted keyword method we no longer know the exact targeted sentence, but only require the presence of specified keywords in the final caption. To bridge the gap, we use the originally inferred caption S0 = (S0 1, · · · , S0 N) from the benign image as the initial input to RNN. Specifically, after minimizing (8) for T iterations, we run inference on I + δ and set the RNN’s input S1 as its current top-1 prediction, and continue this process. With this iterative optimization process, the desired keywords are expected to gradually appear in top-1 prediction. Another challenge arises in targeted keyword method is the problem of “keyword collision”. When the number of keywords M ≥2, more than one keywords may have large values of maxk̸=Kj{z(k) t } −z(Kj) t at a same position t. For example, if dog and cat are top-2 predictions for the second word in a caption, the caption can either start with “A dog ...” or “A cat ...”. In this case, despite the loss (8) being very small, a caption with both dog and cat can hardly be generated, since only one word is allowed to appear at the same position. To alleviate this problem, we define a gate function gt,j(x) which masks off all the other keywords when a keyword becomes top1 at position t: gt,j(x) = ( A, if arg maxi∈V z(i) t ∈K \ {Kj} x, otherwise, where A is a predefined value that is significantly larger than common logits values. Then (8) becomes: M X j=1 min t∈[N]{gt,j(max{−ϵ, max k̸=Kj{z(k) t } −z(Kj) t })}. (9) The log-prob loss for targeted keyword method is discussed in the Supplementary Material. 4 Experiments 4.1 Experimental Setup and Algorithms We performed extensive experiments to test the effectiveness of our Show-and-Fool algorithm and study the robustness of image captioning systems under different problem settings. In our experiments1, we use the pre-trained TensorFlow implementation2 of Show-and-Tell (Vinyals et al., 2015) 1Our source code is available at: https://github.com/ huanzhang12/ImageCaptioningAttack 2https://github.com/tensorflow/models/tree/master/ research/im2txt 2592 with Inception-v3 as the CNN for visual feature extraction. Our testbed is Microsoft COCO (Lin et al., 2014) (MSCOCO) data set. Although some more recent neural image captioning systems can achieve better performance than Show-and-Tell, they share a similar framework that uses CNN for feature extraction and RNN for caption generation, and Show-and-Tell is the vanilla version of this CNN+RNN architecture. Indeed, we find that the adversarial examples on Show-and-Tell are transferable to other image captioning models such as Show-Attend-and-Tell (Xu et al., 2015) and NeuralTalk23, suggesting that the attention mechanism and the choice of CNN and RNN architectures do not significantly affect the robustness. We also note that since Show-and-Fool is the first work on crafting adversarial examples for neural image captioning, to the best of our knowledge, there is no other method for comparison. We use ADAM to minimize our loss functions and set the learning rate to 0.005. The number of iterations is set to 1, 000. All the experiments are performed on a single Nvidia GTX 1080 Ti GPU. For targeted caption and targeted keyword methods, we perform a binary search for 5 times to find the best c: initially c = 1, and c will be increased by 10 times until a successful adversarial example is found. 
Then, we choose a new c to be the average of the largest c where an adversarial example can be found and the smallest c where an adversarial example cannot be found. We fix ϵ = 1 except for transferability experiments. For each experiment, we randomly select 1,000 images from the MSCOCO validation set. We use BLEU-1 (Papineni et al., 2002), BLEU-2, BLEU-3, BLEU4, ROUGE (Lin, 2004) and METEOR (Lavie and Agarwal, 2005) scores to evaluate the correlations between the inferred captions and the targeted captions. These scores are widely used in NLP community and are adopted by image captioning systems for quality assessment. Throughout this section, we use the logits loss (7)(9). The results of using the log-prob loss (5) are similar and are reported in the supplementary material. 4.2 Targeted Caption Results Unlike the image classification task where all possible labels are predefined, the space of possible captions in a captioning system is almost infinite. However, the captioning system is only able to 3https://github.com/karpathy/neuraltalk2 Table 1: Summary of targeted caption method (Section 3.2) and targeted keyword method (Section 3.3) using logits loss. The ℓ2 distortion of adversarial noise ∥δ∥2 is averaged over successful adversarial examples. For comparison, we also include CNN based attack methods (Section 4.5). Experiments Success Rate Avg. ∥δ∥2 targeted caption 95.8% 2.213 1-keyword 97.1% 1.589 2-keyword 97.5% 2.363 3-keyword 96.0% 2.626 C&W on CNN 22.4% 2.870 I-FGSM on CNN 34.5% 15.596 Table 2: Statistics of the 4.2% failed adversarial examples using the targeted caption method and logits loss (7). All correlation scores are computed using the top-5 inferred captions of an adversarial image and the targeted caption (higher score means better targeted attack performance). c 1 10 102 103 104 ℓ2 Distortion 1.726 3.400 7.690 16.03 23.31 BLEU-1 .567 .725 .679 .701 .723 BLEU-2 .420 .614 .559 .585 .616 BLEU-3 .320 .509 .445 .484 .514 BLEU-4 .252 .415 .361 .402 .417 ROUGE .502 .664 .629 .638 .672 METEOR .258 .407 .375 .403 .399 output relevant captions learned from the training set. For instance, the captioning model cannot generate a passive-voice sentence if the model was never trained on such sentences. Therefore, we need to ensure that the targeted caption lies in the space where the captioning system can possibly generate. To address this issue, we use the generated caption of a randomly selected image (other than the image under investigation) from MSCOCO validation set as the targeted caption S. The use of a generated caption as the targeted caption excludes the effect of out-of-domain captioning, and ensures that the target caption is within the output space of the captioning network. Here we use the logits loss (7) plus a ℓ2 distortion term (as in (2)) as our objective function. A successful adversarial example is found if the inferred caption after adding the adversarial perturbation δ is exactly the same as the targeted caption. In our setting, 1,000 ADAM iterations take about 38 seconds for one image. The overall success rate and average distortion of adversarial perturbation δ are shown in Table 1. Among all the tested images, our method attains 95.8% attack success 2593 rate. Moreover, our adversarial examples have small ℓ2 distortions and are visually identical to the original images, as displayed in Figure 1. We also examine the failed adversarial examples and summarize their statistics in Table 2. 
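Pulling together the formulation in Sections 3.1 and 3.2 and the setup in Section 4.1, the sketch below shows the core optimization loop of the targeted caption attack: the tanh change of variables of Eq. (2), the hinge-like logits loss of Eq. (7), and Adam updates on w. The caption_logits_fn callable is a hypothetical stand-in for a teacher-forced forward pass through the CNN+RNN captioner, and the outer search over c is only indicated in a comment.

```python
import torch

def show_and_fool_targeted_caption(image, target_ids, caption_logits_fn,
                                   c=1.0, eps=1.0, iters=1000, lr=0.005):
    """Targeted-caption attack (sketch). `caption_logits_fn(img, target_ids)` is a
    hypothetical stand-in that runs the CNN+RNN under teacher forcing with the
    target caption and returns logits z_t aligned with target_ids[1:], shape (N-1, |V|)."""
    y = torch.atanh(image.clamp(-0.999999, 0.999999))       # change of variables, Eq. (2)
    w = torch.zeros_like(y, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(iters):
        adv = torch.tanh(w + y)                              # adversarial image, always in [-1, 1]
        z = caption_logits_fn(adv, target_ids)
        idx = target_ids[1:].unsqueeze(1)                    # target word at each position
        tgt = z.gather(1, idx).squeeze(1)
        others = z.scatter(1, idx, float('-inf')).max(dim=1).values
        loss_attack = torch.clamp(others - tgt, min=-eps).sum()    # logits loss, Eq. (7)
        distortion = ((adv - torch.tanh(y)) ** 2).sum()            # squared L2 distortion
        loss = c * loss_attack + distortion
        opt.zero_grad(); loss.backward(); opt.step()
    # In the paper, c itself is tuned by a short search (grow by 10x until the attack
    # succeeds, then average the bracketing values); that outer loop is omitted here.
    return torch.tanh(w + y).detach()
```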
We find that their generated captions, albeit not entirely identical to the targeted caption, are in fact highly correlated to the desired one. Overall, the high success rate and low ℓ2 distortion of adversarial examples clearly show that Show-and-Tell is not robust to targeted adversarial perturbations. 4.3 Targeted Keyword Results In this task, we use (9) as our loss function, and choose the number of keywords M = {1, 2, 3}. We run an inference step on I + δ every T = 5 iterations, and use the top-1 caption as the input of RNN/LSTMs. Similar to Section 4.2, for each image the targeted keywords are selected from the caption generated by a randomly selected validation set image. To exclude common words like “a”, “the”, “and”, we look up each word in the targeted sentence and only select nouns, verbs, adjectives or adverbs. We say an adversarial image is successful when its caption contains all specified keywords. The overall success rate and average distortion are shown in Table 1. When compared to the targeted caption method, targeted keyword method achieves an even higher success rate (at least 96% for 3-keyword case and at least 97% for 1-keyword and 2-keyword cases). Figure 2 shows an adversarial example crafted from our targeted keyword method with three keywords “dog”, “cat” and “frisbee”. Using Show-and-Fool, the top-1 caption of a cake image becomes “A dog and a cat are playing with a frisbee” while the adversarial image remains visually indistinguishable to the original one. When M = 2 and 3, even if we cannot find an adversarial image yielding all specified keywords, we might end up with a caption that contains some of the keywords (partial success). For example, when M = 3, Table 3 shows the number of keywords appeared in the captions (M′) for those failed examples (not all 3 targeted keywords are found). These results clearly show that the 4% failed examples are still partially successful: the generated captions contain about 1.5 targeted keywords on average. 4.4 Transferability of Adversarial Examples It has been shown that in image classification tasks, adversarial examples found for one machine Figure 2: An adversarial example (∥δ∥2 = 1.284) of an cake image crafted by the Show-and-Fool targeted keyword method with three keywords “dog”, “cat” and “frisbee”. Table 3: Percentage of partial success with different c in the 4.0% failed images that do not contain all the 3 targeted keywords. c Avg. ∥δ∥2 M ′ ≥1 M ′ = 2 Avg. M ′ 1 2.49 72.4% 34.5% 1.07 10 5.40 82.7% 37.9% 1.21 102 12.95 93.1% 58.6% 1.52 103 24.77 96.5% 51.7% 1.48 104 29.37 100.0% 58.6% 1.59 learning model may also be effective against another model, even if the two models have different architectures (Papernot et al., 2016a; Liu et al., 2017c). However, unlike image classification where correct labels are made explicit, two different image captioning systems may generate quite different, yet semantically similar, captions for the same benign image. In image captioning, we say an adversarial example is transferable when the adversarial image found on model A with a target sentence SA can generate a similar (rather than exact) sentence SB on model B. In our setting, model A is Show-and-Tell, and we choose Show-Attend-and-Tell (Xu et al., 2015) as model B. The major differences between Show-and-Tell and Show-Attend-and-Tell are the addition of attention units in LSTM network for caption generation, and the use of last convolutional layer (rather than the last fully-connected layer) feature maps for feature extraction. 
We use Inception-v3 as the CNN architecture for both models and train them on the MSCOCO 2014 data set. However, their CNN parameters are different due to the fine-tuning process. 2594 Table 4: Transferability of adversarial examples from Show-and-Tell to Show-Attend-and-Tell, using different ϵ and c. ori indicates the scores between the generated captions of the original images and the transferred adversarial images on Show-Attend-and-Tell. tgt indicates the scores between the targeted captions on Show-and-Tell and the generated captions of transferred adversarial images on Show-Attendand-Tell. A smaller ori or a larger tgt value indicates better transferability. mis measures the differences between captions generated by the two models given the same benign image (model mismatch). When C = 1000, ϵ = 10, tgt is close to mis, indicating the discrepancy between adversarial captions on the two models is mostly bounded by model mismatch, and the adversarial perturbation is highly transferable. ϵ = 1 ϵ = 5 ϵ = 10 C=10 C=100 C=1000 C=10 C=100 C=1000 C=10 C=100 C=1000 ori tgt ori tgt ori tgt ori tgt ori tgt ori tgt ori tgt ori tgt ori tgt mis BLEU-1 .474 .395 .384 .462 .347 .484 .441 .429 .368 .488 .337 .527 .431 .421 .360 .485 .339 .534 .649 BLEU-2 .337 .236 .230 .331 .186 .342 .300 .271 .212 .343 .175 .389 .287 .266 .204 .342 .174 .398 .521 BLEU-3 .256 .154 .151 .224 .114 .254 .220 .184 .135 .254 .103 .299 .210 .185 .131 .254 .102 .307 .424 BLEU-4 .203 .109 .107 .172 .077 .198 .170 .134 .093 .197 .068 .240 .162 .138 .094 .197 .066 .245 .352 ROUGE .463 .371 .374 .438 .336 .465 .429 .402 .359 .464 .329 .502 .421 .398 .351 .463 .328 .507 .604 METEOR .201 .138 .139 .180 .118 .201 .177 .157 .131 .199 .110 .228 .172 .157 .127 .202 .110 .232 .300 ∥δ∥2 3.268 4.299 4.474 7.756 10.487 10.952 15.757 21.696 21.778 Figure 3: A highly transferable adversarial example (∥δ∥2 = 15.226) crafted by Show-and-Tell targeted caption method, transfers to Show-Attendand-Tell, yielding similar adversarial captions. To investigate the transferability of adversarial examples in image captioning, we first use the targeted caption method to find adversarial examples for 1,000 images in model A with different c and ϵ, and then transfer successful adversarial examples (which generate the exact target captions on model A) to model B. The generated captions by model B are recorded for transferability analysis. The transferability of adversarial examples depends on two factors: the intrinsic difference between two models even when the same benign image is used as the input, i.e., model mismatch, and the transferability of adversarial perturbations. To measure the mismatch between Show-andTell and Show-Attend-and-Tell, we generate captions of the same set of 1,000 original images from both models, and report their mutual BLEU, ROUGE and METEOR scores in Table 4 under the mis column. To evaluate the effectiveness of transferred adversarial examples, we measure the scores for two set of captions: (i) the captions of original images and the captions of transferred adversarial images, both generated by Show-Attendand-Tell (shown under column ori in Table 4); and (ii) the targeted captions for generating adversarial examples on Show-and-Tell, and the captions of the transferred adversarial image on Show-Attendand-Tell (shown under column tgt in Table 4). Small values of ori suggest that the adversarial images on Show-Attend-and-Tell generate significantly different captions from original images’ captions. 
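The score columns of Table 4 can be computed as averages of caption-to-caption similarity scores over the 1,000 evaluated images. The sketch below shows this for BLEU-n only; the paper also reports ROUGE and METEOR, and the use of sentence-level averaging with NLTK's method-1 smoothing is an assumption, since the exact scoring setup is not stated.

# Sketch of how a column of Table 4 (e.g. ori, tgt, or mis) can be computed
# as an average BLEU-n score between two paired sets of captions.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def avg_bleu(reference_captions, candidate_captions, n=4):
    """Average BLEU-n of candidate captions against paired reference captions."""
    assert len(reference_captions) == len(candidate_captions)
    weights = tuple([1.0 / n] * n)
    smooth = SmoothingFunction().method1
    scores = [sentence_bleu([ref.split()], cand.split(),
                            weights=weights, smoothing_function=smooth)
              for ref, cand in zip(reference_captions, candidate_captions)]
    return sum(scores) / len(scores)

# ori: captions of original vs. transferred adversarial images, both from
#      Show-Attend-and-Tell.
# tgt: targeted captions on Show-and-Tell vs. captions of the transferred
#      adversarial images on Show-Attend-and-Tell.
# mis: captions of the same benign images produced by the two models.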
Large values of tgt suggest that the adversarial images on Show-Attend-and-Tell generate similar adversarial captions as on the Showand-Tell model. We find that increasing c or ϵ helps to enhance transferability at the cost of larger (but still acceptable) distortion. When C = 1, 000 and ϵ = 10, Show-and-Fool achieves the best transferability results: tgt is close to mis, indicating that the discrepancy between adversarial captions on the two models is mostly bounded by the intrinsic model mismatch rather than the transferability of adversarial perturbations, and implying that the adversarial perturbations are easily transferable. In addition, the adversarial examples generated by our method can also fool NeuralTalk2. When c = 104, ϵ = 10, the average ℓ2 distortion, BLEU-4 and METEOR scores between the original and transferred adversarial captions are 38.01, 0.440 and 0.473, respectively. The high transferability of adversarial examples crafted by Show2595 and-Fool also indicates the problem of common robustness leakage between different neural image captioning models. 4.5 Attacking Image Captioning v.s. Attacking Image Classification In this section we show that attacking image captioning models is inherently more challenging than attacking image classification models. In the classification task, a targeted attack usually becomes harder when the number of labels increases, since an attack method needs to change the classification prediction to a specific label over all the possible labels. In the targeted attack on image captioning, if we treat each caption as a label, we need to change the original label to a specific one over an almost infinite number of possible labels, corresponding to a nearly zero volume in the search space. This constraint forces us to develop non-trivial methods that are significantly different from the ones designed for attacking image classification models. To verify that the two tasks are inherently different, we conducted additional experiments on attacking only the CNN module using two stateof-the-art image classification attacks on ImageNet dataset. Our experiment setup is as follows. Each selected ImageNet image has a label corresponding to a WordNet synset ID. We randomly selected 800 images from ImageNet dataset such that their synsets have at least one word in common with Show-and-Tell’s vocabulary, while ensuring the Inception-v3 CNN (Showand-Tell’s CNN) classify them correctly. Then, we perform Iterative Fast Gradient Sign Method (I-FGSM) (Kurakin et al., 2017) and Carlini and Wagner’s (C&W) attack (Carlini and Wagner, 2017) on these images. The attack target labels are randomly chosen and their synsets also have at least one word in common with Showand-Tell’s vocabulary. Both I-FGSM and C&W achieve 100% targeted attack success rate on the Inception-v3 CNN. These adversarial examples were further employed to attack Show-and-Tell model. An attack is considered successful if any word in the targeted label’s synset or its hypernyms up to 5 levels is presented in the resulting caption. For example, for the chain of hypernyms ‘broccoli’⇒‘cruciferous vegetable’⇒‘vegetable, veggie, veg’⇒‘produce, green goods, green groceries, garden truck’⇒‘food, solid food’, we include ‘broccoli’,‘cruciferous’,‘vegetable’,‘veggie’ and all other following words. 
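The weakened success criterion for the classification-attack baseline (any word from the target label's WordNet synset, or from its hypernyms up to 5 levels, appears in the caption) can be sketched as follows. Expanding every hypernym branch at each level is an assumption; the paper does not say how branching hypernyms are handled.

# Sketch of the hypernym-based success check, using NLTK's WordNet interface.
# Requires: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def hypernym_vocabulary(synset, max_levels=5):
    """Collect lemma words of `synset` and of its hypernyms up to `max_levels`."""
    words, frontier = set(), [synset]
    for _ in range(max_levels + 1):
        next_frontier = []
        for syn in frontier:
            for lemma in syn.lemma_names():
                words.update(lemma.lower().split('_'))  # 'green_goods' -> {'green', 'goods'}
            next_frontier.extend(syn.hypernyms())
        frontier = next_frontier
    return words

def classification_attack_succeeded(caption, target_synset):
    caption_words = set(caption.lower().split())
    return bool(caption_words & hypernym_vocabulary(target_synset))

# e.g. classification_attack_succeeded(caption, wn.synset('broccoli.n.01'))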
Note that this criterion of success is much weaker than the criterion we use in the targeted caption method, since a caption with the targeted image’s hypernyms does not necessarily leads to similar meaning of the targeted image’s captions. To achieve higher attack success rates, we allow relatively larger distortions and set ϵ∞= 0.3 (maximum ℓ∞distortion) in IFGSM and κ = 10, C = 100 in C&W. However, as shown in Table 1, the attack success rates are only 34.5% for I-FGSM and 22.4% for C&W, respectively, which are much lower than the success rates of our methods despite larger distortions. This result further confirms that performing targeted attacks on neural image captioning requires a careful design (as proposed in this paper), and attacking image captioning systems is not a trivial extension to attacking image classifiers. 5 Conclusion In this paper, we proposed a novel algorithm, Show-and-Fool, for crafting adversarial examples and providing robustness evaluation of neural image captioning. Our extensive experiments show that the proposed targeted caption and keyword methods yield high attack success rates while the adversarial perturbations are still imperceptible to human eyes. We further demonstrate that Showand-Fool can generate highly transferable adversarial examples. The high-quality and transferable adversarial examples in neural image captioning crafted by Show-and-Fool highlight the inconsistency in visual language grounding between humans and machines, suggesting a possible weakness of current machine vision and perception machinery. We also show that attacking neural image captioning systems are inherently different from attacking CNN-based image classifiers. Our method stands out from the well-studied adversarial learning on image classifiers and CNN models. To the best of our knowledge, this is the very first work on crafting adversarial examples for neural image captioning systems. Indeed, our Show-and-Fool algorithm1 can be easily extended to other applications with RNN or CNN+RNN architectures. We believe this paper provides potential means to evaluate and possibly improve the robustness (for example, by adversarial training or data augmentation) of a wide range of visual language grounding and other NLP models. 2596 References Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425–2433. Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy, pages 39–57. Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. 2018. EAD: elastic-net attacks to deep neural networks via adversarial examples. AAAI. Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. 2017. ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, AISec@CCS, pages 15–26. Xinlei Chen and C. Lawrence Zitnick. 2015. Mind’s eye: A recurrent visual representation for image caption generation. In CVPR, pages 2422–2431. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2. 
Harm De Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron Courville. 2017. Guesswhat?! visual object discovery through multi-modal dialogue. In CVPR. Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, and Margaret Mitchell. 2015. Language models for image captioning: The quirks and what works. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 100–105. Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Trevor Darrell, and Kate Saenko. 2015a. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, pages 2625–2634. Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015b. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, pages 2625–2634. Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C Platt, et al. 2015. From captions to visual concepts and back. In CVPR, pages 1473–1482. Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for visual question answering and visual grounding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 457–468. Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng Gao, Lawrence Carin, and Li Deng. 2017. Semantic compositional networks for visual captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5630–5639. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. ICLR; arXiv preprint arXiv:1412.6572. Ting-Hao Kenneth Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, et al. 2016. Visual storytelling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 1233–1239. Xu Jia, Efstratios Gavves, Basura Fernando, and Tinne Tuytelaars. 2015. Guiding the long-short term memory model for image caption generation. In Computer Vision (ICCV), 2015 IEEE International Conference on, pages 2407–2415. IEEE. Andrej Karpathy and Fei-Fei Li. 2015. Deep visualsemantic alignments for generating image descriptions. In CVPR, pages 3128–3137. Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2017. Adversarial machine learning at scale. ICLR; arXiv preprint arXiv:1611.01236. Alon Lavie and Abhaya Agarwal. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the EMNLP 2011 Workshop on Statistical Machine Translation, pages 65–72. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer. 
Chenxi Liu, Junhua Mao, Fei Sha, and Alan L Yuille. 2017a. Attention correctness in neural image captioning. In AAAI, pages 4176–4182. Feng Liu, Tao Xiang, Timothy M Hospedales, Wankou Yang, and Changyin Sun. 2017b. Semantic regularisation for recurrent image annotation. CVPR. 2597 Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. 2017c. Delving into transferable adversarial examples and black-box attacks. ICLR; arXiv preprint arXiv:1611.02770. Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image coattention for visual question answering. In Advances In Neural Information Processing Systems (NIPS), pages 289–297. Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. 2016. Generating images from captions with attention. ICLR; arXiv preprint arXiv:1511.02793. Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan L. Yuille. 2015. Deep captioning with multimodal recurrent neural networks (m-rnn). ICLR; arXiv preprint arXiv:1412.6632. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. 2017. Universal adversarial perturbations. In CVPR. Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Georgios Spithourakis, and Lucy Vanderwende. 2017. Imagegrounded conversations: Multimodal context for natural question and response generation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 462–472. Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Vanderwende. 2016. Generating natural questions about an image. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1802– 1813. Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. 2016a. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277. Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016b. Distillation as a defense to adversarial perturbations against deep neural networks. In Security and Privacy (SP), 2016 IEEE Symposium on, pages 582–597. IEEE. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In annual meeting on association for computational linguistics (ACL), pages 311–318. Ramakanth Pasunuru and Mohit Bansal. 2017. Multitask video captioning with video and entailment generation. In Annual Meeting of the Association for Computational Linguistics (ACL), pages 1273– 1283. Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. 2016. Generative adversarial text to image synthesis. In International Conference on Machine Learning, pages 1060–1069. Ravi Shekhar, Sandro Pezzelle, Yauhen Klimovich, Aurelie Herbelot, Moin Nabi, Enver Sangineto, Raffaella Bernardi, et al. 2017. Foil it! Find one mismatch between image and language caption. In Annual Meeting of the Association for Computational Linguistics (ACL). Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. ICLR;arXiv preprint arXiv:1312.6199. Kenneth Tran, Xiaodong He, Lei Zhang, and Jian Sun. 2016. Rich image captioning in the wild. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2016 IEEE Conference on, pages 434– 441. IEEE. 
Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond J. Mooney, and Kate Saenko. 2015. Translating videos to natural language using deep recurrent neural networks. In NAACL-HLT, pages 1494–1504. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In CVPR, pages 3156–3164. Qi Wu, Chunhua Shen, Lingqiao Liu, Anthony Dick, and Anton van den Hengel. 2016. What value do explicit high level concepts have in vision to language problems? In CVPR, pages 203–212. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In ICML, pages 2048–2057. Zhilin Yang, Ye Yuan, Yuexin Wu, William W. Cohen, and Ruslan Salakhutdinov. 2016. Review networks for caption generation. In NIPS, pages 2361–2369. Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. 2016. Image captioning with semantic attention. In CVPR, pages 4651–4659. Linchao Zhu, Zhongwen Xu, Yi Yang, and Alexander G Hauptmann. 2017. Uncovering the temporal context for video question answering. International Journal of Computer Vision, 124(3):409–421.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2598–2608 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2598 Think Visually: Question Answering through Virtual Imagery Ankit Goyal Jian Wang Jia Deng Computer Science and Engineering University of Michigan, Ann Arbor {ankgoyal, jianwolf, jiadeng}@umich.edu Abstract In this paper, we study the problem of geometric reasoning in the context of question-answering. We introduce Dynamic Spatial Memory Network (DSMN), a new deep network architecture designed for answering questions that admit latent visual representations. DSMN learns to generate and reason over such representations. Further, we propose two synthetic benchmarks, FloorPlanQA and ShapeIntersection, to evaluate the geometric reasoning capability of QA systems. Experimental results validate the effectiveness of our proposed DSMN for visual thinking tasks1. 1 Introduction The ability to reason is a hallmark of intelligence and a requirement for building question-answering (QA) systems. In AI research, reasoning has been strongly associated with logic and symbol manipulation, as epitomized by work in automated theorem proving (Fitting, 2012). But for humans, reasoning involves not only symbols and logic, but also images and shapes. Einstein famously wrote: “The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be ‘voluntarily’ reproduced and combined... Conventional words or other signs have to be sought for laboriously only in a secondary state...” And the history of science abounds with discoveries from visual thinking, from the Benzene ring to the structure of DNA (Pinker, 2003). There are also plenty of ordinary examples of human visual thinking. Consider a square room 1 Code and datasets: https://github.com/ umich-vl/think_visually with a door in the middle of its southern wall. Suppose you are standing in the room such that the eastern wall of the room is behind you. Where is the door with respect to you? The answer is ‘to your left.’ Note that in this case both the question and answer are just text. But in order to answer the question, it is natural to construct a mental picture of the room and use it in the process of reasoning. Similar to humans, the ability to ‘think visually’ is desirable for AI agents like household robots. An example could be to construct a rough map and navigation plan for an unknown environment from verbal descriptions and instructions. In this paper, we investigate how to model geometric reasoning (a form of visual reasoning) using deep neural networks (DNN). Specifically, we address the task of answering questions through geometric reasoning—both the question and answer are expressed in symbols or words, but a geometric representation is created and used as part of the reasoning process. In order to focus on geometric reasoning, we do away with natural language by designing two synthetic QA datasets, FloorPlanQA and ShapeIntersection. In FloorPlanQA, we provide the blueprint of a house in words and ask questions about location and orientation of objects in it. For ShapeIntersection, we give a symbolic representation of various shapes and ask how many places they intersect. In both datasets, a reference visual representation is provided for each sample. Further, we propose Dynamic Spatial Memory Network (DSMN), a novel DNN that uses virtual imagery for QA. 
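The geometric step in the door example above can be made concrete with a short sketch: convert the allocentric location of the door and the agent's facing direction into an egocentric answer. The compass encoding below is purely illustrative and is not part of the proposed model.

# Minimal sketch of the egocentric reasoning in the door example.
BEARING = {'north': 0, 'east': 90, 'south': 180, 'west': 270}  # degrees, clockwise

def egocentric_relation(target_direction, facing_direction):
    """Return 'front', 'right', 'back' or 'left' for a target seen by an agent
    facing `facing_direction`."""
    diff = (BEARING[target_direction] - BEARING[facing_direction]) % 360
    return {0: 'front', 90: 'right', 180: 'back', 270: 'left'}[diff]

# The eastern wall is behind you, so you face west; the door is on the southern
# wall, i.e. to your south:
# egocentric_relation('south', 'west')  ->  'left'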
DSMN is similar to existing memory networks (Kumar et al., 2016; Sukhbaatar et al., 2015; Henaff et al., 2016) in that it uses vector embeddings of questions and memory modules to perform reasoning. The main novelty of DSMN is that it creates virtual images for the input question and uses a spatial memory to aid the reasoning 2599 process. We show through experiments that with the aid of an internal visual representation and a spatial memory, DSMN outperforms strong baselines on both FloorPlanQA and ShapeIntersection. We also demonstrate that explicitly learning to create visual representations further improves performance. Finally, we show that DSMN is substantially better than the baselines even when visual supervision is provided for only a small proportion of the samples. It’s important to note that our proposed datasets consist of synthetic questions as opposed to natural texts. Such a setup allows us to sidestep difficulties in parsing natural language and instead focus on geometric reasoning. However, synthetic data lacks the complexity and diversity of natural text. For example, spatial terms used in natural language have various ambiguities that need to resolved by context (e.g. how far is ”far” and whether ”to the left” is relative to the speaker or the listener) (Shariff, 1998; Landau and Jackendoff, 1993), but our synthetic data lacks such complexities. Therefore, our method and results do not automatically generalize to real-life tasks involving natural language. Additional research is needed to extend and validate our approach on natural data. Our contributions are three-fold: First, we present Dynamic Spatial Memory Network (DSMN), a novel DNN that performs geometric reasoning for QA. Second, we introduce two synthetic datasets that evaluate a system’s visual thinking ability. Third, we demonstrate that on synthetic data, DSMN achieves superior performance for answering questions that require visual thinking. 2 Related Work Natural language datasets for QA: Several natural language QA datasets have been proposed to test AI systems on various reasoning abilities (Levesque et al., 2011; Richardson et al., 2013). Our work differs from them in two key aspects: first, we use synthetic data instead of natural data; and second, we specialize in geometrical reasoning instead of general language understanding. Using synthetic data helps us simplify language parsing and thereby focus on geometric reasoning. However, additional research is necessary to generalize our work to natural data. Synthetic datasets for QA: Recently, synthetic datasets for QA are also becoming crucial in AI. In particular, bAbI (Weston et al., 2015) has driven the development of several recent DNN-based QA systems (Kumar et al., 2016; Sukhbaatar et al., 2015; Henaff et al., 2016). bAbI consists of 20 tasks to evaluate different reasoning abilities. Two tasks, Positional Reasoning (PR) and Path Finding (PF), are related to geometric reasoning. However, each Positional Reasoning question contains only two sentences, and can be solved through simple logical deduction such as ‘A is left of B implies B is right of A’. Similarly, Path Finding involves a search problem that requires simple spatial deductions such as ‘A is east of B implies B is west of A’. In contrast, the questions in our datasets involve longer descriptions, more entities, and more relations; they are thus harder to answer with simple deductions. We also provide reference visual representation for each sample, which is not available in bAbI. 
Mental Imagery and Visual Reasoning: The importance of visual reasoning has been long recognized in AI (Forbus et al., 1991; Lathrop and Laird, 2007). Prior works in NLP (Seo et al., 2015; Lin and Parikh, 2015) have also studied visual reasoning. Our work is different from them as we use synthetic language instead of natural language. Our synthetic language is easier to parse, allowing our evaluation to mainly reflect the performance of geometric reasoning. On the other hand, while our method and conclusions can potentially apply to natural text, this remains to be validated and involves nontrivial future work. There are other differences to prior works as well. Specifically, (Seo et al., 2015) combined information from textual questions and diagrams to build a model for solving SAT geometry questions. However, our task is different as diagrams are not provided as part of the input, but are generated from the words/symbols themselves. Also, (Lin and Parikh, 2015) take advantage of synthetic images to gather semantic common sense knowledge (visual common sense) and use it to perform fill-inthe-blank (FITB) and visual paraphrasing tasks. Similar to us, they also form ‘mental images’. However, there are two differences (apart from natural vs synthetic language): first, their benchmark tests higher level semantic knowledge (like “Mike is having lunch when he sees a bear.” =⇒ “Mike tries to hide.”), while ours is more focused 2600 on geometric reasoning. Second, their model is based on hand-crafted features while we use a DNN. Spatial language for Human-Robot Interaction: Our work is also related to prior work on making robots understand spatial commands (e.g. “put that box here”, “move closer to the box”) and complete tasks such as navigation and assembly. Earlier work (M¨uller et al., 2000; Gribble et al., 1998; Zelek, 1997) in this domain used template-based commands, whereas more recent work (Skubic et al., 2004) tried to make the commands more natural. This line of work differs from ours in that the robot has visual perception of its environment that allows grounding of the textual commands, whereas in our case the agent has no visual perception, and an environment needs to be imagined. Image Generation: Our work is related to image generation using DNNs which has a large body of literature, with diverse approaches (Reed et al., 2016; Gregor et al., 2015). We also generate an image from the input. But in our task, image generation is in the service of reasoning rather than an end goal in itself—as a result, photorealism or artistic style of generated images is irrelevant and not considered. Visual Question Answering: Our work is also related to visual QA (VQA) (Johnson et al., 2016; Antol et al., 2015; Lu et al., 2016). Our task is different from VQA because our questions are in terms of words/symbols whereas in VQA the questions are visual, consisting of both text descriptions and images. The images involved in our task are internal and virtual, and are not part of the input or output. Memory and Attention: Memory and attention have been increasingly incorporated into DNNs, especially for tasks involving algorithmic inference and/or natural language (Graves et al., 2014; Vaswani et al., 2017). For QA tasks, memory and attention play an important role in state-ofthe-art (SOTA) approaches. 
(Sukhbaatar et al., 2015) introduced End-To-End Memory Network (MemN2N), a DNN with memory and recurrent attention mechanism, which can be trained end-toend for diverse tasks like textual QA and language modeling. Concurrently, (Kumar et al., 2016) introduced Dynamic Memory Network (DMN), which also uses attention and memory. (Xiong et al., 2016) proposed DMN+, with several im[3, 8.00, 7.46, 1.80, 1.83] [3, 0.61, 5.40, 8.94, 2.79] [1, 0.66, 9.70, 8.14, 3.59] [2, 3.67, 5.51, 0.80, 0.00] Description and visual representation 1: line 2: circle 3: rectangle Question: How many places do the shapes intersect? Figure 1: An example in the ShapeIntersection dataset. provements over the previous version of DMN and achieved SOTA results on VQA (Antol et al., 2015) and bAbI (Weston et al., 2015). Our proposed DSMN is a strict generalization of DMN+ (see Sec. 4.1). On removing the images and spatial memory from DSMN, it reduces to DMN+. Recently (Gupta et al., 2017) also used spatial memory in their deep learning system, but for visual navigation. We are using spatial memory for QA. 3 Datasets We introduce two synthetically-generated QA datasets to evaluate a system’s goemetrical reasoning ability: FloorPlanQA and ShapeIntersection. These datasets are not meant to test natural language understanding, but instead focus on geometrical reasoning. Owing to their synthetic nature, they are easy to parse, but nevertheless they are still challenging for DNNs like DMN+ (Xiong et al., 2016) and MemN2N (Sukhbaatar et al., 2015) that achieved SOTA results on existing QA datasets (see Table 2a). The proposed datasets are similar in spirit to bAbI (Weston et al., 2015), which is also synthetic. In spite of its synthetic nature, bAbI has proved to be a crucial benchmark for the development of new models like MemN2N, DMN+, variants of which have proved successful in various natural domains (Kumar et al., 2016; Perez and Liu, 2016). Our proposed dataset is first to explicitly test ‘visual thinking’, and its synthetic nature helps us avoid the expensive and tedious task of collecting human annotations. Meanwhile, it is important to note that conclusions drawn from synthetic data do not automatically translate to natural data, and methods developed on synthetic benchmarks need additional validation on natural domains. The proposed datasets also contain visual representations of the questions. Each of them has 38,400 questions, evenly split into a training set, a validation set and a test set (12,800 each). 2601 Component Template House door The house door is in the middle of the {nr, sr, er, wr} wall of the house. The house door is located in the {n-er, s-er, n-wr, s-wr, n-er, s-er, n-wr, s-wr} side of the house, such that it opens towards {n, s, e, w}. Room door The door for this room is in the middle of its {nr, sr, er, wr} wall. This room’s door is in the middle of its {nr, sr, er, wr} wall. The door for this room is located in its {n-er, s-er, n-wr, s-wr, n-er, s-er, n-wr, s-wr} side, such that it opens towards {n, s, e, w}. This room’s door is located in its {n-er, s-er, n-wr, s-wr, n-er, s-er, n-wr, s-wr} side, such that it opens towards {n, s, e, w}. Small room Room {1, 2, 3} is small in size and it is located in the {n, s, e, w, c, n-e, s-e, n-w, s-w} of the house. Room {1, 2, 3} is located in the {n, s, e, w, c, n-e, s-e, n-w, s-w} of the house and is small in size. 
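For ShapeIntersection (Figure 1), the ground-truth answer is the number of pairwise intersection points among the described shapes. As a hedged illustration, the sketch below counts proper crossings among line segments only, using a generic endpoint representation rather than the dataset's 5-number encoding; circles and rectangles require analogous geometric tests, and degenerate collinear overlaps are ignored here.

# Sketch: counting proper crossings among line segments.
from itertools import combinations

def cross(o, a, b):
    """2D cross product of vectors OA and OB; its sign gives the turn direction."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2 (endpoint touches excluded)."""
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def count_segment_intersections(segments):
    """Count intersection points over all pairs of segments."""
    return sum(segments_intersect(*s, *t) for s, t in combinations(segments, 2))

# count_segment_intersections([((0, 0), (2, 2)), ((0, 2), (2, 0))])  ->  1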
Medium room Room {1, 2, 3} is medium in size and it extends from the {n, s, e, w, c, n-e, s-e, n-w, s-w} to the {n, s, e, w, c, n-e, s-e, n-w, s-w} of the house. Room {1, 2, 3} extends from the {n, s, e, w, c, n-e, s-e, n-w, s-w} to the {n, s, e, w, c, n-e, s-e, n-w, s-w} of the house and is medium in size. Large room Room {1, 2, 3} is large in size and it stretches along the {n-s, e-w}direction in the {n, s, e, w, c} of the house. Room {1, 2, 3} stretches along the {n-s, e-w} direction in the {n, s, e, w, c} of the house and is large in size. Object A {cu, cd, sp, co} is located in the middle of the {nr, sr, er, wr} part of the house. A {cu, cd, sp, co} is located in the {n-er, s-er, n-wr, s-wr, n-er, s-er, n-wr, s-wr, cr} part of the house. A {cu, cd, sp, co} is located in the middle of the {nr, sr, er, wr} part of this room. A {cu, cd, sp, co} is located in the {n-er, s-er, n-wr, s-wr, n-er, s-er, n-wr, s-wr, cr} part of this room. Table 1: Templates used by the description generator for FloorPlanQA. For compactness we used the following notations, n - north, s - south, e - east, w - west, c - center, nr - northern, sr - southern, er eastern, wr - western, cr - central, cu - cube, cd - cuboid, sp - sphere and co - cone. FloorPlanQA: Each sample in FloorPlanQA involves the layout of a house that has multiple rooms (max 3). The rooms are either small, medium or large. All the rooms and the house have a door. Additionally, each room and empty-space in the house (i.e. the space in the house that is not part of any room) might also contain an object (either a cube, cuboid, sphere, or cone). Each sample has four components, a description, a question, an answer, and a visual representation. Each sentence in the description describes either a room, a door or an object. A question is of the following template: Suppose you are entering the {house, room 1, room 2, room 3}, where is the {house door, room 1 door, room 2 door, room 3 door, cube, cuboid, sphere, cone} with respect to you?. The answer is either of left, right, front, or back. Other characteristics of FloorPlanQA are summarized in Fig. 2. The visual representation of a sample consists of an ordered set of image channels, one per sentence in the description. An image channel pictorially represents the location and/or orientation of the described item (room, door, object) w.r.t. the house. An example is shown in Fig. 2. To generate samples for FloorPlanQA, we define a probabilistic generative process which produces tree structures representing layouts of houses, similar to scene graphs used in computer graphics. The root node of a tree represents an entire house, and the leaf nodes represent rooms. We use a description and visual generator to produce respectively the description and visual representation from the tree structure. The templates used by the description generator are described in Table 1. Furthermore, the order of sentences in a description is randomized while making sure that the description still makes sense. For example, in some sample, the description of room 1 can appear before that of the house-door, while in another sample, it could be reversed. Similarly, for a room, the sentence describing the room’s door could appear before or after the sentence describing the object in the room (if the room contains one). We perform rejection sampling to ensure that all the answers are equally likely, and thus removing bias. 
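One simple way to implement this balancing is rejection sampling against a cycling target answer, as sketched below. Here sample_layout and answer_for are hypothetical stand-ins for the paper's scene-graph generator and its geometric answer computation; they are not part of the released code.

# Sketch: rejection sampling so that the four answers are equally frequent.
import itertools

ANSWERS = ['left', 'right', 'front', 'back']

def generate_balanced_samples(sample_layout, answer_for, num_samples):
    """Return (layout, question, answer) triples with a uniform answer distribution."""
    samples = []
    for target_answer in itertools.islice(itertools.cycle(ANSWERS), num_samples):
        while True:
            layout, question = sample_layout()        # random house layout + question
            answer = answer_for(layout, question)     # geometric ground truth
            if answer == target_answer:               # reject until the needed class appears
                samples.append((layout, question, answer))
                break
    return samples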
ShapeIntersection: As the name suggests, ShapeIntersection is concerned with counting the number of intersection points between shapes. In this dataset, the description consists of symbols representing various shapes, and the question is always “how many points of intersection are there among these shapes?” There are three types of shapes in ShapeIntersection: rectangles, circles, and lines. The description of shapes is provided in the form of a sequence of 1D vectors, each vector representing one shape. A vector in ShapeIntersection is analogous to a sentence in FloorPlanQA. Hence, 2602 A cube is located in the south-eastern part of the house. Room 1 is located in the north-west of the house and is small in size. The door for this room is in the middle of its southern wall. The house door is located in the north-eastern side of the house, such that it opens towards east. Question: If you are entering the house through its door, where is the cube with respect to you? Answer: Left Description and visual representation vocabulary size 66 # unique sentences 264 # unique descriptions 38093 # unique questions 32 # unique question-description pairs 38228 Avg. # words per sentence 15 Avg. # sentences per description 6.61 Figure 2: An example and characteristics of FloorPlanQA (when considering all the 38,400 samples i.e. training, validation and test sets combined). for ShapeIntersection, the term ‘sentence’ actually refers to a vector. Each sentence describing a shape consists of 5 real numbers. The first number stands for the type of shape: 1 - line, 2 - circle, and 3 - rectangle. The subsequent four numbers specify the size and location of the shape. For example, in case of a rectangle, they represent its height, its width, and coordinates of its bottom-left corner. Note that one can also describe the shapes using a sentence, e.g. “there is a rectangle at (5, 5), with a height of 2 cm and width of 8 cm.” However, as our focus is to evaluate ‘visual thinking’, we work directly with the symbolic encoding. In a given description, there are 6.5 shapes on average, and at most 6 lines, 3 rectangles and 3 circles. All the shapes in the dataset are unique and lie on a 10 × 10 canvas. While generating the dataset, we do rejection sampling to ensure that the number of intersections is uniformly distributed from 0 to the maximum possible number of intersections, regardless of the number of lines, rectangles, and circles. This ensures that the number of intersections cannot be estimated from the number of lines, circles or rectangles. Similar to FloorPlanQA, the visual representation for a sample in this dataset is an ordered set of image channels. Each channel is associated with a sentence, and it plots the described shape. An example is shown in Figure 1. 4 Dynamic Spatial Memory Network We propose Dynamic Spatial Memory Network (DSMN), a novel DNN designed for QA with geometric reasoning. What differentiates DSMN from other QA DNNs is that it forms an internal visual representation of the input. It then uses a spatial memory to reason over this visual representation. A DSMN can be divided into five modules: the input module, visual representation module, question module, spatial memory module, and answer module. The input module generates an embedding for each sentence in the description. The visual representation module uses these embeddings to produce an intermediate visual representation for each sentence. In parallel, the question module produces an embedding for the question. 
The spatial memory module then goes over the question embedding, the sentence embeddings, and the visual representation multiple times to update the spatial memory. Finally, the answer module uses the spatial memory to output the answer. Fig. 3 illustrates the overall architecture of DSMN. Input Module: This module produces an embedding for each sentence in the description. It is therefore customized based on how the descriptions are provided in a dataset. Since the descriptions are in words for FloorPlanQA, a position encoding (PE) layer is used to produce the initial sentence embeddings. This is done to ensure a fair comparison with DMN+ (Xiong et al., 2016) and MemN2N (Sukhbaatar et al., 2015), which also use a PE layer. A PE layer combines the wordembeddings to encode the position of words in a sentence (Please see (Sukhbaatar et al., 2015) for more information). For ShapeIntersection, the description is given as a sequence of vectors. Therefore, two FC layers (with ReLU in between) are used to obtain the initial sentence embeddings. These initial sentence embeddings are then fed into a bidirectional Gated Recurrent Unit (GRU) (Cho et al., 2014) to propagate the information across sentences. Let −→ si and ←− si be the respective output of the forward and backward GRU at ith step. Then, the final sentence embedding for the ith sentence is given by si = −→ si + ←− si. Question Module: This module produces an embedding for the question. It is also customized to the dataset. For FloorPlanQA, the embeddings of the words in the question are fed to a GRU, and the final hidden state of the GRU is used as the question embedding. For ShapeIntersection, the question is always fixed, so we use an all-zero vector as the question embedding. Visual Representation Module: This module 2603 generates a visual representation for each sentence in the description. It consists of two subcomponents: an attention network and an encoderdecoder network. The attention network gathers information from previous sentences that is important to produce the visual representation for the current sentence. For example, suppose the current sentence describes the location of an object with respect to a room. Then in order to infer the location of the object with respect to the house, one needs the location of the room with respect to the house, which is described in some previous sentence. The encoder-decoder network encodes the visual information gathered by the attention network, combines it with the current sentence embedding, and decodes the visual representation of the current sentence. An encoder (En(.)) takes an image as input and produces an embedding, while a decoder (De(.)) takes an embedding as input and produces an image. An encoder is composed of series of convolution layers and a decoder is composed of series of deconvolution layers. Suppose we are currently processing the sentence st. This means we have already processed the sentences s1, s2, . . . , st−1 and produced the corresponding visual representations S1, S2, . . . , St−1. We also add s0 and S0, which are all-zero vectors to represent the null sentence. The attention network produces a scalar attention weight ai for the ith sentence which is given by ai = Softmax(wstzi + bs) where zi = [|si − st|; si ◦st]. Here, ws is a vector, bs is a scalar, ◦represents element-wise multiplication, |.| represents element-wise absolute value, and [v1; v2] represents the concatenation of vectors v1 and v2. The gathered visual information is ¯St = Pt−1 i=0 aiSi. 
It is fed into the encoder-decoder network. The visual representation for st is given by St = Des  st; Ens( ¯St)  . The parameters of Ens(.), Des(), ws, and bs are shared across multiple iterations. In the proposed model, we make the simplifying assumption that the visual representation of the current sentence does not depend on future sentences. In other words, it can be completely determined from the previous sentences in the description. Both FloorPlanQA and ShapeIntersection satisfy this assumption. Spatial Memory Module: This module gathers relevant information from the description and updates memory accordingly. Similar to DMN+ and MemN2N, it collects information and updates memory multiple times to perform transitive reasoning. One iteration of information collection and memory update is referred as a ‘hop’. The memory consists of two components: a 2D spatial memory and a tag vector. The 2D spatial memory can be thought of as a visual scratch pad on which the network ‘sketches’ out the visual information. The tag vector is meant to represent what is ‘sketched’ on the 2D spatial memory. For example, the network can sketch the location of room 1 on its 2D spatial memory, and store the fact that it has sketched room 1 in the tag vector. As mentioned earlier, each step of the spatial memory module involves gathering of relevant information and updating of memory. Suppose we are in step t. Let M (t−1) represent the 2D spatial memory and m(t−1) represent the tag vector after step t −1. The network gathers the relevant information by calculating the attention value for each sentence based on the question and the current memory. For sentence si, the scalar attention value g(t) i equal to Softmax(wt yp(t) i + by), where p(t) i is given as p(t) i =  |m(t−1) −si|; m(t−1) ◦si; |q −si|; q ◦si; En(t) p1 (|M (t−1) −Si|); En(t) p2 (M (t−1) ◦Si)  (1) M (0) and m(0) represent initial blank memory, and their elements are all zero. Then, gathered information is represented as a context tag vector, c(t) = AttGRU(gi(t)si) and 2D context, C(t) = Pn i=0 gi(t)Si. Please refer to (Xiong et al., 2016) for information about AttGRU(.). Finally, we use the 2D context and context tag vector to update the memory as follows: m(t) = ReLU  Wm(t) m(t−1); q; c(t); Enc(C(t))  + bm(t) (2) M (t) = De(t) m  m(t); En(t) m (M (t−1))  (3) Answer Module: This module uses the final memory and question embedding to generate the output. The feature vector used for predicting the answer is given by f, where M (T ) and m(T ) represent the final memory. f =  Enf(M (T )); m(T ); q  (4) 2604 Visual Representation Module Spatial Memory Module (a) Overall architecture (b) Visual represenation module (c) Spatial memory module attention memory update attention Answer Module S1 s1 SN q MT mT S1 Sn-1 s1 sn-1 sn Mt-1 mt-1 Mt-1 mt Ct ct sN Sn ~ Sn Mt S1 SN s1 sN q Figure 3: The architecture of the proposed Dynamic Spatial Memory Network (DSMN). To obtain the output, an FC layer is applied to f in case of regression, while the FC layer is followed by softmax in case of classification. To keep DSMN similar to DMN+, we apply a dropout layer on sentence encodings (si) and f. 4.1 DSMN as a strict generalization of DMN DSMN is a strict generalization of a DMN+. If we remove the visual representation of the input along with the 2D spatial memory, and just use vector representations with memory tags, then a DSMN reduces to DMN+. This ensures that comparison with DMN+ is fair. 
4.2 DSMN with or without intermediate visual supervision As described in previous sections, a DSMN forms an intermediate visual representation of the input. Therefore, if we have a ‘ground-truth’ visual representation for the training data, we could use it to train our network better. This leads to two different ways for training a DSMN, one with intermediate visual supervision and one without it. Without intermediate visual supervision, we train the network in an end-to-end fashion by using a loss (Lw/o vi) that compares the predicted answer with the ground truth. With intermediate visual supervision, we train our network using an additional visual representation loss (Lvi) that measures how close the generated visual representation is to the ground-truth representation. Thus, the loss used for training with intermediate supervision is given by Lw vi = λviLvi + (1 −λvi)Lw/o vi, where λvi is a hyperparameter which can be tuned for each dataset. Note that in neither case do we need any visual input once the network is trained. During testing, the only input to the network is the description and question. Also note that we can provide intermediate visual supervision to DSMN even when the visual representations for only a portion of samples in the training data are available. This can be useful when obtaining visual representation is expensive and time-consuming. 5 Experiments Baselines: LSTM (Hochreiter and Schmidhuber, 1997) is a popular neural network for sequence processing tasks. We use two versions of LSTM-based baselines. LSTM-1 is a common version that is used as a baseline for textual QA (Sukhbaatar et al., 2015; Graves et al., 2016). In LSTM-1, we concatenate all the sentences and the question to a single string. For FloorPlanQA, we do word embedding look-up, while for ShapeIntersection, we project each real number into higher dimension via a series of FC layers. The sequence of vectors is fed into an LSTM. The final output vector of the LSTM is then used for prediction. We develop another version of LSTM that we call LSTM-2, in which the question is concatenated to the description. We use a two-level hierarchy to embed the description. We first extract an embedding for each sentence. For FloorPlanQA, we use an LSTM to get the sentence embeddings, and for ShapeIntersection, we use a series of FC layers. We then feed the sentence embeddings into an LSTM, whose output is used for prediction. Further, we compare our model to DMN+ (Xiong et al., 2016) and MemN2N (Sukhbaatar et al., 2015), which achieved state-of-the-art results on bAbI (Weston et al., 2015). In particular, we compare the 3-hop versions of DSMN, DMN+, and MemN2N. Training Details: We used ADAM (Kingma and Ba, 2014) to train all models, and the learning rate 2605 FloorPlanQA ShapeIntersection MODEL (accuracy in %) (rmse) LSTM-1 41.36 3.28 LSTM-2 50.69 2.99 MemN2N 45.92 3.51 DMN+ 60.29 2.98 DSMN 68.01 2.84 DSMN* 97.73 2.14 (a) The test set performance of different models on FloorPlanQA and ShapeIntersection. DSMN* refers to the model with intermediate supervision. FloorPlanQA MODEL f in Eqn. 4 (accuracy in %) DSMN  m(T ); q  67.65 DSMN  Enf(M (T )); q  43.90 DSMN  Enf(M (T )); m(T ); q  68.12 DSMN*  m(T ); q  97.24 DSMN*  Enf(M (T )); q  95.17 DSMN*  Enf(M (T )); m(T ); q  98.08 (b) The validation set performances for the ablation study on the usefulness of tag (m(T )) and 2D spatial memory (M (T )) in the answer feature vector for f. 
FloorPlanQA MODEL (accuracy in %) 1-Hop DSMN 63.32 2-Hop DSMN 65.59 3-Hop DSMN 68.12 1-Hop DSMN* 90.09 2-Hop DSMN* 97.45 3-Hop DSMN* 98.08 (c) The validation set performance for the ablation study on variation in performance with hops. Table 2: Experimental results showing comparison with baselines, and ablation study of DSMN for each model is tuned for each dataset. We tune the embedding size and l2 regularization weight for each model and dataset pair separately. For reproducibility, the value of the best-tuned hyperparameters is mentioned in the supplementary material. As reported by (Sukhbaatar et al., 2015; Kumar et al., 2016; Henaff et al., 2016), we also observe that the results of memory networks are unstable across multiple runs. Therefore for each hyperparameter choice, we run all the models 10 times and select the run with the best performance on the validation set. For FloorPlanQA, all models are trained up to a maximum of 1600 epochs, with early stopping after 80 epochs if the validation accuracy did not increase. The maximum number of epochs for ShapeIntersection is 800 epochs, with early stopping after 80 epochs. Additionally, we modify the input module and question module of DMN+ and MemN2N to be same as ours for the ShapeIntersection dataset. For MemN2N, we use the publicly available im(a) Test set rmse on ShapeIntersection. (b) Test set accuracy on FloorPlanQA. Figure 4: Performance of DSMN* with varying percentage of intermediate visual supervision. plementation2 and train it exactly as all other models (same optimizer, total epochs, and early stopping criteria) for fairness. While the reported best result for MemN2N is on the version with position encoding, linear start training, and randominjection of time index noise (Sukhbaatar et al., 2015), the version we use has only position encoding. Note that the comparison is still meaningful because linear start training and time index noise are not used in DMN+ (and as a result, neither in our proposed DSMN). Results: The results for FloorPlanQA and ShapeIntersection are summarized in Table 2a. For brevity, we will refer to the DSMN model trained without intermediate visual supervision as DSMN, and the one with intermediate visual supervision as DSMN*. We see that DSMN (i.e the one without intermediate supervision) outperforms DMN+, MemN2N and the LSTM baselines on both datasets. However, we consider DSMN to be only slightly better than DMN+ because both are observed to be unstable across multiple runs and so the gap between the two has a large variance. Finally, DSMN* outperforms all other approaches by a large margin on both datasets, which demonstrates the utility of visual supervision in proposed tasks. While the variation can be significant across runs, if we run each model 10 times and choose the best run, we observe consistent results. We visualized the intermediate visual representations, but when no visual supervision is pro2https://github.com/domluna/memn2n 2606 Figure 5: Attention values on each sentence during different memory ‘hops’ for a sample from FloorPlanQA. Darker color indicates more attention. To answer, one needs the location of room 1’s door and the house door. To infer the location of room 1’s door, DSMN* directly jumps to sent. 3. Since DMN+ does not form a visual representation, it tries to infer the location of room 1’s door w.r.t the house by finding the location of the room’s door w.r.t the room (sent. 3) and the location of the room w.r.t the house (sent. 2). 
Both DSMN* and DMN+ use one hop to infer the location of the house door (sent. 1). vided, they were not interpretable (sometimes they looked like random noise, sometimes blank). In the case when visual supervision is provided, the intermediate visual representation is well-formed and similar to the ground-truth. We further investigate how DSMN* performs when intermediate visual supervision is available for only a portion of training samples. As shown in Fig. 4, DSMN* outperforms DMN+ by a large margin, even when intermediate visual supervision is provided for only 1% of the training samples. This can be useful when obtaining visual representations is expensive and time-consuming. One possible justification for why visual supervision (even in a small amount) helps a lot is that it constrains the high-dimensional space of possible intermediate visual representations. With limited data and no explicit supervision, automatically learning these high-dimensional representations can be difficult. Additonally, we performed ablation study (see Table 2b) on the usefulness of final memory tag vector (m(T )) and 2D spatial memory (M (T )) in the answer feature vector f (see Eqn. 4). We removed each of them one at a time, and retrained (with hyperparameter tuning) the DSMN and DSMN* models. Note that they are removed only from the final feature vector f, and both of them are still coupled. The model with both tag and 2D spatial memory (f =  Enf(M (T )); m(T ); q  ) performs slightly better than the only tag vector model (f =  m(T ); q  ). Also, as expected the only 2D spatial memory model (f =  Enf(M (T )); q  ) performs much better for DSMN* than DSMN becuase of the intermdiate supervision. Further, Table 2c shows the effect of varying the number of memory ‘hops’ for DSMN and DSMN* on FloorPlanQA. The performance of both DSMN and DSMN* increases with the number of ‘hops’. Note that even the 1-hop DSMN* performs well (better than baselines). Also, note that the difference in performance between 2-hop DSMN* and 3-hop DSMN* is not much. A possible justification for why DSMN* performs well even with fewer memory ‘hops’ is that DSMN* completes some ‘hops of reasoning’ in the visual representation module itself. Suppose one needs to find the location of an object placed in a room, w.r.t. the house. To do so, one first needs to find the location of the room w.r.t. the house, and then the location of the object w.r.t. the room. However, if one has already ‘sketched’ out the location of the object in the house, one can directly fetch it. It is during sketching the object’s location that one has completed a ‘hop of reasoning’. For a sample from FloorPlanQA, we visualize the attention maps in the memory module of 3-hop DMN+ and 3-hop DSMN* in Fig. 5. To infer the location of room 1’s door, DSMN* directly fetches sentence 3, while DMN+ tries to do so by fetching two sentences (one for the room’s door location w.r.t the room and one for the room’s location w.r.t the house). Conclusion: We have investigated how to use DNNs for modeling visual thinking. We have introduced two synthetic QA datasets, FloorPlanQA and ShapeIntersection, that test a system’s ability to think visually. We have developed DSMN, a novel DNN that reasons in the visual space for answering questions. Experimental results have demonstrated the effectiveness of DSMN for geometric reasoning on synthetic data. Acknowledgements: This work is partially supported by the National Science Foundation under Grant No. 1633157. 
2607 References Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In ICCV, pages 2425–2433. Kyunghyun Cho, Bart Van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259. Melvin Fitting. 2012. First-order logic and automated theorem proving. Springer Science & Business Media. Kenneth D Forbus, Paul Nielsen, and Boi Faltings. 1991. Qualitative spatial reasoning: The clock project. Artificial Intelligence, 51(1-3):417–471. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka GrabskaBarwi´nska, Sergio G´omez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. 2016. Hybrid computing using a neural network with dynamic external memory. Nature, pages 471– 476. Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. 2015. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623. William S Gribble, Robert L Browning, Micheal Hewett, Emilio Remolina, and Benjamin J Kuipers. 1998. Integrating vision and spatial reasoning for assistive navigation. In Assistive Technology and artificial intelligence, pages 179–193. Saurabh Gupta, James Davidson, Sergey Levine, Rahul Sukthankar, and Jitendra Malik. 2017. Cognitive mapping and planning for visual navigation. arXiv preprint arXiv:1702.03920. Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2016. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, pages 1735–1780. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2016. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. arXiv preprint arXiv:1612.06890. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In ICML, pages 1378– 1387. Barbara Landau and Ray Jackendoff. 1993. Whence and whither in spatial language and spatial cognition? Behavioral and brain sciences, 16:255–265. Scott D Lathrop and John E Laird. 2007. Towards incorporating visual imagery into a cognitive architecture. In International conference on cognitive modeling, page 25. Hector J Levesque, Ernest Davis, and Leora Morgenstern. 2011. The winograd schema challenge. In AAAI Spring Symposium, volume 46, page 47. Xiao Lin and Devi Parikh. 2015. Don’t just listen, use your imagination: Leveraging visual common sense for non-visual tasks. In ICCV, pages 2984–2993. Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image coattention for visual question answering. In NIPS, pages 289–297. Rolf M¨uller, Thomas R¨ofer, Axel Lankenau, Alexandra Musto, Klaus Stein, and Andreas Eisenkolb. 2000. Coarse qualitative descriptions in robot navigation. In Spatial Cognition II, pages 265–276. Julien Perez and Fei Liu. 2016. Dialog state tracking, a machine reading approach using memory network. 
arXiv preprint arXiv:1606.04052. Steven Pinker. 2003. The language instinct: How the mind creates language. Penguin UK. Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. 2016. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, volume 3, page 4. Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. 2015. Solving geometry problems: Combining text and diagram interpretation. In EMNLP, pages 1466–1476. A Rashid BM Shariff. 1998. Natural-language spatial relations between linear and areal objects: the topology and metric of english-language terms. International journal of geographical information science, 12:215–245. Marjorie Skubic, Dennis Perzanowski, Samuel Blisard, Alan Schultz, William Adams, Magda Bugajska, and Derek Brock. 2004. Spatial language for human-robot dialogs. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), pages 154–167. 2608 Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In NIPS, pages 2440–2448. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762. Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merri¨enboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698. Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. In ICML, pages 2397– 2406. John S Zelek. 1997. Human-robot interaction with minimal spanning natural language template for autonomous and tele-operated control. In IROS, pages 299–305.
2018
242
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2609–2619 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2609 Interactive Language Acquisition with One-shot Visual Concept Learning through a Conversational Game Haichao Zhang†, Haonan Yu†, and Wei Xu †§ † Baidu Research - Institue of Deep Learning, Sunnyvale USA § National Engineering Laboratory for Deep Learning Technology and Applications, Beijing China {zhanghaichao,haonanyu,wei.xu}@baidu.com Abstract Building intelligent agents that can communicate with and learn from humans in natural language is of great value. Supervised language learning is limited by the ability of capturing mainly the statistics of training data, and is hardly adaptive to new scenarios or flexible for acquiring new knowledge without inefficient retraining or catastrophic forgetting. We highlight the perspective that conversational interaction serves as a natural interface both for language learning and for novel knowledge acquisition and propose a joint imitation and reinforcement approach for grounded language learning through an interactive conversational game. The agent trained with this approach is able to actively acquire information by asking questions about novel objects and use the justlearned knowledge in subsequent conversations in a one-shot fashion. Results compared with other methods verified the effectiveness of the proposed approach. 1 Introduction Language is one of the most natural forms of communication for human and is typically viewed as fundamental to human intelligence; therefore it is crucial for an intelligent agent to be able to use language to communicate with human as well. While supervised training with deep neural networks has led to encouraging progress in language learning, it suffers from the problem of capturing mainly the statistics of training data, and from a lack of adaptiveness to new scenarios and being flexible for acquiring new knowledge without inefficient retraining or catastrophic forgetting. Moreover, supervised training of deep neural network models needs a large number of training samples while many interesting applications require rapid learning from a small amount of data, which poses an even greater challenge to the supervised setting. In contrast, humans learn in a way very different from the supervised setting (Skinner, 1957; Kuhl, 2004). First, humans act upon the world and learn from the consequences of their actions (Skinner, 1957; Kuhl, 2004; Petursdottir and Mellor, 2016). While for mechanical actions such as movement, the consequences mainly follow geometrical and mechanical principles, for language, humans act by speaking, and the consequence is typically a response in the form of verbal and other behavioral feedback (e.g., nodding) from the conversation partner (i.e., teacher). These types of feedback typically contain informative signals on how to improve language skills in subsequent conversations and play an important role in humans’ language acquisition process (Kuhl, 2004; Petursdottir and Mellor, 2016). Second, humans have shown a celebrated ability to learn new concepts from small amount of data (Borovsky et al., 2003). From even just one example, children seem to be able to make inferences and draw plausible boundaries between concepts, demonstrating the ability of one-shot learning (Lake et al., 2011). 
The language acquisition process and the oneshot learning ability of human beings are both impressive as a manifestation of human intelligence, and are inspiring for designing novel settings and algorithms for computational language learning. In this paper, we leverage conversation as both an interactive environment for language learning (Skinner, 1957) and a natural interface for acquiring new knowledge (Baker et al., 2002). We propose an approach for interactive language acquisition with one-shot concept learning ability. The proposed approach allows an agent to learn grounded language from scratch, acquire the trans2610 ferable skill of actively seeking and memorizing information about novel objects, and develop the one-shot learning ability, purely through conversational interaction with a teacher. 2 Related Work Supervised Language Learning. Deep neural network-based language learning has seen great success on many applications, including machine translation (Cho et al., 2014b), dialogue generation (Wen et al., 2015; Serban et al., 2016), image captioning and visual question answering (?Antol et al., 2015). For training, a large amount of labeled data is needed, requiring significant efforts to collect. Moreover, this setting essentially captures the statistics of training data and does not respect the interactive nature of language learning, rendering it less flexible for acquiring new knowledge without retraining or forgetting (Stent and Bangalore, 2014). Reinforcement Learning for Sequences. Some recent studies used reinforcement learning (RL) to tune the performance of a pre-trained language model according to certain metrics (Ranzato et al., 2016; Bahdanau et al., 2017; Li et al., 2016; Yu et al., 2017). Our work is also related to RL in natural language action space (He et al., 2016) and shares a similar motivation with Weston (2016) and Li et al. (2017), which explored language learning through pure textual dialogues. However, in these works (He et al., 2016; Weston, 2016; Li et al., 2017), a set of candidate sequences is provided and the action is to select one from the set. Our main focus is rather on learning language from scratch: the agent has to learn to generate a sequence action rather than to simply select one from a provided candidate set. Communication and Emergence of Language. Recent studies have examined learning to communicate (Foerster et al., 2016; Sukhbaatar et al., 2016) and invent language (Lazaridou et al., 2017; Mordatch and Abbeel, 2018). The emerged language needs to be interpreted by humans via postprocessing (Mordatch and Abbeel, 2018). We, however, aim to achieve language learning from the dual perspectives of understanding and generation, and the speaking action of the agent is readily understandable without any post-processing. Some studies on language learning have used a guesser-responder setting in which the guesser tries to achieve the final goal (e.g., classification) by collecting additional information through asking the responder questions (Strub et al., 2017; Das et al., 2017). These works try to optimize the question being asked to help the guesser achieve the final goal, while we focus on transferable speaking and one-shot ability. One-shot Learning and Active Learning. Oneshot learning has been investigated in some recent works (Lake et al., 2011; Santoro et al., 2016; Woodward and Finn, 2016). 
The memoryaugmented network (Santoro et al., 2016) stores visual representations mixed with ground truth class labels in an external memory for one-shot learning. A class label is always provided following the presentation of an image; thus the agent receives information from the teacher in a passive way. Woodward and Finn (2016) present efforts toward active learning, using a vanilla recurrent neural network (RNN) without an external memory. Both lines of study focus on image classification only, meaning the class label is directly provided for memorization. In contrast, we target language and one-shot learning via conversational interaction, and the learner has to learn to extract important information from the teacher’s sentences for memorization. 3 The Conversational Game We construct a conversational game inspired by experiments on language development in infants from cognitive science (Waxman, 2004). The game is implemented with the XWORLD simulator (Yu et al., 2018; Zhang et al., 2017) and is publicly available online.1 It provides an environment for the agent2 to learn language and develop the one-shot learning ability. One-shot learning here means that during test sessions, no further training happens to the agent and it has to answer teacher’s questions correctly about novel images of neverbefore-seen classes after being taught only once by the teacher, as illustrated in Figure 1. To succeed in this game, the agent has to learn to 1) speak by generating sentences, 2) extract and memorize useful information with only one exposure and use it in subsequent conversations, and 3) behave adaptively according to context and its own knowledge (e.g., asking questions about unknown objects and answering questions about something known), all achieved through interacting with the 1https://github.com/PaddlePaddle/XWorld 2We use the term agent interchangeably with learner. 2611 S1  Train Teacher Learner    Sl    Test (novel data)   Figure 1: Interactive language and one-shot concept learning. Within a session Sl, the teacher may ask questions, answer learner’s questions, make statements, or say nothing. The teacher also provides reward feedback based on learner’s responses as (dis-)encouragement. The learner alternates between interpreting teacher’s sentences and generating a response through interpreter and speaker. Left: Initially, the learner can barely say anything meaningful. Middle: Later it can produce meaningful responses for interaction. Right: After training, when confronted with an image of cherry, which is a novel class that the learner never saw before during training, the learner can ask a question about it (“what is it”) and generate a correct statement (“this is cherry”) for another instance of cherry after only being taught once. teacher. This makes our game distinct from other seemingly relevant games, in which the agent cannot speak (Wang et al., 2016) or “speaks” by selecting a candidate from a provided set (He et al., 2016; Weston, 2016; Li et al., 2017) rather than generating sentences by itself, or games mainly focus on slow learning (Das et al., 2017; Strub et al., 2017) and falls short on one-shot learning. In this game, sessions (Sl) are randomly instantiated during interaction. Testing sessions are constructed with a separate dataset with concepts that never appear before during training to evaluate the language and one-shot learning ability. 
Within a session, the teacher randomly selects an object and interacts with the learner about the object by randomly 1) posing a question (e.g., “what is this”), 2) saying nothing (i.e., “”) or 3) making a statement (e.g., “this is monkey”). When the teacher asks a question or says nothing, i) if the learner raises a question, the teacher will provide a statement about the object asked (e.g., “it is frog”) with a question-asking reward (+0.1); ii) if the learner says nothing, the teacher will still provide an answer (e.g., “this is elephant”) but with an incorrect-reply reward (−1) to discourage the learner from remaining silent; iii) for all other incorrect responses from the learner, the teacher will provide an incorrect-reply reward and move on to the next random object for interaction. When the teacher generates a statement, the learner will receive no reward if a correct statement is generated otherwise an incorrect-reply reward will be given. The session ends if the learner answers the teacher’s question correctly, generates a correct statement when the teacher says nothing (receiving a correct-answer reward +1), or when the maximum number of steps is reached. The sentence from teacher at each time step is generated using a context-free grammar as shown in Table 1. A success is reached if the learner behaves correctly during the whole session: asking questions about novel objects, generating answers when asked, and making statements when the teacher says nothing about objects that have been taught within the session. Otherwise it is a failure. Table 1: Grammar for the teacher’s sentences. start →question | silence | statement question →Q1 | Q2 | Q3 silence →“ ” statement →A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 Q1 →“what” Q2 →“what” M Q3 →“tell what” N M →“is it” | “is this” | “is there” | “do you see” | “can you see” | “do you observe” | “can you observe” N →“it is” | “this is” | “there is” | “you see” | “you can see” | “you observe” | “you can observe” A1 →G A2 →“it is” G A3 →“this is” G A4 →“there is” G A5 →“i see” G A6 →“i observe” G A7 →“i can see” G A8 →“i can observe” G G →object name 4 Interactive Language Acquisition via Joint Imitation and Reinforcement Motivation. The goal is to learn to converse and develop the one-shot learning ability by conversing with a teacher and improving from teacher’s feedback. We propose to use a joint imitation and reinforce approach to achieve this goal. Imitation 2612 helps the agent to develop the basic ability to generate sensible sentences. As learning is done by observing the teacher’s behaviors during conversion, the agent essentially imitates the teacher from a third-person perspective (Stadie et al., 2017) rather than imitating an expert agent who is conversing with the teacher (Das et al., 2017; Strub et al., 2017). During conversations, the agent perceives sentences and images without any explicit labeling of ground truth answers, and it has to learn to make sense of raw perceptions, extract useful information, and save it for later use when generating an answer to teacher’s question. While it is tempting to purely imitate the teacher, the agent trained this way only develops echoic behavior (Skinner, 1957), i.e., mimicry. Reinforce leverages confirmative feedback from the teacher for learning to converse adaptively beyond mimicry by adjusting the action policy. It enables the learner to use the acquired speaking ability and adapt it according to reward feedback. 
This is analogous to some views on the babies’ language-learning process that babies use the acquired speaking skills by trial and error with parents and improve according to the consequences of speaking actions (Skinner, 1957; Petursdottir and Mellor, 2016). The fact that babies don’t fully develop the speaking capabilities without the ability to hear (Houston and Miyamoto, 2011), and that it is hard to make a meaningful conversation with a trained parrot signifies the importance of both imitation and reinforcement in language learning. Formulation. The agent’s response can be modeled as a sample from a probability distribution over the possible sequences. Specifically, for one session, given the visual input vt and conversation history Ht={w1, a1, · · · , wt}, the agent’s response at can be generated by sampling from a distribution of the speaking action at ∼ pS θ(a|Ht, vt). The agent interacts with the teacher by outputting the utterance at and receives feedback from the teacher in the next step, with wt+1 a sentence as verbal feedback and rt+1 reward feedback (with positive values as encouragement while negative values as discouragement, according to at, as described in Section 3). Central to the goal is learning pS θ(·). We formulate the problem as the minimization of a cost function as: Lθ=EW  −P t log pI θ(wt|·)  | {z } Imitation LI θ +EpS θ  −P t[γ]t−1 · rt | {z } Reinforce LR θ where EW(·) is the expectation over all the sentences W from teacher, γ is a reward discount factor, and [γ]t denotes the exponentiation over γ. While the imitation term learns directly the predictive distribution pI θ(wt|Ht−1, at), it contributes to pS θ(·) through parameter sharing between them. Architecture. The learner comprises four major components: external memory, interpreter, speaker, and controller, as shown in Figure 2. External memory is flexible for storing and retrieving information (Graves et al., 2014; Santoro et al., 2016), making it a natural component of our network for one-shot learning. The interpreter is responsible for interpreting the teacher’s sentences, extracting information from the perceived signals, and saving it to the external memory. The speaker is in charge of generating sentence responses with reading access to the external memory. The response could be a question asking for information or a statement answering a teacher’s question, leveraging the information stored in the external memory. The controller modulates the behavior of the speaker to generate responses according to context (e.g., the learner’s knowledge status). At time step t, the interpreter uses an interpreter-RNN to encode the input sentence wt from the teacher as well as historical conversational information into a state vector ht I. ht I is then passed through a residue-structured network, which is an identity mapping augmented with a learnable controller f(·) implemented with fully connected layers for producing ct. Finally, ct is used as the initial state of the speaker-RNN for generating the response at. The final state ht last of the speaker-RNN will be used as the initial state of the interpreter-RNN at the next time step. 4.1 Imitation with Memory Augmented Neural Network for Echoic Behavior The teacher’s way of speaking provides a source for the agent to imitate. For example, the syntax for composing a sentence is a useful skill the agent can learn from the teacher’s sentences, which could benefit both interpreter and speaker. 
Imitation is achieved by predicting teacher's future sentences with interpreter and parameter sharing between interpreter and speaker.
Figure 2: Network structure. (a) Illustration of the overall architecture. At each time step, the learner uses the interpreter module to encode the teacher's sentence. The visual perception is also encoded and used as a key to retrieve information from the external memory. The last state of the interpreter-RNN will be passed through a controller. The controller's output will be added to the input and used as the initial state of the speaker-RNN. The interpreter-RNN will update the external memory with an importance (illustrated with transparency) weighted information extracted from the perception input. 'Mix' denotes a mixture of word embedding vectors. (b) The structures of the interpreter-RNN (top) and the speaker-RNN (bottom). The interpreter-RNN and speaker-RNN share parameters.
For prediction, we can represent the probability of the next sentence w^t conditioned on the image v^t as well as previous sentences from both the teacher and the learner {w^1, a^1, · · · , w^{t−1}, a^{t−1}} as
p^I_θ(w^t | H_{t−1}, a^{t−1}, v^t) = ∏_i p^I_θ(w^t_i | w^t_{1:i−1}, h^{t−1}_{last}, v^t),   (1)
where h^{t−1}_{last} is the last state of the RNN at time step t−1 as the summarization of {H_{t−1}, a^{t−1}} (c.f., Figure 2), and i indexes words within a sentence. It is natural to model the probability of the i-th word in the t-th sentence with an RNN, where the sentences up to t and words up to i within the t-th sentence are captured by a fixed-length state vector h^t_i = RNN(h^t_{i−1}, w^t_i). To incorporate knowledge learned and stored in the external memory, the generation of the next word is adaptively based on i) the predictive distribution of the next word from the state of the RNN to capture the syntactic structure of sentences, and ii) the information from the external memory to represent the previously learned knowledge, via a fusion gate g:
p^I_θ(w^t_i | h^t_i, v^t) = (1 − g) · p_h + g · p_r,   (2)
where p_h = softmax(E^T f_{MLP}(h^t_i)) and p_r = softmax(E^T r). E ∈ R^{d×k} is the word embedding table, with d the embedding dimension and k the vocabulary size. r is a vector read out from the external memory using a visual key as detailed in the next section. f_{MLP}(·) is a multi-layer perceptron (MLP) for bridging the semantic gap between the RNN state space and the word embedding space. The fusion gate g is computed as g = f(h^t_i, c), where c is the confidence score c = max(E^T r), and a well-learned concept should have a large score by design (Appendix A.2).
Multimodal Associative Memory. We use a multimodal memory for storing visual (v) and sentence (s) features with each modality while preserving the correspondence between them (Baddeley, 1992). Information organization is more structured than the single modality memory as used in Santoro et al. (2016) and cross modality retrieval is straightforward under this design.
A visual encoder implemented as a convolutional neural network followed by fully connected layers is used to encode the visual image v into a visual key kv, and then the corresponding sentence feature can be retrieved from the memory as: r ←READ(kv, Mv, Ms). (3) Mv and Ms are memories for visual and sentence modalities with the same number of slots (columns). Memory read is implemented as r= Msα with α a soft reading weight obtained through the visual modality by calculating the cosine similarities between kv and slots of Mv. Memory write is similar to Neural Turing Machine (Graves et al., 2014), but with a content importance gate gmem to adaptively control whether the content c should be written into memory: Mm ←WRITE(Mm, cm, gmem), m∈{v, s}. 2614 For the visual modality cv≜kv. For the sentence modality, cs has to be selectively extracted from the sentence generated by the teacher. We use an attention mechanism to achieve this by cs=Wη, where W denotes the matrix with columns being the embedding vectors of all the words in the sentence. η is a normalized attention vector representing the relative importance of each word in the sentence as measured by the cosine similarity between the sentence representation vector and each word’s context vector, computed using a bidirectional-RNN. The scalar-valued content importance gate gmem is computed as a function of the sentence from the teacher, meaning that the importance of the content to be written into memory depends on the content itself (c.f., Appendix A.3 for more details). The memory write is achieved with an erase and an add operation: ˜Mm = Mm −Mm ⊙(gmem · 1 · βT), Mm = ˜Mm + gmem · cm · βT, m∈{v, s}. ⊙denotes Hadamard product and the write location β is determined with a Least Recently Used Access mechanism (Santoro et al., 2016). 4.2 Context-adaptive Behavior Shaping through Reinforcement Learning Imitation fosters the basic language ability for generating echoic behavior (Skinner, 1957), but it is not enough for conversing adaptively with the teacher according to context and the knowledge state of the learner. Thus we leverage reward feedback to shape the behavior of the agent by optimizing the policy using RL. The agent’s response at is generated by the speaker, which can be modeled as a sample from a distribution over all possible sequences, given the conversation history Ht={w1, a1, · · · , wt} and visual input vt: at ∼pS θ(a|Ht, vt). (4) As Ht can be encoded by the interpreter-RNN as ht I, the action policy can be represented as pS θ(a|ht I, vt). To leverage the language skill that is learned via imitation through the interpreter, we can generate the sentence by implementing the speaker with an RNN, sharing parameters with the interpreter-RNN, but with a conditional signal modulated by a controller network (Figure 2): pS θ(at|ht I, vt) = pI θ(at|ht I + f(ht I, c), vt). (5) The reason for using a controller f(·) for modulation is that the basic language model only offers the learner the echoic ability to generate a sentence, but not necessarily the adaptive behavior according to context (e.g. asking questions when facing novel objects and providing an answer for a previously learned object according to its own knowledge state). Without any additional module or learning signals, the agent’s behaviors would be the same as those of the teacher because of parameter sharing; thus, it is difficult for the agent to learn to speak in an adaptive manner. 
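Before moving on to the reinforcement part, the following is a minimal NumPy sketch of the cosine-similarity memory read and the gated erase/add write described in Section 4.1 above. It is a sketch under simplifying assumptions, not the actual implementation: the array shapes, the softmax normalization of the cosine similarities, the one-hot write location, and the fixed gate value are all placeholders, whereas the real model produces k_v, c_s, g_mem and the write location β with learned networks and a Least Recently Used Access mechanism.

```python
import numpy as np

def cosine_softmax(key, M, eps=1e-8):
    """Soft read weights: cosine similarity between the key and each memory slot."""
    key_n = key / (np.linalg.norm(key) + eps)
    M_n = M / (np.linalg.norm(M, axis=0, keepdims=True) + eps)
    sims = key_n @ M_n                      # one similarity per slot
    e = np.exp(sims - sims.max())
    return e / e.sum()

def memory_read(k_v, M_v, M_s):
    """READ: address by the visual key, return the associated sentence feature r = M_s @ alpha."""
    alpha = cosine_softmax(k_v, M_v)        # weights computed in the visual modality
    return M_s @ alpha

def memory_write(M, c, g_mem, slot):
    """WRITE: erase then add, both scaled by the content-importance gate g_mem."""
    beta = np.zeros(M.shape[1]); beta[slot] = 1.0          # one-hot write location (stand-in for LRU)
    M_tilde = M - M * (g_mem * np.outer(np.ones(M.shape[0]), beta))   # erase
    return M_tilde + g_mem * np.outer(c, beta)                         # add

# Toy usage: 8-dim features, 10 slots, shared write location for both modalities.
d, n_slots = 8, 10
M_v, M_s = np.zeros((d, n_slots)), np.zeros((d, n_slots))
k_v = np.random.randn(d)                    # visual key of the current image
c_s = np.random.randn(d)                    # extracted sentence content (c_s = W @ eta in the paper)
g_mem, slot = 0.9, 0                        # assumed gate value and write slot
M_v = memory_write(M_v, k_v, g_mem, slot)   # c_v := k_v for the visual modality
M_s = memory_write(M_s, c_s, g_mem, slot)
r = memory_read(k_v, M_v, M_s)              # retrieved sentence feature for the fusion gate
```

Because the read is addressed by the visual key while the retrieved content comes from the sentence modality, a single write after being told "this is cherry" is enough to make the name retrievable from a new image of the same concept.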
To learn from consequences of speaking actions, the policy pS θ(·) is adjusted by maximizing expected future reward as represented by LR θ . As a non-differentiable sampling operation is involved in Eqn.(4), policy gradient theorem (Sutton and Barto, 1998) is used to derive the gradient for updating pS θ(·) in the reinforce module: ∇θLR θ = EpS θ P tAt · ∇θ log pS θ(at|ct)  , (6) where At = V (ht I, ct)−rt+1 −γV (ht+1 I , ct+1) is the advantage (Sutton and Barto, 1998) estimated using a value network V (·). The imitation module contributes by implementing LI θ with a crossentropy loss (Ranzato et al., 2016) and minimizing it with respect to the parameters in pI θ(·), which are shared with pS θ(·). The training signal from imitation takes the shortcut connection without going through the controller. More details on f(·), V (·) are provided in Appendix A.2. 5 Experiments We conduct experiments with comparison to baseline approaches. We first experiment with a wordlevel task in which the teacher and the learner communicate a single word each time. We then investigate the impact of image variations on concept learning. We further perform evaluation on the more challenging sentence-level task in which the teacher and the agent communicate in the form of sentences with varying lengths. Setup. To evaluate the performance in learning a transferable ability, rather than the ability of fitting a particular dataset, we use an Animal dataset for training and test the trained models on a Fruit dataset (Figure 1). More details on the datasets are provided in Appendix A.1. Each session consists of two randomly sampled classes, and the maximum number of interaction steps is six. 2615 0 1 2 3 4 5 6 7 8 9 -6 -5 -4 -3 -2 -1 0 Number of Games Reward Reinforce Imitation Imitation+Gaussian-RL Proposed ×103 Figure 3: Evolution of reward during training for the word-level task without image variations. Baselines. The following methods are compared: • Reinforce: a baseline model with the same network structure as the proposed model and trained using RL only, i.e. minimizing LR θ ; • Imitation: a recurrent encoder decoder (Serban et al., 2016) model with the same structure as ours and trained via imitation (minimizing LI θ); • Imitation+Gaussian-RL: a joint imitation and reinforcement method using a Gaussian policy (Duan et al., 2016) in the latent space of the control vector ct (Zhang et al., 2017). The policy is changed by modifying the control vector ct the action policy depends upon. Training Details. The training algorithm is implemented with the deep learning platform PaddlePaddle.3 The whole network is trained from scratch in an end-to-end fashion. The network is randomly initialized without any pre-training and is trained with decayed Adagrad (Duchi et al., 2011). We use a batch size of 16, a learning rate of 1×10−5 and a weight decay rate of 1.6×10−3. We also exploit experience replay (Wang et al., 2017; Yu et al., 2018). The reward discount factor γ is 0.99, the word embedding dimension d is 1024 and the dictionary size k is 80. The visual image size is 32×32, the maximum length of generated sentence is 6 and the memory size is 10. Word embedding vectors are initialized as random vectors and remain fixed during training. A sampling operation is used for sentence generation during training for exploration while a max operation is used during testing both for Proposed and for Reinforce baseline. 
The max operation is used in both training and testing for Imitation and Imitation+Gaussian-RL baselines.
3https://github.com/PaddlePaddle/Paddle
Figure 4: Test performance for the word-level task without image variations. Models are trained on the Animal dataset and tested on the Fruit dataset.
Figure 5: Test success rate and reward for the word-level task on the Fruit dataset under different test image variation ratios for models trained on the Animal dataset with a variation ratio of 0.5 (solid lines) and without variation (dashed lines).
5.1 Word-Level Task
In this experiment, we focus on a word-level task, which offers an opportunity to analyze and understand the underlying behavior of different algorithms while being free from distracting factors. Note that although the teacher speaks a word each time, the learner still has to learn to generate a full sentence ending with an end-of-sentence symbol. Figure 3 shows the evolution curves of the rewards during training for different approaches. It is observed that Reinforce makes very little progress, mainly due to the difficulty of exploration in the large space of sequence actions. Imitation obtains higher rewards than Reinforce during training, as it can avoid some penalty by generating sensible sentences such as questions. Imitation+Gaussian-RL gets higher rewards than both Imitation and Reinforce, indicating that the RL component reshapes the action policy toward higher rewards. However, as the Gaussian policy optimizes the action policy indirectly in a latent feature space, it is less efficient for exploration and learning. Proposed achieves the highest final reward during training.
We train the models using the Animal dataset and evaluate them on the Fruit dataset; Figure 4 summarizes the success rate and average reward over 1K testing sessions.
Figure 6: Visualization of the CNN features with t-SNE. Ten classes randomly sampled from (a-b) the Animal dataset and (c-d) the Fruit dataset, with features extracted using the visual encoder trained without (a, c) and with (b, d) image variations on the Animal dataset.
Figure 7: Example results of the proposed approach on novel classes. The learner can ask about the new class and use the interpreter to extract useful information from the teacher's sentence via word-level attention η and content importance gmem jointly. The speaker uses the fusion gate g to adaptively switch between signals from RNN (small g) and external memory (large g) to generate sentence responses.
As can be observed, Reinforce achieves the lowest success rate (0.0%) and reward (−6.0) due to its inherent inefficiency in learning. Imitation performs better than Reinforce in terms of both its success rate (28.6%) and reward value (−2.7). Imitation+Gaussian-RL achieves a higher reward (−1.2) during testing, but its success rate (32.1%) is similar to that of Imitation, mainly due to the rigorous criteria for success. Proposed reaches the highest success rate (97.4%) and average reward (+1.1)4, outperforming all baseline methods by a large margin.
From this experiment, it is clear that imitation with a proper usage of reinforcement is crucial for achieving adaptive behaviors (e.g., asking questions about novel objects and generating answers or statements about learned objects proactively). 5.2 Learning with Image Variations To evaluate the impact of within-class image variations on one-shot concept learning, we train models with and without image variations, and during testing compare their performance under different image variation ratios (the chance of a novel image instance being present within a session) as shown in Figure 5. It is observed that the performance of 4The testing reward is higher than the training reward mainly due to the action sampling in training for exploration. the model trained without image variations drops significantly as the variation ratio increases. We also evaluate the performance of models trained under a variation ratio of 0.5. Figure 5 clearly shows that although there is also a performance drop, which is expected, the performance degrades more gradually, indicating the importance of image variation for learning one-shot concepts. Figure 6 visualizes sampled training and testing images represented by their corresponding features extracted using the visual encoder trained without and with image variations. Clusters of visually similar concepts emerge in the feature space when trained with image variations, indicating that a more discriminative visual encoder was obtained for learning generalizable concepts. 5.3 Sentence-Level Task We further evaluate the model on sentence-level tasks. Teacher’s sentences are generated using the grammar as shown in Table 1 and have a number of variations with sentence lengths ranging from one to five. Example sentences from the teacher are presented in Appendix A.1. This task is more challenging than the word-level task in two ways: i) information processing is more difficult as the learner has to learn to extract useful information which could appear at different locations of the sentence; ii) the sentence generation is also more 2617 difficult than the word-level task and the learner has to adaptively fuse information from RNN and external memory to generate a complete sentence. Comparison of different approaches in terms of their success rates and average rewards on the novel test set are shown in Figure 8. As can be observed from the figure, Proposed again outperforms all other compared methods in terms of both success rate (82.8%) and average reward (+0.8), demonstrating its effectiveness even for the more complex sentence-level task. We also visualize the information extraction and the adaptive sentence composing process of the proposed approach when applied to a test set. As shown in Figure 7, the agent learns to extract useful information from the teacher’s sentence and use the content importance gate to control what content is written into the external memory. Concretely, sentences containing object names have a larger gmem value, and the word corresponding to object name has a larger value in the attention vector η compared to other words in the sentence. The combined effect of η and gmem suggests that words corresponding to object names have higher likelihoods of being written into the external memory. 
The agent also successfully learns to use the external memory for storing the information extracted from the teacher’s sentence, to fuse it adaptively with the signal from the RNN (capturing the syntactic structure) and to generate a complete sentence with the new concept included. The value of the fusion gate g is small when generating words like “what,”, “i,” “can,” and “see,” meaning it mainly relies on the signal from the RNN for generation (c.f., Eqn.(2) and Figure 7). In contrast, when generating object names (e.g., “banana,” and “cucumber”), the fusion gate g has a large value, meaning that there is more emphasis on the signal from the external memory. This experiment showed that the proposed approach is applicable to the more complex sentence-level task for language learning and one-shot learning. More interestingly, it learns an interpretable operational process, which can be easily understood. More results including example dialogues from different approaches are presented in Appendix A.4. 6 Discussion We have presented an approach for grounded language acquisition with one-shot visual concept learning in this work. This is achieved by purely 0 20 40 60 80 100 Success Rate (%) Reinforce Imitation Imitation+Gaussian-RL Proposed -6 -5 -4 -3 -2 -1 0 1 Reward Reinforce Imitation Imitation+Gaussian-RL Proposed Figure 8: Test performance for sentence-level task with image variations (variation ratio=0.5). interacting with a teacher and learning from feedback arising naturally during interaction through joint imitation and reinforcement learning, with a memory augmented neural network. Experimental results show that the proposed approach is effective for language acquisition with one-shot visual concept learning across several different settings compared with several baseline approaches. In the current work, we have designed and used a computer game (synthetic task with synthetic language) for training the agent. This is mainly due to the fact that there is no existing dataset to the best of our knowledge that is adequate for developing our addressed interactive language learning and one-shot learning problem. For our current design, although it is an artificial game, there is a reasonable amount of variations both within and across sessions, e.g., the object classes to be learned within a session, the presentation order of the selected classes, the sentence patterns and image instances to be used etc. All these factors contribute to the increased complexity of the learning task, making it non-trivial and already very challenging to existing approaches as shown by the experimental results. While offering flexibility in training, one downside of using a synthetic task is its limited amount of variation compared with real-world scenarios with natural languages. Although it might be non-trivial to extend the proposed approach to real natural language directly, we regard this work as an initial step towards this ultimate ambitious goal and our game might shed some light on designing more advanced games or performing real-world data collection. We plan to investigate the generalization and application of the proposed approach to more realistic environments with more diverse tasks in future work. Acknowledgments We thank the reviewers and PC members for theirs efforts in helping improving the paper. We thank Xiaochen Lian and Xiao Chu for their discussions. 2618 References Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 
2015. VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV). Alan Baddeley. 1992. Working memory. Science, 255(5044):556–559. Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. In International Conference on Learning Representations (ICLR). Ann C. Baker, Patricia J. Jensen, and David A. Kolb. 2002. Conversational Learning: An Experiential Approach to Knowledge Creation. Copley Publishing Group. Arielle Borovsky, Marta Kutas, and Jeff Elman. 2003. Learning to use words: Event related potentials index single-shot contextual word learning. Cognzition, 116(2):289–296. K. Cho, B. Merrienboer, C. Glehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. 2014a. Learning phrase representations using rnn encoderdecoder for statistical machine translation. In Empirical Methods in Natural Language Processing (EMNLP). Kyunghyun Cho, Bart van Merri¨enboer, C¸ alar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using RNN encoder– decoder for statistical machine translation. In Empirical Methods in Natural Language Processing (EMNLP). Abhishek Das, Satwik Kottur, , Jos´e M.F. Moura, Stefan Lee, and Dhruv Batra. 2017. Learning cooperative visual dialog agents with deep reinforcement learning. In International Conference on Computer Vision (ICCV). Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. 2016. Benchmarking deep reinforcement learning for continuous control. In International Conference on International Conference on Machine Learning (ICML). J. Duchi, E. Hazan, and Y. Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159. Jakob N. Foerster, Yannis M. Assael, Nando de Freitas, and Shimon Whiteson. 2016. Learning to communicate with deep multi-agent reinforcement learning. In Advances in Neural Information Processing Systems (NIPS). Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. CoRR, abs/1410.5401. Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, and Mari Ostendorf. 2016. Deep reinforcement learning with a natural language action space. In Association for Computational Linguistics (ACL). Derek M. Houston and Richard T. Miyamoto. 2011. Effects of early auditory experience on word learning and speech perception in deaf children with cochlear implants: Implications for sensitive periods of language development. Otol Neurotol, 31(8):1248–1253. Patricia K. Kuhl. 2004. Early language acquisition: cracking the speech code. Nat Rev Neurosci, 5(2):831–843. Brenden M. Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B. Tenenbaum. 2011. One shot learning of simple visual concepts. In Proceedings of the 33th Annual Meeting of the Cognitive Science Society. Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. 2017. Multi-agent cooperation and the emergence of (natural) language. In International Conference on Learning Representations (ICLR). Jiwei Li, Alexander H. Miller, Sumit Chopra, MarcAurelio Ranzato, and Jason Weston. 2017. Learning through dialogue interactions by asking questions. In International Conference on Learning Representations (ICLR). Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep reinforcement learning for dialogue generation. 
In Empirical Methods in Natural Language Processing (EMNLP). V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. 2013. Playing Atari with deep reinforcement learning. In NIPS Deep Learning Workshop. Igor Mordatch and Pieter Abbeel. 2018. Emergence of grounded compositional language in multi-agent populations. In Association for the Advancement of Artificial Intelligence (AAAI). Anna Ingeborg Petursdottir and James R. Mellor. 2016. Reinforcement contingencies in language acquisition. Policy Insights from the Behavioral and Brain Sciences, 4(1):25–32. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In International Conference on Learning Representations (ICLR). Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. 2016. Metalearning with memory-augmented neural networks. In International Conference on Machine Learning (ICML). 2619 Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Association for the Advancement of Artificial Intelligence (AAAI). B. F. Skinner. 1957. Verbal Behavior. Copley Publishing Group. Bradly C. Stadie, Pieter Abbeel, and Ilya Sutskever. 2017. Third-person imitation learning. In International Conference on Learning Representations (ICLR). Amanda Stent and Srinivas Bangalore. 2014. Natural Language Generation in Interactive Systems. Cambridge University Press. Florian Strub, Harm de Vries, J´er´emie Mary, Bilal Piot, Aaron C. Courville, and Olivier Pietquin. 2017. End-to-end optimization of goal-driven and visually grounded dialogue systems. In International Joint Conference on Artificial Intelligence (IJCAI). Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2016. Learning multiagent communication with backpropagation. In Advances in Neural Information Processing Systems (NIPS). Richard S. Sutton and Andrew G. Barto. 1998. Reinforcement Learning: An Introduction. MIT Press. S. I. Wang, P. Liang, and C. Manning. 2016. Learning language games through interaction. In Association for Computational Linguistics (ACL). Z. Wang, V. Bapst, N. Heess, V. Mnih, R. Munos, K. Kavukcuoglu, and N. Freitas. 2017. Sample efficient actor-critic with experience replay. In International Conference on Learning Representations (ICLR). Sandra R. Waxman. 2004. Everything had a name, and each name gave birth to a new thought: links between early word learning and conceptual organization. Cambridge, MA: The MIT Press. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Peihao Su, David Vandyke, and Steve J. Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Empirical Methods in Natural Language Processing (EMNLP). Jason Weston. 2016. Dialog-based language learning. In Advances in Neural Information Processing Systems (NIPS). Mark Woodward and Chelsea Finn. 2016. Active oneshot learning. In NIPS Deep Reinforcement Learning Workshop. Haonan Yu, Haichao Zhang, and Wei Xu. 2018. Interactive grounded language acquisition and generalization in a 2D world. In International Conference on Learning Representations (ICLR). Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. SeqGAN: Sequence generative adversarial nets with policy gradient. In Association for the Advancement of Artificial Intelligence (AAAI). Haichao Zhang, Haonan Yu, and Wei Xu. 2017. 
Listen, interact and talk: Learning to speak via interaction. In NIPS Workshop on Visually-Grounded Interaction and Language.
2018
243
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2620–2630 Melbourne, Australia, July 15 - 20, 2018. ©2018 Association for Computational Linguistics 2620
A Purely End-to-end System for Multi-speaker Speech Recognition
Hiroshi Seki1,2∗, Takaaki Hori1, Shinji Watanabe3, Jonathan Le Roux1, John R. Hershey1
1Mitsubishi Electric Research Laboratories (MERL) 2Toyohashi University of Technology 3Johns Hopkins University
∗This work was done while H. Seki, Ph.D. candidate at Toyohashi University of Technology, Japan, was an intern at MERL.
Abstract
Recently, there has been growing interest in multi-speaker speech recognition, where the utterances of multiple speakers are recognized from their mixture. Promising techniques have been proposed for this task, but earlier works have required additional training data such as isolated source signals or senone alignments for effective learning. In this paper, we propose a new sequence-to-sequence framework to directly decode multiple label sequences from a single speech sequence by unifying source separation and speech recognition functions in an end-to-end manner. We further propose a new objective function to improve the contrast between the hidden vectors to avoid generating similar hypotheses. Experimental results show that the model is directly able to learn a mapping from a speech mixture to multiple label sequences, achieving 83.1% relative improvement compared to a model trained without the proposed objective. Interestingly, the results are comparable to those produced by previous end-to-end works featuring explicit separation and recognition modules.
1 Introduction
Conventional automatic speech recognition (ASR) systems recognize a single utterance given a speech signal, in a one-to-one transformation. However, restricting the use of ASR systems to situations with only a single speaker limits their applicability. Recently, there has been growing interest in single-channel multi-speaker speech recognition, which aims at generating multiple transcriptions from a single-channel mixture of multiple speakers' speech (Cooke et al., 2009). To achieve this goal, several previous works have considered a two-step procedure in which the mixed speech is first separated, and recognition is then performed on each separated speech signal (Hershey et al., 2016; Isik et al., 2016; Yu et al., 2017; Chen et al., 2017). Dramatic advances have recently been made in speech separation, via the deep clustering framework (Hershey et al., 2016; Isik et al., 2016), hereafter referred to as DPCL. DPCL trains a deep neural network to map each time-frequency (T-F) unit to a high-dimensional embedding vector such that the embeddings for the T-F unit pairs dominated by the same speaker are close to each other, while those for pairs dominated by different speakers are farther away. The speaker assignment of each T-F unit can thus be inferred from the embeddings by simple clustering algorithms, to produce masks that isolate each speaker. The original method using k-means clustering (Hershey et al., 2016) was extended to allow end-to-end training by unfolding the clustering steps using a permutation-free mask inference objective (Isik et al., 2016). An alternative approach is to perform direct mask inference using the permutation-free objective function with networks that directly estimate the labels for a fixed number of sources. Direct mask inference was first used in Hershey et al.
(2016) as a baseline method, but without showing good performance. This approach was revisited in Yu et al. (2017) and Kolbaek et al. (2017) under the name permutationinvariant training (PIT). Combination of such single-channel speaker-independent multi-speaker speech separation systems with ASR was first considered in Isik et al. (2016) using a conventional Gaussian Mixture Model/Hidden Markov Model 2621 (GMM/HMM) system. Combination with an endto-end ASR system was recently proposed in (Settle et al., 2018). Both these approaches either trained or pre-trained the source separation and ASR networks separately, making use of mixtures and their corresponding isolated clean source references. While the latter approach could in principle be trained without references for the isolated speech signals, the authors found it difficult to train from scratch in that case. This ability can nonetheless be used when adapting a pre-trained network to new data without such references. In contrast with this two-stage approach, Qian et al. (2017) considered direct optimization of a deep-learning-based ASR recognizer without an explicit separation module. The network is optimized based on a permutation-free objective defined using the cross-entropy between the system’s hypotheses and reference labels. The best permutation between hypotheses and reference labels in terms of cross-entropy is selected and used for backpropagation. However, this method still requires reference labels in the form of senone alignments, which have to be obtained on the clean isolated sources using a single-speaker ASR system. As a result, this approach still requires the original separated sources. As a general caveat, generation of multiple hypotheses in such a system requires the number of speakers handled by the neural network architecture to be determined before training. However, Qian et al. (2017) reported that the recognition of two-speaker mixtures using a model trained for three-speaker mixtures showed almost identical performance with that of a model trained on two-speaker mixtures. Therefore, it may be possible in practice to determine an upper bound on the number of speakers. Chen et al. (2018) proposed a progressive training procedure for a hybrid system with explicit separation motivated by curriculum learning. They also proposed self-transfer learning and multi-output sequence discriminative training methods for fully exploiting pairwise speech and preventing competing hypotheses, respectively. In this paper, we propose to circumvent the need for the corresponding isolated speech sources when training on a set of mixtures, by using an end-to-end multi-speaker speech recognition without an explicit speech separation stage. In separation based systems, the spectrogram is segmented into complementary regions according to sources, which generally ensures that different utterances are recognized for each speaker. Without this complementarity constraint, our direct multispeaker recognition system could be susceptible to redundant recognition of the same utterance. In order to prevent degenerate solutions in which the generated hypotheses are similar to each other, we introduce a new objective function that enhances contrast between the network’s representations of each source. We also propose a training procedure to provide permutation invariance with low computational cost, by taking advantage of the joint CTC/attention-based encoder-decoder network architecture proposed in (Hori et al., 2017a). 
Experimental results show that the proposed model is able to directly convert an input speech mixture into multiple label sequences without requiring any explicit intermediate representations. In particular no frame-level training labels, such as phonetic alignments or corresponding unmixed speech, are required. We evaluate our model on spontaneous English and Japanese tasks and obtain comparable results to the DPCL based method with explicit separation (Settle et al., 2018). 2 Single-speaker end-to-end ASR 2.1 Attention-based encoder-decoder network An attention-based encoder-decoder network (Bahdanau et al., 2016) predicts a target label sequence Y = (y1, . . . , yN) without requiring intermediate representation from a T-frame sequence of D-dimensional input feature vectors, O = (ot ∈RD|t = 1, . . . , T), and the past label history. The probability of the n-th label yn is computed by conditioning on the past history y1:n−1: patt(Y |O) = N Y n=1 patt(yn|O, y1:n−1). (1) The model is composed of two main sub-modules, an encoder network and a decoder network. The encoder network transforms the input feature vector sequence into a high-level representation H = (hl ∈RC|l = 1, . . . , L). The decoder network emits labels based on the label history y and a context vector c calculated using an attention mechanism which weights and sums the Cdimensional sequence of representation H with attention weight a. A hidden state e of the decoder is 2622 updated based on the previous state, the previous context vector, and the emitted label. This mechanism is summarized as follows: H = Encoder(O), (2) yn ∼Decoder(cn, yn−1), (3) cn, an = Attention(an−1, en, H), (4) en = Update(en−1, cn−1, yn−1). (5) At inference time, the previously emitted labels are used. At training time, they are replaced by the reference label sequence R = (r1, . . . , rN) in a teacher-forcing fashion, leading to conditional probability patt(YR|O), where YR denotes the output label sequence variable in this condition. The detailed definitions of Attention and Update are described in Section A of the supplementary material. The encoder and decoder networks are trained to maximize the conditional probability of the reference label sequence R using backpropagation: Latt = Lossatt(YR, R) ≜−log patt(YR = R|O), (6) where Lossatt is the cross-entropy loss function. 2.2 Joint CTC/attention-based encoder-decoder network The joint CTC/attention approach (Kim et al., 2017; Hori et al., 2017a), uses the connectionist temporal classification (CTC) objective function (Graves et al., 2006) as an auxiliary task to train the network. CTC formulates the conditional probability by introducing a framewise label sequence Z consisting of a label set U and an additional blank symbol defined as Z = {zl ∈ U ∪{’blank’}|l = 1, · · · , L}: pctc(Y |O) = X Z L Y l=1 p(zl|zl−1, Y )p(zl|O), (7) where p(zl|zl−1, Y ) represents monotonic alignment constraints in CTC and p(zl|O) is the framelevel label probability computed by p(zl|O) = Softmax(Linear(hl)), (8) where hl is the hidden representation generated by an encoder network, here taken to be the encoder of the attention-based encoder-decoder network defined in Eq. (2), and Linear(·) is the final linear layer of the CTC to match the number of labels. Unlike the attention model, the forwardbackward algorithm of CTC enforces monotonic alignment between the input speech and the output label sequences during training and decoding. 
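As a rough illustration of the attention-side recurrence in Eqs. (2)–(5), the following numpy sketch runs a few decoding steps with random parameters. It is only schematic: a single dense layer stands in for the (B)LSTM encoder, a dot-product score for the attention network, and a tanh update for the LSTM decoder state, and all dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, C, V = 20, 8, 16, 30          # frames, feature dim, hidden dim, label vocab

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Eq. (2): the encoder maps the input features O to representations H.
O = rng.normal(size=(T, D))
W_enc = rng.normal(size=(D, C))
H = np.tanh(O @ W_enc)              # stand-in for the (B)LSTM encoder

W_att = rng.normal(size=(C, C))
W_out = rng.normal(size=(2 * C, V))

e_state = np.zeros(C)               # decoder hidden state e_n
y_prev = np.zeros(V)                # one-hot of the previously emitted label
hyp = []
for n in range(5):
    # Eq. (4): attention weights a_n over H and the context vector c_n.
    a = softmax(H @ (W_att @ e_state))
    c = a @ H
    # Eq. (5): update the decoder state from the previous state, context, label.
    e_state = np.tanh(c + e_state + y_prev[:C])
    # Eq. (3): emit the next label given the context vector and decoder state.
    p_y = softmax(np.concatenate([c, e_state]) @ W_out)
    y_n = int(p_y.argmax())
    hyp.append(y_n)
    y_prev = np.eye(V)[y_n]
print(hyp)
```

In the full model, the same encoder representations also feed the CTC branch of Eqs. (7)–(8), which is what later allows CTC and attention scores to be combined at decoding time.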
We adopt the joint CTC/attention-based encoder-decoder network as the monotonic alignment helps the separation and extraction of highlevel representation. The CTC loss is calculated as: Lctc = Lossctc(Y, R) ≜−log pctc(Y = R|O). (9) The CTC loss and the attention-based encoderdecoder loss are combined with an interpolation weight λ ∈[0, 1]: Lmtl = λLctc + (1 −λ)Latt. (10) Both CTC and encoder-decoder networks are also used in the inference step. The final hypothesis is a sequence that maximizes a weighted conditional probability of CTC in Eq. ( 7) and attentionbased encoder decoder network in Eq. (1): ˆY = arg max Y  γ log pctc(Y |O) + (1 −γ) log patt(Y |O) , (11) where γ ∈[0, 1] is an interpolation weight. 3 Multi-speaker end-to-end ASR 3.1 Permutation-free training In situations where the correspondence between the outputs of an algorithm and the references is an arbitrary permutation, neural network training faces a permutation problem. This problem was first addressed by deep clustering (Hershey et al., 2016), which circumvented it in the case of source separation by comparing the relationships between pairs of network outputs to those between pairs of labels. As a baseline for deep clustering, Hershey et al. (2016) also proposed another approach to address the permutation problem, based on an objective which considers all permutations of references when computing the error with the network estimates. This objective was later used in Isik et al. (2016) and Yu et al. (2017). In the latter, it was referred to as permutation-invariant training. This permutation-free training scheme extends the usual one-to-one mapping of outputs and labels for backpropagation to one-to-many by selecting the proper permutation of hypotheses and 2623 references, thus allowing the network to generate multiple independent hypotheses from a singlechannel speech mixture. When a speech mixture contains speech uttered by S speakers simultaneously, the network generates S label sequence variables Y s = (ys 1, . . . , ys Ns) with Ns labels from the T-frame sequence of D-dimensional input feature vectors, O = (ot ∈RD|t = 1, . . . , T): Y s ∼gs(O), s = 1, . . . , S, (12) where the transformations gs are implemented as neural networks which typically share some components with each other. In the training stage, all possible permutations of the S sequences Rs = (rs 1, . . . , rs N′s) of N′ s reference labels are considered (considering permutations on the hypotheses would be equivalent), and the one leading to minimum loss is adopted for backpropagation. Let P denote the set of permutations on {1, . . . , S}. The final loss L is defined as L = min π∈P S X s=1 Loss(Y s, Rπ(s)), (13) where π(s) is the s-th element of a permutation π. For example, for two speakers, P includes two permutations (1, 2) and (2, 1), and the loss is defined as: L = min(Loss(Y 1, R1) + Loss(Y 2, R2), Loss(Y 1, R2) + Loss(Y 2, R1)). (14) Figure 1 shows an overview of the proposed end-to-end multi-speaker ASR system. In the following Section 3.2, we describe an extension of encoder network for the generation of multiple hidden representations. We further introduce a permutation assignment mechanism for reducing the computation cost in Section 3.3, and an additional loss function LKL for promoting the difference between hidden representations in Section 3.4. 
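The minimum-over-permutations objective of Eqs. (13)–(14) is compact to write down. The toy Python sketch below searches all permutations with a squared-error stub standing in for the per-sequence loss Loss(Y^s, R^{π(s)}); only the permutation search itself reflects the method.

```python
from itertools import permutations
import numpy as np

def pairwise_loss(hyp, ref):
    return float(np.sum((hyp - ref) ** 2))     # stub for Loss(Y^s, R^{pi(s)})

def permutation_free_loss(hyps, refs):
    S = len(hyps)
    best = None
    for pi in permutations(range(S)):          # Eq. (13): min over pi in P
        total = sum(pairwise_loss(hyps[s], refs[pi[s]]) for s in range(S))
        if best is None or total < best[0]:
            best = (total, pi)
    return best                                # (minimum loss, chosen permutation)

hyps = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
refs = [np.array([0.1, 0.9]), np.array([0.9, 0.1])]   # references arrive swapped
loss, pi = permutation_free_loss(hyps, refs)
print(loss, pi)                                # selects pi = (1, 0)
```

For S speakers the search evaluates S! pairings per mixture, which is the computational cost that the permutation assignment of Section 3.3 is designed to reduce.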
3.2 End-to-end permutation-free training To make the network output multiple hypotheses, we consider a stacked architecture that combines both shared and unshared (or specific) neural network modules. The particular architecture we consider in this paper splits the encoder network into three stages: the first stage, also referred to as mixture encoder, processes the input mixture and Figure 1: End-to-end multi-speaker speech recognition. We propose to use the permutation-free training for CTC and attention loss functions Lossctc and Lossatt, respectively. outputs an intermediate feature sequence H; that sequence is then processed by S independent encoder sub-networks which do not share parameters, also referred to as speaker-differentiating (SD) encoders, leading to S feature sequences Hs; at the last stage, each feature sequence Hs is independently processed by the same network, also referred to as recognition encoder, leading to S final high-level representations Gs. Let u ∈{1 . . . , S} denote an output index (corresponding to the transcription of the speech by one of the speakers), and v ∈{1 . . . , S} denote a reference index. Denoting by EncoderMix the mixture encoder, Encoderu SD the u-th speakerdifferentiating encoder, and EncoderRec the recognition encoder, an input sequence O corresponding to an input mixture can be processed by the encoder network as follows: H = EncoderMix(O), (15) Hu = Encoderu SD(H), (16) Gu = EncoderRec(Hu). (17) The motivation for designing such an architecture can be explained as follows, following analogies with the architectures in (Isik et al., 2016) and (Settle et al., 2018) where separation and recog2624 nition are performed explicitly in separate steps: the first stage in Eq. (15) corresponds to a speech separation module which creates embedding vectors that can be used to distinguish between the multiple sources; the speaker-differentiating second stage in Eq. (16) uses the first stage’s output to disentangle each speaker’s speech content from the mixture, and prepare it for recognition; the final stage in Eq. (17) corresponds to an acoustic model that encodes the single-speaker speech for final decoding. The decoder network computes the conditional probabilities for each speaker from the S outputs of the encoder network. In general, the decoder network uses the reference label R as a history to generate the attention weights during training, in a teacher-forcing fashion. However, in the above permutation-free training scheme, the reference label to be attributed to a particular output is not determined until the loss function is computed, so we here need to run the attention decoder for all reference labels. We thus need to consider the conditional probability of the decoder output variable Y u,v for each output Gu of the encoder network under the assumption that the reference label for that output is Rv: patt(Y u,v|O) = Y n patt(yu,v n |O, yu,v 1:n−1), (18) cu,v n , au,v n = Attention(au,v n−1, eu,v n , Gu), (19) eu,v n = Update(eu,v n−1, cu,v n−1, rv n−1), (20) yu,v n ∼Decoder(cu,v n , rv n−1). (21) The final loss is then calculated by considering all permutations of the reference labels as follows: Latt = min π∈P X s Lossatt(Y s,π(s), Rπ(s)). (22) 3.3 Reduction of permutation cost In order to reduce the computational cost, we fixed the permutation of the reference labels based on the minimization of the CTC loss alone, and used the same permutation for the attention mechanism as well. 
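Returning to the encoder stack of Eqs. (15)–(17), the sketch below shows only the data flow: a shared mixture encoder, S speaker-differentiating (SD) encoders with unshared parameters, and a shared recognition encoder. Single dense layers with random weights stand in for the VGG/BLSTM modules, and the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, C, S = 20, 8, 16, 2            # frames, feature dim, hidden dim, speakers

W_mix = rng.normal(size=(D, C))                      # Encoder_Mix
W_sd = [rng.normal(size=(C, C)) for _ in range(S)]   # Encoder^u_SD, unshared
W_rec = rng.normal(size=(C, C))                      # Encoder_Rec, shared

def encode_mixture(O):
    H = np.tanh(O @ W_mix)                            # Eq. (15)
    H_u = [np.tanh(H @ W) for W in W_sd]              # Eq. (16)
    return [np.tanh(h @ W_rec) for h in H_u]          # Eq. (17): G^1, ..., G^S

O = rng.normal(size=(T, D))
G = encode_mixture(O)
print([g.shape for g in G])          # S sequences of shape (T, C), one per speaker
```

With these S encoder outputs in hand, the permutation fixing just described is computed from the CTC losses attached to each G^u.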
This is an advantage of using a joint CTC/attention based end-to-end speech recognition. Permutation is performed only for the CTC loss by assuming synchronous output where the permutation is decided by the output of CTC: ˆπ = arg min π∈P X s Lossctc(Y s, Rπ(s)), (23) where Y u is the output sequence variable corresponding to encoder output Gu. Attention-based decoding is then performed on the same hidden representations Gu, using teacher forcing with the labels determined by the permutation ˆπ that minimizes the CTC loss: patt(Y u,ˆπ(u)|O) = Y n patt(yu,ˆπ(u) n |O, yu,ˆπ(u) 1:n−1 ), cu,ˆπ(u) n , au,ˆπ(u) n =Attention(au,ˆπ(u) n−1 , eu,ˆπ(u) n , Gu), eu,ˆπ(u) n = Update(eu,ˆπ(u) n−1 , cu,ˆπ(u) n−1 , rˆπ(u) n−1), yu,ˆπ(u) n ∼Decoder(cu,ˆπ(u) n , rˆπ(u) n−1). This corresponds to the “permutation assignment” in Fig. 1. In contrast with Eq. (18), we only need to run the attention-based decoding once for each output Gu of the encoder network. The final loss is defined as the sum of two objective functions with interpolation λ: Lmtl = λLctc + (1 −λ)Latt, (24) Lctc = X s Lossctc(Y s, Rˆπ(s)), (25) Latt = X s Lossatt(Y s,ˆπ(s), Rˆπ(s)). (26) At inference time, because both CTC and attention-based decoding are performed on the same encoder output Gu and should thus pertain to the same speaker, their scores can be incorporated as follows: ˆY u = arg max Y u  γ log pctc(Y u|Gu) + (1 −γ) log patt(Y u|Gu) , (27) where pctc(Y u|Gu) and patt(Y u|Gu) are obtained with the same encoder output Gu. 3.4 Promoting separation of hidden vectors A single decoder network is used to output multiple label sequences by independently decoding the multiple hidden vectors generated by the encoder network. In order for the decoder to generate multiple different label sequences the encoder needs to generate sufficiently differentiated hidden vector sequences for each speaker. We propose to encourage this contrast among hidden vectors by introducing in the objective function a new term based on the negative symmetric Kullback-Leibler 2625 (KL) divergence. In the particular case of twospeaker mixtures, we consider the following additional loss function: LKL = −η X l  KL( ¯G1(l) || ¯G2(l)) + KL( ¯G2(l) || ¯G1(l)) , (28) where η is a small constant value, and ¯Gu = (softmax(Gu(l)) | l = 1, . . . , L) is obtained from the hidden vector sequence Gu at the output of the recognition encoder EncoderRec as in Fig. 1 by applying an additional frame-wise softmax operation in order to obtain a quantity amenable to a probability distribution. 3.5 Split of hidden vector for multiple hypotheses Since the network maps acoustic features to label sequences directly, we consider various architectures to perform implicit separation and recognition effectively. As a baseline system, we use the concatenation of a VGG-motivated CNN network (Simonyan and Zisserman, 2014) (referred to as VGG) and a bi-directional long short-term memory (BLSTM) network as the encoder network. For the splitting point in the hidden vector computation, we consider two architectural variations as follows: • Split by BLSTM: The hidden vector is split at the level of the BLSTM network. 1) the VGG network generates a single hidden vector H; 2) H is fed into S independent BLSTMs whose parameters are not shared with each other; 3) the output of each independent BLSTM Hu, u=1, . . . , S, is further separately fed into a unique BLSTM, the same for all outputs. Each step corresponds to Eqs. (15), (16), and (17). 
• Split by VGG: The hidden vector is split at the level of the VGG network. The number of filters at the last convolution layer is multiplied by the number of mixtures S in order to split the output into S hidden vectors (as in Eq. (16)). The layers prior to the last VGG layer correspond to the network in Eq. (15), while the subsequent BLSTM layers implement the network in (17). 4 Experiments 4.1 Experimental setup We used English and Japanese speech corpora, WSJ (Wall street journal) (Consortium, 1994; Table 1: Duration (hours) of unmixed and mixed corpora. The mixed corpora are generated by Algorithm 1 in Section B of the supplementary material, using the training, development, and evaluation set respectively. TRAIN DEV. EVAL WSJ (UNMIXED) 81.5 1.1 0.7 WSJ (MIXED) 98.5 1.3 0.8 CSJ (UNMIXED) 583.8 6.6 5.2 CSJ (MIXED) 826.9 9.1 7.5 Garofalo et al., 2007) and CSJ (Corpus of spontaneous Japanese) (Maekawa, 2003). To show the effectiveness of the proposed models, we generated mixed speech signals from these corpora to simulate single-channel overlapped multi-speaker recording, and evaluated the recognition performance using the mixed speech data. For WSJ, we used WSJ1 SI284 for training, Dev93 for development, and Eval92 for evaluation. For CSJ, we followed the Kaldi recipe (Moriya et al., 2015) and used the full set of academic and simulated presentations for training, and the standard test sets 1, 2, and 3 for evaluation. We created new corpora by mixing two utterances with different speakers sampled from existing corpora. The detailed algorithm is presented in Section B of the supplementary material. The sampled pairs of two utterances are mixed at various signal-to-noise ratios (SNR) between 0 dB and 5 dB with a random starting point for the overlap. Duration of original unmixed and generated mixed corpora are summarized in Table 1. 4.1.1 Network architecture As input feature, we used 80-dimensional log Mel filterbank coefficients with pitch features and their delta and delta delta features (83 × 3 = 249dimension) extracted using Kaldi tools (Povey et al., 2011). The input feature is normalized to zero mean and unit variance. As a baseline system, we used a stack of a 6-layer VGG network and a 7-layer BLSTM as the encoder network. Each BLSTM layer has 320 cells in each direction, and is followed by a linear projection layer with 320 units to combine the forward and backward LSTM outputs. The decoder network has an 1-layer LSTM with 320 cells. As described in Section 3.5, we adopted two types of encoder architectures for multi-speaker speech recognition. The network architectures are summarized in Table 2. The split-by-VGG network had speaker differentiating encoders with a convolution layer 2626 Table 2: Network architectures for the encoder network. The number of layers is indicated in parentheses. EncoderMix, Encoderu SD, and EncoderRec correspond to Eqs. (15), (16), and (17). SPLIT BY EncoderMix Encoderu SD EncoderRec NO VGG (6) — BLSTM (7) VGG VGG (4) VGG (2) BLSTM (7) BLSTM VGG (6) BLSTM (2) BLSTM (5) (and the following maxpooling layer). The splitby-BLSTM network had speaker differentiating encoders with two BLSTM layers. The architectures were adjusted to have the same number of layers. We used characters as output labels. The number of characters for WSJ was set to 49 including alphabets and special tokens (e.g., characters for space and unknown). The number of characters for CSJ was set to 3,315 including Japanese Kanji/Hiragana/Katakana characters and special tokens. 
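Before moving on to optimization, the hidden-vector contrast term of Eq. (28) in Section 3.4 can be made concrete for two speakers. The toy numpy sketch below applies the frame-wise softmax and the negative symmetric KL divergence to two random recognition-encoder outputs; the inputs and dimensions are placeholders, with η = 0.1 as used later in retraining.

```python
import numpy as np

rng = np.random.default_rng(0)
L, C = 10, 16                        # frames, encoder output dimension
G1, G2 = rng.normal(size=(L, C)), rng.normal(size=(L, C))
eta = 0.1

def framewise_softmax(G):
    e = np.exp(G - G.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def kl(p, q):
    return np.sum(p * np.log(p / q), axis=1)          # per-frame KL divergence

P1, P2 = framewise_softmax(G1), framewise_softmax(G2)
L_KL = -eta * np.sum(kl(P1, P2) + kl(P2, P1))         # Eq. (28)
print(L_KL)
```

Minimizing this term rewards divergence between the two frame-wise distributions, which is what pushes the hidden representations of the two speakers apart.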
4.1.2 Optimization The network was initialized randomly from uniform distribution in the range -0.1 to 0.1. We used the AdaDelta algorithm (Zeiler, 2012) with gradient clipping (Pascanu et al., 2013) for optimization. We initialized the AdaDelta hyperparameters as ρ = 0.95 and ϵ = 1−8. ϵ is decayed by half when the loss on the development set degrades. The networks were implemented with Chainer (Tokui et al., 2015) and ChainerMN (Akiba et al., 2017). The optimization of the networks was done by synchronous data parallelism with 4 GPUs for WSJ and 8 GPUs for CSJ. The networks were first trained on singlespeaker speech, and then retrained with mixed speech. When training on unmixed speech, only one side of the network only (with a single speaker differentiating encoder) is optimized to output the label sequence of the single speaker. Note that only character labels are used, and there is no need for clean source reference corresponding to the mixed speech. When moving to mixed speech, the other speaker-differentiating encoders are initialized using the already trained one by copying the parameters with random perturbation, w′ = w × (1 + Uniform(−0.1, 0.1)) for each parameter w. The interpolation value λ for the multiple objectives in Eqs. (10) and (24) was set to 0.1 for WSJ and to 0.5 for CSJ. Lastly, the model is retrained with the additional negative KL divergence loss in Eq. (28) with η = 0.1. Table 3: Evaluation of unmixed speech without multi-speaker training. TASK AVG. WSJ 2.6 CSJ 7.8 4.1.3 Decoding In the inference stage, we combined a pretrained RNNLM (recurrent neural network language model) in parallel with the CTC and decoder network. Their label probabilities were linearly combined in the log domain during beam search to find the most likely hypothesis. For the WSJ task, we used both character and word level RNNLMs (Hori et al., 2017b), where the character model had a 1-layer LSTM with 800 cells and an output layer for 49 characters. The word model had a 1-layer LSTM with 1000 cells and an output layer for 20,000 words, i.e., the vocabulary size was 20,000. Both models were trained with the WSJ text corpus. For the CSJ task, we used a character level RNNLM (Hori et al., 2017c), which had a 1-layer LSTM with 1000 cells and an output layer for 3,315 characters. The model parameters were trained with the transcript of the training set in CSJ. We added language model probabilities with an interpolation factor of 0.6 for characterlevel RNNLM and 1.2 for word-level RNNLM. The beam width for decoding was set to 20 in all the experiments. Interpolation γ in Eqs. (11) and (27) was set to 0.4 for WSJ and 0.5 for CSJ. 4.2 Results 4.2.1 Evaluation of unmixed speech First, we examined the performance of the baseline joint CTC/attention-based encoder-decoder network with the original unmixed speech data. Table 3 shows the character error rates (CERs), where the baseline model showed 2.6% on WSJ and 7.8% on CSJ. Since the model was trained and evaluated with unmixed speech data, these CERs are considered lower bounds for the CERs in the succeeding experiments with mixed speech data. 4.2.2 Evaluation of mixed speech Table 4 shows the CERs of the generated mixed speech from the WSJ corpus. The first column indicates the position of split as mentioned in Section 3.5. The second, third and forth columns indicate CERs of the high energy speaker (HIGH E. SPK.), the low energy speaker (LOW E. SPK.), and the average (AVG.), respectively. 
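As a side note on the retraining initialization described in Section 4.1.2, the copy-and-perturb warm start for the additional speaker-differentiating encoders, w′ = w × (1 + Uniform(−0.1, 0.1)), is a one-line operation per parameter; the sketch below uses an arbitrary weight matrix purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
w_trained = rng.normal(size=(320, 320))    # any parameter of the trained SD encoder
w_new = w_trained * (1.0 + rng.uniform(-0.1, 0.1, size=w_trained.shape))
print(np.abs(w_new - w_trained).max())     # each weight moves by at most 10% of its magnitude
```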
The baseline model has very high CERs because 2627 Table 4: CER (%) of mixed speech for WSJ. SPLIT HIGH E. SPK. LOW E. SPK. AVG. NO (BASELINE) 86.4 79.5 83.0 VGG 17.4 15.6 16.5 BLSTM 14.6 13.3 14.0 + KL LOSS 14.0 13.3 13.7 Table 5: CER (%) of mixed speech for CSJ. SPLIT HIGH E. SPK. LOW E. SPK. AVG. NO (BASELINE) 93.3 92.1 92.7 BLSTM 11.0 18.8 14.9 it was trained as a single-speaker speech recognizer without permutation-free training, and it can only output one hypothesis for each mixed speech. In this case, the CERs were calculated by duplicating the generated hypothesis and comparing the duplicated hypotheses with the corresponding references. The proposed models, i.e., splitby-VGG and split-by-BLSTM networks, obtained significantly lower CERs than the baseline CERs, the split-by-BLSTM model in particular achieving 14.0% CER. This is an 83.1% relative reduction from the baseline model. The CER was further reduced to 13.7% by retraining the split-by-BLSTM model with the negative KL loss, a 2.1% relative reduction from the network without retraining. This result implies that the proposed negative KL loss provides better separation by actively improving the contrast between the hidden vectors of each speaker. Examples of recognition results are shown in Section C of the supplementary material. Finally, we profiled the computation time for the permutations based on the decoder network and on CTC. Permutation based on CTC was 16.3 times faster than that based on the decoder network, in terms of the time required to determine the best match permutation given the encoder network’s output in Eq. (17). Table 5 shows the CERs for the mixed speech from the CSJ corpus. Similarly to the WSJ experiments, our proposed model significantly reduced the CER from the baseline, where the average CER was 14.9% and the reduction ratio from the baseline was 83.9%. 4.2.3 Visualization of hidden vectors We show a visualization of the encoder networks outputs in Fig. 2 to illustrate the effect of the negative KL loss function. Principal component analysis (PCA) was applied to the hidden vectors on the vertical axis. Figures 2(a) and 2(b) show the hidden vectors generated by the split-by-BLSTM model without the negative KL divergence loss for an example mixture of two speakers. We can observe different activation patterns showing that the hidden vectors were successfully separated to the individual utterances in the mixed speech, although some activity from one speaker can be seen as leaking into the other. Figures 2(c) and 2(d) show the hidden vectors generated after retraining with the negative KL divergence loss. We can more clearly observe the different patterns and boundaries of activation and deactivation of hidden vectors. The negative KL loss appears to regularize the separation process, and even seems to help in finding the end-points of the speech. 4.2.4 Comparison with earlier work We first compared the recognition performance with a hybrid (non end-to-end) system including DPCL-based speech separation and a Kaldi-based ASR system. It was evaluated under the same evaluation data and metric as in (Isik et al., 2016) based on the WSJ corpus. However, there are differences in the size of training data and the options in decoding step. Therefore, it is not a fully matched condition. Results are shown in Table 6. The word error rate (WER) reported in (Isik et al., 2016) is 30.8%, which was obtained with jointly trained DPCL and second-stage speech enhancement networks. 
The proposed end-to-end ASR gives an 8.4% relative reduction in WER even though our model does not require any explicit frame-level labels such as phonetic alignment, or clean signal reference, and does not use a phonetic lexicon for training. Although this is an unfair comparison, our purely end-to-end system outperformed a hybrid system for multi-speaker speech recognition. Next, we compared our method with an endto-end explicit separation and recognition network (Settle et al., 2018). We retrained our model previously trained on our WSJ-based corpus using the training data generated by Settle et al. (2018), because the direct optimization from scratch on their data caused poor recognition performance due to data size. Other experimental conditions are shared with the earlier work. Interestingly, our method showed comparable performance to the end-to-end explicit separation and recognition network, without having to pre-train using clean signal training references. It remains to be seen if this parity of performance holds in other tasks and conditions. 2628 Figure 2: Visualization of the two hidden vector sequences at the output of the split-by-BLSTM encoder on a two-speaker mixture. (a,b): Generated by the model without the negative KL loss. (c,d): Generated by the model with the negative KL loss. Table 6: Comparison with conventional approaches METHOD WER (%) DPCL + ASR (ISIK ET AL., 2016) 30.8 Proposed end-to-end ASR 28.2 METHOD CER (%) END-TO-END DPCL + ASR (CHAR LM) (SETTLE ET AL., 2018) 13.2 Proposed end-to-end ASR (char LM) 14.0 5 Related work Several previous works have considered an explicit two-step procedure (Hershey et al., 2016; Isik et al., 2016; Yu et al., 2017; Chen et al., 2017, 2018). In contrast with our work which uses a single objective function for ASR, they introduced an objective function to guide the separation of mixed speech. Qian et al. (2017) trained a multi-speaker speech recognizer using permutation-free training without explicit objective function for separation. In contrast with our work which uses an end-toend architecture, their objective function relies on a senone posterior probability obtained by aligning unmixed speech and text using a model trained as a recognizer for single-speaker speech. Compared with (Qian et al., 2017), our method directly maps a speech mixture to multiple character sequences and eliminates the need for the corresponding isolated speech sources for training. 6 Conclusions In this paper, we proposed an end-to-end multispeaker speech recognizer based on permutationfree training and a new objective function promoting the separation of hidden vectors in order to generate multiple hypotheses. In an encoderdecoder network framework, teacher forcing at the decoder network under multiple references increases computational cost if implemented naively. We avoided this problem by employing a joint CTC/attention-based encoder-decoder network. Experimental results showed that the model is able to directly convert an input speech mixture into multiple label sequences under the end-to-end framework without the need for any explicit intermediate representation including phonetic alignment information or pairwise unmixed speech. We also compared our model with a method based on explicit separation using deep clustering, and showed comparable result. 
Future work includes data collection and evaluation in a real world scenario since the data used in our experiments are simulated mixed speech, which is already extremely challenging but still leaves some acoustic aspects, such as Lombard effects and real room impulse responses, that need to be alleviated for further performance improvement. In addition, further study is required in terms of increasing the number of speakers that can be simultaneously recognized, and further comparison with the separation-based approach. 2629 References Takuya Akiba, Keisuke Fukuda, and Shuji Suzuki. 2017. ChainerMN: Scalable Distributed Deep Learning Framework. In Proceedings of Workshop on ML Systems in The Thirty-first Annual Conference on Neural Information Processing Systems (NIPS). Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. 2016. Endto-end attention-based large vocabulary speech recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4945–4949. Zhehuai Chen, Jasha Droppo, Jinyu Li, and Wayne Xiong. 2018. Progressive joint modeling in unsupervised single-channel overlapped speech recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(1):184–196. Zhuo Chen, Yi Luo, and Nima Mesgarani. 2017. Deep attractor network for single-microphone speaker separation. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 246–250. Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In Advances in Neural Information Processing Systems (NIPS), pages 577–585. Linguistic Data Consortium. 1994. CSR-II (wsj1) complete. Linguistic Data Consortium, Philadelphia, LDC94S13A. Martin Cooke, John R Hershey, and Steven J Rennie. 2009. Monaural speech separation and recognition challenge. Computer Speech and Language, 24(1):1–15. John Garofalo, David Graff, Doug Paul, and David Pallett. 2007. CSR-I (wsj0) complete. Linguistic Data Consortium, Philadelphia, LDC93S6A. Alex Graves, Santiago Fern´andez, Faustino Gomez, and J¨urgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In International Conference on Machine learning (ICML), pages 369–376. John R Hershey, Zhuo Chen, Jonathan Le Roux, and Shinji Watanabe. 2016. Deep clustering: Discriminative embeddings for segmentation and separation. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 31–35. Takaaki Hori, Shinji Watanabe, and John R Hershey. 2017a. Joint CTC/attention decoding for end-to-end speech recognition. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL): Human Language Technologies: long papers. Takaaki Hori, Shinji Watanabe, and John R Hershey. 2017b. Multi-level language modeling and decoding for open vocabulary end-to-end speech recognition. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). Takaaki Hori, Shinji Watanabe, Yu Zhang, and Chan William. 2017c. Advances in joint CTC-Attention based end-to-end speech recognition with a deep CNN encoder and RNN-LM. In Interspeech, pages 949–953. Yusuf Isik, Jonathan Le Roux, Zhuo Chen, Shinji Watanabe, and John R. Hershey. 2016. Singlechannel multi-speaker separation using deep clustering. In Proc. Interspeech, pages 545–549. Suyoun Kim, Takaaki Hori, and Shinji Watanabe. 2017. 
Joint CTC-attention based end-to-end speech recognition using multi-task learning. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4835–4839. Morten Kolbæk, Dong Yu, Zheng-Hua Tan, and Jesper Jensen. 2017. Multitalker speech separation with utterance-level permutation invariant training of deep recurrent neural networks. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 25(10):1901–1913. Kikuo Maekawa. 2003. Corpus of Spontaneous Japanese: Its design and evaluation. In ISCA & IEEE Workshop on Spontaneous Speech Processing and Recognition. Takafumi Moriya, Takahiro Shinozaki, and Shinji Watanabe. 2015. Kaldi recipe for Japanese spontaneous speech recognition and its evaluation. In Autumn Meeting of ASJ, 3-Q-7. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. International Conference on Machine Learning (ICML), pages 1310–1318. Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The kaldi speech recognition toolkit. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). Yanmin Qian, Xuankai Chang, and Dong Yu. 2017. Single-channel multi-talker speech recognition with permutation invariant training. arXiv preprint arXiv:1707.06527. Shane Settle, Jonathan Le Roux, Takaaki Hori, Shinji Watanabe, and John R. Hershey. 2018. End-to-end multi-speaker speech recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4819–4823. Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. 2630 Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. 2015. Chainer: a next-generation open source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems (LearningSys) in NIPS. Dong Yu, Morten Kolbk, Zheng-Hua Tan, and Jesper Jensen. 2017. Permutation invariant training of deep models for speaker-independent multi-talker speech separation. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 241–245. Matthew D Zeiler. 2012. ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701.
A Structured Variational Autoencoder for Contextual Morphological Inflection Lawrence Wolf-Sonkin∗Jason Naradowsky∗ Sabrina J. Mielke∗ Ryan Cotterell∗ Department of Computer Science, Johns Hopkins University {lawrencews,narad,sjmielke,ryan.cotterell}@jhu.edu Abstract Statistical morphological inflectors are typically trained on fully supervised, type-level data. One remaining open research question is the following: How can we effectively exploit raw, token-level data to improve their performance? To this end, we introduce a novel generative latent-variable model for the semi-supervised learning of inflection generation. To enable posterior inference over the latent variables, we derive an efficient variational inference procedure based on the wake-sleep algorithm. We experiment on 23 languages, using the Universal Dependencies corpora in a simulated low-resource setting, and find improvements of over 10% absolute accuracy in some cases. 1 Introduction The majority of the world’s languages overtly encodes syntactic information on the word form itself, a phenomenon termed inflectional morphology (Dryer et al., 2005). In English, for example, the verbal lexeme with lemma talk has the four forms: talk, talks, talked and talking. Other languages, such as Archi (Kibrik, 1998), distinguish more than a thousand verbal forms. Despite the cornucopia of unique variants a single lexeme may mutate into, native speakers can flawlessly predict the correct variant that the lexeme’s syntactic context dictates. Thus, in computational linguistics, a natural question is the following: Can we estimate a probability model that can do the same? The topic of inflection generation has been the focus of a flurry of individual attention of late and, moreover, has been the subject of two shared tasks ∗All authors contributed equally. m1 m2 m3 m4 ℓ1 ℓ2 ℓ3 ℓ4 f1 f2 f3 f4 POS/morph. POS/morph. tag LM tag LM lemma lemma generator generator morphological morphological inflector inflector POS=PRN CASE=GEN POS=N NUM=PL POS=ADV POS=V TNS=PAST I wug gently weep my wugs gently wept Figure 1: A length-4 example of our generative model factorized as in Eq. (1) and overlayed with example values of the random variables in the sequence. We highlight that all the conditionals in the Bayesian network are recurrent neural networks, e.g., we note that mi depends on m<i because we employ a recurrent neural network to model the morphological tag sequence. (Cotterell et al., 2016, 2017). Most work, however, has focused on the fully supervised case—a source lemma and the morpho-syntactic properties are fed into a model, which is asked to produce the desired inflection. In contrast, our work focuses on the semi-supervised case, where we wish to make use of unannotated raw text, i.e., a sequence of inflected tokens. Concretely, we develop a generative directed graphical model of inflected forms in context. A contextual inflection model works as follows: Rather than just generating the proper inflection for a single given word form out of context (for example walking as the gerund of walk), our generative model is actually a fully-fledged language model. In other words, it generates sequences of inflected words. The graphical model is displayed in Fig. 1 and examples of words it may generate are pasted on top of the graphical model notation. 
That our model is a language model enables it to exploit both inflected lexicons and unlabeled raw text in a prin1 Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2631–2641 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics SG PL SG PL NOM Wort Wörter Herr Herren GEN Wortes Wörter Herrn Herren ACC Wort Wörter Herrn Herren DAT Worte Wörtern Herrn Herren Table 1: As an exhibit of morphological inflection, full paradigms (two numbers and four cases, 8 slots total) for the German nouns Wort (“word”) and Herr (“gentleman”), with abbreviated and tabularized UniMorph annotation. cipled semi-supervised way. In order to train using raw-text corpora (which is useful when we have less annotated data), we marginalize out the unobserved lemmata and morpho-syntactic annotation from unlabeled data. In terms of Fig. 1, this refers to marginalizing out m1, . . . , m4 and ℓ1, . . . , ℓ4. As this marginalization is intractable, we derive a variational inference procedure that allows for efficient approximate inference. Specifically, we modify the wake-sleep procedure of Hinton et al. (1995). It is the inclusion of raw text in this fashion that makes our model token level, a novelty in the camp of inflection generation, as much recent work in inflection generation (Dreyer et al., 2008; Durrett and DeNero, 2013; Nicolai et al., 2015; Ahlberg et al., 2015; Faruqui et al., 2016), trains a model on type-level lexicons. We offer empirical validation of our model’s utility with experiments on 23 languages from the Universal Dependencies corpus in a simulated lowresource setting.1 Our semi-supervised scheme improves inflection generation by over 10% absolute accuracy in some cases. 2 Background: Morphological Inflection 2.1 Inflectional Morphology To properly discuss models of inflectional morphology, we require a formalization. We adopt the framework of word-based morphology (Aronoff, 1976; Spencer, 1991). Note in the present paper, we omit derivational morphology. We define an inflected lexicon as a set of 4tuples consisting of a part-of-speech tag, a lexeme, an inflectional slot, and a surface form. A lexeme is a discrete object that indexes the word’s core meaning and part of speech. In place of such an abstract lexeme, lexicographers will often use a 1We make our code and data available at: https:// github.com/lwolfsonkin/morph-svae. lemma, denoted by ℓ, which is a designated2 surface form of the lexeme (such as the infinitive). For the remainder of this paper, we will use the lemma as a proxy for the lexeme, wherever convenient, although we note that lemmata may be ambiguous: bank is the lemma for at least two distinct nouns and two distinct verbs. For inflection, this ambiguity will rarely3 play a role—for instance, all senses of bank inflect in the same fashion. A part-of-speech (POS) tag, denoted t, is a coarse syntactic category such as VERB. Each POS tag allows some set of lexemes, and also allows some set of inflectional slots, denoted as σ, such as  TNS=PAST, PERSON=3  . Each allowed ⟨tag, lexeme, slot⟩triple is realized—in only one way—as an inflected surface form, a string over a fixed phonological or orthographic alphabet Σ. (In this work, we take Σ to be an orthographic alphabet.) Additionally, we will define the term morphological tag, denoted by m, which we take to be the POS-slot pair m = ⟨t, σ⟩. We will further define T as the set of all POS tags and M as the set of all morphological tags. 
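To make this formalization concrete, an inflected lexicon can be pictured as a set of ⟨tag, lexeme, slot, form⟩ 4-tuples, with the morphological tag m = ⟨t, σ⟩ pairing a POS tag with a slot. The small Python sketch below uses entries from the German nouns of Table 1; the string encoding of slots is an illustrative choice, not the UniMorph file format.

```python
# An inflected lexicon as a set of <tag, lexeme/lemma, slot, form> 4-tuples.
lexicon = {
    ("NOUN", "Wort", ("NUM=SG", "CASE=NOM"), "Wort"),
    ("NOUN", "Wort", ("NUM=PL", "CASE=DAT"), "Wörtern"),
    ("NOUN", "Herr", ("NUM=SG", "CASE=GEN"), "Herrn"),
}

def morph_tag(entry):
    t, _, sigma, _ = entry
    return (t, sigma)                 # m = <t, sigma>: POS tag plus slot

print(sorted(morph_tag(e) for e in lexicon))
```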
A paradigm π(t, ℓ) is the mapping from tag t’s slots to the surface forms that “fill” those slots for lexeme/lemma ℓ. For example, in the English paradigm π(VERB, talk), the past-tense slot is said to be filled by talked, meaning that the lexicon contains the tuple ⟨VERB, talk, PAST, talked⟩. A cheat sheet for the notation is provided in Tab. 2. We will specifically work with the UniMorph annotation scheme (Sylak-Glassman, 2016). Here, each slot specifies a morpho-syntactic bundle of inflectional features such as tense, mood, person, number, and gender. For example, the German surface form Wörtern is listed in the lexicon with tag NOUN, lemma Wort, and a slot specifying the feature bundle  NUM=PL, CASE=DAT  . The full paradigms π(NOUN, Wort) and π(NOUN, Herr) are found in Tab. 1. 2.2 Morphological Inflection Now, we formulate the task of context-free morphological inflection using the notation developed in §2. Given a set of N form-tag-lemma triples 2A specific slot of the paradigm is chosen, depending on the part-of-speech tag – all these terms are defined next. 3One example of a paradigm where the lexeme, rather than the lemma, may influence inflection is hang. If one chooses the lexeme that licenses animate objects, the proper past tense is hanged, whereas it is hung for the lexeme that licenses inanimate objects. 2 object symbol example form f talking lemma ℓ talk POS t VERB slot σ  TNS=GERUND  morph. tag m  POS=V, TNS=GERUND  Table 2: Notational cheat sheet for the paper. {⟨fi, mi, ℓi⟩}N i=1, the goal of morphological inflection is to map the pair ⟨mi, ℓi⟩to the form fi. As the definition above indicates, the task is traditionally performed at the type level. In this work, however, we focus on a generalization of the task to the token level—we seek to map a bisequence of lemma-tag pairs to the sequence of inflected forms in context. Formally, we will denote the lemmamorphological tag bisequence as ⟨ℓ, m⟩and the form sequence as f. Foreshadowing, the primary motivation for this generalization is to enable the use of raw-text in a semi-supervised setting. 3 Generating Sequences of Inflections The primary contribution of this paper is a novel generative model over sequences of inflected words in their sentential context. Following the notation laid out in §2.2, we seek to jointly learn a distribution over sequences of forms f, lemmata ℓ, and morphological tags m. The generative procedure is as follows: First, we sample a sequence of tags m, each morphological tag coming from a language model over morphological tags: mi ∼pθ(· | m<i). Next, we sample the sequence of lemmata ℓgiven the previously sampled sequence of tags m— these are sampled conditioned only on the corresponding morphological tag: ℓi ∼pθ(· | mi). Finally, we sample the sequence of inflected words f, where, again, each word is chosen conditionally independent of other elements of the sequence: fi ∼pθ(· | ℓi, mi).4 This yields the factorized joint distribution: pθ(f, ℓ, m) = (1) |f| Y i=1 pθ(fi | ℓi, mi) | {z } morphological inflector 3 · pθ(ℓi | mi) | {z } lemma generator 2 ! · pθ(m) | {z } m-tag LM 1 4Note that we denote all three distributions as pθ to simplify notation and emphasize the joint modeling aspect; context will always resolve the ambiguity in this paper. We will discuss their parameterization in §4. We depict the corresponding directed graphical model in Fig. 1. Relation to Other Models in NLP. As the graphical model drawn in Fig. 
1 shows, our model is quite similar to a Hidden Markov Model (HMM) (Rabiner, 1989). There are two primary differences. First, we remark that an HMM directly emits a form fi conditioned on the tag mi. Our model, in contrast, emits a lemma ℓi conditioned on the morphological tag mi and, then, conditioned on both the lemma ℓi and the tag mi, we emit the inflected form fi. In this sense, our model resembles the hierarchical HMM of Fine et al. (1998) with the difference that we do not have interdependence between the lemmata ℓi. The second difference is that our model is non-Markovian: we sample the ith morphological tag mi from a distribution that depends on all previous tags, using an LSTM language model (§4.1). This yields richer interactions among the tags, which may be necessary for modeling long-distance agreement phenomena. Why a Generative Model? What is our interest in a generative model of inflected forms? Eq. (1) is a syntax-only language model in that it only allows for interdependencies between the morphosyntactic tags in pθ(m). However, given a tag sequence m, the individual lemmata and forms are conditionally independent. This prevents the model from learning notions such as semantic frames and topicality. So what is this model good for? Our chief interest is the ability to train a morphological inflector on unlabeled data, which is a boon in a low-resource setting. As the model is generative, we may consider the latent-variable model: pθ(f) = X ⟨ℓ,m⟩ pθ(f, ℓ, m), (2) where we marginalize out the latent lemmata and morphological tags from raw text. The sum in Eq. (2) is unabashedly intractable—given a sequence f, it involves consideration of an exponential (in |f|) number of tag sequences and an infinite number of lemmata sequences. Thus, we will fall back on an approximation scheme (see §5). 4 Recurrent Neural Parameterization The graphical model from §3 specifies a family of models that obey the conditional independence assumptions dictated by the graph in Fig. 1. In this section we define a specific parameterization using 3 long short-term memory (LSTM) recurrent neural network (Hochreiter and Schmidhuber, 1997) language models (Sundermeyer et al., 2012). 4.1 LSTM Language Models Before proceeding, we review the modeling of sequences with LSTM language models. Given some alphabet ∆, the distribution over sequences x ∈∆∗can be defined as follows: p(x) = |x| Y j=1 p(xj | x<j), (3) where x<j = x1, . . . , xj−1. The prediction at time step j of a single element xj is then parametrized by a neural network: p(xj | x<j) = softmax (W · hj + b) , (4) where W ∈R|∆|×d and b ∈R|∆| are learned parameters (for some number of hidden units d) and the hidden state hj ∈Rd is defined through the recurrence given by Hochreiter and Schmidhuber (1997) from the previous hidden state and an embedding of the previous character (assuming some learned embedding function e: ∆→Rc for some number of dimensions c): hj = LSTM hj−1, e(xj−1)  (5) 4.2 Our Conditional Distributions We discuss each of the factors in Eq. (1) in turn. 1 Morphological Tag Language Model: pθ(m). We define pθ(m) as an LSTM language model, as defined in §4.1, where we take ∆= M, i.e., the elements of the sequence that are to be predicted are tags like  POS=V, TNS=GERUND  . Note that the embedding function e does not treat them as atomic units, but breaks them up into individual attributevalue pairs that are embedded individually and then summed to yield the final vector representation. 
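As a schematic of the language-model factorization in Eqs. (3)–(5), the sketch below accumulates the log-probability of a short toy sequence. A plain tanh recurrence stands in for the LSTM of Eq. (5), and the alphabet size, dimensions, and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
V, c, d = 30, 8, 16                   # |Delta|, embedding dim, hidden dim
E = rng.normal(size=(V, c))           # embedding function e
W_h = rng.normal(size=(c + d, d))     # recurrence parameters (LSTM stand-in)
W, b = rng.normal(size=(V, d)), np.zeros(V)   # output layer of Eq. (4)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

h = np.zeros(d)
x_prev = 0                            # index of a start symbol
log_p = 0.0
for x_j in [4, 2, 7]:                 # an arbitrary toy sequence
    h = np.tanh(np.concatenate([E[x_prev], h]) @ W_h)   # Eq. (5), simplified
    p = softmax(W @ h + b)                              # Eq. (4)
    log_p += np.log(p[x_j])                             # accumulates Eq. (3)
    x_prev = x_j
print(log_p)
```

For the morphological-tag language model, the embedding of the previous symbol is instead the sum of the embeddings of its attribute–value pairs, as described next.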
To be precise, each tag is first encoded by a multi-hot vector, where each component corresponds to a attribute-value pair in the slot, and then this multihot vector is multiplied with an embedding matrix. 2 Lemma Generator: pθ(ℓi | mi). The next distribution in our model is a lemma generator which we define to be a conditional LSTM language model over characters (we take ∆= Σ), i.e., each xi is a single (orthographic) character. The language model is conditioned on ti (the part-ofspeech information contained in the morphological tag mi = ⟨ti, σi⟩), which we embed into a lowdimensional space and feed to the LSTM by concatenating its embedding with that of the current character. Thusly, we obtain the new recurrence relation for the hidden state: hj = LSTM  hj−1, h e [ℓi]j−1  ; e′ti  i , (6) where [ℓi]j denotes the jth character of the generated lemma ℓi and e′ : T →Rc′ for some c′ is a learned embedding function for POS tags. Note that we embed only the POS tag, rather than the entire morphological tag, as we assume the lemma depends on the part of speech exclusively. 3 Morphological Inflector: pθ(fi | ℓi, mi). The final conditional in our model is a morphological inflector, which we parameterize as a neural recurrent sequence-to-sequence model (Sutskever et al., 2014) with Luong dot-style attention (Luong et al., 2015). Our particular model uses a single encoder-decoder architecture (Kann and Schütze, 2016) for all tag pairs within a language and we refer to reader to that paper for further details. Concretely, the encoder runs over a string consisting of the desired slot and all characters of the lemma that is to be inflected (e.g. <w> V PST t a l k </w>), one LSTM running left-to-right, the other right-to-left. Concatenating the hidden states of both RNNs at each time step results in hidden states h(enc) j . The decoder, again, takes the form of an LSTM language model (we take ∆= Σ), producing the inflected form character by character, but at each time step not only the previous hidden state and the previously generated token are considered, but attention (a convex combination) over all encoder hidden states h(enc) j , with the distribution given by another neural network; see Luong et al. (2015). 5 Semi-Supervised Wake-Sleep We train the model with the wake-sleep procedure, which requires us to perform posterior inference over the latent variables. However, the exact computation in the model is intractable—it involves a sum over all possible lemmatizations and taggings of the sentence, as shown in Eq. (2). Thus, we fall back on a variational approximation (Jordan et al., 1999). We train an inference network qφ(ℓ, m | f) that approximates the true posterior over the latent variables pθ(ℓ, m | f).5 The 5Inference networks are also known as stochastic inverses (Stuhlmüller et al., 2013) or recognition models (Dayan et al., 4 variational family we choose in this work will be detailed in §5.5. We fit the distribution qφ using a semi-supervised extension of the wake-sleep algorithm (Hinton et al., 1995; Dayan et al., 1995; Bornschein and Bengio, 2014). We derive the algorithm in the following subsections and provide pseudo-code in Alg. 1. Note that the wake-sleep algorithm shows structural similarities to the expectation-maximization (EM) algorithm (Dempster et al., 1977), and, presaging the exposition, we note that the wake-sleep procedure is a type of variational EM (Beal, 2003). 
The key difference is that the E-step minimizes an inclusive KL divergence, rather than the exclusive one typically found in variational EM. 5.1 Data Requirements of Wake-Sleep We emphasize again that we will train our model in a semi-supervised fashion. Thus, we will assume a set of labeled sentences, Dlabeled, represented as a set of triples ⟨f, ℓ, m⟩, and a set of unlabeled sentences, Dunlabeled, represented as a set of surface form sequences f. 5.2 The Sleep Phase Wake-sleep first dictates that we find an approximate posterior distribution qφ that minimizes the KL divergences for all form sequences: DKL  pθ(·, ·, ·) | {z } full joint: Eq. (1) || qφ(·, · | ·) | {z } variational approximation  (7) with respect to the parameters φ, which control the variational approximation qφ. Because qφ is trained to be a variational approximation for any input f, it is called an inference network. In other words, it will return an approximate posterior over the latent variables for any observed sequence. Importantly, note that computation of Eq. (7) is still hard—it requires us to normalize the distribution pθ, which, in turn, involves a sum over all lemmatizations and taggings. However, it does lend itself to an efficient Monte Carlo approximation. As our model is fully generative and directed, we may easily take samples from the complete joint. Specifically, we will take K samples ⟨˜f, ˜ℓ, ˜m⟩∼pθ(·, ·, ·) by forward sampling and define them as eDsleep. We remark that we use a tilde to indicate that a form, lemmata or tag is sampled, rather than human annotated. Using K samples, 1995). we obtain the objective Sunsup = 1/K · X ⟨˜f,˜ℓ, ˜ m⟩∈e Dsleep log qφ(˜ℓ, ˜m | ˜f), (8) which we could maximize by fitting the model qφ through backpropagation (Rumelhart et al., 1986), as one would during maximum likelihood estimation. 5.3 The Wake Phase Now, given our approximate posterior qφ(ℓ, m | f), we are in a position to re-estimate the parameters of the generative model pθ(f, ℓ, m). Given a set of unannotated sentences Dunlabeled, we again first consider the objective Wunsup = 1/M · X ⟨f,˜ℓ, ˜ m⟩∈e Dwake log pθ(f, ˜ℓ, ˜m) (9) where eDwake is a set of triples ⟨f, ˜ℓ, ˜m⟩with f ∈Dunlabeled and ⟨˜ℓ, ˜m⟩∼qφ(·, · | f), maximizing with respect to the parameters θ (we may stochastically backprop through the expectation simply by backpropagating through this sum). Note that Eq. (9) is a Monte Carlo approximation of the inclusive divergence of the data distribution of Dunlabeled times qφ with pθ. 5.4 Adding Supervision to Wake-Sleep So far we presented a purely unsupervised training method that makes no assumptions about the latent lemmata and morphological tags. In our case, however, we have a very clear idea what the latent variables should look like. For instance, we are quite certain that the lemma of talking is talk and that it is in fact a GERUND. And, indeed, we have access to annotated examples Dlabeled in the form of an annotated corpus. In the presence of these data, we optimize the supervised sleep phase objective, Ssup = 1/N · X ⟨f,ℓ,m⟩∈Dlabeled log qφ(ℓ, m | f). (10) which is a Monte Carlo approximation of DKL(Dlabeled || qφ). Thus, when fitting our variational approximation qφ, we will optimize a joint objective S = Ssup + γsleep · Sunsup, where Ssup, to repeat, uses actual annotated lemmata and morphological tags; we balance the two parts of the objective with a scaling parameter γsleep. 
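Putting the four objectives together, one semi-supervised wake–sleep iteration can be sketched as follows (it mirrors Alg. 1). The sampling routines and log-probability functions are stubs standing in for p_θ and q_φ, and all values are meaningless placeholders.

```python
def sample_from_p():                  # ancestral sample <f, l, m> ~ p_theta
    return ("talking", "talk", "V;GERUND")

def sample_from_q(f):                 # posterior sample <l, m> ~ q_phi(. | f)
    return ("talk", "V;GERUND")

def log_q(l, m, f): return -1.0       # stand-ins for log q_phi and log p_theta
def log_p(f, l, m): return -2.0

D_labeled = [("talked", "talk", "V;PST")]
D_unlabeled = ["walking"]
gamma_sleep = gamma_wake = 1.0

# Sleep phase: fit q_phi on labeled data plus dreamed samples, Eq. (10) + Eq. (8).
D_sleep = [sample_from_p() for _ in range(2)]
S = sum(log_q(l, m, f) for f, l, m in D_labeled) \
    + gamma_sleep * sum(log_q(l, m, f) for f, l, m in D_sleep)

# Wake phase: fit p_theta on labeled data plus q-completed raw text, Eq. (11) + Eq. (9).
D_wake = [(f, *sample_from_q(f)) for f in D_unlabeled]
W = sum(log_p(f, l, m) for f, l, m in D_labeled) \
    + gamma_wake * sum(log_p(f, l, m) for f, l, m in D_wake)
print(S, W)    # in training, S and W are maximized w.r.t. phi and theta in turn
```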
Note that on the first sleep phase iteration, we set γsleep = 0 since taking samples from an untrained pθ(·, ·, ·) 5 Algorithm 1 semi-supervised wake-sleep 1: input Dlabeled ▷labeled training data 2: input Dunlabeled ▷unlabeled training data 3: for i = 1 to I do 4: eDsleep ←∅ 5: if i > 1 then 6: for k = 1 to K do 7: ⟨˜f, ˜ℓ, ˜m⟩∼pθ(·, ·, ·) 8: eDsleep ←eDsleep ∪{⟨˜f, ˜ℓ, ˜m⟩} 9: maximize log qφ on Dlabeled ∪eDsleep ▷this corresponds to Eq. (10) + Eq. (8) 10: eDwake ←∅ 11: for f ∈Dunlabeled do 12: ⟨˜ℓ, ˜m⟩∼qφ (·, · | f) 13: eDwake ←eDwake ∪{⟨f, ˜ℓ, ˜m⟩} 14: maximize log pθ on Dlabeled ∪eDwake ▷this corresponds to Eq. (11) + Eq. (9) when we have available labeled data is of little utility. We will discuss the provenance of our data in §7.2. Likewise, in the wake phase we can neglect the approximation qφ in favor of the annotated latent variables found in Dlabeled; this leads to the following supervised objective Wsup = 1/N · X ⟨f,ℓ,m⟩∈Dlabeled log pθ(f, ℓ, m), (11) which is a Monte Carlo approximation of DKL(Dlabeled || pθ). As in the sleep phase, we will maximize W = Wsup + γwake · Wunsup, where γwake is, again, a scaling parameter. 5.5 Our Variational Family How do we choose the variational family qφ? In terms of NLP nomenclature, qφ represents a joint morphological tagger and lemmatizer. The opensource tool LEMMING (Müller et al., 2015) represents such an object. LEMMING is a higher-order linear-chain conditional random field (CRF; Lafferty et al., 2001), that is an extension of the morphological tagger of Müller et al. (2013). Interestingly, LEMMING is a linear model that makes use of simple character n-gram feature templates. On both the tasks of morphological tagging and lemmatization, neural models have supplanted linear models in terms of performance in the high-resource case (Heigold et al., 2017). However, we are interested in producing an accurate approximation to the posterior in the presence of minimal annotated examples and potentially noisy samples produced during the sleep phase, where linear models still outperform non-linear approaches (Cotterell and Heigold, 2017). We note that our variational approximation is compatible with any family. 5.6 Interpretation as an Autoencoder We may also view our model as an autoencoder, following Kingma and Welling (2013), who saw that a variational approximation to any generative model naturally has this interpretation. The crucial difference between Kingma and Welling (2013) and this work is that our model is a structured variational autoencoder in the sense that the space of our latent code is structured: the inference network encodes a sentence into a pair of lemmata and morphological tags ⟨ℓ, m⟩. This bisequence is then decoded back into the sequence of forms f through a morphological inflector. The reason the model is called an autoencoder is that we arrive at an auto-encoding-like objective if we combine the pθ and qφ as so: p(f | ˆf)= X ⟨ℓ,m⟩ pθ(f |ℓ, m) · qφ(ℓ, m| ˆf) (12) where ˆf is a copy of the original sentence f. Note that this choice of latent space sadly precludes us from making use of the reparametrization trick that makes inference in VAEs particularly efficient. In fact, our whole inference procedure is quite different as we do not perform gradient descent on both qφ and pθ jointly but alternatingly optimize both (using wake-sleep). 
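For a one-word “sentence”, the autoencoding objective of Eq. (12) can even be enumerated by hand, since the latent code is just a (lemma sequence, tag sequence) pair; the probabilities in the toy sketch below are invented for illustration.

```python
f_hat = "talking"
codes = [(("talk",), ("V;GERUND",)),             # candidate latent codes <l, m>
         (("talk",), ("V;PST",))]
q = {codes[0]: 0.9, codes[1]: 0.1}               # q_phi(l, m | f_hat)
p = {codes[0]: 0.8, codes[1]: 0.001}             # p_theta(f = "talking" | l, m)

p_reconstruct = sum(p[z] * q[z] for z in codes)  # Eq. (12), summed over codes
print(f_hat, p_reconstruct)                      # chance of reconstructing f_hat
```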
We nevertheless call our model a VAE to uphold the distinction between the VAE as a model (essentially a specific Helmholtz machine (Dayan et al., 1995), justified by variational inference) and the end-to-end inference procedure that is commonly used. Another way of viewing this model is that it tries to force the words in the corpus through a syntactic bottleneck. Spiritually, our work is close to the conditional random field autoencoder of Ammar et al. (2014). We remark that many other structured NLP tasks can be “autoencoded” in this way and, thus, trained by a similar wake-sleep procedure. For instance, any two tasks that effectively function as inverses, e.g., translation and backtranslation, or language generation and parsing, can be treated with a similar variational autoencoder. While this work only 6 focuses on the creation of an improved morphological inflector pθ(f | ℓ, m), one could imagine a situation where the encoder was also a task of interest. That is, the goal would be to improve both the decoder (the generation model) and the encoder (the variational approximation). 6 Related Work Closest to our work is Zhou and Neubig (2017), who describe an unstructured variational autoencoder. However, the exact use case of our respective models is distinct. Our method models the syntactic dynamics with an LSTM language model over morphological tags. Thus, in the semisupervised setting, we require token-level annotation. Additionally, our latent variables are interpretable as they correspond to well-understood linguistic quantities. In contrast, Zhou and Neubig (2017) infer latent lemmata as real vectors. To the best of our knowledge, we are only the second attempt, after Zhou and Neubig (2017), to attempt to perform semi-supervised learning for a neural inflection generator. Other non-neural attempts at semi-supervised learning of morphological inflectors include Hulden et al. (2014). Models in this vein are non-neural and often focus on exploiting corpus statistics, e.g., token frequency, rather than explicitly modeling the forms in context. All of these approaches are designed to learn from a typelevel lexicon, rendering direct comparison difficult. 7 Experiments While we estimate all the parameters in the generative model, the purpose of this work is to improve the performance of morphological inflectors through semi-supervised learning with the incorporation of unlabeled data. 7.1 Low-Resource Inflection Generation The development of our method was primarily aimed at the low-resource scenario, where we observe a limited number of annotated data points. Why low-resource? When we have access to a preponderance of data, morphological inflection is close to being a solved problem, as evinced in SIGMORPHON’s 2016 shared task. However, the CoNLL-SIGMORPHON 2017 shared task showed there is much progress to be made in the lowresource case. Semi-supervision is a clear avenue. 7.2 Data As our model requires token-level morphological annotation, we perform our experiments on the Universal Dependencies (UD) dataset (Nivre et al., 2017). As this stands in contrast to most work on morphological inflection (which has used the UniMorph (Sylak-Glassman et al., 2015)6 datasets), we use a converted version of UD data, in which the UD morphological tags have been deterministically converted into UniMorph tags. For each of the treebanks in the UD dataset, we divide the training portion into three chunks consisting of the first 500, 1000 and 5000 tokens, respectively. 
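One way to realize this split is sketched below; treating sentences as atomic once the token budget is reached, and the exact representation of the converted treebank, are assumptions of the sketch rather than details reported here.

```python
# A sketch of the token-budgeted split. `sentences` is assumed to be the UD
# training section as a list of sentences, each a list of (form, lemma, tag)
# tokens after the deterministic UD-to-UniMorph tag conversion.

def split_by_token_budget(sentences, budgets=(500, 1000, 5000)):
    splits = {}
    for budget in budgets:
        cut, seen = 0, 0
        while cut < len(sentences) and seen < budget:
            seen += len(sentences[cut])
            cut += 1
        labeled = sentences[:cut]                          # D_labeled
        # The remaining sentences lose their labels and become D_unlabeled.
        unlabeled = [[form for form, _, _ in s] for s in sentences[cut:]]
        splits[budget] = (labeled, unlabeled)
    return splits
```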
These labeled chunks will constitute three unique sets Dlabeled. The remaining sentences in the training portion will be used as unlabeled data Dunlabeled for each language, i.e., we will discard those labels. The development and test portions will be left untouched. Languages. We explore a typologically diverse set of languages of various stocks: Indo-European, Afro-Asiatic, Turkic and Finno-Ugric, as well as the language isolate Basque. We have organized our experimental languages in Tab. 3 by genetic grouping, highlighting sub-families where possible. The Indo-European languages mostly exhibit fusional morphologies of varying degrees of complexity. The Basque, Turkic, and Finno-Ugric languages are agglutinative. Both of the Afro-Asiatic languages, Arabic and Hebrew, are Semitic and have templatic morphology with fusional affixes. 7.3 Evaluation The end product of our procedure is a morphological inflector, whose performance is to be improved through the incorporation of unlabeled data. Thus, we evaluate using the standard metric accuracy. We will evaluate at the type level, as is traditional in the morphological inflection literature, even though the UD treebanks on which we evaluate are token-level resources. Concretely, we compile an incomplete type-level morphological lexicon from the tokenlevel resource. To create this resource, we gather all unique form-lemma-tag triples ⟨f, ℓ, m⟩present 6The two annotation schemes are similar. For a discussion, we refer the reader to http: //universaldependencies.org/v2/features. html; sadly there are differences that render all numbers reported in this work incomparable with previous work, see §7.4. 7 FST NN SVAE 0% 20% 40% 60% 80% 100% (a) 500 training tokens FST NN SVAE 0% 20% 40% 60% 80% 100% (b) 1000 training tokens FST NN SVAE 0% 20% 40% 60% 80% 100% (c) 5000 training tokens Figure 2: Violin plots showing the distribution over accuracies. The structured variational autoencoder (SVAE) always outperforms the neural network (NN), but only outperformed the FST-based approach when trained on 5000 annotated tokens. Thus, while semi-supervised training helps neural models reduce their sample complexity, roughly 5000 annotated tokens are still required to boost their performance above more symbolic baselines. in the UD test data.7 7.4 Baselines As mentioned before, most work on morphological inflection has considered the task of estimating statistical inflectors from type-level lexicons. Here, in contrast, we require token-level annotation to estimate our model. For this reason, there is neither a competing approach whose numbers we can make a fair comparison to nor is there an open-source system we could easily run in the token-level setting. This is why we treat our token-level data as a list of “types”8 and then use two simple type-based baselines. First, we consider the probabilistic finite-state transducer used as the baseline for the CoNLLSIGMORPHON 2017 shared task.9 We consider this a relatively strong baseline, as we seek to generalize from a minimal amount of data. As described by Cotterell et al. (2017), the baseline performed quite competitively in the task’s low-resource setting. Note that the finite-state machine is created by heuristically extracting prefixes and suffixes from the word forms, based on an unsupervised alignment step. 
The second baseline is our neural inflector p(f | ℓ, m) given in §4 without the semisupervision; this model is state-of-the-art on the 7Some of these form-lemma-tag triples will overlap with those seen in the training data. 8Typical type-based inflection lexicons are likely not i.i.d. samples from natural utterances, but we have no other choice if we want to make use of only our token-level data and not additional resources like frequency and regularity of forms. 9https://sites.google.com/view/ conll-sigmorphon2017/ high-resource version of the task. We will refer to our baselines as follows: FST is the probabilistic transducer, NN is the neural sequence-to-sequence model without semisupervision, and SVAE is the structured variational autoencoder, which is equivalent to NN but also trained using wake-sleep and unlabeled data. 7.5 Results We ran the three models on 23 languages with the hyperparameters and experimental details described in App. A. We present our results in Fig. 2 and in Tab. 3. We also provide sample output of the generative model created using the dream step in App. B. The high-level take-away is that on almost all languages we are able to exploit the unlabeled data to improve the sequence-to-sequence model using unlabeled data, i.e., SVAE outperforms the NN model on all languages across all training scenarios. However, in many cases, the FST model is a better choice—the FST can sometimes generalize better from a handful of supervised examples than the neural network, even with semi-supervision (SVAE). We highlight three finer-grained observations below. Observation 1: FST Good in Low-Resource. As clearly evinced in Fig. 2, the baseline FST is still competitive with the NN, or even our SVAE when data is extremely scarce. Our neural architecture is quite general, and lacks the prior knowledge and inductive biases of the rule-based system, which become more pertinent in low-resource scenarios. Even though our semi-supervised strategy clearly 8 500 tokens 1000 tokens 5000 tokens lang FST NN SVAE ∆FST ∆NN FST NN SVAE ∆FST ∆NN FST NN SVAE ∆FST ∆NN ca 81.0 28.11 71.76 -9.24 43.65 85.0 42.58 78.46 -6.54 35.88 84.0 74.22 85.77 1.77 11.55 fr 84.0 36.25 74.75 -9.25 38.5 85.0 47.04 79.97 -5.03 32.93 85.0 79.21 83.96 -1.04 4.75 it 81.0 31.30 67.48 -13.52 36.18 81.0 43.58 77.37 -3.63 33.79 82.0 71.09 73.11 -8.89 2.02 la 21.0 14.02 29.12 8.12 15.10 26.0 19.62 27.06 1.06 7.44 30.0 41.00 47.32 17.32 6.32 Romance pt 81.0 31.58 72.54 -8.46 40.96 83.0 47.27 73.24 -9.76 25.97 82.0 74.17 86.13 4.13 11.96 ro 56.0 22.56 52.48 -3.52 29.92 62.0 34.68 58.30 -3.70 23.62 68.0 51.77 75.49 7.49 23.72 es 57.0 34.34 75.32 18.32 40.98 60.0 46.14 80.97 20.97 34.83 72.0 71.99 84.44 12.44 12.45 nl 63.0 19.22 49.14 -13.86 29.92 65.0 26.05 53.12 -11.88 27.07 70.0 53.70 65.97 -4.03 12.27 da 68.0 31.25 65.58 -2.42 34.33 73.0 44.51 72.82 -0.18 28.31 79.0 67.92 80.12 1.12 12.20 no 69.0 32.51 65.46 -3.54 32.95 71.0 46.26 74.49 3.49 28.23 79.0 71.31 81.25 2.25 9.94 Germanic nn 64.0 20.29 54.62 -9.38 34.33 65.0 24.32 60.97 -4.03 36.65 72.0 50.40 73.35 1.35 22.95 sv 63.0 19.02 58.15 -4.85 39.13 66.0 36.35 67.18 1.18 30.83 74.0 59.82 78.23 4.23 18.41 bg 44.0 15.51 47.22 3.22 31.71 51.0 21.00 57.18 6.18 36.18 59.0 49.06 71.15 12.15 22.09 pl 50.0 12.75 48.62 -1.38 35.87 57.0 19.88 55.90 -1.10 36.02 64.0 54.44 67.15 3.15 12.71 Slavic si 52.0 15.60 55.69 3.69 40.09 61.0 26.39 61.22 0.22 34.83 68.0 66.65 75.40 7.40 8.75 ar 14.0 31.47 63.53 49.53 32.06 17.0 48.53 71.52 54.52 22.99 34.0 68.16 80.72 46.72 12.56 Semit. 
he 60.0 37.61 71.11 11.11 33.50 66.0 50.28 76.32 10.32 26.04 72.0 64.37 86.60 14.6 22.23 hu 53.0 22.56 48.64 -4.36 26.08 56.0 28.62 60.74 4.74 32.12 61.0 66.45 72.84 11.84 6.39 et 39.0 21.81 42.16 3.16 20.35 45.0 29.66 51.75 6.75 22.09 49.0 46.82 58.91 9.91 12.09 Finn.-Urg. fi 37.0 12.97 35.78 -1.22 22.81 42.0 19.03 47.65 5.65 28.62 49.0 46.75 62.76 13.76 16.01 lv 57.0 17.16 48.29 -8.71 31.13 63.0 18.30 53.58 -9.42 35.28 66.0 51.84 66.12 0.12 14.28 eu 50.0 24.46 48.72 -1.28 24.26 54.0 35.14 53.39 -0.61 18.25 56.0 56.29 62.33 6.33 6.04 other tr 34.0 20.67 37.92 3.92 17.25 37.0 24.33 49.67 12.67 25.34 48.0 63.26 69.35 21.35 6.09 avg 55.57 24.04 55.83 0.26 31.79 59.61 33.89 62.73 3.12 6.90 65.35 60.90 73.41 8.06 12.51 Table 3: Type-level morphological inflection accuracy across different models, training scenarios, and languages improves the performance of NN, we cannot always recommend SVAE for the case when we only have 500 annotated tokens, but on average it does slightly better. The SVAE surpasses the FST when moving up to 1000 annotated tokens, becoming even more pronounced at 5000 annotated tokens. Observation 2: Agglutinative Languages. The next trend we remark upon is that languages of an agglutinating nature tend to benefit more from the semi-supervised learning. Why should this be? Since in our experimental set-up, every language sees the same number of tokens, it is naturally harder to generalize on languages that have more distinct morphological variants. Also, by the nature of agglutinative languages, relevant morphemes could be arbitrarily far from the edges of the string, making the (NN and) SVAE’s ability to learn more generic rules even more valuable. Observation 3: Non-concatenative Morphology. One interesting advantage that the neural models have over the FSTs is the ability to learn nonconcatenative phenomena. The FST model is based on prefix and suffix rewrite rules and, naturally, struggles when the correctly reinflected form is more than the concatenation of these parts. Thus we see that for the two semitic language, the SVAE is the best method across all resource settings. 8 Conclusion We have presented a novel generative model for morphological inflection generation in context. The model allows us to exploit unlabeled data in the training of morphological inflectors. As the model’s rich parameterization prevents tractable inference, we craft a variational inference procedure, based on the wake-sleep algorithm, to marginalize out the latent variables. Experimentally, we provide empirical validation on 23 languages. We find that, especially in the lower-resource conditions, our model improves by large margins over the baselines. References Malin Ahlberg, Markus Forsberg, and Mans Hulden. 2015. Paradigm classification in supervised learning of morphology. In Human Language Technologies: The 2015 Annual Conference of the North American Chapter of the ACL, pages 1024–1029, Denver, CO. Association for Computational Linguistics. Waleed Ammar, Chris Dyer, and Noah A. Smith. 2014. Conditional random field autoencoders for unsupervised structured prediction. In Advances in Neural Information Processing Systems, pages 3311–3319. Mark Aronoff. 1976. Word Formation in Generative Grammar. Number 1 in Linguistic Inquiry Monographs. MIT Press, Cambridge, MA. 9 Matthew James Beal. 2003. Variational Algorithms for Approximate Bayesian Inference. University College London. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. 
Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606. Jörg Bornschein and Yoshua Bengio. 2014. Reweighted wake-sleep. CoRR, abs/1406.2751. Ryan Cotterell and Georg Heigold. 2017. Crosslingual character-level neural morphological tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 748–759, Copenhagen, Denmark. Association for Computational Linguistics. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. The CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages. In Proceedings of the CoNLL-SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, Vancouver, Canada. Association for Computational Linguistics. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task— morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 10–22, Berlin, Germany. Association for Computational Linguistics. Peter Dayan, Geoffrey E. Hinton, Radford M. Neal, and Richard S. Zemel. 1995. The Helmholtz machine. Neural Computation, 7(5):889–904. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society. Series B (Methodological), 39(1):1–38. Markus Dreyer, Jason Smith, and Jason Eisner. 2008. Latent-variable modeling of string transductions with finite-state methods. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1080–1089, Honolulu, Hawaii. Association for Computational Linguistics. Matthew S. Dryer, David Gil, Bernard Comrie, Hagen Jung, Claudia Schmidt, et al. 2005. The world atlas of language structures. Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1185–1195, Atlanta, Georgia. Association for Computational Linguistics. Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2016. Morphological inflection generation using character sequence to sequence learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 634–643, San Diego, California. Association for Computational Linguistics. Shai Fine, Yoram Singer, and Naftali Tishby. 1998. The hierarchical hidden Markov model: Analysis and applications. Machine Learning, 32(1):41–62. Georg Heigold, Guenter Neumann, and Josef van Genabith. 2017. An extensive empirical evaluation of character-based morphological tagging for 14 languages. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 505–513, Valencia, Spain. Association for Computational Linguistics. Geoffrey E. Hinton, Peter Dayan, Brendan J. Frey, and Radford M. Neal. 1995. The "wake-sleep" algorithm for unsupervised neural networks. Science, 268(5214):1158–1161. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Mans Hulden, Markus Forsberg, and Malin Ahlberg. 2014. 
Semi-supervised learning of morphological paradigms and lexicons. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 569– 578, Gothenburg, Sweden. Association for Computational Linguistics. Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. 1999. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233. Katharina Kann and Hinrich Schütze. 2016. Singlemodel encoder-decoder with explicit morphological representation for reinflection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 555–560, Berlin, Germany. Association for Computational Linguistics. Aleksandr E. Kibrik. 1998. Archi. In Andrew Spencer and Arnold M. Zwicky, editors, The Handbook of Morphology, pages 455–476. Diederik P. Kingma and Max Welling. 2013. Autoencoding variational Bayes. arXiv preprint arXiv:1312.6114. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 282–289. 10 Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics. Thomas Müller, Ryan Cotterell, Alexander Fraser, and Hinrich Schütze. 2015. Joint lemmatization and morphological tagging with Lemming. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2268– 2274, Lisbon, Portugal. Association for Computational Linguistics. Thomas Müller, Helmut Schmid, and Hinrich Schütze. 2013. Efficient higher-order CRFs for morphological tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 322–332, Seattle, Washington, USA. Association for Computational Linguistics. Garrett Nicolai, Colin Cherry, and Grzegorz Kondrak. 2015. Inflection generation as discriminative string transduction. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 922–931, Denver, Colorado. Association for Computational Linguistics. Joakim Nivre, Željko Agi´c, Lars Ahrenberg, Maria Jesus Aranzabe, Masayuki Asahara, Aitziber Atutxa, Miguel Ballesteros, John Bauer, Kepa Bengoetxea, Riyaz Ahmad Bhat, Eckhard Bick, Cristina Bosco, Gosse Bouma, Sam Bowman, Marie Candito, Gül¸sen Cebiro˘glu Eryi˘git, Giuseppe G. A. Celano, Fabricio Chalub, Jinho Choi, Ça˘grı Çöltekin, Miriam Connor, Elizabeth Davidson, MarieCatherine de Marneffe, Valeria de Paiva, Arantza Diaz de Ilarraza, and Dobrovoljc. 2017. Universal dependencies 2.0. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( ’UFAL), Faculty of Mathematics and Physics, Charles University. Lawrence R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257– 286. David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning internal representations by error propagation. Technical report, California Univ San Diego La Jolla Inst for Cognitive Science. Andrew Spencer. 1991. 
Morphological Theory: An Introduction to Word Structure in Generative Grammar. Wiley-Blackwell. Andreas Stuhlmüller, Jacob Taylor, and Noah Goodman. 2013. Learning stochastic inverses. In Advances in Neural Information Processing Systems, pages 3048–3056. Martin Sundermeyer, Ralf Schlüter, and Hermann Ney. 2012. LSTM neural networks for language modeling. In Thirteenth Annual Conference of the International Speech Communication Association. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104–3112. John Sylak-Glassman. 2016. The composition and use of the universal morphological feature schema (Unimorph schema). Technical report, Johns Hopkins University. John Sylak-Glassman, Christo Kirov, David Yarowsky, and Roger Que. 2015. A language-independent feature schema for inflectional morphology. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL), pages 674–680, Beijing, China. Association for Computational Linguistics. Matthew D. Zeiler. 2012. Adadelta: An adaptive learning rate method. arXiv preprint:1212.5701. Chunting Zhou and Graham Neubig. 2017. Multispace variational encoder-decoders for semisupervised labeled sequence transduction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 310–320, Vancouver, Canada. Association for Computational Linguistics. 11 A Hyperparameters and Experimental Details Here, we list all the hyperparameters and other experimental details necessary for the reproduction of the numbers presented in Tab. 3. The final experiments were produced with the follow setting. We performed a modest grid search over various configurations in the search of the best option on development for each component. LSTM Morphological Tag Language Model. The morphological tag language model is a 2-layer vanilla LSTM trained with hidden size of 200. It is trained to for 40 epochs using SGD with a cross entropy loss objective, and an initial learning rate of 20 where the learning rate is quartered during any epoch where the loss on the validation set reaches a new minimum. We regularize using dropout of 0.2 and clip gradients to 0.25. The morphological tags are embedded (both for input and output) with a multi-hot encoding into R200, where any given tag has an embedding that is the sum of the embedding for its constituent POS tag and each of its constituent slots. Lemmata Generator. The lemma generator is a single-layer vanilla LSTM, trained for 10000 epochs using SGD with a learning rate of 4, using a batch size of 20000. The LSTM has 50 hidden units, embeds the POS tags into R5 and each token (i.e., character) into R5. We regularize using weight decay (1e-6), no dropout, and clip gradients to 1. When sampling lemmata from the model, we cool the distribution using a temperature of 0.75 to generate more “conservative” values. The hyperparameters were manually tuned on Latin data to produce sensible output and fit development data and then reused for all languages of this paper. Morphological Inflector. The reinflection model is a single-layer GRU-cell seq2seq model with a bidirectional encoder and multiplicative attention in the style of Luong et al. 
(2015), which we train for 250 iterations of AdaDelta (Zeiler, 2012). Our search over the remaining hyperparameters was as follows (optimal values in bold): input embedding size of [50, 100, 200, 300 ], hidden size of [50, 100, 150, 200], and a dropout rate of [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]. Lemmatizer and Morphological Tagger. The joint lemmatizer and tagger is LEMMING as described in §5.5. It is trained with default parameters, the pretrained word vectors from Bojanowski et al. (2016) as type embeddings, and beam size 3. Wake-Sleep We run two iterations (I = 2) of wake-sleep. Note that each of the subparts of wakesleep: estimating pθ and estimating qφ are trained to convergence and use the hyperparameters described in the previous paragraphs. We set γwake and γsleep to 0.25, so we observe roughly 1/4 as many dreamt samples as true samples. The samples from the generative model often act as a regularizer, helping the variational approximation (as measured on morphological tagging and lemmatization accuracy) on the UD development set, but sometimes the noise lowers performance a mite. Due to a lack of space in the initial paper, we did not deeply examine the performance of the tagger-lemmatizer outside the context of improving inflection prediction accuracy. Future work will investigate question of how much tagging and lemmatization can be improved through the incorporation of samples from our generative model. In short, our efforts will evaluate the inference network in its own right, rather than just as a variational approximation to the posterior. B Fake Data from the Sleep Phase An example sentence ˜f sampled via ⟨˜f, ˜ℓ, ˜m⟩∼ pθ (·, ·, ·) in Portuguese: dentremeticamente » isso Procusas Da Fase » pos a acordítica Máisringeringe Ditudis A ana , Urevirao Da De O linsith.muital , E que chegou interalionalmente Da anundica De mêpinsuriormentais . and in Latin: inpremcret ita sacrum super annum pronditi avocere quo det tuam nunsidebus quod puela ? 12
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2642–2652 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2642 Morphosyntactic Tagging with a Meta-BiLSTM Model over Context Sensitive Token Encodings Bernd Bohnet, Ryan McDonald, Gonc¸alo Sim˜oes, Daniel Andor, Emily Pitler, Joshua Maynez Google Inc. {bohnetbd,ryanmcd,gsimoes,andor,epitler,joshuahm}@google.com Abstract The rise of neural networks, and particularly recurrent neural networks, has produced significant advances in part-ofspeech tagging accuracy (Zeman et al., 2017). One characteristic common among these models is the presence of rich initial word encodings. These encodings typically are composed of a recurrent character-based representation with learned and pre-trained word embeddings. However, these encodings do not consider a context wider than a single word and it is only through subsequent recurrent layers that word or sub-word information interacts. In this paper, we investigate models that use recurrent neural networks with sentence-level context for initial character and word-based representations. In particular we show that optimal results are obtained by integrating these context sensitive representations through synchronized training with a meta-model that learns to combine their states. We present results on part-of-speech and morphological tagging with state-of-the-art performance on a number of languages. 1 Introduction Morphosyntactic tagging accuracy has seen dramatic improvements through the adoption of recurrent neural networks—specifically BiLSTMs (Schuster and Paliwal, 1997; Graves and Schmidhuber, 2005) to create sentence-level context sensitive encodings of words. A successful recipe is to first create an initial context insensitive word representation, which usually has three main parts: 1) A dynamically trained word embedding; 2) a fixed pre-trained word-embedding, induced from a large corpus; and 3) a sub-word character model, which itself is usually the final state of a recurrent model that ingests one character at a time. Such word/sub-word models originated with Plank et al. (2016). Recently, Dozat et al. (2017) used precisely such a context insensitive word representation as input to a BiLSTM in order to obtain context sensitive word encodings used to predict partof-speech tags. The Dozat et al. model had the highest accuracy of all participating systems in the CoNLL 2017 shared task (Zeman et al., 2017). In such a model, sub-word character-based representations only interact indirectly via subsequent recurrent layers. For example, consider the sentence I had shingles, which is a painful disease. Context insensitive character and word representations may have learned that for unknown or infrequent words like ‘shingles’, ‘s’ and more so ‘es’ is a common way to end a plural noun. It is up to the subsequent BiLSTM layer to override this once it sees the singular verb is to the right. Note that this differs from traditional linear models where word and sub-word representations are directly concatenated with similar features in the surrounding context (Gim´enez and Marquez, 2004). In this paper we aim to investigate to what extent having initial sub-word and word context insensitive representations affects performance. We propose a novel model where we learn context sensitive initial character and word representations through two separate sentence-level recurrent models. 
These are then combined via a metaBiLSTM model that builds a unified representation of each word that is then used for syntactic tagging. Critically, while each of these three models—character, word and meta—are trained synchronously, they are ultimately separate models using different network configurations, training hyperparameters and loss functions. Empirically, we found this optimal as it allowed control 2643 over the fact that each representation has a different learning capacity. We tested the system on the 2017 CoNLL shared task data sets and gain improvements compared to the top performing systems for the majority of languages for part-of-speech and morphological tagging. As we will see, a pattern emerged where gains were largest for morphologically rich languages, especially those in the Slavic family group. We also applied the approach to the benchmark English PTB data, where our model achieved 97.9 using the standard train/dev/test split, which constitutes a relative reduction in error of 12% over the previous best system. 2 Related Work While sub-word representations are often attributed to the advent of deep learning in NLP, it was, in fact, commonplace for linear featurized machine learning methods to incorporate such representations. While the literature is too large to enumerate, Gim´enez and Marquez (2004) is a good example of an accurate linear model that uses both word and sub-word features. Specifically, like most systems of the time, they use ngram affix features, which were made context sensitive via manually constructed conjunctions with features from other words in a fixed window. Collobert and Weston (2008) was perhaps the first modern neural network for tagging. While this first study used only word embeddings, a subsequent model extended the representation to include suffix embeddings (Collobert et al., 2011). The seminal dependency parsing paper of Chen and Manning (2014) led to a number of tagging papers that used their basic architecture of highly featurized (and embedded) feed-forward neural networks. Botha et al. (2017), for example, studied this architecture in a low resource setting using word, sub-word (prefix/suffix) and induced cluster features to obtain competitive accuracy with the state-of-the-art. Zhou et al. (2015), Alberti et al. (2015) and Andor et al. (2016) extended the work of Chen et al. to a structured prediction setting, the later two use again a mix of word and sub-word features. The idea of using a recurrent layer over characters to induce a complementary view of a word has occurred in numerous papers. Perhaps the earliest is Santos and Zadrozny (2014) who compare character-based LSTM encodings to traditional word-based embeddings. Ling et al. (2015) take this a step further and combine the word embeddings with a recurrent character encoding of the word—instead of just relying on one or the other. Alberti et al. (2017) use characters encodings for parsing. Peters et al. (2018) show that contextual embeddings using character convolutions improve accuracy for number of NLP tasks. Plank et al. (2016) is probably the jumping-off point for most current architectures for tagging models with recurrent neural networks. Specifically, they used a combined word embedding and recurrent character encoding as the initial input to a BiLSTM that generated context sensitive word encodings. Though, like most previous studies, these initial encodings were context insensitive and relied on subsequent layers to encode sentence-level interactions. 
Finally, Dozat et al. (2017) showed that subword/word combination representations lead to state-of-the-art morphosyntactic tagging accuracy across a number of languages in the CoNLL 2017 shared task (Zeman et al., 2017). Their word representation consisted of three parts: 1) A dynamically trained word embedding; 2) a fixed pretrained word embedding; 3) a character LSTM encoding that summed the final state of the recurrent model with vector constructed using an attention mechanism over all character states. Again, the initial representations are all context insensitive. As this model is currently the state-of-the-art in morphosyntactic tagging, it will serve as a baseline during our discussion and experiments. 3 Models In this section, we introduce models that we investigate and experiment with in §4. 3.1 Sentence-based Character Model The feature that distinguishes our model most from previous work is that we apply a bidirectional recurrent layer (LSTM) on all characters of a sentence to induce fully context sensitive initial word encodings. That is, we do not restrict the context of this layer to the words themselves (as in Figure 1b). Figure 1a shows the sentence-based character model applied to an example token in context. The character model uses, as input, sentences split into UTF8 characters. We include the spaces between the tokens1 in the input and map each 1As input, we assume the sentence has been tok2644 (a) Sentence-based Character Model. The representation for the token shingles is the concatenation of the four shaded boxes. Note the surrounding sentence context affects the representation. (b) Token-based Character Modela. The token is represented by the concatenation of attention over the lightly shaded boxes with the final cell (dark shaded box). The rest of the sentence has no impact on the representation. aThis is specifically the model of Dozat et al. (2017). Figure 1: Token representations are sensitive to the context in the sentence-based character model (§3.1) and are context-independent in the token-based character model (§3.2). character to a dynamically learned embedding. Next, a forward LSTM reads the characters from left to right and a backward LSTM reads sentences from right to left, in standard BiLSTM fashion. More formally, for an n-character sentence, we apply for each character embedding (echar 1 , ..., echar n ) a BiLSTM: f0 c,i, b0 c,i = BiLSTM(r0, (echar 1 , ..., echar n ))i As is also typical, we can have multiple such layers (l) that feed into each other through the concatenation of previous layer encodings. The last layer l has both forward (fl c,1, ..., fl c,n) and backward (bl c,1, ..., bl c,n) output vectors for each character. To create word encodings, we need to combine a relevant subset of these context sensitive character encodings. These word encodings can then be used in a model that assigns morphosyntactic tags to each word directly or via subsequent layers. To accomplish this, the model concatenates up to four character output vectors: the {forward, backward} output of the {first, last} character in the token (F1st(w), Flast(w), B1st(w), Blast(w)). In Figure 1a, the four shaded boxes indicate these four outputs for the example token. Thus, the proposed model concatenates all four of these and passes it as input to an multilayer perceptron (MLP): gi = concat(F1st(w), Flast(w), B1st(w), Blast(w)) (1) mchars i = MLP(gi) A tag can then be predicted with a linear classifier that takes as input the output of the MLP enized/segmented. 
mchars i , applies a softmax function and chooses for each word the tag with highest probability. Table 8 investigates the empirical impact of alternative definitions of gi that concatenate only subsets of {F1st(w), Flast(w), B1st(w), Blast(w)}. 3.2 Word-based Character Model To investigate whether a sentence sensitive character model is better than a character model where the context is restricted to the characters of a word, we reimplemented the word-based character model of Dozat et al. (2017) as shown in Figure 1a. This model uses the final state of a unidirectional LSTM over the characters of the word, combined with the attention mechanism of Cao and Rei (2016) over all characters. We refer the reader to those works for more details. Critically, however, all the information fed to this representation comes from the word itself, and not a wider sentence-level context. 3.3 Sentence-based Word Model We used a similar setup for our context sensitive word encodings as the character encodings. There are a few differences. Obviously, the inputs are the words of the sentence. For each of the words, we use pretrained word embeddings (pword 1 , ..., pword n ) summed with a dynamically learned word embedding for each word in the corpus (eword 1 , ..., eword n ): inword i = eword i + pword i The summed embeddings ini are passed as input to one or more BiLSTM layers whose output fl w,i, bl w,i is concatenated and used as the final encoding, which is then passed to an MLP oword i = concat(fl w,i, bl w,i) mword i = MLP(oword i ) 2645 It should be noted, that the output of this BiLSTM is essentially the Dozat et al. model before tag prediction, with the exception that the wordbased character encodings are excluded. 3.4 Meta-BiLSTM: Model Combination Given initial word encodings, both character and word-based, a common strategy is to pass these through a sentence-level BiLSTM to create context sensitive encodings, e.g., this is precisely what Plank et al. (2016) and Dozat et al. (2017) do. However, we found that if we trained each of the character-based and word-based encodings with their own loss, and combined them using an additional meta-BiLSTM model, we obtained optimal performance. In the meta-BiLSTM model, we concatenate the output, for each word, of its context sensitive character and word-based encodings, and put this through another BiLSTM to create an additional combined context sensitive encoding. This is followed by a final MLP whose output is passed to a linear layer for tag prediction. cwi = concat(mchar i , mword i ) fl m,i, bl m,i = BiLSTM(r0, (cw0, ..., cwn))i mcomb i = MLP(concat(fl m,i, bl m,i)) With this setup, each of the models can be optimized independently which we describe in more detail in §3.5. Figure 2b depicts the architecture of the combined system and contrasts it with that of the Dozat et al. model (Figure 2a). 3.5 Training Schema As mentioned in the previous section, the character and word-based encoding models have their own tagging loss functions, which are trained independently and joined via the meta-BiLSTM. I.e., the loss of each model is minimized independently by separate optimizers with their own hyperparameters. Thus, it is in some sense a multitask learning model and we must define a schedule in which individual models are updated. We opted for a simple synchronous schedule outline in Algorithm 1. Here, during each epoch, we update each of the models in sequence—character, word and meta—using the entire training data. 
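As a concrete illustration of §3.1, the following PyTorch-style sketch implements the sentence-based character encoder for a single sentence; the single BiLSTM layer, the dimensions and the one-sentence batching are simplifications of the sketch, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SentenceCharEncoder(nn.Module):
    """Sketch of the sentence-level character model: a BiLSTM runs over all
    characters of the sentence (spaces included) and each token is encoded as
    concat(F_1st, F_last, B_1st, B_last) passed through an MLP."""

    def __init__(self, n_chars, emb_dim=100, hidden=400, mlp_dim=400, n_tags=50):
        super().__init__()
        self.embed = nn.Embedding(n_chars, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(4 * hidden, mlp_dim), nn.ELU())
        self.tag = nn.Linear(mlp_dim, n_tags)  # linear classifier for the loss

    def forward(self, char_ids, token_spans):
        # char_ids: (1, n) character indices of the whole sentence;
        # token_spans: list of (first, last) character positions per token.
        states, _ = self.bilstm(self.embed(char_ids))     # (1, n, 2 * hidden)
        fwd, bwd = states[0].chunk(2, dim=-1)             # each (n, hidden)
        reps = [self.mlp(torch.cat([fwd[i], fwd[j], bwd[i], bwd[j]]))
                for i, j in token_spans]
        m_chars = torch.stack(reps)                        # (n_tokens, mlp_dim)
        return m_chars, self.tag(m_chars)                  # encodings, logits
```

The sentence-based word model of §3.3 is the analogous BiLSTM over the summed word embeddings eword_i + pword_i, producing mword_i for each token.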
In terms of model selection, after each epoch, the algorithm evaluates the tagging accuracy of the development set and keeps the parameters of the best model. Accuracy is measured using the Data: train-corpus, dev-corpus /* The following models are defined in §3. */ Input: char-model, word-model, meta-model /* Model optimizers */ Input: char-opt, word-opt, meta-opt /* Results are parameter sets for each model. */ Result: best-char, best-word, best-meta /* Initialize parameter sets (cf. Table 1) */ Initialize(pac, paw, pam) /* Iteration on over training corpus. */ for epoch = 1 to MAX do /* Update character model. */ char-logits, char-preds = char-model(train-corpus, pac) pac = char-opt.update(char-preds, train-data) /* Update word model. */ word-logits, word-preds = word-model(train-corpus, paw) paw = word-opt.update(char-preds, train-data) /* Update Meta-BiLSTM model. */ meta-preds = meta-model(train-corpus, pac, paw, pam) pam = meta-opt.update(train-corpus, meta-preds) /* Evaluate model due to dev set accuracy. */ F1 = DevEval(parc, parw, parm) /* Keep the best model. */ if F1 > best-F1 then best-char = pac; best-word = paw best-meta = pam; best-F1 = F1 end end Algorithm 1: Training procedure for learning initial character and word-based context sensitive encodings synchronously with metaBiLSTM. meta-BiLSTM tagging layer, which requires a forward pass through all three models. Though we use all three losses to update the models, only the meta-BiLSTM layer is used for model selection and test-time prediction. While each of the three models—character, word and meta—are trained with their own loss functions, it should be emphasized that training is synchronous in the sense that the meta-BiLSTM model is trained in tandem with the two encoding models, and not after those models have converged. Since accuracy from the meta-BiLSTM model on the development set determines the best parameters, training is not completely independent. We found this to improve accuracy overall. Crucially, when we allowed the meta-BiLSTM to back-propagate through the whole network, performance degraded regardless of whether one or multiple loss functions were used. 2646 Word Char Embeddings MLP classifier Word Embeddings Words of a Sentence 7 Attn Final (a) The overall architecture of Dozat et al. (2017) Chars of a Sentence Char Embeddings MLP classifier MLP classifier MLP classifier Word Embeddings Words of a Sentence 7 7 (b) The overall architecture of the system. The data flows along the arrows. The optimizers minimizes the loss of the classifiers independently and backpropagates along the bold arrows. Figure 2: Tagging architectures. (a) Dozat et al. (2017); (b) Meta-BiLSTM architecture of this work. Each language could in theory use separate hyperparameters, optimized for highest accuracy. However, for our main experiments we use identical settings for each language which worked well for large corpora and simplified things. We provide an overview of the selected hyperparameters in §4.1. We explored more settings for selected individual languages with a grid search and ablation experiments and present the results in §5. 4 Experiments and Results In this section, we present the experimental setup and the selected hyperparameter for the main experiments where we use the CoNLL Shared Task 2017 treebanks and compare with the best systems of the shared task. 4.1 Experimental Setup For our main results, we selected one network configuration and set of the hyperparameters. 
These settings are not optimal for all languages. However, since hyperparameter exploration is computationally demanding due to the number of languages we optimized these hyperparameters on initial development data experiments over a few languages. Table 1 shows an overview of the architecture, hyperparameters and the initialization settings of the network. The word embeddings are initialized with zero values and the pre-trained embeddings are not updated during training. The dropout used on the embeddings is achieved by a single dropout mask and we use dropout on the input and the states of the LSTM. Architecture Model Parameter Value Chr, Wrd BiLSTM layers 3 Mt BiLSTM layers 1 Chr, Wrd, Mt BiLSTM size 400 Chr, Wrd, Mt Dropout LSTMs 0.33 Chr, Wrd, Mt Dropout MLP 0.33 Wrd Dropout embeddings 0.33 Chr Dropout embeddings 0.05 Chr, Wrd, Mt Nonlinear act. (MLP) ELU Initialization Model Parameter Value Wrd embeddings Zero Chr embeddings Gaussian Chr, Wrd, Mt MLP Gaussian Training Model Parameter Value Chr, Wrd, Mt Optimizer Adam Chr, Wrd, Mt Loss Cross entropy Chr, Wrd, Mt Learning rate 0.002 Chr, Wrd, Mt Decay 0.999994 Chr, Wrd, Mt Adam epsilon 1e-08 Chr, Wrd, Mt beta1 0.9 Chr, Wrd, Mt beta2 0.999 Table 1: Selected hyperparameters and initialization of parameters for our models. Chr, Wrd, and Mt are used to indicate the character, word, and meta models respectively. The Gaussian distribution is used with a mean of 0 and variance of 1 to generate the random values. As is standard, model selection was done measuring development accuracy/F1 score after each epoch and taking the model with maximum value on the development set. 2647 4.2 Data Sets For the experiments, we use the data sets as provided by the CoNLL Shared Task 2017 (Zeman et al., 2017). For training, we use the training sets which were denoted as big treebanks 2. We followed the same methodology used in the CoNLL Shared Task. We use the training treebank for training only and the development sets for hyperparameter tuning and early stopping. To keep our results comparable with the Shared Task, we use the provided precomputed word embeddings. We excluded Gothic from our experiments as the available downloadable content did not include embeddings for this language. As input to our system—for both part-ofspeech tagging and morphological tagging—we use the output of the UDPipe-base baseline system (Straka and Strakov´a, 2017) which provides segmentation. The segmentation differs from the gold segmentation and impacts accuracy negatively for a number of languages. Most of the top performing systems for part-of-speech tagging used as input UDPipe to obtain the segmentation for the input data. For morphology, the top system for most languages (IMS) used its own segmentation (Bj¨orkelund et al., 2017). For the evaluation, we used the official evaluation script (Zeman et al., 2017). 4.3 Part-of-Speech Tagging Results In this section, we present the results of the application of our model to part-of-speech tagging. In our first experiment, we used our model in the setting of the CoNLL 2017 Shared Task to annotate words with XPOS3 tags (Zeman et al., 2017). We compare our results against the top systems of the CoNLL 2017 Shared Task. Table 2 contains the results of this task for the large treebanks. Because Dozat et al. (2017) won the challenge for the majority of the languages, we first compare our results with the performance of their system. Our model outperforms Dozat et al. (2017) in 32 of the 54 treebanks with 13 ties. 
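To make the architecture of §3.4 and the training schema of §3.5 concrete, the following PyTorch-style sketch shows the meta model and one synchronous update step; the batch fields, the word-encoder interface and all dimensions are assumptions of the sketch rather than details of the actual implementation.

```python
import torch
import torch.nn as nn

class MetaBiLSTM(nn.Module):
    """Sketch of the meta model (Section 3.4): it reads the concatenated
    character- and word-based token encodings and predicts the final tags."""

    def __init__(self, char_dim, word_dim, hidden=400, n_tags=50):
        super().__init__()
        self.bilstm = nn.LSTM(char_dim + word_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ELU())
        self.tag = nn.Linear(hidden, n_tags)

    def forward(self, m_chars, m_words):
        # Detach the encoder outputs: the meta loss never back-propagates into
        # the encoders, since full back-propagation degraded accuracy (Sec. 3.5).
        cw = torch.cat([m_chars.detach(), m_words.detach()], dim=-1)
        states, _ = self.bilstm(cw.unsqueeze(0))
        return self.tag(self.mlp(states.squeeze(0)))


def synchronous_step(char_model, word_model, meta_model, opts, batch, xent):
    """One update in the spirit of Algorithm 1: three losses, three optimizers.
    `batch` is a hypothetical container with chars, spans, words and tags, and
    `word_model` is assumed to return (encodings, logits) like the char model."""
    m_chars, char_logits = char_model(batch.chars, batch.spans)
    opts["char"].zero_grad()
    xent(char_logits, batch.tags).backward()
    opts["char"].step()

    m_words, word_logits = word_model(batch.words)
    opts["word"].zero_grad()
    xent(word_logits, batch.tags).backward()
    opts["word"].step()

    meta_logits = meta_model(m_chars, m_words)
    opts["meta"].zero_grad()
    xent(meta_logits, batch.tags).backward()
    opts["meta"].step()
    return meta_logits
```

After each epoch, development accuracy computed from the meta logits alone decides whether the current parameter sets of all three models are kept.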
These ties correspond mostly to languages where XPOS tagging anyhow obtains accuracies above 99%. Our model tends to produce better results, especially for morphologically rich languages (e.g. Slavic 2In the CONLL 2017 Shared Task, a big treebank is one that contains a development set. In total, there are 55 out of the 64 UD treebanks which are considered big treebanks. 3These are the language specific fine-grained part-ofspeech tags from the Universal Dependency Treebanks. CONLL DQM ours RRIE lang. Winner cs cac 95.16 95.16 96.91 36.2 cs 95.86 95.86 97.28 35.5 fi 97.37 97.37 97.81 16.7 sl 94.74 94.74 95.54 15.2 la ittb 94.79 94.79 95.56 14.8 grc 84.47 84.47 86.51 13.1 bg 96.71 96.71 97.05 10.3 ca 98.58 98.58 98.72 9.9 grc proiel 97.51 97.51 97.72 8.4 pt 83.04 83.04 84.39 8.0 cu 96.20 96.20 96.49 7.6 it 97.93 97.93 98.08 7.2 fa 97.12 97.12 97.32 6.9 ru 96.73 96.73 96.95 6.7 sv 96.40 96.40 96.64 6.7 ko 93.02 93.02 93.45 6.2 sk 85.00 85.00 85.88 5.9 nl 90.61 90.61 91.10 5.4 fiftb 95.31 95.31 95.56 5.3 de 97.29 97.29 97.39 4.7 tr 93.11 93.11 93.43 4.6 hi 97.01 97.01 97.13 4.0 es ancora 98.73 98.73 98.78 3.9 ro 96.98 96.98 97.08 3.6 la proiel 96.93 96.93 97.00 2.3 pl 91.97 91.97 92.12 1.9 ar 87.66 87.66 87.82 1.3 gl 97.50 97.50 97.53 1.2 sv lines 94.84 94.84 94.90 1.2 cs clt 89.98 89.98 90.09 1.1 lv 80.05 80.05 80.20 0.8 zh 88.40 85.07 85.10 0.2 da 100.00 99.96 99.96 0.0 es 99.81 99.69 99.69 0.0 eu 99.98 99.96 99.96 0.0 fr sequoia 99.49 99.06 99.06 0.0 fr 99.50 98.87 98.87 0.0 hr 99.93 99.93 99.93 0.0 hu 99.85 99.82 99.82 0.0 id 100.00 99.99 99.99 0.0 ja 98.59 89.68 89.68 0.0 nl lassy 99.99 99.93 99.93 0.0 no bok. 99.88 99.75 99.75 0.0 no nyn. 99.93 99.85 99.85 0.0 ru syn. 99.58 99.57 99.57 0.0 en lines 95.41 95.41 95.39 -0.4 ur 92.30 92.30 92.21 -1.2 he 83.24 82.45 82.16 -1.7 vi 75.42 73.56 73.12 -1.7 gl treegal 91.65 91.65 91.40 -3.0 en 94.82 94.82 94.66 -3.1 en partut 95.08 95.08 94.81 -5.5 pt br 98.22 98.22 98.11 -6.2 et 95.05 95.05 94.72 -6.7 el 97.76 97.76 97.53 -10.3 macro-avg 93.18 93.11 93.40 Table 2: Results for XPOS tags. The first column shows the language acronym, the column named DQM shows the results of Dozat et al. (2017). Our system outperforms Dozat et al. (2017) on 32 out of 54 treebanks and Dozat et al. outperforms our model on 10 of 54 treebanks, with 13 ties. RRIE is the relative reduction in error. We excluded ties in the calculation of macro-avg since these treebanks do not contain meaningful xpos tags. 2648 System Accuracy Søgaard (2011) 97.50 Huang et al. (2015) 97.55 Choi (2016) 97.64 Andor et al. (2016). 97.44 Dozat et al. (2017) 97.41 ours 97.96 Table 3: Results on WSJ test set. languages), whereas Dozat et al. (2017) showed higher performance in 10 languages in particular English, Greek, Brazilian Portuguese and Estonian. 4.4 Part-of-Speech Tagging on WSJ We also performed experiments on the Penn Treebank with the usual split in train, development and test set. Table 3 shows the results of our model in comparison to the results reported in state-ofthe-art literature. Our model significantly outperforms these systems, with an absolute difference of 0.32% in accuracy, which corresponds to a RRIE of 12%. 4.5 Morphological Tagging Results In addition to the XPOS tagging experiments, we performed experiments with morphological tagging. This annotation was part of the CONLL 2017 Shared Task and the objective was to predict a bundle of morphological features for each token in the text. 
Our model treats the morphological bundle as one tag making the problem equivalent to a sequential tagging problem. Table 4 shows the results. Our models tend to produce significantly better results than the winners of the CoNLL 2017 Shared Task (i.e., 1.8% absolute improvement on average, corresponding to a RRIE of 21.20%). The only cases for which this is not true are again languages that require significant segmentation efforts (i.e., Hebrew, Chinese, Vietnamese and Japanese) or when the task was trivial. Given the fact that Dozat et al. (2017) obtained the best results in part-of-speech tagging by a significant margin in the CoNLL 2017 Shared Task, it would be expected that their model would also perform significantly well in morphological tagging since the tasks are very similar. Since they did not participate in this particular challenge, we decided to reimplement their system to serve CONLL DQM ours RRIE lang. Winner Reimpl. cs cac 90.72 94.66 96.41 27.9 ru syn. 94.55 96.70 97.53 23.1 cs 93.14 96.32 97.14 22.3 la ittb 94.28 96.45 97.12 18.9 sl 90.08 95.26 96.03 16.2 ca 97.23 97.85 98.13 13.0 fiftb 93.43 95.96 96.42 11.4 no bok. 95.56 96.95 97.26 10.2 grc proiel 90.24 91.35 92.22 10.1 fr sequoia 96.10 96.62 97.62 10.1 la proiel 89.22 91.52 92.35 9.8 es ancora 97.72 98.15 98.32 9.7 da 94.83 96.62 96.94 9.5 fi 92.43 94.29 94.83 9.5 sv 95.15 96.52 96.84 9.2 pt 94.62 95.89 96.27 9.2 grc 88.00 90.39 91.13 9.0 no nyn. 95.25 96.79 97.08 9.0 de 83.11 89.78 90.70 9.0 ru 87.27 91.99 92.69 8.7 hi 91.03 90.72 91.78 8.1 cu 88.90 88.93 89.82 8.0 fa 96.34 97.23 97.45 7.9 tr 87.03 89.39 90.21 7.7 en partut 92.69 93.93 94.40 7.7 sk 81.23 87.54 88.48 7.5 eu 89.57 92.48 93.04 7.4 pt br 99.73 99.73 99.75 7.4 es 96.34 96.42 96.68 7.3 ko 99.41 99.44 99.48 7.1 ar 87.15 85.45 88.29 6.7 it 97.37 97.72 97.86 6.1 nl lassy 97.55 98.04 98.15 5.2 nl 90.04 92.06 92.47 5.2 pl 86.53 91.71 92.14 5.2 ur 81.03 83.16 84.02 5.1 bg 96.47 97.71 97.82 4.8 hr 85.82 90.64 91.50 3.8 he 85.06 79.34 79.76 2.0 et 84.62 88.18 88.25 0.6 zh 92.90 87.67 87.74 0.6 vi 86.92 82.23 82.30 0.4 ja 96.84 89.65 89.66 0.1 en lines 99.96 99.99 99.99 0.0 fr 96.12 95.98 95.98 0.0 gl 99.78 99.72 99.72 0.0 id 99.55 99.50 99.50 0.0 ro 96.24 97.26 97.26 0.0 sv lines 99.98 99.98 99.98 0.0 cs cltt 87.88 90.41 90.36 -0.5 lv 84.14 87.00 86.92 -0.6 el 91.37 94.00 93.92 -1.3 hu 72.61 82.67 82.44 -1.3 en 94.49 95.93 95.71 -5.4 macro-avg 91.51 92.89 93.31 Table 4: Results for morphological features. The column CoNLL Winner shows the top system of the ST 17, the DQM Reimpl. shows our reimplementation of Dozat et al. (2017), the column ours shows our system with a sentence-based character model; RRIE gives the relative reduction in error between the Reimpl. DQM and sentencebased character system. Our system outperforms the CoNLL Winner by 48 out of 54 treebanks and the reimplementation of DQM, by 43 of 54 treebanks, with 6 ties. 2649 as a strong baseline. As expected, our reimplementation of Dozat et al. (2017) tends to significantly outperform the winners of the CONLL 2017 Shared Task. However, in general, our models still obtain better results, outperforming Dozat et al. on 43 of the 54 treebanks, with an absolute difference of 0.42% on average. 5 Ablation Study The model proposed in this paper of a MetaBiLSTM with a sentence-based character model differs from prior work in multiple aspects. In this section, we perform ablations to determine the relative impact of each modeling decision. 
For the experimental setup of the ablation experiments, we report accuracy scores for the development sets. We split off 5% of the sentences from each training corpus and we use this part for early stopping. Ablation experiments are either performed on a few selected treebanks to show individual language results or averaged across all treebanks for which tagging is non-trivial. Impact of the Training Schema We first compare jointly training the three model components (Meta-BiLSTM, character model, word model) to training each separately. Table 5 shows that separately optimized models are significantly more accurate on average than jointly optimized models. Separate optimization leads to better accuracy for 34 out of 40 treebanks for the morphological features task and for 30 out of 39 treebanks for xpos tagging. Separate optimization outperformed joint optimization by up to 2.1 percent absolute, while joint never out-performed separate by more than 0.5% absolute. We hypothesize that separately training the models forces each submodel (word and character) to be strong enough to make high accuracy predictions and in some sense serves as a regularizer in the same way that dropout does for individual neurons. Impact of the Sentence-based Character Model We compared the setup with sentence-based character context (Figure 1a) to word-based character context (Figure 1b). We selected for these experiments a number of morphological rich languages. The results are shown in Table 6. The accuracy of the word-based character model joint with a word-based model were significantly lower than a sentence-based character model. We conclude also from these results and comparing with results Optimization Avg. F1 Score Avg. F1 Score morphology xpos separate 94.57 94.85 jointly 94.15 94.48 Table 5: Comparison of optimization methods: Separate optimization of the word, character and meta model is more accurate on average than full back-propagation using a single loss function.The results are statistically significant with two-tailed paired t-test for xpos with p<0.001 and for morphology with p <0.0001. dev. set word char model sentence char model el 89.05 93.41 la ittb 93.22 95.69 ru 88.94 92.31 tr 87.78 90.77 Table 6: F1 score for selected languages on sentence vs. word level character models for the prediction of morphology using late integration. dev. set num. mean mean mean stdev stdev stdev lang. exp. char word joint char word joint el 10 96.43 95.36 97.01 0.13 0.11 0.09 grc 10 88.28 73.52 88.85 0.21 0.29 0.22 la ittb 10 91.45 87.98 91.94 0.14 0.30 0.05 ru 10 95.98 93.50 96.61 0.06 0.17 0.07 tr 10 93.77 90.48 94.67 0.11 0.33 0.14 Table 7: F1 score for the character, word and joint models. The standard deviation of 10 random restarts of each model is show in the last three columns. The differences in means are all statistically significant at p < 0.001 (paired t-test). of the reimplementation of DQM that early integration of the word-based character model performs much better as late integration via MetaBiLSTM for a word-based character model. Impact of the Meta-BiLSTM Model Combination The proposed model trains word and character models independently while training a joint model on top. Here we investigate the part-ofspeech tagging performance of the joint model compared with the word and character models on their own (using hyperparameters from in 4.1). Table 5 shows, for selected languages, the results averaged over 10 runs in order to measure standard deviation. 
The examples show that the combined model has significantly higher accuracy compared with either the character and word models individually. 2650 dev. set. Flast F1st Flast F1st lang. B1st Blast Blast B1st DQM |xpos| el 96.6 96.6 96.2 96.1 95.9 16 grc 87.3 87.1 87.1 86.8 86.7 3130 la ittb 91.1 91.5 91.9 91.3 91.0 811 ru 95.6 95.4 95.6 95.3 95.8 49 tr 93.5 93.3 93.2 92.5 93.9 37 Table 8: F1 score of char models and their performance on the dev. set for selected languages with different gather strategies, concatenate to gi (Equation 1). DQM shows results for our reimplementation of Dozat et al. (2017) (cf. §3.2), where we feed in only the characters. The final column shows the number of xpos tags in the training set. Concatenation Strategies for the ContextSensitive Character Encodings The proposed model bases a token encoding on both the forward and the backward character representations of both the first and last character in the token (see Equation 1). Table 8 reports, for a few morphological rich languages, the part-of-speech tagging performance of different strategies to gather the characters when creating initial word encodings. The strategies were defined in §3.1. The Table also contains a column with results for our reimplementation of Dozat et al. (2017). We removed, for all systems, the word model in order to assess each strategy in isolation. The performance is quite different per language. E.g., for Latin, the outputs of the forward and backward LSTMs of the last character scored highest. Sensitivity to Hyperparameter Search We picked Vietnamese for a more in-depth analysis since it did not perform well and investigated the influence of the sizes of LSTMs for the word and character model on the accuracy of development set. With larger network sizes, the capacity of the network increases, however, on the other hand it is prune to overfitting. We fixed all the hyperparameters except those for the network size of the character model and the word model, and ran a grid search over dimension sizes from 200 to 500 in steps of 50. The surface plot in 3 shows that the grid peaks with more moderate settings around 350 LSTM cells which might lead to a higher accuracy. For all of the network sizes in the grid search, we still observed during training that the accuracy reach a high value and degrades with more iterations for the character and word model. This suggests that future variants of this model might benefit from higher regularization. Figure 3: 3D surface plot for development set accuracy for XPOS (y-axis) depending on LSTM size of the character and word model for the Vietnamese treebank. The snapshot is take after 195 training epochs and we average the values of neighboring epochs. Discussion Generally, the fact that different techniques for creating word encodings from character encodings and different network sizes can lead to different accuracies per language suggests that it should be possible to increase the accuracy of our model on a per language basis via a grid search over all possibilities. In fact, there are many variations on the models we presented in this work (e.g., how the character and word models are combined with the meta-BiLSTM). Since we are using separate losses, we could also change our training schema. For example, one could use methods like stack-propagation (Zhang and Weiss, 2016) where we burn-in the character and word models and then train the meta-BiLSTM backpropagating throughout the entire network. 
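Before moving on, the gather strategies compared in Table 8 can be made explicit with a short sketch (assumed shapes and toy character offsets, not the authors' code): a bidirectional LSTM is run over the characters of the whole sentence, and a token's encoding is built by concatenating the forward and/or backward outputs at its first and last character.

```python
import torch
import torch.nn as nn

# Minimal sketch (toy sizes) of the gather strategies in Table 8.
hidden = 16
char_bilstm = nn.LSTM(input_size=8, hidden_size=hidden,
                      bidirectional=True, batch_first=True)

chars = torch.randn(1, 12, 8)          # 12 characters of a toy sentence
out, _ = char_bilstm(chars)            # (1, 12, 2 * hidden)
fwd, bwd = out[..., :hidden], out[..., hidden:]

# Character offsets of each token's first and last character (toy values).
tokens = [(0, 3), (5, 8), (10, 11)]

def gather(first, last, strategy):
    pieces = {
        "F1st":  fwd[0, first], "Flast": fwd[0, last],
        "B1st":  bwd[0, first], "Blast": bwd[0, last],
    }
    return torch.cat([pieces[name] for name in strategy], dim=-1)

# The proposed model uses all four views (cf. Equation 1); the Table 8
# variants concatenate only two of them.
full = [gather(f, l, ("F1st", "Flast", "B1st", "Blast")) for f, l in tokens]
mixed = [gather(f, l, ("Flast", "B1st")) for f, l in tokens]
print(full[0].shape, mixed[0].shape)   # torch.Size([64]) torch.Size([32])
```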
6 Conclusions We presented an approach to morphosyntactic tagging that combines context-sensitive initial character and word encodings with a meta-BiLSTM layer to obtain state-of-the art accuracies for a wide variety of languages. Acknowledgments We would like to thank the anonymous reviewers as well as Terry Koo, Slav Petrov, Vera Axelrod, Kellie Websterk, Jan Botha, Kuzman Ganchev, Zhuoran Yu, Yuan Zhang, Eva Schlinger, Ji Ma, and John Alex for their helpful suggestions, comments and discussions. 2651 References Chris Alberti, Daniel Andor, Ivan Bogatyy, Michael Collins, Dan Gillick, Lingpeng Kong, Terry Koo, Ji Ma, Mark Omernick, Slav Petrov, Chayut Thanapirom, Zora Tung, and David Weiss. 2017. Syntaxnet models for the conll 2017 shared task http://arxiv.org/abs/1703.04929. Chris Alberti, David Weiss, Greg Coppola, and Slav Petrov. 2015. Improved transition-based parsing and tagging with neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 1354–1359. Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 2442–2452. Anders Bj¨orkelund, Agnieszka Falenska, Xiang Yu, and Jonas Kuhn. 2017. Ims at the conll 2017 ud shared task: Crfs and perceptrons meet neural networks. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Association for Computational Linguistics, Vancouver, Canada, pages 40–51. Jan A. Botha, Emily Pitler, Ji Ma, Anton Bakalov, Alex Salcianu, David Weiss, Ryan McDonald, and Slav Petrov. 2017. Natural language processing with small feed-forward networks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 2879–2885. Kris Cao and Marek Rei. 2016. A joint model for word embedding and word morphology. In Proceedings of the 1st Workshop on Representation Learning for NLP. pages 18–26. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). pages 740–750. Jinho D. Choi. 2016. Dynamic Feature Induction: The Last Gist to the State-of-the-Art. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics. San Diego, CA, NAACL’16, pages 271– 281. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning. ACM, pages 160–167. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493–2537. Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford’s graph-based neural dependency parser at the conll 2017 shared task. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, Vancouver, Canada, August 3-4, 2017. pages 20–30. Jes´us Gim´enez and Lluis Marquez. 2004. 
Fast and accurate part-of-speech tagging: The svm approach revisited. Recent Advances in Natural Language Processing III pages 153–162. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks 18(5):602–610. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging http://arxiv.org/abs/1508.01991. Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 1520–1530. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations http://arxiv.org/abs/1802.05365. Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, pages 412–418. Cicero D Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In Proceedings of the 31st International Conference on Machine Learning (ICML-14). pages 1818–1826. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673–2681. Anders Søgaard. 2011. Semisupervised condensed nearest neighbor for part-of-speech tagging. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2. Association for Computational Linguistics, Stroudsburg, PA, USA, HLT ’11, pages 48–52. 2652 Milan Straka and Jana Strakov´a. 2017. Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Association for Computational Linguistics, Vancouver, Canada, pages 88–99. Daniel Zeman, Martin Popel, Milan Straka, Jan Hajic, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinkova, Jan Hajic jr., Jaroslava Hlavacova, V´aclava Kettnerov´a, Zdenka Uresova, Jenna Kanerva, Stina Ojala, Anna Missil¨a, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria dePaiva, Kira Droganova, H´ector Mart´ınez Alonso, C¸ a˘gr C¸ ¨oltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadov´a, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonca, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. Conll 2017 shared task: Multilingual parsing from raw text to universal dependencies. 
In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Association for Computational Linguistics, Vancouver, Canada, pages 1–19. Yuan Zhang and David Weiss. 2016. Stackpropagation: Improved representation learning for syntax. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1557– 1566. Hao Zhou, Yue Zhang, Shujian Huang, and Jiajun Chen. 2015. A neural probabilistic structuredprediction model for transition-based dependency parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Association for Computational Linguistics, Beijing, China, pages 1213– 1222.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2653–2663 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2653 Neural Factor Graph Models for Cross-lingual Morphological Tagging Chaitanya Malaviya and Matthew R. Gormley and Graham Neubig Language Technologies Institute, Machine Learning Department Carnegie Mellon University {cmalaviy,mgormley,gneubig}@cs.cmu.edu Abstract Morphological analysis involves predicting the syntactic traits of a word (e.g. {POS: Noun, Case: Acc, Gender: Fem}). Previous work in morphological tagging improves performance for low-resource languages (LRLs) through cross-lingual training with a high-resource language (HRL) from the same family, but is limited by the strict—often false—assumption that tag sets exactly overlap between the HRL and LRL. In this paper we propose a method for cross-lingual morphological tagging that aims to improve information sharing between languages by relaxing this assumption. The proposed model uses factorial conditional random fields with neural network potentials, making it possible to (1) utilize the expressive power of neural network representations to smooth over superficial differences in the surface forms, (2) model pairwise and transitive relationships between tags, and (3) accurately generate tag sets that are unseen or rare in the training data. Experiments on four languages from the Universal Dependencies Treebank (Nivre et al., 2017) demonstrate superior tagging accuracies over existing cross-lingual approaches.1 1 Introduction Morphological analysis (Hajiˇc and Hladk´a (1998), Oflazer and Kuru¨oz (1994), inter alia) is the task of predicting fine-grained annotations about the syntactic properties of tokens in a language such 1Our code and data is publicly available at www.github.com/chaitanyamalaviya/ NeuralFactorGraph. Figure 1: Morphological tags for a UD sentence in Portuguese and a translation in Spanish as part-of-speech, case, or tense. For instance, in Figure 1, the given Portuguese sentence is labeled with the respective morphological tags such as Gender and its label value Masculine. The accuracy of morphological analyzers is paramount, because their results are often a first step in the NLP pipeline for tasks such as translation (Vylomova et al., 2017; Tsarfaty et al., 2010) and parsing (Tsarfaty et al., 2013), and errors in the upstream analysis may cascade to the downstream tasks. One difficulty, however, in creating these taggers is that only a limited amount of annotated data is available for a majority of the world’s languages to learn these morphological taggers. Fortunately, recent efforts in morphological annotation follow a standard annotation schema for these morphological tags across languages, and now the Universal Dependencies Treebank (Nivre et al., 2017) has tags according to this schema in 60 languages. Cotterell and Heigold (2017) have recently shown that combining this shared schema with cross-lingual training on a related high-resource language (HRL) gives improved performance 2654 Figure 2: FCRF-LSTM Model for morphological tagging on tagging accuracy for low-resource languages (LRLs). The output space of this model consists of tag sets such as {POS: Adj, Gender: Masc, Number: Sing}, which are predicted for a token at each time step. 
However, this model relies heavily on the fact that the entire space of tag sets for the LRL must match those of the HRL, which is often not the case, either due to linguistic divergence or small differences in the annotation schemes between the two languages.2 For instance, in Figure 1 “refrescante” is assigned a gender in the Portuguese UD treebank, but not in the Spanish UD treebank. In this paper, we propose a method that instead of predicting full tag sets, makes predictions over single tags separately but ties together each decision by modeling variable dependencies between tags over time steps (e.g. capturing the fact that nouns frequently occur after determiners) and pairwise dependencies between all tags at a single time step (e.g. capturing the fact that infinitive verb forms don’t have tense). The specific model is shown in Figure 2, consisting of a factorial conditional random field (FCRF; Sutton et al. (2007)) with neural network potentials calculated by long short-term memory (LSTM; (Hochreiter and Schmidhuber, 1997)) at every variable node (§3). Learning and inference in the model is made 2In particular, the latter is common because many UD resources were created by full or semi-automatic conversion from treebanks with less comprehensive annotation schemes than UD. Our model can generate label values for these tags too, which could possibly aid the enhancement of UD annotations, although we do not examine this directly in our work. tractable through belief propagation over the possible tag combinations, allowing the model to consider an exponential label space in polynomial time (§3.5). This model has several advantages: • The model is able to generate tag sets unseen in training data, and share information between similar tag sets, alleviating the main disadvantage of previous work cited above. • Our model is empirically strong, as validated in our main experimental results: it consistently outperforms previous work in cross-lingual low-resource scenarios in experiments. • Our model is more interpretable, as we can probe the model parameters to understand which variable dependencies are more likely to occur in a language, as we demonstrate in our analysis. In the following sections, we describe the model and these results in more detail. 2 Problem Formulation and Baselines 2.1 Problem Formulation Formally, we define the problem of morphological analysis as the task of mapping a length-T string of tokens x = x1, . . . , xT into the target morphological tag sets for each token y = y1, . . . , yT . For the tth token, the target label yt = yt,1, . . . , yt,m defines a set of tags (e.g. {Gender: Masc, Number: Sing, POS: Verb}). An annotation schema defines a set S of M possible tag types and with the mth type (e.g. Gender) defining its set of possible labels Ym (e.g. {Masc, Fem, Neu}) such that yt,m ∈Ym. We must note that not all tags or attributes need to be specified for a token; usually, a subset of S is specified for a token and the remaining tags can be treated as mapping to a NULL ∈Ym value. Let Y = {(y1, . . . , yM) : y1 ∈Y1, . . . , yM ∈YM} denote the set of all possible tag sets. 2.2 Baseline: Tag Set Prediction Data-driven models for morphological analysis are constructed using training data D = {(x(i), y(i))}N i=1 consisting of N training examples. The baseline model (Cotterell and Heigold, 2017) we compare with regards the output space of the model as a subset ˜Y ⊂Y where ˜Y is the 2655 set of all tag sets seen in this training data. 
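To make the notation of §2.1 concrete, the sketch below (plain Python, with a two-sentence toy training set) builds the tag types S, the per-type label sets Y_m including the NULL value, and the set of observed tag sets that the baseline described next restricts its output space to.

```python
# Minimal sketch (illustrative data) of the notation in Section 2.1.
NULL = "NULL"

train = [
    [{"POS": "Det", "Gender": "Masc", "Number": "Sing"},
     {"POS": "Noun", "Gender": "Masc", "Number": "Sing"}],
    [{"POS": "Verb", "Tense": "Past"}],
]

# Tag types S and their label sets Y_m (plus NULL for unspecified tags).
S = sorted({tag for sent in train for tok in sent for tag in tok})
Y = {m: {NULL} | {tok[m] for sent in train for tok in sent if m in tok} for m in S}

def complete(tok):
    """Fill in NULL for every tag type the annotation leaves out."""
    return tuple((m, tok.get(m, NULL)) for m in S)

# The baseline's restricted output space: only tag sets seen in training.
observed_tag_sets = {complete(tok) for sent in train for tok in sent}
print(len(S), {m: len(Y[m]) for m in S}, len(observed_tag_sets))
# -> 4 tag types, their label-set sizes, and 3 observed tag sets
```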
Specifically, they solve the task as a multi-class classification problem where the classes are individual tag sets. In low-resource scenarios, this indicates that | ˜Y| << |Y| and even for those tag sets existing in ˜Y we may have seen very few training examples. The conditional probability of a sequence of tag sets given the sentence is formulated as a 0th order CRF. p(y|x) = T Y t=1 p(yt|x) (1) Instead, we would like to be able to generate any combination of tags from the set Y, and share statistical strength among similar tag sets. 2.3 A Relaxation: Tag-wise Prediction As an alternative, we could consider a model that performs prediction for each tag’s label yt,m independently. p(y|x) = T Y t=1 M Y m=1 p(yt,m|x) (2) This formulation has an advantage: the tagpredictions within a single time step are now independent, it is now easy to generate any combination of tags from Y. On the other hand, now it is difficult to model the interdependencies between tags in the same tag set yi, a major disadvantage over the previous model. In the next section, we describe our proposed neural factor graph model, which can model not only dependencies within tags for a single token, but also dependencies across time steps while still maintaining the flexibility to generate any combination of tags from Y. 3 Neural Factor Graph Model Due to the correlations between the syntactic properties that are represented by morphological tags, we can imagine that capturing the relationships between these tags through pairwise dependencies can inform the predictions of our model. These dependencies exist both among tags for the same token (intra-token pairwise dependencies), and across tokens in the sentence (inter-token transition dependencies). For instance, knowing that a token’s POS tag is a Noun, would strongly suggest that this token would have a NULL label for the tag Tense, with very few exceptions (Nordlinger and Sadler, 2004). In a language where nouns follow adjectives, a tag set prediction {POS: Adj, Gender: Fem} might inform the model that the next token is likely to be a noun and have the same gender. The baseline model can not explicitly model such interactions given their factorization in equation 1. To incorporate the dependencies discussed above, we define a factorial CRF (Sutton et al., 2007), with pairwise links between cotemporal variables and transition links between the same types of tags. This model defines a distribution over the tag-set sequence y given the input sentence x as, p(y|x) = 1 Z(x) T Y t=1 Y α∈C ψα(yα, x, t) (3) where C is the set of factors in the factor graph (as shown in Figure 2), α is one such factor, and yα is the assignment to the subset of variables neighboring factor α. We define three types of potential functions: neural ψNN, pairwise ψP , and transition ψT , described in detail below. Figure 3: Factors in the Neural Factor Graph model (red: Pairwise, grey: Transition, green: Neural Network) 3.1 Neural Factors The flexibility of our formulation allows us to include any form of custom-designed potentials in our model. Those for the neural factors have a fairly standard log-linear form, ψNN,i(yt,m) = exp (X k λnn,kfnn,k(x, t) ) (4) except that the features fnn,k are themselves given by a neural network. There is one such factor per 2656 variable. We obtain our neural factors using a biLSTM over the input sequence x, where the input word embedding for each token is obtained from a character-level biLSTM embedder. 
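Before detailing the remaining factor potentials, the contrast between the two factorizations in Equations 1 and 2 can be summarized in a short sketch (toy dimensions; only the RU/BG tag-set count is taken from Table 2): the baseline scores whole tag sets with one softmax over the observed tag-set vocabulary, whereas tag-wise prediction uses an independent softmax per tag type and can therefore emit any label combination.

```python
import torch
import torch.nn as nn

# Minimal sketch (toy sizes, not the released model) of Eqs. 1 vs. 2.
enc_dim = 32                 # per-token encoder output (e.g. a biLSTM state)
tag_sets = 798               # |Y~| for RU/BG in Table 2
label_sizes = {"POS": 17, "Gender": 4, "Number": 4, "Tense": 5}  # |Y_m| (toy)

token_vec = torch.randn(1, enc_dim)

# Baseline (Eq. 1): one classifier over complete tag sets.
tagset_head = nn.Linear(enc_dim, tag_sets)
p_tagset = torch.softmax(tagset_head(token_vec), dim=-1)

# Relaxation (Eq. 2): independent per-tag classifiers.
tag_heads = nn.ModuleDict({m: nn.Linear(enc_dim, n) for m, n in label_sizes.items()})
p_tags = {m: torch.softmax(head(token_vec), dim=-1) for m, head in tag_heads.items()}

print(p_tagset.shape, {m: p.shape for m, p in p_tags.items()})
```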
This component of our model is similar to the model proposed by Cotterell and Heigold (2017). Given an input token xt = c1...cn, we compute an input embedding vt as, vt = [cLSTM(c1...cn); cLSTM(cn...c1)] (5) Here, cLSTM is a character-level LSTM function that returns the last hidden state. This input embedding vt is then used in the biLSTM tagger to compute an output representation et. Finally, the scores fnn(x, t) are obtained as, fnn(x, t) = Wlet + bl (6) We use a language-specific linear layer with weights Wl and bias bl. 3.2 Pairwise Factors As discussed previously, the pairwise factors are crucial for modeling correlations between tags. The pairwise factor potential for a tag i and tag j at timestep t is given in equation 7. Here, the dimension of fp is (|Yi|, |Yj|). These scores are used to define the neural factors as, ψPi,j(yt,i, yt,j) = exp (X k λp,kfp,k(yt,i, yt,j) ) (7) 3.3 Transition Factors Previous work has experimented with the use of a linear chain CRF with factors from a neural network (Huang et al., 2015) for sequence tagging tasks. We hypothesize that modeling transition factors in a similar manner can allow the model to utilize information about neighboring tags and capture word order features of the language. The transition factor for tag i and timestep t is given below for variables yt,i and yt+1,i. The dimension of fT is (|Yi|, |Yi|). ψTi,t(yt,i, yt+1,i) = exp (X k λT,kfT,k(yt,i, yt+1,i) ) (8) In our experiments, fp,k and fT,k are simple indicator features for the values of tag variables with no dependence on x. 3.4 Language-Specific Weights As an enhancement to the information encoded in the transition and pairwise factors, we experiment with training general and language-specific parameters for the transition and the pairwise weights. We define the weight matrix λgen to learn the general trends that hold across both languages, and the weights λlang to learn the exceptions to these trends. In our model, we sum both these parameter matrices before calculating the transition and pairwise factors. For instance, the transition weights λT are calculated as λT = λT, gen+λT, lang. 3.5 Loopy Belief Propagation Since the graph from Figure 2 is a loopy graph, performing exact inference can be expensive. Hence, we use loopy belief propagation (Murphy et al., 1999; Ihler et al., 2005) for computation of approximate variable and factor marginals. Loopy BP is an iterative message passing algorithm that sends messages between variables and factors in a factor graph. The message updates from variable vi, with neighboring factors N(i), to factor α is µi→α(vi) = Y α∈N(i)\α µα→i(vi) (9) The message from factor α to variable vi is µα→i(vi) = X vα:vα[i]=vi ψα(vα) Y j∈N(α)\i µj→α(vα[i]) (10) where vα denote an assignment to the subset of variables adjacent to factor α, and vα[i] is the assignment for variable vi. Message updates are performed asynchronously in our model. Our message passing schedule was similar to that of foward-backward: the forward pass sends all messages from the first time step in the direction of the last. Messages to/from pairwise factors are included in this forward pass. The backward pass sends messages in the direction from the last time step back to the first. This process is repeated until convergence. We say that BP has converged when the maximum residual error (Sutton and McCallum, 2007) over all messages is below some threshold. 
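The message updates in Equations 9 and 10 can be written out explicitly for a single pairwise factor, as in the sketch below (unnormalized messages over toy label sets; in the full model these updates run asynchronously over all neural, pairwise and transition factors until the maximum residual falls below the threshold).

```python
import torch

# Minimal sketch (one pairwise factor, toy label sets) of Eqs. 9-10.
# mu_in_* stands for the product of messages a variable has already
# received from its *other* factors (neural, transition), uniform here.
n_gender, n_number = 4, 3
psi_pair = torch.rand(n_gender, n_number)   # exponentiated pairwise scores (Eq. 7)

mu_in_gender = torch.ones(n_gender)
mu_in_number = torch.ones(n_number)

# Eq. 9: the variable -> factor message excludes the receiving factor,
# so here it is exactly the product of the other incoming messages.
mu_gender_to_pair = mu_in_gender
mu_number_to_pair = mu_in_number

# Eq. 10: the factor -> variable message marginalizes the factor potential
# over the other variable, weighted by that variable's message.
mu_pair_to_gender = psi_pair @ mu_number_to_pair        # shape (n_gender,)
mu_pair_to_number = psi_pair.t() @ mu_gender_to_pair    # shape (n_number,)
print(mu_pair_to_gender, mu_pair_to_number)
```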
Upon convergence, we obtain the belief values of variables and factors as, bi(vi) = 1 κi Y α∈N(i) µα→i(vi) (11) bα(vα) = 1 κα ψα(vα) Y i∈N(α) µi→α(vα[i]) (12) 2657 where κi and κα are normalization constants ensuring that the beliefs for a variable i and factor α sum-to-one. In this way, we can use the beliefs as approximate marginal probabilities. 3.6 Learning and Decoding We perform end-to-end training of the neural factor graph by following the (approximate) gradient of the log-likelihood PN i=1 log p(y(i)|x(i)). The true gradient requires access to the marginal probabilities for each factor, e.g. p(yα|x) where yα denotes the subset of variables in factor α. For example, if α is a transition factor for tag m at timestep t, then yα would be yt,m and yt+1,m. Following (Sutton et al., 2007), we replace these marginals with the beliefs bα(yα) from loopy belief propagation.3 Consider the log-likelihood of a single example ℓ(i) = log p(y(i)|x(i)). The partial derivative with respect to parameter λg,k for each type of factor g ∈{NN, T, P} is the difference of the observed features with the expected features under the model’s (approximate) distribution as represented by the beliefs: ∂ℓ(i) ∂λg,k = X α∈Cg fg,k(y(i) α ) − X yα bα(yα)fg,k(yα) ! where Cg denotes all the factors of type g, and we have omitted any dependence on x(i) and t for brevity—t is accessible through the factor index α. For the neural network factors, the features are given by a biLSTM. We backpropagate through to the biLSTM parameters using the partial derivative below, ∂ℓ(i) ∂fNN,k(y(i) t,m, t) = λNN,k − X yt,m bt,m(yt,m)λNN,k where bt,m(·) is the variable belief corresponding to variable yt,m. To predict a sequence of tag sets ˆy at test time, we use minimum Bayes risk (MBR) decoding (Bickel and Doksum, 1977; Goodman, 1996) for Hamming loss over tags. For a variable yt,m representing tag m at timestep t, we take ˆyt,m = arg max l∈Ym bt,m(l). (13) where l ranges over the possible labels for tag m. Language Pair HRL Train Dev Test DA/SV 4,383 504 1219 RU/BG 3,850 1115 1116 FI/HU 12,217 441 449 ES/PT 14,187 560 477 Table 1: Dataset sizes. tgt size = 100 or 1,000 LRL sentences are added to HRL Train Language Pair Unique Tags Tag Sets DA/SV 23 224 RU/BG 19 798 FI/HU 27 2195 ES/PT 19 451 Table 2: Tag Set Sizes with tgt size=100 4 Experimental Setup 4.1 Dataset We used the Universal Dependencies Treebank UD v2.1 (Nivre et al., 2017) for our experiments. We picked four low-resource/high-resource language pairs, each from a different family: Danish/Swedish (DA/SV), Russian/Bulgarian (RU/BG), Finnish/Hungarian (FI/HU), Spanish/Portuguese (ES/PT). Picking languages from different families would ensure that we obtain results that are on average consistent across languages. The sizes of the training and evaluation sets are specified in Table 1. In order to simulate lowresource settings, we follow the experimental procedure from Cotterell and Heigold (2017). We restrict the number of sentences of the target language (tgt size) in the training set to 100 or 1000 sentences. We also augment the tag sets in our training data by adding a NULL label for all tags that are not seen for a token. It is expected that our model will learn which tags are unlikely to occur given the variable dependencies in the factor graph. The dev set and test set are only in the target language. From Table 2, we can see there is also considerable variance in the number of unique tags and tag sets found in each of these language pairs. 
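Returning briefly to decoding (§3.6): once belief propagation has converged, the MBR rule in Equation 13 reduces to a per-tag argmax over beliefs, as in the sketch below (toy label sets and random beliefs for a single token).

```python
import torch

# Minimal sketch (toy beliefs) of the MBR decoding rule in Eq. 13.
labels = {
    "POS":    ["Noun", "Verb", "Adj"],
    "Gender": ["Masc", "Fem", "NULL"],
    "Number": ["Sing", "Plur", "NULL"],
}

# b_{t,m}(l): one belief vector per tag type for a single token t.
beliefs = {m: torch.softmax(torch.randn(len(ls)), dim=0) for m, ls in labels.items()}

# Each tag variable is decoded independently by its highest-belief label.
decoded = {m: labels[m][int(beliefs[m].argmax())] for m in labels}
print(decoded)   # e.g. {'POS': 'Noun', 'Gender': 'Fem', 'Number': 'Sing'}
```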
3Using this approximate gradient is akin to the surrogate likelihood training of (Wainwright, 2006). 2658 Language Model tgt size = 100 tgt size=1000 Accuracy F1-Micro F1-Macro Accuracy F1-Macro F1-Micro SV Baseline 15.11 8.36 10.37 68.64 76.36 76.50 Ours 29.47 54.09 54.36 71.32 84.42 84.46 BG Baseline 29.05 14.32 29.62 59.20 67.22 67.12 Ours 27.81 40.97 42.43 39.25 60.23 60.84 HU Baseline 21.97 13.30 16.67 50.75 58.68 62.79 Ours 33.32 54.88 54.69 45.90 74.05 73.38 PT Baseline 18.91 7.10 10.33 74.22 81.62 81.87 Ours 58.82 73.67 74.07 76.26 87.13 87.22 Table 3: Token-wise accuracy and F1 scores on mono-lingual experiments 4.2 Baseline Tagger As the baseline tagger model, we re-implement the SPECIFIC model from Cotterell and Heigold (2017) that uses a language-specific softmax layer. Their model architecture uses a character biLSTM embedder to obtain a vector representation for each token, which is used as input in a word-level biLSTM. The output space of their model is all the tag sets seen in the training data. This work achieves strong performance on several languages from UD on the task of morphological tagging and is a strong baseline. 4.3 Training Regimen We followed the parameter settings from Cotterell and Heigold (2017) for the baseline tagger and the neural component of the FCRF-LSTM model. For both models, we set the input embedding and linear layer dimension to 128. We used 2 hidden layers for the LSTM where the hidden layer dimension was set to 256 and a dropout (Srivastava et al., 2014) of 0.2 was enforced during training. All our models were implemented in the PyTorch toolkit (Paszke et al., 2017). The parameters of the character biLSTM and the word biLSTM were initialized randomly. We trained the baseline models and the neural factor graph model with SGD and Adam respectively for 10 epochs each, in batches of 64 sentences. These optimizers gave the best performances for the respective models. For the FCRF, we initialized transition and pairwise parameters with zero weights, which was important to ensure stable training. We considered BP to have reached convergence when the maximum residual error was below 0.05 or if the maximum number of iterations was reached (set to 40 in our experiments). We found that in crosslingual experiments, when tgt size = 100, the relatively large amount of data in the HRL was causing our model to overfit on the HRL and not generalize well to the LRL. As a solution to this, we upsampled the LRL data by a factor of 10 when tgt size = 100 for both the baseline and the proposed model. Evaluation: Previous work on morphological analysis (Cotterell and Heigold, 2017; Buys and Botha, 2016) has reported scores on average token-level accuracy and F1 measure. The average token level accuracy counts a tag set prediction as correct only it is an exact match with the gold tag set. On the other hand, F1 measure is measured on a tag-by-tag basis, which allows it to give partial credit to partially correct tag sets. Based on the characteristics of each evaluation measure, Accuracy will favor tag-set prediction models (like the baseline), and F1 measure will favor tag-wise prediction models (like our proposed method). Given the nature of the task, it seems reasonable to prefer getting some of the tags correct (e.g. Noun+Masc+Sing becomes Noun+Fem+Sing), instead of missing all of them (e.g. Noun+Masc+Sing becomes Adj+Fem+Plur). F-score gives partial credit for getting some of the tags correct, while tagset-level accuracy will treat these two mistakes equally. 
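The difference between the two measures can be seen in a small worked sketch (illustrative gold and predicted tag sets, not drawn from the data): with one tag wrong in one of two tokens, exact-match accuracy drops to 0.5 while tag-by-tag F1 still credits the correctly predicted tags.

```python
# Minimal sketch of exact-match token accuracy vs. tag-by-tag micro-F1.
gold = [{"POS": "Noun", "Gender": "Masc", "Number": "Sing"},
        {"POS": "Verb", "Tense": "Past"}]
pred = [{"POS": "Noun", "Gender": "Fem", "Number": "Sing"},   # one tag wrong
        {"POS": "Verb", "Tense": "Past"}]

accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)

tp = sum(len(set(g.items()) & set(p.items())) for g, p in zip(gold, pred))
fp = sum(len(set(p.items()) - set(g.items())) for g, p in zip(gold, pred))
fn = sum(len(set(g.items()) - set(p.items())) for g, p in zip(gold, pred))
precision, recall = tp / (tp + fp), tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(accuracy, f1)   # 0.5 vs. 0.8: F1 rewards the partially correct set
```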
Based on this, we believe that F-score is intuitively a better metric. However, we report both scores for completeness. 5 Results and Analysis 5.1 Main Results First, we report the results in the case of monolingual training in Table 3. The first row for each language pair reports the results for our reimple2659 Language Model tgt size = 100 tgt size=1000 Accuracy F1-Micro F1-Macro Accuracy F1-Macro F1-Micro DA/SV Baseline 66.06 73.95 74.37 82.26 87.88 87.91 Ours 63.22 78.75 78.72 77.43 87.56 87.52 RU/BG Baseline 52.76 58.41 58.23 71.90 77.89 77.97 Ours 46.89 64.46 64.75 67.56 82.06 82.11 FI/HU Baseline 51.74 68.15 66.82 61.80 75.96 76.16 Ours 45.41 68.63 68.07 63.93 85.06 84.12 ES/PT Baseline 79.40 86.03 86.14 85.85 91.91 91.93 Ours 77.75 88.42 88.44 85.02 92.35 92.37 Table 4: Token-wise accuracy and F1 scores on cross-lingual experiments mentation of Cotterell and Heigold (2017), and the second for our full model. From these results, we can see that we obtain improvements on the Fmeasure over the baseline method in most experimental settings except BG with tgt size = 1000. In a few more cases, the baseline model sometimes obtains higher accuracy scores for the reason described in 4.3. In our cross-lingual experiments shown in Table 4, we also note F-measure improvements over the baseline model with the exception of DA/SV when tgt size = 1000. We observe that the improvements are on average stronger when tgt size = 100. This suggests that our model performs well with very little data due to its flexibility to generate any tag set, including those not observed in the training data. The strongest improvements are observed for FI/HU. This is likely because the number of unique tags is the highest in this language pair and our method scales well with the number of tags due to its ability to make use of correlations between the tags in different tag sets. Language Transition Pairwise F1-Macro HU × × 69.87 ✓ × 73.21 × ✓ 73.68 ✓ ✓ 74.05 FI/HU × × 79.57 ✓ × 84.41 × ✓ 84.73 ✓ ✓ 85.06 Table 5: Ablation Experiments (tgt size=1000) To examine the utility of our transition and pairwise factors, we also report results on ablation experiments by removing transition and pairwise factors completely from the model in Table 5. Ablation experiments for each factor showed decreases in scores relative to the model where both factors are present, but the decrease attributed to the pairwise factors is larger, in both the monolingual and cross-lingual cases. Removing both factors from our proposed model results in a further decrease in the scores. These differences were found to be more significant in the case when tgt size = 100. Upon looking at the tag set predictions made by our model, we found instances where our model utilizes variable dependencies to predict correct labels. For instance, for a specific phrase in Portuguese (um estado), the baseline model predicted {POS: Det, Gender: Masc, Number: Sing}t, {POS: Noun, Gender: Fem (X), Number: Sing}t+1, whereas our model was able to get the gender correct because of the transition factors in our model. 5.2 What is the Model Learning? Figure 4: Generic transition weights for POS from the RU/BG model One of the major advantages of our model is 2660 Figure 5: Generic pairwise weights between Verbform and Tense from the RU/BG model the ability to interpret what the model has learned by looking at the trained parameter weights. 
We investigated both language-generic and languagespecific patterns learned by our parameters: • Language-Generic: We found evidence for several syntactic properties learned by the model parameters. For instance, in Figure 4, we visualize the generic (λT, gen) transition weights of the POS tags in RU/BG. Several universal trends such as determiners and adjectives followed by nouns can be seen. In Figure 5, we also observed that infinitive has a strong correlation for NULL tense, which follows the universal phenomena that infinitives don’t have tense. Figure 6: Language-specific pairwise weights for RU between Gender and Tense from the RU/BG model • Language Specific Trends: We visualized the learnt language-specific weights and looked for evidence of patterns corresponding to linguistic phenomenas observed in a language of interest. For instance, in Russian, verbs are gender-specific in past tense but not in other tenses. To analyze this, we plotted pairwise weights for Gender/Tense in Figure 6 and verified strong correlations between the past tense and all gender labels. 6 Related Work There exist several variations of the task of prediction of morphological information from annotated data: paradigm completion (Durrett and DeNero, 2013; Cotterell et al., 2017b), morphological reinflection (Cotterell et al., 2017a), segmentation (Creutz et al., 2005; Cotterell et al., 2016) and tagging. Work on morphological tagging has broadly focused on structured prediction models such as CRFs, and neural network models. Amongst structured prediction approaches, M¨uller et al. (2013); M¨uller and Sch¨utze (2015) proposed the use of a higher-order CRF that is approximated using coarse-to-fine decoding. (M¨uller et al., 2015) proposed joint lemmatization and tagging using this framework. (Hajiˇc, 2000) was the first work that performed experiments on multilingual morphological tagging. They proposed an exponential model and the use of a morphological dictionary. Buys and Botha (2016); Kirov et al. (2017) proposed a model that used tag projection of type and token constraints from a resource-rich language to a low-resource language for tagging. Most recent work has focused on characterbased neural models (Heigold et al., 2017), that can handle rare words and are hence more useful to model morphology than word-based models. These models first obtain a character-level representation of a token from a biLSTM or CNN, which is provided to a word-level biLSTM tagger. Heigold et al. (2017, 2016) compared several neural architectures to obtain these character-based representations and found the effect of the neural network architecture to be minimal given the networks are carefully tuned. Cross-lingual transfer learning has previously boosted performance on tasks such as translation (Johnson et al., 2016) and POS tagging (Snyder et al., 2008; Plank et al., 2016). Cotterell and Heigold (2017) proposed a cross-lingual character-level neural morphological tagger. They experimented with different strategies to facilitate cross-lingual training: a language ID for each token, a language-specific softmax and a joint language identification and tagging model. We have used this work as a baseline model for comparing with our proposed method. In contrast to earlier work on morphological tagging, we use a hybrid of neural and graphical 2661 model approaches. This combination has several advantages: we can make use of expressive feature representations from neural models while ensuring that our model is interpretable. 
Our work is similar in spirit to Huang et al. (2015) and Ma and Hovy (2016), who proposed models that use a CRF with features from neural models. For our graphical model component, we used a factorial CRF (Sutton et al., 2007), which is a generalization of a linear chain CRF with additional pairwise factors between cotemporal variables. 7 Conclusion and Future Work In this work, we proposed a novel framework for sequence tagging that combines neural networks and graphical models, and showed its effectiveness on the task of morphological tagging. We believe this framework can be extended to other sequence labeling tasks in NLP such as semantic role labeling. Due to the robustness of the model across languages, we believe it can also be scaled to perform morphological tagging for multiple languages together. Acknowledgments The authors would like to thank David Mortensen, Soumya Wadhwa and Maria Ryskina for useful comments about this work. We would also like to thank the reviewers who gave valuable feedback to improve the paper. This project was supported in part by an Amazon Academic Research Award and Google Faculty Award. References Peter J. Bickel and Kjell A. Doksum. 1977. Mathematical Statistics: Basic Ideas and Selected Topics. Holden-Day Inc., Oakland, CA, USA. Jan Buys and Jan A. Botha. 2016. Cross-lingual morphological tagging for low-resource languages. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1954–1964. Ryan Cotterell and Georg Heigold. 2017. Crosslingual character-level neural morphological tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 748–759. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, G´eraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra K¨ubler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017a. Conll-sigmorphon 2017 shared task: Universal morphological reinflection in 52 languages. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection. Association for Computational Linguistics, Vancouver, pages 1–30. Ryan Cotterell, Arun Kumar, and Hinrich Sch¨utze. 2016. Morphological segmentation inside-out. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 2325–2330. Ryan Cotterell, Ekaterina Vylomova, Huda Khayrallah, Christo Kirov, and David Yarowsky. 2017b. Paradigm completion for derivational morphology. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 714–720. Mathias Creutz, Krista Lagus, Krister Lind´en, and Sami Virpioja. 2005. Morfessor and hutmegs: Unsupervised morpheme segmentation for highlyinflecting and compounding languages . Greg Durrett and John DeNero. 2013. Supervised learning of complete morphological paradigms. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 1185–1195. Joshua Goodman. 1996. Efficient algorithms for parsing the DOP model. In Proceedings of EMNLP. Jan Hajiˇc. 2000. Morphological tagging: Data vs. dictionaries. 
In Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference. Association for Computational Linguistics, pages 94–101. Jan Hajiˇc and Barbora Hladk´a. 1998. Tagging inflective languages: Prediction of morphological categories for a rich, structured tagset. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics-Volume 1. Association for Computational Linguistics, pages 483– 490. Georg Heigold, Guenter Neumann, and Josef van Genabith. 2016. Neural morphological tagging from characters for morphologically rich languages. arXiv preprint arXiv:1606.06640 . Georg Heigold, Guenter Neumann, and Josef van Genabith. 2017. An extensive empirical evaluation of character-based morphological tagging for 14 languages. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics, Valencia, Spain, pages 505–513. 2662 Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991 . Alexander T Ihler, W Fisher John III, and Alan S Willsky. 2005. Loopy belief propagation: Convergence and effects of message errors. Journal of Machine Learning Research 6(May):905–936. Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, et al. 2016. Google’s multilingual neural machine translation system: enabling zero-shot translation. arXiv preprint arXiv:1611.04558 . Christo Kirov, John Sylak-Glassman, Rebecca Knowles, Ryan Cotterell, and Matt Post. 2017. A rich morphological tagger for english: Exploring the cross-linguistic tradeoff between morphology and syntax. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. volume 2, pages 112–117. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, pages 1064–1074. Thomas M¨uller, Ryan Cotterell, Alexander Fraser, and Hinrich Sch¨utze. 2015. Joint lemmatization and morphological tagging with lemming. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pages 2268–2274. Thomas M¨uller, Helmut Schmid, and Hinrich Sch¨utze. 2013. Efficient higher-order crfs for morphological tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. pages 322–332. Thomas M¨uller and Hinrich Sch¨utze. 2015. Robust morphological tagging with word representations. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pages 526–536. Kevin P Murphy, Yair Weiss, and Michael I Jordan. 1999. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence. Morgan Kaufmann Publishers Inc., pages 467–475. Joakim Nivre et al. 2017. Universal dependencies 2.1. 
LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University. Rachel Nordlinger and Louisa Sadler. 2004. Nominal tense in crosslinguistic perspective. Language 80(4):776–806. Kemal Oflazer and Ilker Kuru¨oz. 1994. Tagging and morphological disambiguation of turkish text. In Proceedings of the fourth conference on Applied natural language processing. Association for Computational Linguistics, pages 144–149. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch . Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Berlin, Germany, pages 412–418. Benjamin Snyder, Tahira Naseem, Jacob Eisenstein, and Regina Barzilay. 2008. Unsupervised multilingual learning for pos tagging. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1041–1050. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1):1929–1958. Charles Sutton and Andrew McCallum. 2007. Improved dynamic schedules for belief propagation. In Conference on Uncertainty in Artificial Intelligence (UAI). Charles Sutton, Andrew McCallum, and Khashayar Rohanimanesh. 2007. Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data. Journal of Machine Learning Research 8(Mar):693–723. Reut Tsarfaty, Djam´e Seddah, Yoav Goldberg, Sandra K¨ubler, Marie Candito, Jennifer Foster, Yannick Versley, Ines Rehbein, and Lamia Tounsi. 2010. Statistical parsing of morphologically rich languages (spmrl): what, how and whither. In Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages. Association for Computational Linguistics, pages 1–12. Reut Tsarfaty, Djam´e Seddah, Sandra K¨ubler, and Joakim Nivre. 2013. Parsing morphologically rich languages: Introduction to the special issue. Computational linguistics 39(1):15–22. Ekaterina Vylomova, Trevor Cohn, Xuanli He, and Gholamreza Haffari. 2017. Word representation models for morphologically rich languages in neural machine translation. In Proceedings of the First 2663 Workshop on Subword and Character Level Models in NLP. Association for Computational Linguistics, Copenhagen, Denmark, pages 103–108. Martin J Wainwright. 2006. Estimating the“wrong”graphical model: Benefits in the computation-limited setting. Journal of Machine Learning Research 7(Sep):1829–1859.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2664–2675 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2664 Global Transition-based Non-projective Dependency Parsing Carlos Gómez-Rodríguez Universidade da Coruña [email protected] Tianze Shi Cornell University [email protected] Lillian Lee Cornell University [email protected] Abstract Shi, Huang, and Lee (2017a) obtained state-of-the-art results for English and Chinese dependency parsing by combining dynamic-programming implementations of transition-based dependency parsers with a minimal set of bidirectional LSTM features. However, their results were limited to projective parsing. In this paper, we extend their approach to support non-projectivity by providing the first practical implementation of the MH 4 algorithm, an Opn4q mildly nonprojective dynamic-programming parser with very high coverage on non-projective treebanks. To make MH 4 compatible with minimal transition-based feature sets, we introduce a transition-based interpretation of it in which parser items are mapped to sequences of transitions. We thus obtain the first implementation of global decoding for non-projective transition-based parsing, and demonstrate empirically that it is more effective than its projective counterpart in parsing a number of highly non-projective languages. 1 Introduction Transition-based dependency parsers are a popular approach to natural language parsing, as they achieve good results in terms of accuracy and efficiency (Yamada and Matsumoto, 2003; Nivre and Scholz, 2004; Zhang and Nivre, 2011; Chen and Manning, 2014; Dyer et al., 2015; Andor et al., 2016; Kiperwasser and Goldberg, 2016). Until very recently, practical implementations of transition-based parsing were limited to approximate inference, mainly in the form of greedy search or beam search. While cubic-time exact inference algorithms for several well-known projective transition systems had been known since the work of Huang and Sagae (2010) and Kuhlmann et al. (2011), they had been considered of theoretical interest only due to their incompatibility with rich feature models: incorporation of complex features resulted in jumps in asymptotic runtime complexity to impractical levels. However, the recent popularization of bidirectional long-short term memory networks (biLSTMs; Hochreiter and Schmidhuber, 1997) to derive feature representations for parsing, given their capacity to capture long-range information, has demonstrated that one may not need to use complex feature models to obtain good accuracy (Kiperwasser and Goldberg, 2016; Cross and Huang, 2016). In this context, Shi et al. (2017a) presented an implementation of the exact inference algorithms of Kuhlmann et al. (2011) with a minimal set of only two bi-LSTM-based feature vectors. This not only kept the complexity cubic, but also obtained state-of-the-art results in English and Chinese parsing. While their approach provides both accurate parsing and the flexibility to use any of greedy, beam, or exact decoding with the same underlying transition systems, it does not support nonprojectivity. Trees with crossing dependencies make up a significant portion of many treebanks, going as high as 63% for the Ancient Greek treebank in the Universal Dependencies1 (UD) dataset version 2.0 and averaging around 12% over all languages in UD 2.0. 
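As a rough illustration of how such treebank statistics are computed, the sketch below (plain Python, toy head indices) tests whether a tree contains crossing dependencies: two arcs cross exactly when, viewed as spans over the sentence positions, they properly interleave.

```python
# Minimal sketch (illustrative head lists) of a crossing-arc test.

def has_crossing_arcs(heads):
    """heads[m] = head of word m, with 0 the artificial root, for m = 1..n."""
    spans = [(min(h, m), max(h, m)) for m, h in enumerate(heads[1:], start=1)]
    for i, (l1, r1) in enumerate(spans):
        for l2, r2 in spans[i + 1:]:
            # Two arcs cross iff exactly one endpoint of one arc lies
            # strictly inside the span of the other.
            if l1 < l2 < r1 < r2 or l2 < l1 < r2 < r1:
                return True
    return False

print(has_crossing_arcs([None, 2, 0, 2]))         # projective tree -> False
print(has_crossing_arcs([None, 3, 0, 5, 2, 4]))   # crossing arcs   -> True
```

Treebank-level percentages like the UD figures quoted above are then simply the fraction of trees for which such a test succeeds.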
In this paper, we extend Shi et al.’s (2017a) approach to mildly nonprojective parsing in what, to our knowledge, is the first implementation of exact decoding for a non-projective transition-based parser. As in the projective case, a mildly non1http://universaldependencies.org/ 2665 projective decoder has been known for several years (Cohen et al., 2011), corresponding to a variant of the transition-based parser of Attardi (2006). However, its Opn7q runtime — or the Opn6q of a recently introduced improvedcoverage variant (Shi et al., 2018) — is still prohibitively costly in practice. Instead, we seek a more efficient algorithm to adapt, and thus develop a transition-based interpretation of GómezRodríguez et al.’s (2011) MH 4 dynamic programming parser, which has been shown to provide very good non-projective coverage in Opn4q time (Gómez-Rodríguez, 2016). While the MH 4 parser was originally presented as a non-projective generalization of the dynamic program that later led to the arc-hybrid transition system (GómezRodríguez et al., 2008; Kuhlmann et al., 2011), its own relation to transition-based parsing was not known. Here, we show that MH 4 can be interpreted as exploring a subset of the search space of a transition-based parser that generalizes the arc-hybrid system, under a mapping that differs from the “push computation” paradigm used by the previously-known dynamic-programming decoders for transition systems. This allows us to extend Shi et al. (2017a)’s work to non-projective parsing, by implementing MH 4 with a minimal set of transition-based features. Experimental results show that our approach outperforms the projective approach of Shi et al. (2017a) and maximum-spanning-tree nonprojective parsing on the most highly nonprojective languages in the CoNLL 2017 sharedtask data that have a single treebank. We also compare with the third-order 1-Endpoint-Crossing (1EC) parser of Pitler (2014), the only other practical implementation of an exact mildly nonprojective decoder that we know of, which also runs in Opn4q but without a transition-based interpretation. We obtain comparable results for these two algorithms, in spite of the fact that the MH 4 algorithm is notably simpler than 1EC. The MH 4 parser remains effective in parsing projective treebanks, while our baseline parser, the fully non-projective maximum spanning tree algorithm, falls behind due to its unnecessarily large search space in parsing these languages. Our code, including our re-implementation of the third-order 1EC parser with neural scoring, is available at https://github.com/tzshi/ mh4-parser-acl18. . . Jack . Dempseys . are . not .an . easy . cichlid .to . breed . compound . nsubj . cop . advmod . det . amod . root . mark . advcl Figure 1: A non-projective dependency parse from the UD 2.0 English treebank. 2 Non-projective Dependency Parsing In dependency grammar, syntactic structures are modeled as word-word asymmetrical subordinate relations among lexical entries (Kübler et al., 2009). These relations can be represented in a graph. For a sentence w “ w1, ..., wn, we first define a corresponding set of nodes t0, 1, 2, ..., nu, where 0 is an artificial node denoting the root of the sentence. Dependency relations are encoded by edges of the form ph, mq, where h is the head and m the modifier of the bilexical subordinate relation.2 As is conventional, we assume two more properties on dependency structures. First, each word has exactly one syntactic head, and second, the structure is acyclic. 
As a consequence, the edges form a directed tree rooted at node 0. We say that a dependency structure is projective if it has no crossing edges. While in the CoNLL and Stanford conversions of the English Penn Treebank, over 99% of the sentences are projective (Chen and Manning, 2014) — see Fig. 1 for a non-projective English example — for other languages’ treebanks, non-projectivity is a common occurrence (see Table 3 for some statistics). This paper is targeted at learning parsers that can handle non-projective dependency trees. 3 MH 4 Deduction System and Its Underlying Transition System 3.1 The MH 4 Deduction System The MH 4 parser is the instantiation for k “ 4 of Gómez-Rodríguez et al.’s (2011) more general MH k parser. MH k stands for “multi-headed with at most k heads per item”: items in its deduction system take the form rh1, . . . , hps for p ď k, indicating the existence of a forest of p dependency subtrees headed by h1, . . . , hp such that their yields are disjoint and the union of their 2To simplify exposition here, we only consider the unlabeled case. We use a separately-trained labeling module to obtain labeled parsing results in §5. 2666 Axiom: r0, 1s SHIFT: rh1, . . . , hms rhm, hm ` 1s phm ď nq COMBINE: rh1, . . . , hms rhm, hm`1, . . . , hps rh1, . . . , hps pp ď kq Goal: r0, n ` 1s LINK: rh1, . . . , hms rh1, . . . , hj´1, hj`1, . . . , hms hi Ñ hjp1 ď i ď m ^ 1 ă j ă m ^ j ‰ iq Figure 2: MH k’s deduction steps. yields is the contiguous substring h1 . . . hp of the input. Deduction steps, shown in Figure 2, can be used to join two such forests that have an endpoint in common via graph union (COMBINE); or to add a dependency arc to a forest that attaches an interior head as a dependent of any of the other heads (LINK). In the original formulation by GómezRodríguez et al. (2011), all valid items of the form ri, i ` 1s are considered to be axioms. In contrast, we follow Kuhlmann et al.’s (2011) treatment of MH 3: we consider r0, 1s as the only axiom and include an extra SHIFT step to generate the rest of the items of that form. Both formulations are equivalent, but including this SHIFT rule facilitates giving the parser a transition-based interpretation. Higher values of k provide wider coverage of non-projective structures at an asymptotic runtime complexity of Opnkq. When k is at its minimum value of 3, the parser covers exactly the set of projective trees, and in fact, it can be seen as a transformation3 of the deduction system described in Gómez-Rodríguez et al. (2008) that gave rise to the projective arc-hybrid parser (Kuhlmann et al., 2011). For k ě 4, the parser covers an increasingly larger set of non-projective structures. While a simple characterization of these sets has been lacking4, empirical evaluation on a large number of treebanks (Gómez-Rodríguez, 2016) has shown MH k to provide the best known tradeoff between asymptotic complexity and efficiency for k ą 4. When k “ 4, its coverage is second only to the 1-Endpoint-Crossing parser of Pitler et al. (2013). Both parsers fully cover well over 80% of the nonprojective trees observed in the studied treebanks. 3Formally, it is a step refinement; see Gómez-Rodríguez et al. (2011). 4This is a common issue with parsers based on the general idea of arcs between non-contiguous heads, such as those deriving from Attardi (2006). 3.2 The MH 4 Transition System Kuhlmann et al. 
3.2 The MH4 Transition System

Kuhlmann et al. (2011) show how the items of a variant of MH3 can be given a transition-based interpretation under the "push computation" framework, yielding the arc-hybrid projective transition system. However, such a derivation has not been made for the non-projective case (k > 3), and the known techniques used to derive previous associations between tabular and transition-based parsers do not seem to be applicable in this case. The specific issue is that the deduction systems of Kuhlmann et al. (2011) and Cohen et al. (2011) have in common that the structure of their derivations is similar to that of a Dyck (or balanced-brackets) language, where steps corresponding to shift transitions are balanced with those corresponding to reduce transitions. This makes it possible to group derivation subtrees, and the transition sequences that they yield, into "push computations" that increase the length of the stack by a constant amount. However, this does not seem possible in MH4. Instead, we derive a transition-based interpretation of MH4 by a generalization of that of MH3 that departs from push computations.

To do so, we start with the MH3 interpretation of an item [i, j] given by Kuhlmann et al. (2011). This item represents a set of computations (transition sequences) that start from a configuration of the form (σ, i|β, A) (where σ is the stack and i|β is the buffer, with i being the first buffer node) and take the parser to a configuration of the form (σ|i, j|β′, A). That is, the computation has the net effect of placing node i on top of the previous contents of the stack, and it ends in a state where the first buffer element is j.

Under this item semantics, the COMBINE deduction step of the MH3 parser (i.e., the instantiation of the one in Fig. 2 for k = 3) simply concatenates transition sequences. The SHIFT step generates a sequence with a single arc-hybrid sh transition:

  sh: (σ, hm|β, A)  ⊢  (σ|hm, β, A)

and the two possible instantiations of the LINK step when k = 3 take the antecedent transition sequence and add a transition to it, namely, one of the two arc-hybrid reduce transitions. Written in the context of the node indexes used in Figure 2, these are the following:

  (σ|h1|h2, h3|β, A)  ⊢  (σ|h1, h3|β, A ∪ {h3 → h2})
  (σ|h1|h2, h3|β, A)  ⊢  (σ|h1, h3|β, A ∪ {h1 → h2})

where h1 and h3 respectively can be simplified out to obtain the well-known arc-hybrid transitions:

  la: (σ|h2, h3|β, A)  ⊢  (σ, h3|β, A ∪ {h3 → h2})
  ra: (σ|h1|h2, β, A)  ⊢  (σ|h1, β, A ∪ {h1 → h2})

Now, we assume the following generalization of the item semantics: an item [h1, ..., hm] represents a set of computations that start from a configuration of the form (σ, h1|β, A) and lead to a configuration of the form (σ|h1| ... |hm−1, hm|β′, A). Note that this generalization no longer follows the "push computation" paradigm of Kuhlmann et al. (2011) and Cohen et al. (2011) because the number of nodes pushed onto the stack depends on the value of m. Under this item semantics, the SHIFT and COMBINE steps have the same interpretation as for MH3.
In the case of the LINK step, following the same reasoning as for the MH3 case, we obtain the following transitions:

  la:  (σ|h3, h4|β, A)     ⊢  (σ, h4|β, A ∪ {h4 → h3})
  ra:  (σ|h2|h3, β, A)     ⊢  (σ|h2, β, A ∪ {h2 → h3})
  la′: (σ|h2|h3, h4|β, A)  ⊢  (σ|h3, h4|β, A ∪ {h3 → h2})
  ra′: (σ|h1|h2|h3, β, A)  ⊢  (σ|h1|h3, β, A ∪ {h1 → h2})
  la2: (σ|h2|h3, h4|β, A)  ⊢  (σ|h3, h4|β, A ∪ {h4 → h2})
  ra2: (σ|h1|h2|h3, β, A)  ⊢  (σ|h1|h2, β, A ∪ {h1 → h3})

These transitions give us the MH4 transition system: a parser with four projective reduce transitions (la, ra, la′, ra′) and two Attardi-like, non-adjacent-arc reduce transitions (la2 and ra2).

It is worth mentioning that this MH4 transition system we have obtained is the same as one of the variants of Attardi's algorithm introduced by Shi et al. (2018), there called ALL_{s0s1}. However, in that paper they show that it can be tabularized in O(n^6) using the push computation framework. Here, we have derived it as an interpretation of the O(n^4) MH4 parser. However, in this case the dynamic programming algorithm does not cover the full search space of the transition system: while each item in the MH4 parser can be mapped into a computation of this MH4 transition-based parser, the opposite is not true. For example, the tree over nodes 0–5 with arcs 0 → 2, 2 → 4, 4 → 5, 5 → 3, and 3 → 1 can be parsed by the transition system using the computation

  sh(0); sh(1); sh(2); la2(3→1); sh(3); sh(4); la2(5→3); sh(5); ra(4→5); ra(2→4); ra(0→2)

but it is not covered by the dynamic programming algorithm, as no deduction sequence will yield an item representing this transition sequence. As we will see, this issue will not prevent us from implementing a dynamic-programming parser with transition-based scoring functions, or from achieving good practical accuracy.
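The transition system derived above can be written down directly. The following is a minimal sketch (ours, not the authors' released mh4-parser code) of the seven MH4 transitions acting on configurations (stack σ, buffer β, arc set A), followed by the example computation from the text; preconditions such as sufficient stack depth are not checked:

```python
# Arcs are (head, dependent) pairs; a configuration is (stack, buffer, arcs).

def sh(stack, buf, arcs):                 # shift the first buffer node
    return stack + [buf[0]], buf[1:], arcs

def la(stack, buf, arcs):                 # (σ|h3, h4|β): add h4 -> h3
    return stack[:-1], buf, arcs | {(buf[0], stack[-1])}

def ra(stack, buf, arcs):                 # (σ|h2|h3, β): add h2 -> h3
    return stack[:-1], buf, arcs | {(stack[-2], stack[-1])}

def la_p(stack, buf, arcs):               # la': (σ|h2|h3, h4|β): add h3 -> h2
    return stack[:-2] + [stack[-1]], buf, arcs | {(stack[-1], stack[-2])}

def ra_p(stack, buf, arcs):               # ra': (σ|h1|h2|h3, β): add h1 -> h2
    return stack[:-2] + [stack[-1]], buf, arcs | {(stack[-3], stack[-2])}

def la2(stack, buf, arcs):                # (σ|h2|h3, h4|β): add h4 -> h2
    return stack[:-2] + [stack[-1]], buf, arcs | {(buf[0], stack[-2])}

def ra2(stack, buf, arcs):                # (σ|h1|h2|h3, β): add h1 -> h3
    return stack[:-1], buf, arcs | {(stack[-3], stack[-1])}

def run(n, transitions):
    """Run a computation from the initial configuration ([], [0, ..., n], {})."""
    stack, buf, arcs = [], list(range(n + 1)), set()
    for t in transitions:
        stack, buf, arcs = t(stack, buf, arcs)
    return stack, buf, arcs

# The example computation from the text recovers the arcs of the tree above:
stack, buf, arcs = run(5, [sh, sh, sh, la2, sh, sh, la2, sh, ra, ra, ra])
assert arcs == {(3, 1), (5, 3), (4, 5), (2, 4), (0, 2)} and stack == [0]
```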
4 Model

Given the transition-based interpretation of the MH4 system, the learning objective becomes to find a computation that gives the gold-standard parse. For each sentence w1, ..., wn, we train parsers to produce the transition sequence t* that corresponds to the annotated dependency structure. Thus, the model consists of two components: a parameterized scorer S(t), and a decoder that finds a sequence t̂ as prediction based on the scoring. As discussed by Shi et al. (2017a), there exists some tension between rich-feature scoring models and choices of decoders. Ideally, a globally-optimal decoder finds the maximum-scoring transition sequence t̂ without brute-force searching the exponentially-large output space. To keep the runtime of our exact decoder at a practical low-order polynomial, we want its feature set to be minimal, consulting as few stack and buffer positions as possible. In what follows, we use s0 and s1 to denote the top two stack items and b0 and b1 to denote the first two buffer items.

  Features:  {s0, b0}   {s1, s0, b0}   {s2, s1, s0, b0}
  UAS:       49.83      85.17          85.27
Table 1: Performance of local parsing models with varying number of features. We report average UAS over 10 languages on UD 2.0.

4.1 Scoring and Minimal Features

This section empirically explores the lower limit on the number of necessary positional features. We experiment with both local and global decoding strategies. The parsers take features extracted from parser configuration c, and score each valid transition t with S(t; c). The local parsers greedily take transitions with the highest score until termination, while the global parsers use the scores to find the globally-optimal solutions t̂ = arg max_t S(t), where S(t) is the sum of scores for the component transitions.

Following prior work, we employ bi-LSTMs for compact feature representation. A bi-LSTM runs in both directions on the input sentence, and assigns a context-sensitive vector encoding to each token in the sentence: w1, ..., wn. When we need to extract features from a particular stack or buffer position, say s0, we directly use the bi-LSTM vector w_{i_s0}, where i_s0 gives the index of the subroot of s0 into the sentence. Shi et al. (2017a) showed that feature vectors {s0, b0} suffice for MH3. Table 1 and Table 2 show the use of small feature sets for MH4, for local and global parsing models, respectively. For a local parser to exhibit decent performance, we need at least {s1, s0, b0}, but adding s2 on top of that does not show any significant impact on the performance. Interestingly, in the case of global models, the two-vector feature set {s0, b0} already suffices. Adding s1 to the global setting (column "Hybrid" in Table 2) seems attractive, but entails resolving a technical challenge that we discuss in the following section.

  Features:  {s0, b0}   Hybrid
  UAS:       86.79      87.27
Table 2: Performance of global parsing models with varying number of features.

4.2 Global Decoder

In our transition-system interpretation of MHk, sh transitions correspond to SHIFT and reduce transitions reflect the LINK steps. Since the SHIFT conclusions lose the contexts needed to score the transitions, we set the scores for all SHIFT rules to zero and delegate the scoring of the sh transitions to the COMBINE steps, as in Shi et al. (2017a); for example,

  [h1, h2] : v1     [h2, h3, h4] : v2
  ⊢  [h1, h2, h3, h4] : v1 + v2 + S(sh; {h1, h2})

Here the transition sequence denoted by [h2, h3, h4] starts from a sh, with h1 and h2 taking the s0 and b0 positions. If we further wish to access s1, such information is not readily available in the deduction step, apparently requiring extra bookkeeping that pushes the space and time complexity to an impractical O(n^4) and O(n^5), respectively. But, consider the scoring for the reduce transitions in the LINK steps:

  [h1, h2, h3, h4] : v  ⊢  [h1, h2, h4] : v + S(la; {h2, h3, h4})
  [h1, h2, h3] : v      ⊢  [h1, h3] : v + S(la; {h1, h2, h3})

The deduction steps already keep indices for s1 (h2 in the first rule, h1 in the second) and thus provide direct access without any modification. To resolve the conflict between including s1 for richer representations and the unavailability of s1 in scoring the sh transitions in the COMBINE steps, we propose a hybrid scoring approach — we use features {s0, b0} when scoring a sh transition, and features {s1, s0, b0} for consideration of reduce transitions. We call this method MH4-hybrid, in contrast to MH4-two, where we simply take {s0, b0} for scoring all transitions.

4.3 Large-Margin Training

We train the greedy parsers with hinge loss, and the global parsers with its structured version (Taskar et al., 2005). The loss function for each sentence is formally defined as:

  max_{t̂} ( S(t̂) + cost(t*, t̂) − S(t*) )

where the margin cost(t*, t̂) counts the number of mis-attached nodes for taking sequence t̂ instead of t*. Minimizing this loss can be thought of as optimizing for the attachment scores. The calculation of the above loss function can be solved as efficiently as the deduction system if the cost function decomposes into the dynamic program. We achieve this by replacing the scoring of each reduce step by its cost-augmented version:

  [h1, h2, h3, h4] : v  ⊢  [h1, h2, h4] : v + S(la2; {h2, h3, h4}) + Δ

where Δ = 1(head(w_{h3}) ≠ w_{h4}).
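Concretely, the objective can be sketched as follows, under the assumption that a cost-augmented decoder is available; `score`, `cost_augmented_decode`, and `heads_of` are placeholders (ours) for the bi-LSTM transition scorer S, the exact dynamic-programming decoder with the Δ terms folded into its reduce-step scores, and the routine that reads arcs off a transition sequence:

```python
def delta(gold_heads, head, dep):
    """Delta for one proposed arc: 1 if head -> dep is not a gold arc."""
    return 0.0 if gold_heads.get(dep) == head else 1.0

def margin_cost(gold_heads, pred_heads):
    """cost(t*, t-hat): the number of mis-attached nodes."""
    return sum(1.0 for dep, head in gold_heads.items()
               if pred_heads.get(dep) != head)

def structured_hinge_loss(gold_seq, gold_heads,
                          score, cost_augmented_decode, heads_of):
    """L = max_{t-hat} [ S(t-hat) + cost(t*, t-hat) ] - S(t*)  (always >= 0,
    since t-hat = t* yields zero)."""
    pred_seq = cost_augmented_decode(gold_heads)   # argmax of S(t) + cost(t*, t)
    pred_heads = heads_of(pred_seq)
    return (score(pred_seq) + margin_cost(gold_heads, pred_heads)
            - score(gold_seq))
```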
This loss function encourages the model to give higher contrast between gold-standard and wrong predictions, yielding better generalization results. 5 Experiments Data and Evaluation We experiment with the Universal Dependencies (UD) 2.0 dataset used for the CoNLL 2017 shared task (Zeman et al., 2017). We restrict our choice of languages to be those with only one training treebank, for a better comparison with the shared task results.5 Among these languages, we pick the top 10 most non-projective languages. Their basic statistics are listed in Table 3. For all development-set results, we assume gold-standard tokenization and sentence delimitation. When comparing to the shared task results on test sets, we use the provided baseline UDPipe (Straka et al., 2016) segmentation. Our models do not use part-of-speech tags or morphological tags as features, but rather leverage such information via stack propagation (Zhang and Weiss, 2016), i.e., we learn to predict them as a secondary training objective. We report unlabeled attachment F1scores (UAS) on the development sets for better focus on comparing our (unlabeled) parsing modules. We report its labeled variant (LAS), the main metric of the shared task, on the test sets. For each experiment setting, we ran the model with 5 different random initializations, and report the mean and standard deviation. We detail the implementation details in the supplementary material. Baseline Systems For comparison, we include three baseline systems with the same underlying feature representations and scoring paradigm. All 5When multiple treebanks are available, one can develop domain transfer strategies, which is not the focus of this work. the following baseline systems are trained with the cost-augmented large-margin loss function. The MH 3 parser is the projective instantiation of the MH k parser family. This corresponds to the global version of the arc-hybrid transition system (Kuhlmann et al., 2011). We adopt the minimal feature representation ts0, b0u, following Shi et al. (2017a). For this model, we also implement a greedy incremental version. The edge-factored non-projective maximal spanning tree (MST) parser allows arbitrary non-projective structures. This decoding approach has been shown to be very competitive in parsing non-projective treebanks (McDonald et al., 2005), and was deployed in the top-performing system at the CoNLL 2017 shared task (Dozat et al., 2017). We score each edge individually, with the features being the bi-LSTM vectors th, mu, where h is the head, and m the modifier of the edge. The crossing-sensitive third-order 1EC parser provides a hybrid dynamic program for parsing 1-Endpoint-Crossing non-projective dependency trees with higher-order factorization (Pitler, 2014). Depending on whether an edge is crossed, we can access the modifier’s grandparent g, head h, and sibling si. We take their corresponding bi-LSTM features tg, h, m, siu for scoring each edge. This is a re-implementation of Pitler (2014) with neural scoring functions. Main Results Table 4 shows the developmentset performance of our models as compared with baseline systems. MST considers non-projective structures, and thus enjoys a theoretical advantage over projective MH 3, especially for the most non-projective languages. However, it has a vastly larger output space, making the selection of correct structures difficult. Further, the scoring is edge-factored, and does not take any structural contexts into consideration. 
This tradeoff leads to the similar performance of MST comparing to MH 3. In comparison, both 1EC and MH 4 are mildly non-projective parsing algorithms, limiting the size of the output space. 1EC includes higherorder features that look at tree-structural contexts; MH 4 derives its features from parsing configurations of a transition system, hence leveraging contexts within transition sequences. These considerations explain their significant improvements over MST. We also observe that MH 4 recovers more short dependencies than 1EC, while 1EC is better at longer-distance ones. 2670 Language Code # Sent. # Words Sentence Coverage (%) Edge Coverage (%) Proj. Ó MH 4 1EC Proj. MH 4 1EC Basque eu 5,396 72,974 66.48 91.48 93.29 95.98 99.27 99.42 Urdu ur 4,043 108,690 76.97 95.89 95.77 98.89 99.83 99.81 Gothic got 3,387 35,024 78.42 97.25 97.58 97.04 99.73 99.75 Hungarian hu 910 20,166 79.01 98.35 97.69 98.51 99.92 99.89 Old Church Slavonic cu 4,123 37,432 80.16 98.33 98.74 97.22 99.80 99.85 Danish da 4,383 80,378 80.56 97.70 98.97 98.60 99.87 99.94 Greek el 1,662 41,212 85.98 99.52 99.40 99.32 99.98 99.98 Hindi hi 13,304 281,057 86.16 98.38 98.95 99.26 99.92 99.94 German de 14,118 269,626 87.07 99.19 99.27 99.15 99.95 99.96 Romanian ro 8,043 185,113 88.61 99.42 99.52 99.42 99.97 99.98 Table 3: Statistics of selected training treebanks from Universal Dependencies 2.0 for the CoNLL 2017 shared task (Zeman et al., 2017), sorted by per-sentence projective ratio. Global Models Greedy Models Lan. MH 3 MST MH 4-two MH 4-hybrid 1EC MH 3 MH 4 eu 82.07˘0.17 83.61˘0.16 82.94˘0.24 84.13˘0.13 84.09˘0.19 81.27˘0.20 81.71˘0.33 ur 86.89˘0.18 86.78˘0.13 86.84˘0.26 87.06˘0.24 87.11˘0.11 86.40˘0.16 86.05˘0.18 got 83.72˘0.19 84.74˘0.28 83.85˘0.19 84.59˘0.38 84.77˘0.27 82.28˘0.18 81.40˘0.45 hu 83.05˘0.17 82.81˘0.49 83.69˘0.20 84.59˘0.50 83.48˘0.27 81.75˘0.47 80.75˘0.54 cu 86.70˘0.30 88.02˘0.25 87.57˘0.14 88.09˘0.28 88.27˘0.32 86.05˘0.23 86.01˘0.11 da 85.09˘0.16 84.68˘0.36 85.45˘0.43 85.77˘0.39 85.77˘0.16 83.90˘0.24 83.59˘0.06 el 87.82˘0.24 87.27˘0.22 87.77˘0.20 87.83˘0.36 87.95˘0.23 87.14˘0.25 86.95˘0.25 hi 93.75˘0.14 93.91˘0.26 93.99˘0.15 94.27˘0.08 94.24˘0.04 93.44˘0.09 93.02˘0.10 de 86.46˘0.13 86.34˘0.24 86.53˘0.22 86.89˘0.17 86.95˘0.32 84.99˘0.26 85.27˘0.32 ro 89.34˘0.27 88.79˘0.43 89.25˘0.15 89.53˘0.20 89.52˘0.25 88.76˘0.30 87.97˘0.31 Avg. 86.49 86.69 86.79 87.27 87.21 85.60 85.27 Table 4: Experiment results (UAS, %) on the UD 2.0 development set. Bold: best result per language. In comparison to MH 4-two, the richer feature representation of MH 4-hybrid helps in all our languages. Interestingly, MH 4 and MH 3 react differently to switching from global to greedy models. MH 4 covers more structures than MH 3, and is naturally more capable in the global case, even when the feature functions are the same (MH 4-two). However, its greedy version is outperformed by MH 3. We conjecture that this is because MH 4 explores only the same number of configurations as MH 3, despite the fact that introducing non-projectivity expands the search space dramatically. Comparison with CoNLL Shared Task Results (Table 5) We compare our models on the test sets, along with the best single model (#1; Dozat et al., 2017) and the best ensemble model (#2; Shi et al., 2017b) from the CoNLL 2017 shared task. MH 4 outperforms 1EC in 7 out of the 10 languages. Additionally, we take our non-projective parsing models (MST, MH 4-hybrid, 1EC) and combine them into an ensemble. The average result is competitive with the best CoNLL submissions. 
Interestingly, Dozat et al. (2017) uses fully non-projective parsing algorithms (MST), and our ensemble system sees larger gains in the more non-projective languages, confirming the potential benefit of global mildly non-projective parsing. Results on Projective Languages (Table 6) For completeness, we also test our models on the 10 most projective languages that have a single treebank. MH 4 remains the most effective, but by a much smaller margin. Interestingly, MH 3, which is strictly projective, matches the performance of 1EC; both outperform the fully nonprojective MST by half a point. 6 Related Work Exact inference for dependency parsing can be achieved in cubic time if the model is restricted to projective trees (Eisner, 1996). However, nonprojectivity is needed for natural language parsers to satisfactorily deal with linguistic phenomena like topicalization, scrambling and extraposition, which cause crossing dependencies. In UD 2.0, 68 out of 70 treebanks were reported to contain 2671 Same Model Architecture For Reference Lan. MH 3 MST MH 4-hybrid 1EC Ensemble CoNLL #1 CoNLL #2 eu 78.17˘0.33 79.90˘0.08 80.22˘0.48 ą 80.17˘0.32 81.55 81.44 79.61 ur 80.91˘0.10 80.05˘0.13 80.69˘0.19 ą 80.59˘0.19 81.37 82.28 81.06 got 67.10˘0.10 67.26˘0.45 67.92˘0.29 ą 67.66˘0.20 69.83 66.82 68.34 hu 76.09˘0.25 75.79˘0.36 76.90˘0.31 ą 76.07˘0.20 79.35 77.56 76.55 cu 71.28˘0.29 72.18˘0.20 72.51˘0.23 ă 72.53˘0.27 74.38 71.84 72.35 da 80.00˘0.15 79.69˘0.24 80.89˘0.17 ą 80.83˘0.27 82.09 82.97 81.55 el 85.89˘0.29 85.48˘0.25 86.28˘0.44 ą 86.07˘0.37 87.06 87.38 86.90 hi 89.88˘0.18 89.93˘0.12 90.22˘0.12 ă 90.28˘0.21 90.78 91.59 90.40 de 76.23˘0.21 75.99˘0.23 76.46˘0.20 ą 76.42˘0.35 77.38 80.71 77.17 ro 83.53˘0.35 82.73˘0.36 83.67˘0.21 ă 83.83˘0.18 84.51 85.92 84.40 Avg. 78.91 78.90 79.57 ą 79.44 80.83 80.85 79.83 Table 5: Evaluation results (LAS, %) on the test set using the CoNLL 2017 shared task setup. The best results for each language within each block are highlighted in bold. Same Model Architecture For Reference Lan. MH 3 MST MH 4-hybrid 1EC Ensemble CoNLL #1 CoNLL #2 ja 74.29˘0.10 73.93˘0.16 74.23˘0.11 74.12˘0.12 74.51 74.72 74.51 zh 63.54˘0.13 62.71˘0.17 63.48˘0.33 63.54˘0.26 64.65 65.88 64.14 pl 86.49˘0.19 85.76˘0.31 86.60˘0.26 86.36˘0.28 87.38 90.32 87.15 he 61.47˘0.24 61.28˘0.24 61.93˘0.22 61.75˘0.22 62.40 63.94 62.33 vi 41.26˘0.39 41.04˘0.19 41.33˘0.32 40.96˘0.36 42.95 42.13 41.68 bg 87.50˘0.20 87.03˘0.17 87.63˘0.17 87.56˘0.14 88.22 89.81 88.39 sk 80.48˘0.22 80.25˘0.32 81.27˘0.14 80.94˘0.25 82.38 86.04 81.75 it 87.90˘0.07 87.26˘0.23 88.06˘0.27 87.98˘0.19 88.74 90.68 89.08 id 77.66˘0.13 76.95˘0.32 77.64˘0.17 77.60˘0.18 78.27 79.19 78.55 lv 69.62˘0.55 69.33˘0.51 70.54˘0.51 69.52˘0.29 72.34 74.01 71.35 Avg. 73.02 72.55 73.27 73.03 74.18 75.67 73.89 Table 6: CoNLL 2017 test set results (LAS, %) on the most projective languages (sorted by projective ratio; ja (Japanese) is fully projective). non-projectivity (Wang et al., 2017). However, exact inference has been shown to be intractable for models that support arbitrary nonprojectivity, except under strong independence assumptions (McDonald and Satta, 2007). Thus, exact inference parsers that support unrestricted non-projectivity are limited to edge-factored models (McDonald et al., 2005; Dozat et al., 2017). Alternatives include treebank transformation and pseudo-projective parsing (Kahane et al., 1998; Nivre and Nilsson, 2005), approximate inference (e.g. 
McDonald and Pereira (2006); Attardi (2006); Nivre (2009); Fernández-González and Gómez-Rodríguez (2017)) or focusing on sets of dependency trees that allow only restricted forms of non-projectivity. A number of such sets, called mildly non-projective classes of trees, have been identified that both exhibit good empirical coverage of the non-projective phenomena found in natural languages and are known to have polynomial-time exact parsing algorithms; see Gómez-Rodríguez (2016) for a survey. However, most of these algorithms have not been implemented in practice due to their prohibitive complexity. For example, Corro et al. (2016) report an implementation of the WG1 parser, a Opn7q mildly non-projective parser introduced in Gómez-Rodríguez et al. (2009), but it could not be run for real sentences of length greater than 20. On the other hand, Pitler et al. (2012) provide an implementation of an Opn5q parser for a mildly non-projective class of structures called gap-minding trees, but they need to resort to aggressive pruning to make it practical, exploring only a part of the search space in Opn4q time. To the best of our knowledge, the only practical system that actually implements exact inference for mildly non-projective parsing is the 1Endpoint-Crossing (1EC) parser of Pitler (2013; 2014), which runs in Opn4q worst-case time like the MH 4 algorithm used in this paper. Thus, the system presented here is the second practical implementation of exact mildly non-projective pars2672 ing that has successfully been executed on real corpora.6 Comparing with Pitler (2014)’s 1EC, our parser has the following disadvantages: (´1) It has slightly lower coverage, at least on the treebanks considered by Gómez-Rodríguez (2016). (´2) The set of trees covered by MH 4 has not been characterized with a non-operational definition, while the set of 1-Endpoint-Crossing trees can be simply defined. However, it also has the following advantages: (+1) It can be given a transition-based interpretation, allowing us to use transition-based scoring functions and to implement the analogous algorithm with greedy or beam search apart from exact inference. No transition-based interpretation is known for 1EC. While a transition-based algorithm has been defined for a strict subset of 1-Endpoint-Crossing trees, called 2-Crossing Interval trees (Pitler and McDonald, 2015), this is a separate algorithm with no known mapping or relation to 1EC or any other dynamic programming model. Thus, we provide the first exact inference algorithm for a non-projective transitionbased parser with practical complexity. (+2) It is conceptually much simpler, with one kind of item and two deduction steps, while the 1-EndpointCrossing parser has five classes of items and several dozen distinct deduction steps. It is also a purely bottom-up parser, whereas the 1-EndpointCrossing parser does not have the bottom-up property. This property is necessary for models that involve compositional representations of subtrees (Dyer et al., 2015), and facilitates parallelization and partial parsing. (+3) It can be easily generalized to MH k for k ą 4, providing higher coverage, with time complexity Opnkq. Out of the mildly non-projective parsers studied in GómezRodríguez (2016), MH 4 provides the maximum coverage with respect to its complexity for k ą 4. (+4) As shown in §5, MH 4 obtains slightly higher accuracy than 1EC on average, albeit not by a conclusive margin. It is worth noting that 1EC has recently been ex6Corro et al. 
(2016) describe a parser that enforces mildly non-projective constraints (bounded block degree and wellnestedness), but it is an arc-factored model, so it is subject to the same strong independence assumptions as maximumspanning-tree parsers like McDonald et al. (2005) and does not support the greater flexibility in scoring that is the main advantage of mildly non-projective parsers over these. Instead, mild non-projectivity is exclusively used as a criterion to discard nonconforming trees. tended to graph parsing by Kurtz and Kuhlmann (2017), Kummerfeld and Klein (2017), and Cao et al. (2017a,b), with the latter providing a practical implementation of a parser for 1-EndpointCrossing, pagenumber-2 graphs. 7 Conclusion We have extended the parsing architecture of Shi et al. (2017a) to non-projective dependency parsing by implementing the MH 4 parser, a mildly non-projective Opn4q chart parsing algorithm, using a minimal set of transition-based bi-LSTM features. For this purpose, we have established a mapping between MH 4 items and transition sequences of an underlying non-projective transition-based parser. To our knowledge, this is the first practical implementation of exact inference for non-projective transition-based parsing. Empirical results on a collection of highly non-projective datasets from Universal Dependencies show improvements in accuracy over the projective approach of Shi et al. (2017a), as well as edge-factored maximumspanning-tree parsing. The results are on par with the 1-Endpoint-Crossing parser of Pitler (2014) (re-implemented under the same neural framework), but our algorithm is notably simpler and has additional desirable properties: it is purely bottom-up, generalizable to higher coverage, and compatible with transition-based semantics. Acknowledgments We thank the three anonymous reviewers for their helpful comments. CG has received funding from the European Research Council (ERC), under the European Union’s Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the TELEPARESUDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1R) from MINECO, and from Xunta de Galicia (ED431B 2017/01). TS and LL were supported in part by a Google Focused Research Grant to Cornell University. LL was also supported in part by NSF grant SES-1741441. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or other sponsors. 2673 References Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2442–2452, Berlin, Germany. Association for Computational Linguistics. Giuseppe Attardi. 2006. Experiments with a multilanguage non-projective dependency parser. In Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X), pages 166– 170, New York City, New York, USA. Junjie Cao, Sheng Huang, Weiwei Sun, and Xiaojun Wan. 2017a. Parsing to 1-endpoint-crossing, pagenumber-2 graphs. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2110–2120, Vancouver, Canada. Association for Computational Linguistics. 
Junjie Cao, Sheng Huang, Weiwei Sun, and Xiaojun Wan. 2017b. Quasi-second-order parsing for 1endpoint-crossing, pagenumber-2 graphs. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 24–34, Copenhagen, Denmark. Association for Computational Linguistics. Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 740–750, Doha, Qatar. Shay B. Cohen, Carlos Gómez-Rodríguez, and Giorgio Satta. 2011. Exact inference for generative probabilistic non-projective dependency parsing. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1234–1245, Edinburgh, Scotland, UK. Caio Corro, Joseph Le Roux, Mathieu Lacroix, Antoine Rozenknop, and Roberto Wolfler Calvo. 2016. Dependency parsing with bounded block degree and well-nestedness via Lagrangian relaxation and branch-and-bound. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 355– 366, Berlin, Germany. Association for Computational Linguistics. James Cross and Liang Huang. 2016. Incremental parsing with minimal features using bi-directional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 32–37. Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford’s graph-based neural dependency parser at the CoNLL 2017 shared task. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 20–30, Vancouver, Canada. Association for Computational Linguistics. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 334–343. Jason Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the 16th International Conference on Computational Linguistics (COLING-96), pages 340– 345, Copenhagen. Daniel Fernández-González and Carlos GómezRodríguez. 2017. A full non-monotonic transition system for unrestricted non-projective parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 288–298, Vancouver, Canada. Association for Computational Linguistics. Carlos Gómez-Rodríguez. 2016. Restricted nonprojectivity: Coverage vs. efficiency. Computational Linguistics, 42(4):809–817. Carlos Gómez-Rodríguez, John Carroll, and David Weir. 2008. A deductive approach to dependency parsing. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technology, pages 968–976. Carlos Gómez-Rodríguez, John Carroll, and David Weir. 2011. Dependency parsing schemata and mildly non-projective dependency parsing. Computational Linguistics, 37(3):541–586. Carlos Gómez-Rodríguez, David Weir, and John Carroll. 2009. Parsing mildly non-projective dependency structures. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 291–299, Athens, Greece. Association for Computational Linguistics. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. 
Neural computation, 9(8):1735–1780. Liang Huang and Kenji Sagae. 2010. Dynamic programming for linear-time incremental parsing. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1077–1086, Uppsala, Sweden. Sylvain Kahane, Alexis Nasr, and Owen Rambow. 1998. Pseudo-projectivity: A polynomially parsable non-projective dependency grammar. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics - Volume 1, pages 646–652, Montreal, Quebec, Canada. Association for Computational Linguistics. 2674 Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313–327. Sandra Kübler, Ryan McDonald, and Joakim Nivre. 2009. Dependency parsing, volume 2 of Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers. Marco Kuhlmann, Carlos Gómez-Rodríguez, and Giorgio Satta. 2011. Dynamic programming algorithms for transition-based dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT), pages 673–682, Portland, Oregon, USA. Jonathan K. Kummerfeld and Dan Klein. 2017. Parsing with traces: An Opn4q algorithm and a structural representation. Transactions of the Association for Computational Linguistics, 5:441–454. Robin Kurtz and Marco Kuhlmann. 2017. Exploiting structure in parsing to 1-endpoint-crossing graphs. In Proceedings of the 15th International Conference on Parsing Technologies, pages 78–87, Pisa, Italy. Association for Computational Linguistics. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 523–530, Vancouver, British Columbia, Canada. Association for Computational Linguistics. Ryan McDonald and Giorgio Satta. 2007. On the complexity of non-projective data-driven dependency parsing. In Proceedings of the Tenth International Conference on Parsing Technologies (IWPT), pages 121–132, Prague, Czech Republic. Association for Computational Linguistics. Ryan T. McDonald and Fernando C. N. Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics, pages 81–88, Trento, Italy. Association for Computational Linguistics. Joakim Nivre. 2009. Non-projective dependency parsing in expected linear time. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 351–359, Suntec, Singapore. Association for Computational Linguistics. Joakim Nivre and Jens Nilsson. 2005. Pseudoprojective dependency parsing. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 99–106, Ann Arbor, Michigan. Association for Computational Linguistics. Joakim Nivre and Mario Scholz. 2004. Deterministic dependency parsing of English text. In Proceedings of the 20th International Conference on Computational Linguistics, pages 64–70, Geneva, Switzerland. COLING. Emily Pitler. 2014. 
A crossing-sensitive third-order factorization for dependency parsing. Transactions of the Association for Computational Linguistics, 2:41–54. Emily Pitler, Sampath Kannan, and Mitchell Marcus. 2012. Dynamic programming for higher order parsing of gap-minding trees. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 478–488. Association for Computational Linguistics. Emily Pitler, Sampath Kannan, and Mitchell Marcus. 2013. Finding optimal 1-endpoint-crossing trees. Transactions of the Association of Computational Linguistics, 1:13–24. Emily Pitler and Ryan McDonald. 2015. A linear-time transition system for crossing interval trees. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 662–671, Denver, Colorado. Association for Computational Linguistics. Tianze Shi, Carlos Gómez-Rodríguez, and Lillian Lee. 2018. Improving coverage and runtime complexity for exact inference in non-projective transitionbased dependency parsers. In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, page (in press), New Orleans, Louisiana. Association for Computational Linguistics. Tianze Shi, Liang Huang, and Lillian Lee. 2017a. Fast(er) exact decoding and global training for transition-based dependency parsing via a minimal feature set. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 12–23, Copenhagen, Denmark. Tianze Shi, Felix G. Wu, Xilun Chen, and Yao Cheng. 2017b. Combining global models for parsing Universal Dependencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 31–39, Vancouver, Canada. Association for Computational Linguistics. Milan Straka, Jan Hajic, and Jana Straková. 2016. UDPipe: Trainable pipeline for processing CoNLL-U files performing tokenization, morphological analysis, POS tagging and parsing. In Proceedings 2675 of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Resources Association (ELRA). Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005. Learning structured prediction models: A large margin approach. In Proceedings of the 22nd International Conference on Machine Learning, pages 896–903. Hao Wang, Hai Zhao, and Zhisong Zhang. 2017. A transition-based system for universal dependency parsing. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 191–197, Vancouver, Canada. Association for Computational Linguistics. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with Support Vector Machines. In Proceedings of the 8th International Workshop on Parsing Technologies, pages 195–206. Daniel Zeman, Martin Popel, Milan Straka, Jan Hajic, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinkova, Jan Hajic jr., Jaroslava Hlavacova, Václava Kettnerová, Zdenka Uresova, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. 
Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria dePaiva, Kira Droganova, Héctor Martínez Alonso, Ça˘grı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonca, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. CoNLL 2017 shared task: Multilingual parsing from raw text to Universal Dependencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1–19, Vancouver, Canada. Association for Computational Linguistics. Yuan Zhang and David Weiss. 2016. Stackpropagation: Improved representation learning for syntax. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1557–1566, Berlin, Germany. Association for Computational Linguistics. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 188–193, Portland, Oregon, USA. Association for Computational Linguistics.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 2676–2686 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 2676 Constituency Parsing with a Self-Attentive Encoder Nikita Kitaev and Dan Klein Computer Science Division University of California, Berkeley {kitaev, klein}@cs.berkeley.edu Abstract We demonstrate that replacing an LSTM encoder with a self-attentive architecture can lead to improvements to a state-ofthe-art discriminative constituency parser. The use of attention makes explicit the manner in which information is propagated between different locations in the sentence, which we use to both analyze our model and propose potential improvements. For example, we find that separating positional and content information in the encoder can lead to improved parsing accuracy. Additionally, we evaluate different approaches for lexical representation. Our parser achieves new state-ofthe-art results for single models trained on the Penn Treebank: 93.55 F1 without the use of any external data, and 95.13 F1 when using pre-trained word representations. Our parser also outperforms the previous best-published accuracy figures on 8 of the 9 languages in the SPMRL dataset. 1 Introduction In recent years, neural network approaches have led to improvements in constituency parsing (Dyer et al., 2016; Cross and Huang, 2016; Choe and Charniak, 2016; Stern et al., 2017a; Fried et al., 2017). Many of these parsers can broadly be characterized as following an encoder-decoder design: an encoder reads the input sentence and summarizes it into a vector or set of vectors (e.g. one for each word or span in the sentence), and then a decoder uses these vector summaries to incrementally build up a labeled parse tree. In contrast to the large variety of decoder architectures investigated in recent work, the encoders in recent parsers have predominantly been built using recurrent neural networks (RNNs), and in particular Long Short-Term Memory networks (LSTMs). Output Input Encoder Decoder market NN in IN the DT fled VBD and CC …(VP(VBD fled)(NP(DT the)(NN market))… Figure 1: Our parser combines a chart decoder with a sentence encoder based on self-attention. RNNs have largely replaced approaches such as the fixed-window-size feed-forward networks of Durrett and Klein (2015) in part due to their ability to capture global context. However, RNNs are not the only architecture capable of summarizing large global contexts: recent work by Vaswani et al. (2017) presented a new state-of-the-art approach to machine translation with an architecture that entirely eliminates recurrent connections and relies instead on a repeated neural attention mechanism. In this paper, we introduce a parser that combines an encoder built using this kind of self-attentive architecture with a decoder customized for parsing (Figure 1). In Section 2 of this paper, we describe the architecture and present our finding that self-attention can outperform an LSTM-based approach. A neural attention mechanism makes explicit the manner in which information is transferred between different locations in the sentence, which we can use to study the relative importance of different kinds of context to the parsing task. Different locations in the sentence can attend to each other based on their positions, but also based on their contents (i.e. based on the words at or around those positions). 
In Section 3 we present our finding that when our parser learns to make an implicit trade-off between these two types of attention, it predominantly makes use of position-based attention, and show that explicitly factoring the two types of attention can noticeably improve parsing accuracy. In Section 4, we study our model's use of attention and reaffirm the conventional wisdom that sentence-wide global context is important for parsing decisions.

Like in most neural parsers, we find morphological (or at least sub-word) features to be important to achieving good results, particularly on unseen words or inflections. In Section 5.1, we demonstrate that a simple scheme based on concatenating character embeddings of word prefixes/suffixes can outperform using part-of-speech tags from an external system. We also present a version of our model that uses a character LSTM, which performs better than other lexical representations – even if word embeddings are removed from the model. In Section 5.2, we explore an alternative approach for lexical representations that makes use of pre-training on a large unsupervised corpus. We find that using the deep contextualized representations proposed by Peters et al. (2018) can boost parsing accuracy.

Our parser achieves 93.55 F1 on the Penn Treebank WSJ test set when not using external word representations, outperforming all previous single-system constituency parsers trained only on the WSJ training set. The addition of pre-trained word representations following Peters et al. (2018) increases parsing accuracy to 95.13 F1, a new state-of-the-art for this dataset. Our model also outperforms previous best published results on 8 of the 9 languages in the SPMRL 2013/2014 shared tasks. Code and trained English models are publicly available.[1]

[1] https://github.com/nikitakit/self-attentive-parser

2 Base Model

Our parser follows an encoder-decoder architecture, as shown in Figure 1. The decoder, described in Section 2.1, is borrowed from the chart parser of Stern et al. (2017a) with additional modifications from Gaddy et al. (2018). Their parser is architecturally streamlined yet achieves the highest performance among discriminative single-system parsers trained on WSJ data only, which is why we selected it as the starting point for our experiments with encoder variations. Sections 2.2 and 2.3 describe the base version of our encoder, where the self-attentive architecture described in Section 2.2 is adapted from Vaswani et al. (2017).

2.1 Tree Scores and Chart Decoder

Our parser assigns a real-valued score s(T) to each tree T, which decomposes as

  s(T) = Σ_{(i,j,l) ∈ T} s(i, j, l)    (1)

Here s(i, j, l) is a real-valued score for a constituent that is located between fencepost positions i and j in a sentence and has label l. To handle unary chains, the set of labels includes a collapsed entry for each unary chain in the training set. The model handles n-ary trees by binarizing them and introducing a dummy label ∅ to nodes created during binarization, with the property that ∀i, j: s(i, j, ∅) = 0. Enforcing that scores associated with the dummy labels are always zero ensures that (1) continues to hold for all possible binarizations of an n-ary tree.

At test time, the model-optimal tree T̂ = arg max_T s(T) can be found efficiently using a CKY-style inference algorithm. Given the correct tree T*, the model is trained to satisfy the margin constraints

  s(T*) ≥ s(T) + Δ(T, T*)

for all trees T by minimizing the hinge loss

  max( 0, max_{T ≠ T*} [ s(T) + Δ(T, T*) ] − s(T*) )

Here Δ is the Hamming loss on labeled spans, and the tree corresponding to the most-violated constraint can be found using a slight modification of the inference algorithm used at test time. For further details, see Gaddy et al. (2018). The remainder of this paper concerns itself with the functional form of s(i, j, l), which is calculated using a neural network for all l ≠ ∅.
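As a rough illustration of this decoding step (ours and simplified: unary-chain handling and tree reconstruction from the backpointers are omitted), the CKY-style search over split points for the span-factored score in (1) can be sketched as:

```python
def chart_decode(label_scores, n):
    """Best binarized tree score under s(T) = sum of s(i, j, l).

    label_scores(i, j) returns {label: score} for fenceposts 0 <= i < j <= n;
    the dummy label from binarization is represented as None and scores 0.
    Returns the optimal score and a table of (split, label) backpointers.
    """
    best = [[0.0] * (n + 1) for _ in range(n + 1)]
    back = {}
    for length in range(1, n + 1):
        for i in range(0, n - length + 1):
            j = i + length
            label, lscore = max(label_scores(i, j).items(), key=lambda kv: kv[1])
            if (i, j) != (0, n) and lscore < 0.0:
                label, lscore = None, 0.0        # dummy label, scored zero
            if length == 1:
                best[i][j] = lscore
                back[(i, j)] = (None, label)
            else:
                split = max(range(i + 1, j), key=lambda k: best[i][k] + best[k][j])
                best[i][j] = lscore + best[i][split] + best[split][j]
                back[(i, j)] = (split, label)
    return best[0][n], back
```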
2.2 Context-Aware Word Representations

The encoder portion of our model is split into two parts: a word-based portion that assigns a context-aware vector representation yt to each position t in the sentence (described in this section), and a chart portion that combines the vectors yt to generate span scores s(i, j, l) (Section 2.3). The architecture for generating the vectors yt is adapted from Vaswani et al. (2017).

Figure 2: An overview of our encoder, which produces a context-aware summary vector for each word in the sentence (word, tag, and position embeddings feed into a stack of 8 layers, each containing a multi-head attention sublayer and a feed-forward sublayer, each followed by LayerNorm). The multi-headed attention mechanism is the only means by which information may propagate between different positions in the sentence.

The encoder takes as input a sequence of word embeddings [w1, w2, ..., wT], where the first and last embeddings are of special start and stop tokens. All word embeddings are learned jointly with other parts of the model. To better generalize to words that are not seen during training, the encoder also receives a sequence of part-of-speech tag embeddings [m1, m2, ..., mT] based on the output of an external tagger (alternative lexical representations are discussed in Section 5). Additionally, the encoder stores a learned table of position embeddings, where every number i ∈ 1, 2, ... (up to some maximum sentence length) is associated with a vector pi. All embeddings have the same dimensionality, which we call d_model, and are added together at the input of the encoder: zt = wt + mt + pt.

The vectors [z1, z2, ..., zT] are transformed by a stack of 8 identical layers, as shown in Figure 2. Each layer consists of two stacked sublayers: a multi-headed attention mechanism and a position-wise feed-forward sublayer. The output of each sublayer given an input x is LayerNorm(x + SubLayer(x)), i.e. each sublayer is followed by a residual connection and a Layer Normalization (Ba et al., 2016) step. As a result, all sublayer outputs, including final outputs yt, are of size d_model.

2.2.1 Self-Attention

The first sublayer in each of our 8 layers is a multi-headed self-attention mechanism, which is the only means by which information may propagate between positions in the sentence.

Figure 3: A single attention head. An input xt is split into three vectors that participate in the attention mechanism: a query qt, a key kt, and a value vt. The query qt is compared with all keys to form a probability distribution p(t → ·), which is then used to retrieve an average value v̄t.

The input to the attention mechanism is a T × d_model matrix X, where each row vector xt corresponds to word t in the sentence. We first consider a single attention head, as illustrated in Figure 3. Learned parameter matrices WQ, WK, and WV are used to map an input xt to three vectors: a query qt = WQ^T xt, a key kt = WK^T xt, and a value vt = WV^T xt. Query and key vectors have the same number of dimensions, which we call dk. The probability that word i attends to word j is then calculated as p(i → j) ∝ exp(qi · kj / √dk). The values vj for all words that have been attended to are aggregated to form an average value v̄i = Σ_j p(i → j) vj, which is projected back to size d_model using a learned matrix WO. In matrix form, the behavior of a single attention head is:

  SingleHead(X) = [ Softmax(Q K^T / √dk) V ] WO
  where Q = X WQ;  K = X WK;  V = X WV

Rather than using a single head, our model sums together the outputs from multiple heads:

  MultiHead(X) = Σ_{n=1}^{8} SingleHead^(n)(X)

Each of the 8 heads has its own trainable parameters WQ^(n), WK^(n), WV^(n), and WO^(n). This allows a word to gather information from up to 8 remote locations in the sentence at each attention sublayer.
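In NumPy, the single-head and multi-head computations above can be sketched as follows; parameter shapes and initialization are left unspecified, and this is an illustration rather than the released implementation:

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def single_head(X, W_Q, W_K, W_V, W_O):
    """One head: X is (T, d_model); W_Q and W_K map to d_k, W_V to d_v,
    and W_O projects the averaged values back to d_model."""
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    d_k = Q.shape[-1]
    P = softmax(Q @ K.T / np.sqrt(d_k))     # P[i, j] = p(i -> j)
    return (P @ V) @ W_O                    # averaged values, re-projected

def multi_head(X, head_params):
    """The encoder sums the outputs of its 8 heads (rather than concatenating)."""
    return sum(single_head(X, *p) for p in head_params)
```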
2.2.2 Position-Wise Feed-Forward Sublayer

We use the same form as Vaswani et al. (2017):

  FeedForward(x) = W2 relu(W1 x + b1) + b2

Here relu denotes the Rectified Linear Unit nonlinearity, and distinct sets of learned parameters are used at each of the 8 instances of the feed-forward sublayer in our model. The input and output dimensions are the same because of the use of residual connections throughout the model, but we can vary the number of parameters by adjusting the size of the intermediate vector that the nonlinearity is applied to.

2.3 Span Scores

The outputs yt from the word-based encoder portion described in the previous section are combined to form span scores s(i, j, ·) following the method of Stern et al. (2017a). Concretely,

  s(i, j, ·) = M2 relu(LayerNorm(M1 v + c1)) + c2

where LayerNorm denotes Layer Normalization, relu is the Rectified Linear Unit nonlinearity, and v = [→y_j − →y_i ; ←y_(j+1) − ←y_(i+1)] combines summary vectors for relevant positions in the sentence. A span endpoint to the right of the word potentially requires different information from the endpoint to the left, so a word at a position k is associated with two annotation vectors (→y_k and ←y_k). Stern et al. (2017a) define →y_k and ←y_k in terms of the output of the forward and backward portions, respectively, of their BiLSTM encoder; we instead construct each of →y_k and ←y_k by splitting in half[2] the outputs yk from Section 2.2. We also introduce a Layer Normalization step to match the use of Layer Normalization throughout our model.

[2] To avoid an adverse interaction with material described in Section 3, when a vector yk is split in half the even coordinates contribute to →y_k and the odd coordinates contribute to ←y_k.

2.4 Results

The model presented above achieves a score of 92.67 F1 on the Penn Treebank WSJ development set. Details regarding hyperparameter choice and optimizer settings are presented in the supplementary material. For comparison, a model that uses the same decode procedure with an LSTM-based encoder achieves a development set score of 92.24 (Gaddy et al., 2018). These results demonstrate that an RNN-based encoder is not required for building a good parser; in fact, self-attention can achieve better results.

3 Content vs. Position Attention

The primary mechanism for information transfer throughout our encoder is self-attention, where words can attend to each other using both content features and position information. In Section 2, we described an encoder that takes as input a component-wise addition between a word, tag, and position embedding for each word in the sentence. Content and position information are intermingled throughout the network. While ideally the network would learn to balance the different types of information, in practice it does not.
In this section we show that factoring the model to explicitly separate content and position information results in increased parsing accuracy. To help gauge the relative importance of the two types of attention, we trained a modified version of our model that was only allowed to use position attention. This constraint was enforced by making the query and key vectors used for the attention mechanism be linear transformations of the corresponding word’s position embedding: Q(n) = PW (n) Q and K(n) = PW (n) K . The perhead weight matrices now multiply a matrix P containing the same position embeddings that are used at the input to the encoder, rather than the layer input X (as in Section 2.2.1). However, value vectors V (n) = XW (n) V remain unchanged and continue to carry content-related information. We expected our parser to still achieve reasonable performance when restricted to only use positional attention because the resulting architecture can be viewed as a generalization of a multi-layer convolutional neural network. The 8 attention heads at each layer of our model can mimic the behavior of a size-8 convolutional filter, but can also determine their attention targets dynamically and need not respect any translationinvariance properties. Disabling content-based attention throughout all 8 layers of the network results in a development-set accuracy decrease of only 0.27 F1. While we expected reasonable parsing performance in this setting, it seems strange that content-based attention benefits our model to such a small degree. We next investigate the possibility that intermingling content and position information in a single vector can cause one type of attention to domi2680 nate over the other and compromise the network’s ability to find the optimal balance of the two. To do this we propose a factored version of our model that explicitly separates content and position information. A first step is to replace the component-wise addition zt = wt+mt+pt (where wt, mt, and pt represent word, tag, and position embeddings, respectively) with a concatenation zt = [wt + mt; pt]. We preserve the size of the vector zt by cutting the dimensionality of embeddings in half for the concatenative scheme. However, simply isolating the position-related components of the input vectors in this manner does not improve the performance of our network: the concatenative network achieves a development-set F1 of 92.60 (not much different from 92.67 F1 using the model in Section 2). The issue with intermingling information is not the component-wise addition per se. In fact, concatenation and addition often perform similarly in high dimensions (especially when the resulting vector is immediately multiplied by a matrix that intermingles the two sources of information). On that note, we can examine how the mixed vectors are used later in the network, and in particular in the query-key dot products for the attention mechanism. If we have a query-key dot product q · k (see Section 2.2.1) where we imagine q decomposing into content and positional information as q = q(c) + q(p) (and likewise for k), we have q · k = (q(c) + q(p)) · (k(c) + k(p)). This formulation includes cross-terms such as q(c) · k(p); for example it is possible to learn a network where the word the always attends to the 5th position in the sentence. Such cross-attention seems of limited use compared to the potential for overfitting that it introduces. 
To complete our factored model, we find all cases where a vector $x = [x^{(c)}; x^{(p)}]$ is multiplied by a parameter matrix, and replace the matrix multiplication $c = Wx$ with a split form $c = [c^{(c)}; c^{(p)}] = [W^{(c)} x^{(c)}; W^{(p)} x^{(p)}]$. This causes a number of intermediate quantities in our model to be factored, including all query and key vectors. Query-key dot products now decompose as $q \cdot k = q^{(c)} \cdot k^{(c)} + q^{(p)} \cdot k^{(p)}$. The result of factoring a single attention head, shown in Figure 4, can also be viewed as separately applying attention to $x^{(c)}$ and $x^{(p)}$, except that the log-probabilities in the two halves are added together prior to value lookup. The feed-forward sublayers in our model (Section 2.2.2) are likewise split into two independent portions that operate on position and content information.

[Figure 4: A single attention head, after factoring content and position information. Queries, keys, and values are computed separately from the position half and the content half of the input; attention probabilities are calculated separately for the two types of information, and a combined probability distribution is then applied to both types of input information.]

Alternatively, factoring can be seen as enforcing the block-sparsity constraint

$$W = \begin{pmatrix} W^{(c)} & 0 \\ 0 & W^{(p)} \end{pmatrix}$$

on parameter matrices throughout our model. We maintain the same vector sizes as in Section 2, which means that factoring strictly reduces the number of trainable parameters. For simplicity, we split each vector into equal halves that contain position and content information, cutting the number of model parameters roughly in half. This factored scheme is able to achieve 93.15 development-set F1, an improvement of almost 0.5 F1 over the unfactored model.

These results suggest that factoring different types of information leads to a better parser, but there is in principle a confound: perhaps by making all matrices block-sparse we've stumbled across a better hyperparameter configuration. For example, these gains could be due to a difference in the number of trainable parameters alone. To control for this confound we also evaluated a version of our model that enforces block-sparsity throughout, but retains the use of component-wise addition at the inputs. This model achieves 92.63 F1 (not much different from the unfactored model), which supports our hypothesis that true factoring of information is important.

Content attention     | Position attention | F1
All 8 layers          | All 8 layers       | 93.15
All 8 layers          | Disabled           | 72.45
Disabled              | All 8 layers       | 90.84
First 4 layers only   | All 8 layers       | 91.77
Last 4 layers only    | All 8 layers       | 92.82
First 6 layers only   | All 8 layers       | 92.42
Last 6 layers only    | All 8 layers       | 92.90

Table 1: Development-set F1 scores when content and/or position attention is selectively disabled at test-time only for a subset of the layers in our model. Position attention is the most important contributor to our model, but content attention is also helpful (especially at the final layers of the encoder).

4 Analysis of our Model

The defining feature of our encoder is the use of self-attention, which is the only mechanism for transfer of information between different locations throughout a sentence. The attention is further factored into types: content-based attention and position-based attention. In this section, we analyze the manner in which our model uses this attention mechanism to make its predictions.

4.1 Content vs. Position Attention

To examine the relative utilization of content-based vs.
position-based attention in our architecture, we perturb a trained model at test-time by selectively zeroing out the contribution of either the content or the position component to any attention mechanism. This can be done independently at different layers; the results of this experiment are shown in Table 1. We can see that our model learns to use a combination of the two attention types, with position-based attention being the most important. We also see that content-based attention is more useful at later layers in the network, which is consistent with the idea that the initial layers of our model act similarly to a dilated convolutional network while the upper layers have a greater balance between the two attention types.

4.2 Windowed Attention

We can also examine our model's use of long-distance context information by applying windowing to the attention mechanism. We begin by taking our trained model and windowing the attention mechanism at test-time only. As shown in Table 2, strict windowing yields poor results: even a window of size 40 causes a loss in parsing accuracy compared to the original model.

Distance | F1 (strict) | F1 (relaxed)
5        | 81.65       | 89.82
10       | 89.83       | 92.20
15       | 91.72       | 92.78
20       | 92.48       | 92.91
30       | 93.01       | 93.09
40       | 93.04       | 93.12
∞        | 93.15       | –

Table 2: Development-set F1 scores when attention is constrained to not exceed a particular distance in the sentence at test time only. In the relaxed setting, the first and last two tokens of the sentence can attend to any word and be attended to by any word, to allow for sentence-wide pooling of information.

When we began to investigate how the model makes use of long-distance attention, we immediately found that there are particular attention heads at some layers in our model that almost always attend to the start token. This suggests that the start token is being used as the location for some sentence-wide pooling/processing, or perhaps as a dummy target location when a head fails to find the particular phenomenon that it's learned to search for.

In light of this observation, we introduce a relaxed variation on the windowing scheme, where the start token, first word, last word, and stop token can participate in all possible uses of attention, but pairs of other words in the sentence can only attend to each other if they are within a given window. We include three other positions in addition to the start token to do our best to cover possible locations for global pooling by our model. Results for relaxed windowing at test-time only are also shown in Table 2. Even when we allow global processing to take place at designated locations such as the start token, our model is able to make use of long-distance dependencies at up to length 40.

Distance | F1 (strict) | F1 (relaxed)
5        | 92.74       | 92.94
10       | 92.92       | 93.00
20       | 93.06       | 93.17
∞        | 93.15       | –

Table 3: Development-set F1 scores when attention is constrained to not exceed a particular distance in the sentence during training and at test time. In the relaxed setting, the first and last two tokens of the sentence can attend to any word and be attended to by any word, to allow for sentence-wide pooling of information.

Next, we examine whether the parser's use of long-distance dependencies is actually essential to performing the task by retraining our model subject to windowing. To evaluate the role of global computation, we consider both strict and relaxed windowing.
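The two windowing schemes can be summarized as attention masks. The sketch below is our own reading of the description, not the released code; treating positions 0 and T−1 as the start and stop tokens is an assumption about indexing.

```python
# Sketch (assumed formulation) of the strict and relaxed attention masks described
# above. In the relaxed scheme, the start token, first word, last word, and stop
# token may attend to and be attended by any position; all other pairs of words
# may only attend to each other within a fixed window.
import numpy as np

def strict_mask(T, window):
    idx = np.arange(T)
    return np.abs(idx[:, None] - idx[None, :]) <= window

def relaxed_mask(T, window):
    mask = strict_mask(T, window)
    globals_ = [0, 1, T - 2, T - 1]      # start token, first word, last word, stop token
    mask[globals_, :] = True             # designated positions attend everywhere
    mask[:, globals_] = True             # and can be attended to from anywhere
    return mask

# A mask would be applied by setting disallowed attention logits to -inf before
# the softmax, e.g. logits[~mask] = -np.inf.
print(relaxed_mask(T=12, window=5).astype(int))
```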
In principle we could have replaced relaxed windowing at training time with explicit provisions for global computation, but for analysis purposes we choose to minimize departures from our original architecture. The results, shown in Table 3, demonstrate that long-distance dependencies continue to be essential for achieving maximum parsing accuracy using our model. Note that when a window of size 10 was imposed at training time, this was per-layer and the series of 8 layers actually had an effective context size of around 80 – which was still insufficient to recover the performance of our full parser (with either approach to windowing). The sideby-side comparison of strict and relaxed windowing shows that the ability to pool global information, using the designated locations that are always available in the relaxed scheme, consistently translates to accuracy gains but is insufficient to compensate for small window sizes. This suggests that not only must the information signal from longdistance tokens be available in principle, but that it also helps to have this information be directly accessible without an intermediate bottleneck. 5 Lexical Models The models described in previous sections all rely on pretagged input sentences, where the tags are predicted using the Stanford tagger. We use the same pretagged dataset as Cross and Huang (2016). In this section we explore two alternative classes of lexical models: those that use no external systems or data of any kind, as well as word vectors that are pretrained in an unsupervised manner. Word embeddings 3 7 None 92.20 – Tags 93.15 – CharLSTM 93.40 93.61 CharConcat 93.32 93.35 Table 4: Development-set F1 scores for different approaches to handling morphology, with and without the addition of learned word embeddings. 5.1 Models with Subword Features If tag embeddings are removed from our model and only word embeddings remain (where word embeddings are learned jointly with other model parameters), performance suffers by around 1 F1. To restore performance without introducing any dependencies on an external system, we explore incorporating lexical features directly into our model. The results for different approaches we describe in this section are shown in Table 4. We first evaluate an approach (CHARLSTM) that independently runs a bidirectional LSTM over the characters in each word and uses the LSTM outputs in place of part-of-speech tag embeddings. We find that this approach performs better than using predicted part-of-speech tags. We can further remove the word embeddings (leaving the character LSTMs only), which does not seem to hurt and can actually help increase parsing accuracy. Next we examine the importance of recurrent connections by constructing and evaluating a simpler alternative. Our approach (CHARCONCAT) is inspired by Hall et al. (2014), who found it effective to replace words with frequently-occurring suffixes, and the observation that our original tag embeddings are rather high-dimensional. To represent a word, we extract its first 8 letters and last 8 letters, embed each letter, and concatenate the results. If we use 32-dimensional embeddings, the 16 letters can be packed into a 512-dimensional vector – the same size as the inputs to our model. This size for the inputs in our model was chosen to simplify the use of residual connections (by matching vector dimensions), even though the inputs themselves could have been encoded in a smaller vector. 
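A minimal sketch of the CHARCONCAT representation as we read the description above (not the authors' code): 32-dimensional embeddings of the first 8 and last 8 letters are concatenated into a 512-dimensional vector that stands in for the tag embedding. The letter vocabulary and the exact padding convention for short words are assumptions here.

```python
# Sketch of the CHARCONCAT lexical features described above: embed the first 8 and
# last 8 letters of a word with 32-dimensional letter embeddings and concatenate
# them into a single 512-dimensional vector (16 letters x 32 dims).
import numpy as np

rng = np.random.default_rng(2)
LETTERS = "abcdefghijklmnopqrstuvwxyz"
PAD = len(LETTERS)                                   # index of an assumed padding symbol
letter_emb = rng.normal(scale=0.1, size=(len(LETTERS) + 1, 32))

def char_concat(word, n=8):
    chars = [c for c in word.lower() if c in LETTERS]
    prefix = chars[:n] + ["<pad>"] * max(0, n - len(chars))
    suffix = ["<pad>"] * max(0, n - len(chars)) + chars[-n:]
    ids = [LETTERS.index(c) if c != "<pad>" else PAD for c in prefix + suffix]
    return np.concatenate([letter_emb[i] for i in ids])   # words longer than 16
                                                          # letters are represented lossily

print(char_concat("parsing").shape)                  # (512,)
```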
This allows us to directly replace tag embeddings with the 16-letter prefix/suffix concatenation. For short words, embeddings of 2683 a padding token are inserted as needed. Words longer than 16 letters are represented in a lossy manner by this concatenative approach, but we hypothesize that prefix/suffix information is enough for our task. We find this simple scheme remarkably effective: it is able to outperform pretagging and can operate even in the absence of word embeddings. However, its performance is ultimately not quite as good as using a character LSTM. Given the effectiveness of the self-attentive encoder at the sentence level, it is aesthetically appealing to consider it as a sub-word architecture as well. However, it was empirically much slower, did not parallelize better than a character-level LSTM (because words tend to be short), and initial results underperformed the LSTM. One explanation is that in a lexical model, one only wants to compute a single vector per word, whereas the self-attentive architecture is better adapted for producing context-aware summaries at multiple positions in a sequence. 5.2 External Embeddings Next, we consider a version of our model that uses external embeddings. Recent work by Peters et al. (2018) has achieved state-of-the-art performance across a range of NLP tasks by augmenting existing models with a new technique for word representation called ELMo (Embeddings from Language Models). Their approach is able to capture both subword information and contextual clues: the embeddings are produced by a network that takes characters as input and then uses an LSTM to capture contextual information when producing a vector representation for each word in a sentence. We evaluate a version of our model that uses ELMo as the sole lexical representation, using publicly available ELMo weights. These pre-trained word representations are 1024dimensional, whereas all of our factored models thus far have 512-dimensional content representations; we found that the most effective way to address this mismatch is to project the ELMo vectors to the required dimensionality using a learned weight matrix. With the addition of contextualized word representations, we hypothesized that a full 8 layers of self-attention would no longer be necessary. This proved true in practice: our best development set result of 95.21 F1 was obtained with a 4-layer encoder. Encoder Architecture F1 (dev) ∆ LSTM (Gaddy et al., 2018) 92.24 -0.43 Self-attentive (Section 2) 92.67 0.00 + Factored (Section 3) 93.15 0.48 + CharLSTM (Section 5.1) 93.61 0.94 + ELMo (Section 5.2) 95.21 2.54 Table 5: A comparison of different encoder architectures and their development-set performance relative to our base self-attentive model. LR LP F1 Single model, WSJ only Vinyals et al. (2015) – – 88.3 Cross and Huang (2016) 90.5 92.1 91.3 Gaddy et al. (2018) 91.76 92.41 92.08 Stern et al. (2017b) 92.57 92.56 92.56 Ours (CharLSTM) 93.20 93.90 93.55 Multi-model/External Durrett and Klein (2015) – – 91.1 Vinyals et al. (2015) – – 92.8 Dyer et al. (2016) – – 93.3 Choe and Charniak (2016) – – 93.8 Liu and Zhang (2017) – – 94.2 Fried et al. (2017) – – 94.66 Ours (ELMo) 94.85 95.40 95.13 Table 6: Comparison of F1 scores on the WSJ test set. 6 Results 6.1 English (WSJ) The development set scores of the parser variations presented in previous sections are summarized in Table 5. Our best-performing parser used a factored self-attentive encoder over ELMo word representations. 
The results of evaluating our model on the test set are shown in Table 6. The test score of 93.55 F1 for our CharLSTM parser exceeds the previous best numbers for single-system parsers trained on the Penn Treebank (without the use of any external data, such as pre-trained word embeddings). When our parser is augmented with ELMo word representations, it achieves a new state-of-the-art score of 95.13 F1 on the WSJ test set. Our WSJ-only parser took 18 hours to train using a single Tesla K80 GPU and can parse the 2684 Arabic Basque French German Hebrew Hungarian Korean Polish Swedish Avg Dev (all lengths) Coavoux and Crabb´e (2017) 83.07 88.35 82.35 88.75 90.34 91.22 86.78b 94.0 79.64 87.16 Ours (CharLSTM only) 85.94 90.05 84.27 91.26 90.50 92.23 87.90 93.94 79.34 88.38 Ours (CharLSTM + word embeddings) 85.59 89.31 84.42 91.39 90.78 92.32 87.62 93.76 79.71 88.32 Test (all lengths) Bj¨orkelund et al. (2014), ensemble 81.32a 88.24 82.53 81.66 89.80 91.72 83.81 90.50 85.50 86.12 Cross and Huang (2016) – – 83.31 – – – – – – – Coavoux and Crabb´e (2017) 82.92b 88.81 82.49 85.34 89.87 92.34 86.04 93.64 84.0 87.27 Ours (model selected on dev) 85.61 89.71 84.06 87.69 90.35 92.69 86.59 93.69 84.45 88.32 ∆: Ours - Best Previous +2.69 +0.90 +0.75 +2.35 +0.48 +0.35 +0.55 +0.05 -1.05 Table 7: Results on the SPMRL dataset. All values are F1 scores calculated using the version of evalb distributed with the shared task. aBj¨orkelund et al. (2013) bUses character LSTM, whereas other results from Coavoux and Crabb´e (2017) use predicted part-of-speech tags. 1,700-sentence WSJ development set in 8 seconds. When using ELMo embeddings, training time was 13 hours (not including the time needed to pretrain the word embeddings) and parsing the development set takes 24 seconds. Training and inference times are dominated by neural network computations; our single-threaded Cython implementation of the chart decoder (Section 2.1) consumes a negligible fraction of total running time. 6.2 Multilingual (SPMRL) We tested our model’s ability to generalize across languages by training it on the nine languages represented in the SPMRL 2013/2014 shared tasks (Seddah et al., 2013). To verify that our lexical representations can function for morphologicallyrich languages and smaller treebanks, we restricted ourselves to running a subset of the exact models that we evaluated on English. In particular, we evaluated the model that uses a character-level LSTM, with and without the addition of learned word embeddings. We did not evaluate ELMo in the multilingual setting because pre-trained ELMo weights were only available for English. Hyperparameters were unchanged compared to the English model with the exception of the learning rate, which we adjusted for some of the smaller datasets in the SPMRL task (see Table 9 in the supplementary material). Results are shown in Table 7. Development set results show that the addition of word embeddings to a model that uses a character LSTM has a mixed effect: it improves performance for some languages, but hurts for others. For each language, we selected the trained model that performed better on the development set and evaluated it on the test set. On 8 of the 9 languages, our test set result exceeds the previous best-published numbers from any system we are aware of. The exception is Swedish, where the model of Bj¨orkelund et al. (2014) continues to be state-of-the-art despite a number of approaches proposed in the intervening years that have achieved better performance on other languages. 
We note that their model uses ensembling (via product grammars) and a reranking step, whereas our model was only evaluated in the single-system condition. 7 Conclusion In this paper, we show that the choice of encoder can have a substantial effect on parser performance. In particular, we demonstrate state-of-theart parsing results with a novel encoder based on factored self-attention. The gains we see come not only from incorporating more information (such as subword features or externally-trained word representations), but also from structuring the architecture to separate different kinds of information from each other. Our results suggest that further research into different ways of encoding utterances can lead to additional improvements in both parsing and other natural language processing tasks. Acknowledgments NK is supported by an NSF Graduate Research Fellowship. This research used the Savio computational cluster provided by the Berkeley Research Computing program at the University of California, Berkeley. 2685 References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer Normalization. arXiv:1607.06450 [cs, stat]. ArXiv: 1607.06450. Anders Bj¨orkelund, Ozlem Cetinoglu, Agnieszka Fale´nska, Rich´ard Farkas, Thomas Mueller, Wolfgang Seeker, and Zsolt Sz´ant´o. 2014. The IMSWrocław-Szeged-CIS entry at the SPMRL 2014 shared task: Reranking and morphosyntax meet unlabeled data. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of NonCanonical Languages, pages 97–102. Anders Bj¨orkelund, Ozlem Cetinoglu, Rich´ard Farkas, Thomas Mueller, and Wolfgang Seeker. 2013. (Re)ranking meets morphosyntax: State-of-the-art results from the SPMRL 2013 shared task. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 135–145, Seattle, Washington, USA. Association for Computational Linguistics. Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2331–2336. Association for Computational Linguistics. Maximin Coavoux and Benoit Crabb´e. 2017. Multilingual lexicalized constituency parsing with wordlevel auxiliary tasks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 331–336. Association for Computational Linguistics. James Cross and Liang Huang. 2016. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1–11. Association for Computational Linguistics. Greg Durrett and Dan Klein. 2015. Neural CRF parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 302–312. Association for Computational Linguistics. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199–209. Association for Computational Linguistics. Daniel Fried, Mitchell Stern, and Dan Klein. 2017. Improving neural parsing by disentangling model combination and reranking effects. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 161–166. Association for Computational Linguistics. David Gaddy, Mitchell Stern, and Dan Klein. 2018. What’s going on in neural constituency parsers? An analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. David Hall, Greg Durrett, and Dan Klein. 2014. Less grammar, more features. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 228–237. Jiangming Liu and Yue Zhang. 2017. In-order transition-based constituent parsing. Transactions of the Association for Computational Linguistics, 5:413–424. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Djam´e Seddah, Reut Tsarfaty, Sandra K¨ubler, Marie Candito, Jinho D. Choi, Rich´ard Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepi´orkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woli´nski, Alina Wr´oblewska, and Eric Villemonte de la Clergerie. 2013. Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 146–182. Association for Computational Linguistics. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017a. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 818–827. Association for Computational Linguistics. Mitchell Stern, Daniel Fried, and Dan Klein. 2017b. Effective inference for generative neural parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1695–1700. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, 2686 H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems 28, pages 2755– 2763. Curran Associates, Inc.
2018
249
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 263–272 Melbourne, Australia, July 15 - 20, 2018. c⃝2018 Association for Computational Linguistics 263 Probabilistic Embedding of Knowledge Graphs with Box Lattice Measures Luke Vilnis⇤ Xiang Li⇤ Shikhar Murty Andrew McCallum College of Information and Computer Sciences University of Massachusetts Amherst {luke,xiangl,smurty,mccallum}@cs.umass.edu Abstract Embedding methods which enforce a partial order or lattice structure over the concept space, such as Order Embeddings (OE) (Vendrov et al., 2016), are a natural way to model transitive relational data (e.g. entailment graphs). However, OE learns a deterministic knowledge base, limiting expressiveness of queries and the ability to use uncertainty for both prediction and learning (e.g. learning from expectations). Probabilistic extensions of OE (Lai and Hockenmaier, 2017) have provided the ability to somewhat calibrate these denotational probabilities while retaining the consistency and inductive bias of ordered models, but lack the ability to model the negative correlations found in real-world knowledge. In this work we show that a broad class of models that assign probability measures to OE can never capture negative correlation, which motivates our construction of a novel box lattice and accompanying probability measure to capture anticorrelation and even disjoint concepts, while still providing the benefits of probabilistic modeling, such as the ability to perform rich joint and conditional queries over arbitrary sets of concepts, and both learning from and predicting calibrated uncertainty. We show improvements over previous approaches in modeling the Flickr and WordNet entailment graphs, and investigate the power of the model. * Equal contribution. 1 Introduction Structured embeddings based on regions, densities, and orderings have gained popularity in recent years for their inductive bias towards the essential asymmetries inherent in problems such as image captioning (Vendrov et al., 2016), lexical and textual entailment (Erk, 2009; Vilnis and McCallum, 2015; Lai and Hockenmaier, 2017; Athiwaratkun and Wilson, 2018), and knowledge graph completion and reasoning (He et al., 2015; Nickel and Kiela, 2017; Li et al., 2017). Models that easily encode asymmetry, and related properties such as transitivity (the two components of commonplace relations such as partially ordered sets and lattices), have great utility in these applications, leaving less to be learned from the data than arbitrary relational models. At their best, they resemble a hybrid between embedding models and structured prediction. As noted by Vendrov et al. (2016) and Li et al. (2017), while the models learn sets of embeddings, these parameters obey rich structural constraints. The entire set can be thought of as one, sometimes provably consistent, structured prediction, such as an ontology in the form of a single directed acyclic graph. While the structured prediction analogy applies best to Order Embeddings (OE), which embeds consistent partial orders, other region- and density-based representations have been proposed for the express purpose of inducing a bias towards asymmetric relationships. For example, the Gaussian Embedding (GE) model (Vilnis and McCallum, 2015) aims to represent the asymmetry and uncertainty in an object’s relations and attributes by means of uncertainty in the representation. 
However, while the space of representations is a manifold of probability distributions, the model is not truly probabilistic in that it does not model asymmetries and relations in terms of prob264 abilities, but in terms of asymmetric comparison functions such as the originally proposed KL divergence and the recently proposed thresholded divergences (Athiwaratkun and Wilson, 2018). Probabilistic models are especially compelling for modeling ontologies, entailment graphs, and knowledge graphs. Their desirable properties include an ability to remain consistent in the presence of noisy data, suitability towards semisupervised training using the expectations and uncertain labels present in these large-scale applications, the naturality of representing the inherent uncertainty of knowledge they store, and the ability to answer complex queries involving more than 2 variables. Note that the final one requires a true joint probabilistic model with a tractable inference procedure, not something provided by e.g. matrix factorization. We take the dual approach to density-based embeddings and model uncertainty about relationships and attributes as explicitly probabilistic, while basing the probability on a latent space of geometric objects that obey natural structural biases for modeling transitive, asymmetric relations. The most similar work are the probabilistic order embeddings (POE) of Lai (Lai and Hockenmaier, 2017), which apply a probability measure to each order embedding’s forward cone (the set of points greater than the embedding in each dimension), assigning a finite and normalized volume to the unbounded space. However, POE suffers severe limitations as a probabilistic model, including an inability to model negative correlations between concepts, which motivates the construction of our box lattice model. Our model represents objects, concepts, and events as high-dimensional products-of-intervals (hyperrectangles or boxes), with an event’s unary probability coming from the box volume and joint probabilities coming from overlaps. This contrasts with POE’s approach of defining events as the forward cones of vectors, extending to infinity, integrated under a probability measure that assigns them finite volume. One desirable property of a structured representation for ordered data, originally noted in (Vendrov et al., 2016) is a “slackness” shared by OE, POE, and our model: when the model predicts an “edge” or lack thereof (i.e. P(a|b) = 0 or 1, or a zero constraint violation in the case of OE), being exposed to that fact again will not update the model. Moreover, there are large degrees of freedom in parameter space that exhibit this slackness, giving the model the ability to embed complex structure with 0 loss when compared to models based on symmetric inner products or distances between embeddings, e.g. bilinear GLMs (Collins et al., 2002), Trans-E (Bordes et al., 2013), and other embedding models which must always be pushing and pulling parameters towards and away from each other. Our experiments demonstrate the power of our approach to probabilistic ordering-biased relational modeling. First, we investigate an instructive 2-dimensional toy dataset that both demonstrates the way the model self organizes its box event space, and enables sensible answers to queries involving arbitrary numbers of variables, despite being trained on only pairwise data. 
We achieve a new state of the art in denotational probability modeling on the Flickr entailment dataset (Lai and Hockenmaier, 2017), and a matching state-of-the-art on WordNet hypernymy (Vendrov et al., 2016; Miller, 1995) with the concurrent work on thresholded Gaussian embedding of Athiwaratkun and Wilson (2018), achieving our best results by training on additional co-occurrence expectations aggregated from leaf types. We find that the strong empirical performance of probabilistic ordering models, and our box lattice model in particular, and their endowment of new forms of training and querying, make them a promising avenue for future research in representing structured knowledge. 2 Related Work In addition to the related work in structured embeddings mentioned in the introduction, our focus on directed, transitive relational modeling and ontology induction shares much with the rich field of directed graphical models and causal modeling (Pearl, 1988), as well as learning the structure of those models (Heckerman et al., 1995). Work in undirected structure learning such the Graphical Lasso (Friedman et al., 2008) is also relevant due to our desire to learn from pairwise joint/conditional probabilities and moment matrices, which are closely related in the setting of discrete variables. Especially relevant research in Bayesian networks are applications towards learning taxonomic structure of relational data (Bansal et al., 265 2014), although this work is often restricted towards tree-shaped ontologies, which allow efficient inference by Chu-Liu-Edmonds’ algorithm (Chu and Liu, 1995), while we focus on arbitrary DAGs. As our model is based on populating a latent “event space” into boxes (products of intervals), it is especially reminiscent of the Mondrian process (Roy and Teh, 2009). However, the Mondrian process partitions the space as a high dimensional tree (a non-parametric kd-tree), while our model allows the arbitrary box placement required for DAG structure, and is much more tractable in high dimensions compared to the Mondrian’s Bayesian non-parametric inference. Embedding applications to relational learning constitute a huge field to which it is impossible to do justice, but one general difference between our approaches is that e.g. a matrix factorization model treats the embeddings as objects to score relation links with, as opposed to POE or our model in which embeddings represent subsets of probabilistic event space which are directly integrated. They are full probabilistic models of the joint set of variables, rather than embedding-based approximations of only low-order joint and conditional probabilities. That is, any set of our parameters can answer any arbitrary probabilistic question (possibly requiring intractable computation), rather than being fixed to modeling only certain subsets of the joint. Embedding-based learning’s large advantage over the combinatorial structure learning presented by classical PGM approaches is its applicability to large-scale probability distributions containing hundreds of thousands of events or more, as in both our WordNet and Flickr experiments. 
3 Background 3.1 Partial Orders and Lattices A non-strict partial ordered set (poset) is a set P equipped with a binary relation ⪯such that for all a, b, c 2 P, • a ⪯a (reflexivity) • a ⪯b ⪯a implies a = b (antisymmetry) • a ⪯b ⪯c implies a ⪯c (transitivity) This is simply a generalization of a totally ordered set that allows some elements to be incomparable, and is a good model for the kind of acyclic directed graph data found in knowledge bases. A lattice is a poset where any subset has a a unique least upper and greatest lower bound, which will be true of all posets (lattices) considered in this paper. The least upper bound of two elements a, b 2 P is called the join, denoted a _ b, and the greatest lower bound is called the meet, denoted a ^ b. Additionally, in a bounded lattice we have two extra elements, called top, denoted > and bottom, denoted ?, which are respectively the least upper bound and greatest lower bound of the entire space. Using the extended real number line (adding points at infinity), all lattices considered in this paper are bounded lattices. 3.2 Order Embeddings (OE) Vendrov et al. (2016) introduced a method for embedding partially ordered sets and a task, partial order completion, an abstract term for things like hypernym or entailment prediction (learning transitive relations). The goal is to learn a mapping from the partially-ordered data domain to some other partially-ordered space that will enable generalization. Definition 1. Vendrov et al. (2016) A function f : (X, ⪯X) ! (Y, ⪯Y ) is an orderembedding if for all u, v 2 X u ⪯X v () f(u) ⪯Y f(v) They choose Y to be a vector space, and the order ⪯Y to be based on the reverse product order on Rn +, which specifies x ⪯y () 8i 2 {1..n}, xi ≥yi so an embedding is below another in the hierarchy if all of the coordinates are larger, and 0 provides a top element. Although Vendrov et al. (2016) do not explicitly discuss it, their model does not just capture partial orderings, but is a standard construction of a vector (Hilbert) lattice, in which the operations of meet and join can be defined as taking the pointwise maximum and minimum of two vectors, respectively (Zaanen, 1997). This observation is also used in (Li et al., 2017) to generate extra constraints for training order embeddings. As noted in the original work, these single point embeddings can be thought of as regions, i.e. the 266 cone extending out from the vector towards infinity. All concepts “entailed” by a given concept must lie in this cone. This ordering is optimized from examples of ordered elements and negative samples via a maxmargin loss. 3.3 Probabilistic Order Embeddings (POE) Lai and Hockenmaier (2017) built on the “region” idea to derive a probabilistic formulation (which we will refer to as POE) to model entailment probabilities in a consistent, hierarchical way. Noting that all of OE’s regions obviously have the same infinite area under the standard (Lebesgue) measure of Rn +, they propose a probabilistic interpretation where the Bernoulli probability of each concept a or joint set of concepts {a, b} with corresponding vectors {x, y} is given by its volume under the exponential measure: p(a) = exp(− X i xi) = Z z⪯x exp(−kzk1)dz p(a, b) = p(x ^ y) = exp(−k max(xi, yi)k1) since the meet of two vectors is simply the intersection of their area cones, and replacing sums with `1 norms for brevity since all coordinates are positive. 
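Concretely, these POE quantities can be computed as follows; this is a small sketch with made-up 3-dimensional nonnegative embeddings (our own illustration, not the paper's code), where the marginal is the exponential measure of a forward cone and the joint comes from the coordinate-wise maximum.

```python
# Sketch of POE probabilities: p(a) = exp(-||x||_1) for a nonnegative embedding x,
# and the joint uses the meet (coordinate-wise max) of the two cones.
import numpy as np

def poe_prob(x):
    return float(np.exp(-np.sum(x)))                     # volume of the forward cone of x

def poe_joint(x, y):
    return float(np.exp(-np.sum(np.maximum(x, y))))      # cone intersection = meet

x = np.array([0.5, 1.0, 0.2])                            # concept a (illustrative values)
y = np.array([0.7, 0.3, 0.9])                            # concept b
pa, pb, pab = poe_prob(x), poe_prob(y), poe_joint(x, y)
print(pab - pa * pb)   # nonnegative: cone measures cannot encode negative
                       # correlation, as discussed below
```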
While having the intuition of measuring the areas of cones, this also automatically gives a valid probability distribution over concepts since this is just the product likelihood under a coordinatewise exponential distribution. However, they note a deficiency of their model — it can only model positive (Pearson) correlations between concepts (Bernoulli variables). Consider two Bernoulli variables a and b, whose probabilities correspond to the areas of cones x and y. Recall the Bernoulli covariance formula (we will deal with covariances instead of correlations when convenient, since they always have the same sign): cov(a, b) = p(a, b) −p(a)p(b) = exp(−k max(xi, yi)k1) −exp(−kxi + yik1) Since the sum of two positive vectors can only be greater than the sum of their pointwise maximum, this quantity will always be nonnegative. This has real consequences for probabilistic modeling in KBs: conditioning on more concepts will only make probabilities higher (or unchanged), e.g. p(dog|plant) ≥p(dog). 3.4 Probabilistic Asymmetric Transitive Relations Probabilistic models have pleasing consistency properties for modeling asymmetric transitive relations, in particular compared to density-based embeddings — a pairwise conditional probability table can almost always (in the technical sense) be asymmetrized to produce a DAG by simply taking an edge if P(a|b) > P(b|a). A matrix of pairwise Gaussian KL divergences cannot be consistently asymmetrized in this manner. These claims are proven in Appendix C. While a high P(a|b) does not always indicate an edge in an ontology due to confounding variables, existing graphical model structure learning methods can be used to further prune on the base graph without adding a cycle, such as Graphical Lasso or simple thresholding (Fattahi and Sojoudi, 2017). 4 Method We develop a probabilistic model for lattices based on hypercube embeddings that can model both positive and negative correlations. Before describing this, we first motivate our choice to abandon OE/POE type cone-based models for this purpose. 4.1 Correlations from Cone Measures Claim. For a pair of Bernoulli variables p(a) and p(b), cov(a, b) ≥0 if the Bernoulli probabilities come from the volume of a cone as measured under any product (coordinate-wise) probability measure p(x) = Qn i pi(xi) on Rn, where Fi, the associated CDF for pi, is monotone increasing. Proof. For any product measure we have Z z⪯x p(z)dz = n Y i Z xizi pi(zi)dzi = n Y i 1 −Fi(xi) This is just the area of the unique box corresponding to Qn i [Fi(xi), 1] 2 [0, 1]n, under the uniform measure. This box is unique as a monotone increasing univariate CDF is bijective with (0, 1) — cones in Rn can be invertibly mapped to boxes of equivalent measure inside the unit hypercube [0, 1]n. These boxes have only half their degrees of freedom, as they have the form [Fi(xi), 1] per dimension, (intuitively, they have one end ”stuck at infinity” since the cone integrates to infinity. So W.L.O.G. we can consider two transformed cones x and y corresponding to our Bernoulli 267 variables a and b, and letting Fi(xi) = ui and Fi(yi) = vi, their intersection in the unit hypercube is Qn i [max(ui, vi), 1]. Pairing terms in the right-hand product, we have p(a, b) −p(a)p(b) = n Y i (1 −max(ui, vi)) − n Y i (1 −ui)(1 −vi) ≥0 since the right contains all the terms of the left and can only grow smaller. This argument is easily modified to the case of the nonnegative orthant, mutatis mutandis. 
An open question for future work is what nonproduct measures this claim also applies to. Note that some non-product measures, such as multivariate Gaussian, can be transformed into product measures easily (whitening) and the above proof would still apply. It seems probable that some measures, nonlinearly entangled across dimensions, could encode negative correlations in cone volumes. However, it is not generally tractable to integrate high-dimensional cones under arbitrary non-product measures. 4.2 Box Lattices The above proof gives us intuition about the possible form of a better representation. Cones can be mapped into boxes within the unit hypercube while preserving their measure, and the lack of negative correlation seems to come from the fact that they always have an overly-large intersection due to “pinning” the maximum in each dimension to 1. To remedy this, we propose to learn representations in the space of all boxes (axis-aligned hyperrectangles), gaining back an extra degree of freedom. These representations can be learned with a suitable probability measure in Rn, the nonnegative orthant Rn +, or directly in the unit hypercube with the uniform measure, which we elect. We associate each concept with 2 vectors, the minimum and maximum value of the box at each dimension. Practically for numerical reasons these are stored as a minimum, a positive offset plus an ✏term to prevent boxes from becoming too small and underflowing. Let us define our box embeddings as a pair of vectors in [0, 1]n, (xm, xM), representing the maximum and minimum at each coordinate. Then we can define a partial ordering by inclusion of boxes, and a lattice structure as x ^ y = ? if x and y disjoint, else x ^ y = Y i [max(xm,i, ym,i), min(xM,i, yM,i)] x _ y = Y i [min(xm,i, ym,i), max(xM,i, yM,i)] where the meet is the intersecting box, or bottom (the empty set) where no intersection exists, and join is the smallest enclosing box. This lattice, considered on its own terms as a non-probabilistic object, is strictly more general than the order embedding lattice in any dimension, which is proven in Appendix B. However, the finite sizes of all the lattice elements lead to a natural probabilistic interpretation under the uniform measure. Joint and marginal probabilities are given by the volume of the (intersection) box. For concept a with associated box (xm, xM), probability is simply p(a) = Qn i (xM,i −xm,i) (under the uniform measure). p(?) is of course zero since no probability mass is assigned to the empty set. It remains to show that this representation can represent both positive and negative correlations. Claim. For a pair of Bernoulli variables p(a) and p(b), corr(a, b) can take on any value in [−1, 1] if the probabilities come from the volume of associated boxes in [0, 1]n. Proof. Boxes can clearly model disjointness (exactly −1 correlation if the total volume of the boxes equals 1). Two identical boxes give their concepts exactly correlation 1. The area of the meet is continuous with respect to translations of intersecting boxes, and all other terms in correlation stay constant, so by continuity of the correlation function our model can achieve all possible correlations for a pair of variables. This proof can be extended to boxes in Rn with product measures by the previous reduction. Limitations: Note that this model cannot perfectly describe all possible probability distributions or concepts as embedded objects. For example, the complement of a box is not a box. 
However, queries about complemented variables can be calculated by the Inclusion-Exclusion principle, made more efficient by the fact that all nonnegated terms can be grouped and calculated exactly. We show some toy exact calculations with 268 negated variables in Appendix A. Also, note that in a knowledge graph often true complements are not required — for example mortal and immortal are not actually complements, because the concept color is neither. Additionally, requiring the total probability mass covered by boxes to equal 1, or exactly matching marginal box probabilities while modeling all correlations is a difficult box-packing-type problem and not generally possible. Modeling limitations aside, the union of boxes having mass < 1 can be seen as an open-world assumption on our KB (not all points in space have corresponding concepts, yet). 4.3 Learning While inference (calculation of pairwise joint, unary marginal, and pairwise conditional probabilities) is quite straightforward by taking intersections of boxes and computing volumes (and their ratios), learning does not appear easy at first glance. While the (sub)gradient of the joint probability is well defined when boxes intersect, it is non-differentiable otherwise. Instead we optimize a lower bound. Clearly p(a _ b) ≥p(a [ b), with equality only when a = b, so this can give us a lower bound: p(a ^ b) = p(a) + p(b) −p(a [ b) ≥p(a) + p(b) −p(a _ b) Where probabilities are always given by the volume of the associated box. This lower bound always exists and is differentiable, even when the joint is not. It is guaranteed to be nonpositive except when a and b intersect, in which case the true joint likelihood should be used. While a negative bound on a probability is odd, inspecting the bound we see that its gradient will push the enclosing box to be smaller, while increasing areas of the individual boxes, until they intersect, which is a sensible learning strategy. Since we are working with small probabilities it is advisable to negate this term and maximize the negative logarithm: −log(p(a _ b) −p(a) −p(b)) This still has an unbounded gradient as the lower bound approaches 0, so it is also useful to add a constant within the logarithm function to avoid numerical problems. Since the likelihood of the full data is usually intractable to compute as a conjunction of many negations, we optimize binary conditional and unary marginal terms separately by maximum likelihood. In this work, we parametrize the boxes as (min, ∆= max −min), with Euclidean projections after gradient steps to keep our parameters in the unit hypercube and maintain the minimum/delta constraints. Now that we have the ability to compute probabilities and (surrogate) gradients for arbitrary marginals in the model, and by extension conditionals, we will see specific examples in the experiments. 5 Experiments 5.1 Warmup: 2D Embedding of a Toy Lattice We begin by investigating properties of our model in modeling a small toy problem, consisting of a small hand constructed ontology over 19 concepts, aggregated from atomic synthetic examples first into a probabilistic lattice (e.g. some rabbits are brown, some are white), and then a full CPD. We model it using only 2 dimensions to enable visualization of the way the model self-organizes its “event space”, training the model by minimize weighted cross-entropy with both the unary marginals and pairwise conditional probabilities. 
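To make the quantities being trained here concrete, the following is a small sketch (our own illustration with arbitrary 2-dimensional boxes, not the paper's implementation) of the box probabilities from Section 4.2 and the join-based surrogate lower bound from Section 4.3.

```python
# Sketch of box-lattice quantities: volumes give marginals, intersections give
# joints and conditionals, and the join-based lower bound gives a differentiable
# surrogate for the joint when two boxes are disjoint.
import numpy as np

def volume(box):
    m, M = box
    return float(np.prod(np.clip(M - m, 0.0, None)))     # p(a) under the uniform measure

def meet(a, b):                                          # a ^ b: intersection box (may be empty)
    return (np.maximum(a[0], b[0]), np.minimum(a[1], b[1]))

def join(a, b):                                          # a v b: smallest enclosing box
    return (np.minimum(a[0], b[0]), np.maximum(a[1], b[1]))

def joint(a, b):
    return volume(meet(a, b))                            # p(a, b)

def surrogate_lower_bound(a, b):
    # p(a ^ b) >= p(a) + p(b) - p(a v b): defined and differentiable even when
    # the true joint is zero and its gradient uninformative.
    return volume(a) + volume(b) - volume(join(a, b))

a = (np.array([0.1, 0.1]), np.array([0.6, 0.5]))         # (min, max) corners in [0, 1]^2
b = (np.array([0.4, 0.3]), np.array([0.9, 0.9]))         # overlaps a
c = (np.array([0.7, 0.6]), np.array([0.95, 0.95]))       # disjoint from a

print(volume(a), volume(b), joint(a, b))                 # marginals and joint
print(joint(a, b) / volume(b))                           # conditional p(a | b)
print(joint(a, c), surrogate_lower_bound(a, c))          # 0.0, and a negative bound whose
                                                         # gradient pulls a and c together
```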
We also conduct a parallel experiment with POE as embedded in the unit cube, where each representation is constrained to touch the faces $x = 1$, $y = 1$.

In Figure 2, we show the representation of lattice structures by POE and the box lattice model as compared to the abstract probabilistic lattice used to construct the data, shown in Figure 1, and compare the conditional probabilities produced by our model to the ground truth, demonstrating the richer capacity of the box model in capturing strong positive and negative correlations. In Table 1, we perform a series of multivariable conditional queries and demonstrate intuitive results on high-order queries containing up to 4 variables, despite the model being trained on only 2-way information.

[Figure 1: Representation of the toy probabilistic lattice used in Section 5.1; panels (a) Original lattice and (b) Ground truth CPD. Darker color corresponds to more unary marginal probability. The associated CPD is obtained by a weighted aggregation of leaf elements.]

[Figure 2: Lattice representations and conditional probabilities from POE vs. box lattice; panels (a) POE lattice, (b) Box lattice, (c) POE CPD, (d) Box CPD. Note how the box lattice model's lack of "anchoring" to a corner allows it vastly more expressivity in matching the ground truth CPD seen in Figure 1.]

P(grizzly bear | ...):  P(grizzly bear) 0.12;  omnivore 0.29;  white 0.00;  brown 0.30;  omnivore, white 0.00;  omnivore, brown 0.38
P(cactus | ...):        P(cactus) 0.10;  green 0.16;  plant 0.39;  american, green 0.19;  plant, green, american 0.40;  american, carnivore 0.00
P(plant | ...):         P(plant) 0.20;  green 0.37;  snake 0.00;  carnivore 0.00;  cactus 0.78;  american, cactus 0.85

Table 1: Multi-way queries: conditional probabilities adjust when adding additional evidence or contradiction. In contrast, POE can only raise or preserve probability when conditioning.

term1                    term2
craftsman.n.02           shark.n.03
homogenized milk.n.01    apple juice.n.01
tongue depresser.n.01    paintbrush.n.01
deerstalker.n.01         bathing cap.n.01
skywriting.n.01          transcript.n.01

Table 2: Negatively correlated variables produced by the model.

Method              Test Accuracy %
transitive          88.2
word2gauss          86.6
OE                  90.6
Li et al. (2017)    91.3
DOE (KL)            92.3
POE                 91.6
POE (100 dim)       91.7
Box                 92.2
Box + CPD           92.3

Table 3: Classification accuracy on WordNet test set.

5.2 WordNet

We experiment on WordNet hypernym prediction, using the same train, development and test split as Vendrov et al. (2016), created by randomly taking 4,000 hypernym pairs from the 837,888-edge transitive closure of the WordNet hypernym hierarchy as positive training examples for the development set, 4,000 for the test set, and using the rest as training data. Negative training examples are created by randomly corrupting a train/development/test edge $(u, v)$ by replacing either $u$ or $v$ with a randomly chosen negative node. We use their specific train/dev/test split, while Athiwaratkun and Wilson (2018) use a different train/dev split with the same test set (personal communication) to examine the effect of different negative sampling techniques. We cite their best performing model, called DOE (KL).

Since our model is probabilistic, we would like a sensible value for $P(n)$, where $n$ is a node. We assign these marginal probabilities by looking at the number of descendants in the hierarchy under a node, and normalizing over all nodes, taking $P(n) = \frac{|\mathrm{descendants}(n)|}{|\mathrm{nodes}|}$.
Furthermore, we use the graph structure (only of the subset of edges in the training set to avoid leaking data) to augment the data with approximate conditional probabilities P(x|y). For each leaf, we consider all of its ancestors as pairwise co-occurences, then aggregate and divide by the number of leaves to get an approximate joint probability distribution, P(x, y) = | x, y co-occur in ancestor set | | leaves | . With this and the unary marginals, we can create a conditional probability table, which we prune based on the difference of P(x|y) and P(y|x) and add cross entropy with these conditional “soft edges” to the training data. We refer to experiments using this additional data as Box + CPD in Table 3. We use 50 dimensions in our experiments. Since our model has 2 parameters per dimension, we also perform an apples-to-apples comparison with a 100D POE model. As seen in Table 3, we outperform POE significantly even with this added representational power. We also observe sensible negatively correlated examples, shown in 2, in the trained box model, while POE cannot represent such relationships. We tune our models on the development set, with parameters documented in Appendix D.1. We observe that not only does our model outperform POE, it beats all previous results on WordNet, aside from the concurrent work of Athiwaratkun and Wilson (2018) (using different train/dev negative examples), the baseline POE model does as well. This indicates that probabilistic embeddings for transitive relations are a promising avenue for future work. Additionally, the ability of the model to learn from the expected ”soft edges” improves it to state-of-the-art level. We expect that co-occurrence counts gathered from real textual corpora, rather than merely 271 aggregating up the WordNet lattice, would further strengthen this effect. 5.3 Flickr Entailment Graph Figure 3: R between model and gold probabilities. P(x|y) Full test data KL Pearson R POE 0.031 0.949 POE* 0.031 0.949 Box 0.020 0.967 Unseen pairs POE 0.048 0.920 POE* 0.046 0.925 Box 0.025 0.957 Unseen words POE 0.127 0.696 POE* 0.084 0.854 Box 0.050 0.900 Table 4: KL and Pearson correlation between model and gold probability. We conduct experiments on the large-scale Flickr entailment dataset of 45 million image caption pairs. We use the exactly same train/dev/test from Lai and Hockenmaier (2017). We use a slightly different unseen word pairs and unseen words test data, obtained from the author. We include their published results and also use their published code, marked ⇤, for comparison. For these experiments, we relax our boxes from the unit hypercube to the nonnegative orthant and obtain probabilities under the exponential measure, p(x) = exp(−x). We enforce the nonnegativity constraints by clipping the LSTMgenerated embedding (Hochreiter and Schmidhuber, 1997) for the box minimum with a ReLU, and parametrize our ∆embeddings using a softplus activation to prevent dead units. As in Lai and Hockenmaier (2017), we use 512 hidden units in our LSTM to compose sentence vectors. We then apply two single-layer feed-forward networks with 512 units applied to the final LSTM state to produce the embeddings. As we can see from Table 4, we note large improvements in KL and Pearson correlation to the ground truth entailment probabilities. 
In further analysis, Figure 3 demonstrates that while the box model outperforms POE in nearly every regime, the highest gains come from the comparatively difficult to calibrate small entailment probabilities, indicating the greater capability of our model to produce fine-grained distinctions. 6 Conclusion and Future Work We have only scratched the surface of possible applications. An exciting direction is the incorporation of multi-relational data for general knowledge representation and inference. Secondly, more complex representations, such as 2n-dimensional products of 2-dimensional convex polyhedra, would offer greater flexibility in tiling event space. Improved inference of the latent boxes, either through better optimization or through Bayesian approaches is another natural extension. Our greatest interest is in the application of this powerful new tool to the many areas where other structured embeddings have shown promise. 7 Acknowledgments We thank Alice Lai for making the code from her original paper public, and for providing the additional unseen pairs and unseen words data. We also thank Haw-Shiuan Chang, Laurent Dinh, and Ben Poole for helpful discussions. We also thank the anonymous reviewers for their constructive feedback. This work was supported in part by the Center for Intelligent Information Retrieval and the Center for Data Science, in part by the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction., and in part by the National Science Foundation under Grant No. IIS1514053. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor. 272 References Ben Athiwaratkun and Andrew Gordon Wilson. 2018. On modeling hierarchical data via probabilistic order embeddings. In International Conference on Learning Representations. Mohit Bansal, David Burkett, Gerard De Melo, and Dan Klein. 2014. Structured learning for taxonomy induction with belief propagation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1041–1051. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in neural information processing systems, pages 2787–2795. Y. J. Chu and T. H. Liu. 1995. On the shortest arborescence of a directed graph. Science Sinica, 20. Michael Collins, Sanjoy Dasgupta, and Robert E Schapire. 2002. A generalization of principal components analysis to the exponential family. In Advances in neural information processing systems, pages 617–624. Katrin Erk. 2009. Representing words as regions in vector space. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, CoNLL ’09, pages 57–65, Stroudsburg, PA, USA. Association for Computational Linguistics. Salar Fattahi and Somayeh Sojoudi. 2017. Graphical lasso and thresholding: Equivalence and closedform solutions. arXiv preprint arXiv:1708.09479. Jerome Friedman, Trevor Hastie, and Robert Tibshirani. 2008. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432– 441. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249–256. Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao. 2015. 
Learning to represent knowledge graphs with gaussian embedding. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM ’15, pages 623– 632, New York, NY, USA. ACM. David Heckerman, Dan Geiger, and David M Chickering. 1995. Learning bayesian networks: The combination of knowledge and statistical data. Machine learning, 20(3):197–243. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Alice Lai and Julia Hockenmaier. 2017. Learning to predict denotational probabilities for modeling entailment. In EACL. Xiang Li, Luke Vilnis, and Andrew McCallum. 2017. Improved representation learning for predicting commonsense ontologies. NIPS Workshop on Structured Prediction. George A Miller. 1995. WordNet: a lexical database for English. Communications of the ACM. Maximillian Nickel and Douwe Kiela. 2017. Poincar´e embeddings for learning hierarchical representations. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 6338–6347. Curran Associates, Inc. Judea Pearl. 1988. Probabilistic reasoning in intelligent systems. Daniel M Roy and Yee W Teh. 2009. The mondrian process. In Advances in neural information processing systems, pages 1377–1384. Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2016. Order-embeddings of images and language. In ICLR. Luke Vilnis and Andrew McCallum. 2015. Word representations via gaussian embedding. In ICLR. Adriaan C. Zaanen. 1997. Introduction to Operator Theory in Riesz Spaces. Springer Berlin Heidelberg.
2018
25